diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 9cd73b3ad2..c9a5c5397c 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,7 +1,7 @@
 # See https://pre-commit.com for more information
 # See https://pre-commit.com/hooks.html for more hooks
-default_stages: [commit, manual]
+default_stages: [pre-commit, manual]

 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
diff --git a/docs/source/deployment/databricks/databricks_deployment_workflow.md b/docs/source/deployment/databricks/databricks_deployment_workflow.md
index aebb1cb94f..4cc4b2c57b 100644
--- a/docs/source/deployment/databricks/databricks_deployment_workflow.md
+++ b/docs/source/deployment/databricks/databricks_deployment_workflow.md
@@ -10,7 +10,7 @@ Here are some typical use cases for running a packaged Kedro project as a Databr

 - **Data engineering pipeline**: the output of your Kedro project is a file or set of files containing cleaned and processed data.
 - **Machine learning with MLflow**: your Kedro project runs an ML model; metrics about your experiments are tracked in MLflow.
-- **Automated and scheduled runs**: your Kedro project should be [run on Databricks automatically](https://docs.databricks.com/workflows/jobs/schedule-jobs.html#add-a-job-schedule).
+- **Automated and scheduled runs**: your Kedro project should be [run on Databricks automatically](https://docs.databricks.com/en/jobs/scheduled.html#add-a-job-schedule).
 - **CI/CD integration**: you have a CI/CD pipeline that produces a packaged Kedro project.

 Running your packaged project as a Databricks job is very different from running it from a Databricks notebook. The Databricks job cluster has to be provisioned and started for each run, which is significantly slower than running it as a notebook on a cluster that has already been started. In addition, there is no way to change your project's code once it has been packaged. Instead, you must change your code, create a new package, and then upload it to Databricks again.