The goal for this issue is to build a linter for the Data Quality category to check whether and how a project is using tools to assert quality standards on its input data, e.g. GreatExpectations, TFDV and (maybe) Bulwark, the successor of Engarde.
Firstly, we should figure out, primarily for GreatExpectations and TFDV:
How do we apply these tools to an ML project? What generally needs to change about a project in order to adopt such a tool, and how much effort does that take? The latest branch of the basic project in the mllint-example-projects repo can be used as a base for this; see the Great Expectations sketch after this list for a minimal example.
What constitutes effective use of these tools? What kinds of checks would ML engineers want to implement on their data? Are there default checks that should always be enabled, or should users define their own set of checks somehow?
How could mllint measure and assess whether a project is making effective use of these tools?
How could mllint measure and assess whether the checks made by these tools passed? This could entail running GreatExpectations or TFDV similarly to how we run the Code Quality linters (Pylint, Mypy, etc.) and parsing their output (bonus points if that output can be produced in a machine-readable format such as JSON or YAML); see the TFDV sketch after this list.
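To make the first two questions more concrete, here is a minimal sketch of what adopting Great Expectations on a pandas-based project could look like. This is only an illustration: the exact API has changed across Great Expectations versions, and the file name, column names and checks are assumptions, not taken from the example project.

```python
import great_expectations as ge

# Wrap an existing CSV dataset in a Great Expectations dataset
# (hypothetical file and column names).
df = ge.read_csv("data/train.csv")

# Examples of the kind of "default" checks a project might always want:
df.expect_column_values_to_not_be_null("id")
df.expect_column_values_to_be_unique("id")
df.expect_column_values_to_be_between("age", min_value=0, max_value=120)

# Run all registered expectations against the data.
result = df.validate()
print(result.success)  # True only if every expectation passed
```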
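For the last question, a rough idea of how running one of these tools could yield machine-readable output that mllint could parse, using TFDV here. The file name is an assumption; the anomalies result is a protobuf message, so it can be serialized to JSON with the standard protobuf utilities.

```python
import tensorflow_data_validation as tfdv
from google.protobuf import json_format

# Compute summary statistics over the dataset (hypothetical file name).
stats = tfdv.generate_statistics_from_csv("data/train.csv")

# Infer a schema from those statistics; in practice the ML engineers would
# curate this schema and check it into the repository.
schema = tfdv.infer_schema(stats)

# Validate the statistics against the schema; the result is an Anomalies proto.
anomalies = tfdv.validate_statistics(stats, schema)

# Serialize the anomalies to JSON, which a tool like mllint could parse.
print(json_format.MessageToJson(anomalies))
```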
Then, to implement it:
Figure out the answers to the above questions.
Determine which linting rules mllint will use to check whether a project is using GreatExpectations correctly
Determine which linting rules mllint will use to check whether a project is using TFDV correctly
Implement the linter to check these rules (just copy the template linter and start editing that); see the detection sketch after this list for the kind of check such a rule could perform
Implement tests for the linter
Write the documentation to go with those rules
Write the documentation for the category
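To give a concrete idea of what such a linting rule could check, here is a rough Python sketch of a dependency-based detection heuristic. This is independent of mllint's actual implementation; the list of dependency files, package names and the rule itself are assumptions for illustration only.

```python
from pathlib import Path

# Dependency specification files a Python project commonly uses
# (assumption: the real rule may inspect more or different sources).
DEPENDENCY_FILES = ["requirements.txt", "pyproject.toml", "setup.py", "Pipfile"]

# Package names that indicate a data quality tool is in use.
DATA_QUALITY_TOOLS = [
    "great-expectations",
    "great_expectations",
    "tensorflow-data-validation",
]


def uses_data_quality_tool(project_dir: str) -> bool:
    """Return True if any dependency file mentions a known data quality tool."""
    for filename in DEPENDENCY_FILES:
        path = Path(project_dir) / filename
        if not path.is_file():
            continue
        contents = path.read_text(errors="ignore").lower()
        if any(tool in contents for tool in DATA_QUALITY_TOOLS):
            return True
    return False


if __name__ == "__main__":
    print(uses_data_quality_tool("."))
```

A real rule would likely go further than this, e.g. checking for a Great Expectations or TFDV configuration in the project and whether the checks actually run (and pass), but dependency detection is a reasonable first signal.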