Issue: No open issues :-) #16

Closed

baskervilski opened this issue Jun 19, 2021 · 2 comments

Comments

@baskervilski

Dear Bart,

I would like to contribute to your project, but I don't see where the most useful place to start would be. :-)

Suggestions would be highly welcome. :-)

I'm an ML engineer primarily using Python, btw. :-)

Regards,

Nemanja

@bvobart (Owner) commented Jun 19, 2021

Hi Nemanja,

Thanks a lot for your interest in contributing to mllint! 😊

I've been developing mllint solely by myself up until now, so I have indeed not really been using issues; instead, I've been keeping track of feature planning privately. I know GitHub has this 'Projects' feature (which I've never used before tbh, but it should be fairly similar to a sprint / Kanban board), so I'll go play around with that soon and put the roadmap for mllint in there.

In the meantime, since you're a senior ML engineer primarily using Python, I could very much use your help in finding out the best / most recommended way of using some Python ML tools. More specifically, as you may remember from the knowledge sharing session yesterday, I'm planning on building a linter for the Data Quality category to check whether and how a project uses tools to assert quality standards on its input data, e.g. GreatExpectations, TFDV and (maybe) Bulwark, the successor of Engarde. However, while I know of these tools, I don't have any hands-on experience with them. Do you happen to have experience with any of them? Would you be able to play around with GreatExpectations and TFDV in particular, to figure out for me:

  1. How to apply these tools to an ML project? What generally needs to change about a project in order to adopt such a tool, and how much effort does that take? You can use the latest branch of the basic project in the mllint-example-projects repo as a base for this, but feel free to use an ML project of your own if you prefer :) (a rough sketch of basic usage follows after this list)
  2. What constitutes effective use of these tools? What kind of checks should / would ML engineers want to implement on their data? Are there any default checks that should always be enabled, or should the user create a certain set of their own checks somehow?
  3. How could mllint measure and assess whether a project is making effective use of these tools?
  4. How could mllint measure and assess whether the checks made by these tools passed? This could entail running GreatExpectations or TFDV in a similar way to what we do for Code Quality linters (Pylint, Mypy, etc.) and parsing the output (bonus points if this output can be formatted in a machine-readable way such as JSON or YAML).
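To make points 1 and 4 a bit more concrete, here is a rough, untested sketch of what basic usage of both tools could look like on a pandas DataFrame. The file path and column names are made up, and the calls assume the classic Great Expectations pandas API and TFDV's statistics/schema workflow as of mid-2021, so exact method names may differ per version:

```python
import pandas as pd
import great_expectations as ge                      # classic pandas-dataset API (assumption)
import tensorflow_data_validation as tfdv
from google.protobuf import json_format

df = pd.read_csv("data/train.csv")                   # hypothetical dataset and columns

# --- Great Expectations: attach expectations to the DataFrame and validate them ---
ge_df = ge.from_pandas(df)
ge_df.expect_column_values_to_not_be_null("age")
ge_df.expect_column_values_to_be_between("age", min_value=0, max_value=120)
results = ge_df.validate()                           # aggregated pass/fail results,
print(results)                                       # JSON-serialisable -> relevant to point 4

# --- TFDV: compute statistics, infer a schema, then validate the data against it ---
stats = tfdv.generate_statistics_from_dataframe(df)
schema = tfdv.infer_schema(statistics=stats)
anomalies = tfdv.validate_statistics(statistics=stats, schema=schema)
print(json_format.MessageToJson(anomalies))          # Anomalies is a protobuf, so JSON output is easy
```

If this is roughly how these tools end up being used, then mllint could presumably detect them through the project's dependencies (similar to the Code Quality linters) and parse such JSON output to assess whether the checks passed.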

Let's also have a 1-1 meeting next week to talk some more about this collaboration! I'll contact you about it privately within ING ;)

@bvobart (Owner) commented Jun 21, 2021

I've created an issue for that Data Quality linter here: #19

More issues for other tasks are underway.

bvobart closed this as completed Jun 21, 2021