Objective: Ethical AI / Responsible AI: capture the fundamentals of AI ethics and responsible AI from the standpoint of principles, processes, standards, guidelines, ecosystem, and regulation/risk.
- Model explainability and JRT AI - JRT (Justifiable, Responsible and Transparent) AI is becoming critical to model explainability when solving data-driven business problems
- How to apply Responsible AI in practice - this includes checklists, practical toolkits, frameworks, and principles for success.
Category | Description |
---|---|
Risk in deployment | |
Regulatory aspects | |
Provide clarity as much as possible | |
How to approach Bias in AI | |
Category | DOs (AI Should) | DON'Ts (AI Should Not) |
---|---|---|
| | |
| | |
| | |
The Institute for Ethical AI & ML has recommended the following principles
- Human Augmentation
- Bias Evaluation
- Explainability by Justification
- Reproducible Operations
- Displacement Strategy
- Practical Accuracy
- Trust by Privacy
- Security Risks
Please check here
As per the Harvard Business Review (HBR), ethical frameworks for AI aren't enough. Check Here
- Focus on every stage of the ML journey
- It is critical to detail which tasks are performed, step by step, at each stage
- McKinsey podcast on ethical AI
- Awesome Production Machine Learning : A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
- Awesome AI Guidelines : This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards, regulation and beyond
- State of ML Operations 2020 - by Alejandro
- The Institute for Ethical AI & ML
- Ethical Guidelines for Trustworthy AI from European Union
- Fair Machine Learning book: a set of references in PDF format
- Australian Human Rights and Technology
- Some questions pertaining to model bias, fairness, and ethics
- Do we check the causality of features? Does more data help make better decisions here?
- Do we have the ability to debug the model and inspect specifics?
- Are there any associated regulatory requirements that need to be understood in detail?
- Do we trust the model's outcomes, and to what extent?
- Can we define a segregation of critical vs. non-critical domains?
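On the question of debugging ability, a cheap way to see which inputs drive a model's decisions is permutation importance: shuffle one feature's column and measure the drop in accuracy. A minimal sketch below, with a hypothetical model and toy data (all names are illustrative, not from any specific library):

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, n_repeats=10, seed=0):
    """Average drop in a metric when one feature's column is shuffled.
    A large drop means the model leans heavily on that feature."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - metric(y, [model(row) for row in shuffled]))
    return sum(drops) / n_repeats

# Hypothetical model that only looks at feature 0.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.8], [0.1, 0.2]]
y = [1, 1, 0, 0]
print(permutation_importance(model, X, y, 0, accuracy))  # noticeable drop: feature 0 is used
print(permutation_importance(model, X, y, 1, accuracy))  # 0.0: feature 1 is ignored
```

Libraries such as scikit-learn ship a production-grade version of this idea; the sketch only shows the mechanics.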
- Lex Fridman's lecture on Human-Centered Artificial Intelligence: MIT 6.S093
- Stanford Human-centered Artificial Intelligence research
- Google's People + AI research (PAIR) Guidebook
This research paper describes six different types of bias in AI.
- Historical Bias
- Representation Bias
- Measurement Bias
- Aggregation Bias
- Evaluation Bias
- Deployment Bias
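Representation bias, for instance, can be screened for by comparing each group's share in the training sample against its share in the target population. A minimal sketch with hypothetical data (function and variable names are illustrative):

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """For each group, return (sample share - population share).
    Large negative gaps flag under-represented groups in the training data."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - pop_share
            for group, pop_share in population_shares.items()}

# Hypothetical sample where group "B" is under-represented
# relative to a population that is 60% "A" / 40% "B".
sample = ["A"] * 80 + ["B"] * 20
print(representation_gap(sample, {"A": 0.6, "B": 0.4}))
# roughly {"A": +0.2, "B": -0.2}: group "B" is under-sampled by 20 points
```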
- This is needed at the CRISP-DM stages from a holistic point of view (a kind of checklist)
- Problem Formation
- Is an algorithm an ethical solution to the problem?
- Construction of Datasets / Preparation Process
- Is the training data representative of different groups so that we have diverse data representation for appropriate analysis of feature presence?
- Are there biases in labels or features?
- Does the data need to be modified to mitigate bias?
- Selection of Algorithms or Methods
- Do fairness constraints need to be included in the objective function?
- Training Process
- Testing Process
- Has the model been evaluated using relevant fairness metrics?
- Deployment
- Is the model deployed on a population for which it was not trained or evaluated?
- Are there unequal effects across users?
- Monitoring / HITL
- Does the model encourage feedback loops that can produce increasingly unfair outcomes?
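For the testing-process item above ("relevant fairness metrics"), one common group-fairness metric is the demographic parity difference: the gap in positive-prediction rates across groups. A minimal sketch with hypothetical predictions (mature implementations exist in libraries such as Fairlearn):

```python
def demographic_parity_difference(predictions, groups, positive=1):
    """Max minus min positive-prediction rate across groups.
    A value near 0 suggests parity; large values warrant investigation."""
    tallies = {}
    for pred, group in zip(predictions, groups):
        hits, total = tallies.get(group, (0, 0))
        tallies[group] = (hits + (pred == positive), total + 1)
    shares = {g: hits / total for g, (hits, total) in tallies.items()}
    return max(shares.values()) - min(shares.values())

# Hypothetical predictions: group "a" gets a positive outcome 75% of
# the time, group "b" only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap of 0.5 between groups, as in this toy example, would clearly fail any parity check; what threshold is acceptable depends on the domain and the applicable regulation.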
Key objectives for Policy / Governance / Regulatory related frameworks could be as follows:
- Safeguard consumer interest in an AI solution
- Serve as a common, global, consistent reference point
- Foster innovation and more robust solutions
Frameworks:
- Singapore Model AI Governance Framework - by the Personal Data Protection Commission, Singapore
- Scotland AI Strategy: Scotland launched its National AI Strategy and announced that the government is formally adopting the policy guidance, the first country to do so; this offers a unique opportunity for advocacy and engagement with policymakers on this topic. The AI strategy also highlights the Scottish Government's work with the Data for Children Collaborative, a joint partnership between UNICEF and the University of Edinburgh's Data Driven Innovation Programme, which investigates ways of using data to improve the lives of children around the world. Please refer here for details
- OECD Framework for Classification of AI systems: A Tool for effective AI policies
- AI Ethics Framework from World Economic Forum
- Building an Organizational Approach to Responsible AI - MIT Sloan Review
- Responsible AI from Google Cloud - GCP
- Taking care of business with Responsible AI