Removing-Bias-from-ML-Models

In this project, I analyzed two datasets from the University of California Irvine Machine Learning Repository, both from the Social Sciences category, where data often carries significant bias against certain population groups. When exploring data for hidden patterns and insights, bias and ethical considerations are frequently set aside because they complicate the analysis and can change the results. Addressing them is nonetheless crucial: both academic and industry researchers increasingly recognize the importance of fairness in Artificial Intelligence.
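
As a rough illustration of the kind of bias check performed here, the sketch below computes a demographic-parity gap on a UCI Social Sciences dataset. The repository text above does not name the specific datasets, so the example assumes the UCI Adult (Census Income) dataset purely for illustration; the column names follow the UCI documentation for that dataset.

```python
# Minimal sketch (not the repository's actual analysis): quantify dataset bias
# by comparing favorable-outcome rates across a protected attribute.
# Assumes the UCI Adult (Census Income) dataset as an illustrative example.
import pandas as pd

URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
COLUMNS = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week", "native-country", "income",
]

df = pd.read_csv(URL, names=COLUMNS, skipinitialspace=True, na_values="?")

# Favorable outcome: income above 50K.
df["high_income"] = (df["income"] == ">50K").astype(int)

# Demographic parity: rate of the favorable outcome per group of the
# protected attribute (here, "sex").
rates = df.groupby("sex")["high_income"].mean()
print(rates)

# A large gap between groups suggests the dataset encodes historical bias
# that a model trained on it could reproduce.
print("Demographic parity gap:", rates.max() - rates.min())
```

The same pattern extends to other protected attributes (e.g., race or age bands) by changing the groupby column, which is one simple way to surface the kinds of disparities discussed above before any model is trained.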

About

Analyzed two datasets from UCI's ML repository to identify bias in Social Sciences data. Highlighted the importance of addressing bias and ethics in AI, emphasizing fairness in both academic and industry research.
