Artificial intelligence is racist, sexist because of data it’s fed
Artificial intelligence is “shockingly” racist and sexist, a study has revealed.
Source: Alahna Kindred
Researchers looked at a range of systems and datasets and found examples where AI had produced inaccurate predictions for women and minorities.
In one example, the team from the Massachusetts Institute of Technology looked at an income prediction system and discovered it was twice as likely to misclassify female employees as low-income and male employees as high-income.
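A finding like this comes from comparing a classifier's error rate group by group rather than overall. The snippet below is a toy illustration of that audit, not the MIT team's code or data; the labels and group tags are made up for the example.

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return {group: fraction of examples in that group the model got wrong}."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Hypothetical labels: 1 = high income, 0 = low income.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 1]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

rates = group_error_rates(y_true, y_pred, groups)
# Here the model errs twice as often for group "f" as for group "m" --
# the same kind of gap the researchers reported, in miniature.
```

An overall accuracy number would hide this gap entirely, which is why per-group evaluation is the first diagnostic step.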
However, the team was able to adjust the system to make sure it was less biased.
When researchers increased the dataset by a factor of 10, they found the mistakes decreased by 40 percent.
Irene Chen, a Ph.D. student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson, said it comes down to using better data.
She said: “Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms.
“But algorithms are only as good as the data they’re using and our research shows that you can often make a bigger difference with better data.”
In another example, researchers found an AI system’s ability to predict intensive care unit mortality was inaccurate for Asian patients.
They warned that using existing methods to fix the system would make the predictions less accurate for non-Asian patients.
Typically, researchers would simply add more data to the system, but Chen said the quality of the data matters as much as the quantity.
Instead, researchers should collect more data specifically from underrepresented groups.
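Before collecting more data, a first step is simply to measure how well each group is represented in the training set. This small sketch is an assumption about what such a check might look like, not the authors' toolbox:

```python
from collections import Counter

def representation_report(groups):
    """Return each group's share of the dataset, smallest first."""
    counts = Counter(groups)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    return dict(sorted(shares.items(), key=lambda kv: kv[1]))

# Hypothetical group labels attached to training examples.
share = representation_report(
    ["asian", "white", "white", "white", "black", "white", "white", "asian"]
)
# Groups with a small share are candidates for targeted data collection,
# rather than blindly growing the whole dataset.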
Sontag said: “We view this as a toolbox for helping machine-learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions.”
The research team will present their paper in December at the Neural Information Processing Systems conference in Montreal.