Photo credit: Flickr/smoothgroover22
Machine-learning models are notoriously susceptible to algorithmic bias, particularly when it comes to people of color. Just a few years ago, risk-assessment software used in the US criminal justice system was shown to disproportionately flag Black defendants as likely to commit future crimes. Then there was the time Google's image-recognition system labeled African Americans as gorillas.