Systemic biases are more problematic than ever before, and if you have been lucky enough not to be targeted yet, that doesn't mean you won't be in the near future.
Contrary to popular belief, this is not a small-scale issue confined to scrappy startups but a vast phenomenon corroding Big Tech companies from the inside out. Given high-profile cases such as Google Maps' mispronunciations, Amazon's flawed recruiting engine, and Twitter's image-cropping algorithm detecting animals more reliably than Black faces, it's clear that the problem has spread far beyond our reach. While one might blame impoverished data sets, the societal component of AI bias poses a far more ominous concern, as it reflects already existing institutional discrimination and intolerance.
Now that AI systems are increasingly used to make truly high-stakes decisions, such as mortgage underwriting and even criminal-justice predictions, the question is: how do we build AI models that deal with systemic inequality more effectively?