From Siri and Alexa to algorithms on Facebook and beyond, artificial intelligence is becoming more commonplace in our daily routines than ever before. However, a general understanding of its implications is not as widespread.
“Artificial intelligence, you can think of it as software that continues to learn without being explicitly programmed,” David Karandish said on Monday’s St. Louis on the Air. “With AI you have algorithms that are designed to learn and continue to take on new data in order to make better decisions over time.”
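As a rough illustration of that definition, the Python sketch below (using scikit-learn and made-up synthetic data, not anything from Jane.ai) shows a classifier that is never given explicit rules but keeps updating itself as new batches of data arrive:

```python
# A minimal sketch of "software that continues to learn": an online
# classifier that updates its parameters as new data arrives, rather
# than being programmed with fixed rules. The data is synthetic and
# purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# First batch of data: the model learns an initial decision boundary.
X0 = rng.normal(size=(100, 2))
y0 = (X0[:, 0] + X0[:, 1] > 0).astype(int)
model.partial_fit(X0, y0, classes=[0, 1])

# As new data streams in, the model keeps taking it on and adjusting,
# aiming to make better decisions over time.
for _ in range(10):
    X_new = rng.normal(size=(20, 2))
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    model.partial_fit(X_new, y_new)
```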
Karandish, co-founder and CEO of Jane.ai, joined host Don Marsh along with implicit bias researcher Calvin Lai to discuss the crossover between artificial intelligence and unconscious biases.
“I think artificial intelligence has a lot of potential and in many ways it can be a powerful mechanism for reducing human biases because it leads to consistent decision making,” said Lai, who is a professor of psychological and brain sciences at Washington University. “But at the end of the day, a lot of the initial parameters that are set on to AI are designed by humans, so there is still the worry of there being systematic biases on the basis of race, gender or other things like social class that we might not be aware of that we’re baking into these AI systems.”
Lai offered the example of early voice-recognition technology that, because it was tested primarily on white American men, did not work as well for other groups of people.
“It ended up being the case that this recognition software was really good at recognizing and responding to the voices of white American men and less useful for women or people with non-American accents,” Lai explained. “Inequality wasn’t the intention, but inequality was the outcome.”
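That failure mode is easy to reproduce in miniature. In the hypothetical sketch below, synthetic clusters stand in for the voices of different groups; a model trained on a sample that is 95 percent one group scores well for that group and close to chance for the other:

```python
# A toy illustration of Lai's point: a model trained mostly on one
# group can be accurate for that group and much worse for others.
# The "groups" and data are synthetic stand-ins, not real speech data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Each group's examples cluster in a different region of feature
    # space (a crude proxy for different voices and accents).
    X = rng.normal(loc=shift, size=(n, 2))
    y = (X[:, 0] - shift > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B, an unrepresentative sample.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluating each group separately exposes the gap the overall
# accuracy number would hide.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", model.score(X_test, y_test))
```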
Karandish pointed out a vital aspect of the problem the example presents.
“It’s not that the algorithms themselves are biased, that’s like saying the Pythagorean Theorem is biased against circles,” Karandish said. “It’s the fact that the training data is so paramount in order to get the right results.
“If you want to be able to design a system to reduce as many of these biases as possible, you ought to make sure the system is self-correcting, so you don’t just train it once and then you’re done,” Karandish said. “Second of all, you have a representative sample of people in your training set … and then lastly, it’s important that anyone can submit information to the AI to train it, but you want to be able to moderate it as well.”
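Those three guidelines, open submission, human moderation and continual retraining, might fit together roughly as follows; the ModeratedTrainer class and its methods are invented here for illustration, not taken from Jane.ai:

```python
# A minimal sketch of Karandish's three guidelines: anyone can submit
# training examples, a moderator approves them before the model sees
# them, and the model is retrained periodically rather than trained
# once. All names here are hypothetical.
class ModeratedTrainer:
    def __init__(self, model):
        self.model = model
        self.pending = []    # submissions awaiting moderation
        self.approved = []   # moderated examples used for retraining

    def submit(self, example, label):
        # Open submission: anyone can propose training data.
        self.pending.append((example, label))

    def moderate(self, is_acceptable):
        # A human moderator filters submissions before they are learned.
        for example, label in self.pending:
            if is_acceptable(example, label):
                self.approved.append((example, label))
        self.pending.clear()

    def retrain(self):
        # Self-correcting: retrain on the growing, moderated dataset
        # instead of freezing the model after its first training run.
        if self.approved:
            X, y = zip(*self.approved)
            self.model.fit(list(X), list(y))
```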
Regarding the moderation of those results, Lai added, “I think a lot of times these AI systems are best complemented with humans at the end of it [who] can take the AI outputs and kind of complement it with what they know.”
On that note, Karandish agreed with Lai.
“You want to keep people in the loop to keep basic guardrails in place to make sure that the AIs don’t come to the wrong decisions,” Karandish said.
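One common way to keep a person in the loop is a confidence threshold: the system acts on predictions it is sure about and hands the rest to a human reviewer. A minimal sketch, assuming a scikit-learn-style classifier with a predict_proba method:

```python
# A minimal sketch of a human-in-the-loop guardrail: predictions the
# model is unsure about are routed to a person instead of being acted
# on automatically. The threshold and function names are assumptions.
CONFIDENCE_THRESHOLD = 0.9

def decide(model, x, human_review):
    # predict_proba follows the scikit-learn convention of returning
    # one probability per class.
    probabilities = model.predict_proba([x])[0]
    confidence = probabilities.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        return probabilities.argmax()      # AI decides on its own
    return human_review(x, probabilities)  # guardrail: a person decides
```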
He went on to call attention to the primary demographic represented in the tech field, Caucasian males, stating that “you’re not going to get a representative sample of other races, other socioeconomic backgrounds, to come in and program from the beginning.
“So it’s almost just as important that we start early teaching kids computer science, giving them access to these tools so that that next generation continues to be filled with folks that have been coding since they were kids from every walk of life,” he said.