Computers are smart. They store and process vast amounts of data in the blink of an eye, translate language instantaneously, and even operate machinery. But they have the same emotional intelligence as a brick wall.
Egyptian-American computer scientist Rana al-Kaliouby wants to change that.
The 43-year-old co-founded Affectiva as a spin-off from the Massachusetts Institute of Technology (MIT), where she worked as a research scientist after completing her Ph.D. Specializing in “Emotion AI,” a subfield of artificial intelligence, Kaliouby wants to teach computers to recognize and quantify human emotions.
Technology has created an “empathy crisis,” Kaliouby says — and her mission is to humanize it.
Teaching computers to feel
One of the early applications of Emotion AI that Kaliouby explored was tech-enabled glasses to help children with autism read facial expressions. The glasses offered prompts to the wearer, telling them how to respond to various non-verbal cues like smiling or frowning. “We started to see a lot of really positive improvements in these kids; it was just really powerful,” says Kaliouby. After years of research and development, the technology was officially launched in 2017 for integration into smart glasses such as Google Glass.
Technology has always been a part of Kaliouby’s life. Born in Egypt, her parents were both computer programmers, and she studied computer science at the American University in Cairo before co-founding Affectiva in 2009 with Rosalind Picard.
The company has grown from a university spin-out into an international, multi-million-dollar enterprise headquartered in Boston. But Kaliouby maintains close ties with Egypt, and Affectiva employs about 60 people at its Cairo office, including software engineers and machine learning scientists.
One area where Kaliouby believes Emotion AI could have life-saving applications is the automotive industry: driver monitoring systems that use AI-powered in-car cameras could detect driver distraction and drowsiness, helping to prevent accidents.
In May 2021, Affectiva was acquired by Smart Eye, a leader in eye-tracking technology for driver monitoring, and the two companies are now combining their technologies.
According to the European Commission, 90% of road accidents are caused by human error. That is one reason the EU has introduced regulations requiring all new European vehicles to include this kind of technology from 2022, and similar legislation is being considered in the US. Car manufacturers such as Tesla and General Motors have incorporated driver monitoring tech into some of their vehicles, and Affectiva is already working with major automotive brands including BMW, Porsche and Hyundai.
There are other applications for cars too, says Kaliouby, including cameras that can view the full cabin to customize the driving experience. “If my baby in the backseat is falling asleep, it can dim the lights, or stop the music; it could change the temperature,” she says. “You can personalize the car based on who’s in it, what they’re doing, and how they’re feeling.”
Surveillance or safety?
But using Emotion AI to monitor people can be controversial. Racial and gender bias is a recurring issue, along with privacy concerns. This year, Brazilian metro operator ViaQuatro was fined 100,000 Brazilian reals ($20,000) for capturing facial recognition data without users’ consent. ViaQuatro had installed an emotion-detecting camera system on the São Paulo Metro in 2018 to monitor people’s reactions to advertisements. The company claims the system did not carry out facial recognition, and complied with data protection legislation.
Even theoretically “positive” uses could have more sinister consequences, says Vidushi Marda, an AI researcher, policy advisor and lawyer at Article 19, a human rights organization that defends freedom of expression. For example, driver monitoring systems could double as workplace surveillance for truck or taxi drivers, she says, adding that regardless of the intended use, Emotion AI is “surveillance enabling.” Marda says there is “no easy way to mitigate” the harmful uses of Emotion AI.
Some scientists even dispute the effectiveness of using AI to read emotions. A 2019 meta-study of more than 1,000 research papers found insufficient evidence to support the concept of “universal facial expressions,” and according to a 2020 study, emotion-recognition AI can be less accurate than humans.
Aware of the many concerns surrounding Emotion AI, Kaliouby says Affectiva has a strict opt-in policy for data collection and transparency around how data is being used and stored.
“Anywhere where people’s data privacy considerations are not respected, we’re not going to do,” says Kaliouby, adding that the company turns down those wanting to use the software for surveillance, security or lie detection.
Drawing on 11 million facial responses collected from 90 countries, Kaliouby says Affectiva is trying to build a diverse database that removes age, gender and racial biases from its system. The company is working on a model that combines facial expressions with tone of voice, and accounts for nuances such as culture and context.
In the years to come, Kaliouby hopes Emotion AI can help create a more human, empathetic digital experience.
“The mission is to transform what a human-machine interface looks like, because that’s not going to just improve our interactions with technology, but it’s going to make our own connections with each other in a digital world so much better,” she says.