
EI and AI

By Renée Adams

December 4, 2020


Part 3 in a series on the past, present, and future of EI

Artificial Intelligence (AI) was the stuff of Isaac Asimov stories and science-fiction films for decades, but recent developments have brought it squarely into our daily lives. AI is a system of “thought” that a program uses to make decisions. It is programmed. It is believed to be under our control. And it is becoming a part of our everyday lives. For our future integration with AI to be seamless, the people programming these systems will need not only an understanding of EI, but also a dedication to ethics and moral philosophy.


Novelty to Necessity

Every day, it seems, AI has replaced something else in our lives. How many of us have willingly let Alexa and Siri into our homes? From automated hiring programs to self-driving cars, AI is abundant, whether we like it or not. The trend is only accelerating, and it’s safe to assume that as AI is developed and refined, it will become more pervasive still. But each of these examples of AI carries with it a lesson in EI, and unfortunately, the results so far are overwhelmingly negative.

One example, reported by Reuters, is Amazon’s automated recruitment tool. The program was meant to screen résumés and surface the best candidates, but analysts quickly noticed a troubling pattern. Trained on résumés submitted mostly by men, the system penalized résumés that included the word “women’s,” as in “women’s chess club captain,” and downgraded graduates of women’s-only colleges. People quickly blamed the male-dominated programming team for allowing their own biases into the product. Fixing the problem proved difficult, and Amazon eventually scrapped the entire project rather than repair it.

Daniel Goleman, author of Emotional Intelligence: Why It Can Matter More Than IQ, takes a strong stance on EI and AI. He believes we need more empathetic and self-aware programmers building AI. At the root of these issues is the fact that AI itself cannot, at least for now, be emotionally intelligent. As he puts it, “And for the foreseeable future, our relatively minimal understanding of consciousness will limit the consciousness we can program artificially.” But maybe, some day, a deeper understanding of the human brain will allow us to create a more convincing facsimile of consciousness.

Driving in Circles

With self-driving cars becoming more common, the intersection of AI and moral decision-making has never been more tangible. When a human driver swerves into the oncoming lane to avoid hitting a pedestrian, they are making a moral decision that puts themselves and others at risk. We now expect our vehicles to make these moral decisions for us, even in situations where there is no clear “right answer.” Writing for nature.com, Amy Maxmen says,

“…many of the moral principles that guide a driver’s decisions vary by country. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally.”

Every time someone gets behind the wheel, subtle and nuanced questions of morality are involved. These questions have different answers depending on the country and culture in which you ask them, and some have no answer at all. So how can we expect AI, and the people programming it, to come up with satisfactory answers? It is impossible to program a system that makes everyone happy. In life-and-death scenarios, there will always be the looming question: what if there had been a human at the wheel? This is exactly why we need more emotionally intelligent programmers, but even then, there may be no be-all and end-all solution.

Guiding Big Tech’s Morality

Who we trust to develop AI will determine the course our future takes. According to a 2018 article in MIT Technology Review, about a dozen Google employees quit and thousands more signed a petition in protest of Google’s involvement in Project Maven, a Department of Defense initiative that used machine learning to analyze drone footage and, among other aims, improve the accuracy of drone strikes. The episode shows what a collective response to abuses of AI can lead to: because of it, Google withdrew from Project Maven and stepped back from similar military projects.

In his writing, Daniel Goleman says that “putting AI in the hands of for-profit companies poses an ethical risk.” Expecting for-profit companies to base their AI’s morality on anything other than what is profitable is naïve and unrealistic. It will be up to the emotionally intelligent employees and executives at these companies to hold their bosses and programmers accountable to their utopian ideals. If AI is to benefit society, we cannot leave morality up to machines. We have to decide how and when AI is used, and that decision has to rest on a collective understanding of the risks and rewards involved.
