
Humans and AI: Organizational Change



According to McKinsey, “Research shows that 70 percent of complex, large-scale change programs don’t reach their stated goals. Common pitfalls include a lack of employee engagement, inadequate management support, poor or nonexistent cross-functional collaboration, and a lack of accountability.” 

Last year I was doing some spring cleaning and looking for space in my home office for a digital piano. As I pulled books from my bookcase, packing them into boxes to go into storage, I found my Blockbuster Video membership card. I’d tucked it inside a book as a bookmark. That membership card must be more than 10 years old. In addition to the books, I also put my DVD player into storage. With the easy availability of streaming services like Netflix, Hulu, and Apple TV, I can’t remember the last time I rented a video or used a DVD.

Netflix launched its DVD-by-mail service in 1998 and soon moved to a subscription model that replaced trips to rental stores with home delivery. In 2000, Netflix offered Blockbuster a partnership, but the home movie provider turned it down. Seven years later, when Netflix shifted its business model to streaming content, it wasn’t long before Blockbuster went out of business.

This was not a technology failure, but a failure to embrace organizational change.

Humans Are Here to Stay

AI won’t replace humans. AI will create jobs—and contrary to what you might expect, these jobs won’t just be for computer geeks.

The key reason is comparative advantage. David Ricardo developed the economic theory in 1817 to explain why countries engage in international trade even when one country’s workers are more efficient at producing every single good than workers in other countries. It isn’t the absolute cost or efficiency that determines which country supplies which goods or services. It is the relative strengths or advantages of producing each good or service in each country and the opportunity cost of not specializing in what you do best. The same principle applies to humans and computers.
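To make the arithmetic behind comparative advantage concrete, here is a minimal sketch with invented throughput numbers (illustrative assumptions, not figures from any study): even when the computer is faster at both tasks, the party with the lower opportunity cost for a task is the one that should specialize in it.

```python
# Hypothetical throughput figures (invented for illustration only):
# units each party can produce per hour of work.
throughput = {
    "computer": {"records_processed": 1000, "replies_drafted": 100},
    "human":    {"records_processed": 50,   "replies_drafted": 20},
}

def opportunity_cost(worker: str, task: str, other_task: str) -> float:
    """Units of `other_task` given up to produce one unit of `task`."""
    t = throughput[worker]
    return t[other_task] / t[task]

for worker in throughput:
    cost = opportunity_cost(worker, "replies_drafted", "records_processed")
    print(f"{worker}: 1 drafted reply costs {cost:g} processed records")

# computer: 1 drafted reply costs 10 processed records
# human: 1 drafted reply costs 2.5 processed records
# Even though the computer is faster at both tasks, the human gives up
# fewer records per reply, so the human holds the comparative advantage
# in drafting replies and the computer in processing records.
```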

Computers are at their best doing repetitive tasks, mathematics, data manipulation, and parallel processing. These comparative strengths are what propelled the Third Industrial Revolution, which gave us today’s digital technology. Many of our business processes already take advantage of these strengths. Banks have massive computer systems that handle transactions in real time. Marketers use customer relationship management software to store information about millions of customers. If a task is repetitive, frequent, or common, automate it. If it has a predictable outcome, and you have suitable data to reach that outcome, then automate that workflow.

Humans are strongest at communication and engagement, context and general knowledge, common sense, creativity, and empathy. We are inherently social creatures. Research shows that customers prefer to deal with humans, especially in situations when they experience a problem and want help solving it. Don’t replace human interactions with computers. Don’t force customers to use an automated system and press buttons when they just want to hear a human voice and talk to someone who will fix their problem.

As part of your organizational transformation to an AI-driven enterprise, you will need to redesign work tasks with the comparative strengths of humans and computers in mind. But how easy is it to evaluate the strengths of each? Can humans reliably self-assess whether AI outperforms them?

The Dunning-Kruger Effect

The Dunning-Kruger effect is a cognitive bias in which unskilled persons overestimate their capabilities. It is strongly related to the cognitive bias of illusory superiority and comes from people’s inability to recognize their lack of ability. In popular culture, people who exhibit this bias are sometimes described as “knowing just enough to be dangerous” or said to be on “Mount Stupid.” In “Why People Fail to Recognize Their Own Incompetence,” the authors indicate that the incorrect self-assessment of competence derives from a person’s ignorance of the standards of performance for a given activity. In other words, the person doesn’t know enough about the topic to understand how little they know.


The Dunning-Kruger effect implies that as people learn more about a topic, they develop a more realistic understanding of its complexity and a more modest self-assessment of their capabilities. As Einstein is often quoted as saying, “The more I learn, the more I realize how much I don’t know.”

Successful organizational change depends on buy-in from your employees. If they don’t trust the AI system, they won’t use it and won’t follow its decisions. If employees believe they know more than the AI, they will become blockers. 

There are three pillars of trusted AI:

  • Shared goals
  • An intuitive understanding of how the AI makes decisions
  • Reliability (AI that works as planned)

With limited resources but a mandate for AI transformation, how should organizations motivate their employees to embrace organizational change? Which employees are most likely to accept AI decisions? Which employees are most likely to reject AI because they overestimate their abilities?

Algorithm Appreciation

Recently published research sheds light on when people rely on algorithmic advice over human judgment. In “Algorithm appreciation: People prefer algorithmic to human judgment,” the authors report the results of six experiments.

In the first experiment, study participants were shown a photograph of a person and asked to estimate the person’s weight. After participants made their estimates, they received advice and were asked to make another estimate. Although all participants received the same advice, some were told the advice came from a person while others were told it came from an algorithm. After each estimate, participants used a score from 1 to 100 to indicate their confidence. After the participants received advice, the researchers measured two things: how much the participants changed their estimates and their level of confidence. They concluded that participants relied more on advice when they were told it came from an algorithm. Their self-reported confidence increased more when they believed they received algorithmic advice.
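One common way to quantify how far a judge moves toward advice in this line of research is a weight-of-advice style measure: the fraction of the gap between the initial estimate and the advice that the revised estimate closes. The sketch below is a hedged illustration with made-up numbers; the paper’s exact scoring may differ.

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Fraction of the gap between the initial estimate and the advice
    that the revised estimate closes. 0 = advice ignored, 1 = advice
    adopted outright; values outside [0, 1] mean the reviser overshot
    or moved away from the advice."""
    if advice == initial:
        raise ValueError("Advice equals the initial estimate; weight is undefined.")
    return (final - initial) / (advice - initial)

# Hypothetical weight-estimation trial: first guess 150 lb, advice says
# 170 lb, revised guess 165 lb -> the participant moved 75% of the way.
print(weight_of_advice(initial=150, advice=170, final=165))  # 0.75
```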

The second experiment was similar, except study participants were given the more subjective task of predicting the popularity of songs. Participants viewed a graph of each song’s ranking from previous weeks and made a forecast for the coming week by entering a predicted placing from 1 to 100. Subsequent experiments covered more decision domains, like predicting whether one person will appreciate another’s sense of humor. The results of each experiment confirmed the conclusion of the first experiment: people preferred algorithmic judgment over human judgment.

Subsequent experiments assessed the robustness of this preference for algorithmic advice. In one experiment, some participants were given a choice over the source of advice: human or algorithm. In another, participants chose between their own estimate and that of an algorithm. Both experiments confirmed significant algorithm appreciation, but the magnitude of the effect decreased when participants chose between their own estimates and those of an algorithm. This weakening of the algorithm appreciation effect might be due to cognitive biases related to self-identity, such as illusory superiority.

The last experiment examined whether the expertise of a study participant influences algorithm appreciation. It compared the advice preferences of experts in the field of geopolitical forecasting and those of laypersons. Unlike participants in the earlier experiments, the experts did not recognize the value of algorithmic advice. Experts trusted their own judgment over the information offered to them. Their cognitive bias ultimately lowered their accuracy relative to the layperson sample. Experts who ignored algorithmic estimates underperformed both the algorithm and the layperson participants!

In summary, people who know they aren’t experts will adhere more to advice when they think it comes from an algorithm rather than a person. This tendency is rational because the participants believed that unlike the layperson dispensing advice, the algorithm had been trained on the problem. Paradoxically, experienced professionals who regularly make forecasts relied less on algorithmic advice than laypersons, which hurt their accuracy. These results are the opposite of those expected from the Dunning-Kruger effect! Although the research paper did not attempt to discover the source of this cognitive bias, it might be due to psychological effects related to identity and self-esteem. One such effect is the cognitive bias known as the IKEA effect, in which people place a disproportionately high value on products they partially created.

Humans and AI Best Practices

Given the research results, your expert employees, including decision-makers, data scientists, and the analytics team, are the most difficult to convince to engage with AI transformation. After all, your experts are only human. Other employees, particularly those who follow processes or use advice from third parties, are more open to using AI.

For this reason, put more of your organizational change resources into building engagement with expert employees. While we wait for reliable peer-reviewed research on the best way to get these experts to trust AI, we can adopt behavioral strategies to reduce the effect of cognitive biases. One such strategy is to run trials, overseen by a neutral third party, that objectively compare the accuracy and business value of AI decisions against human judgment. Do not rely on self-reporting, opinions, or assertions about AI’s reliability or lack of it.
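As a rough sketch of what such a third-party trial might measure, the snippet below scores AI recommendations and human decisions against the same ground-truth outcomes on a set of blinded cases. The case structure and numbers are placeholder assumptions; the point is that trust should rest on measured accuracy rather than self-reported confidence.

```python
from dataclasses import dataclass

@dataclass
class Case:
    actual: bool          # ground-truth outcome, known only after the fact
    ai_decision: bool     # what the model recommended
    human_decision: bool  # what the expert decided without seeing the model

def accuracy(cases, pick):
    """Share of cases where the chosen decision matched the actual outcome."""
    return sum(pick(c) == c.actual for c in cases) / len(cases)

# Placeholder trial data; in practice a neutral third party would collect
# hundreds of blinded cases before comparing the two decision sources.
trial = [
    Case(actual=True,  ai_decision=True,  human_decision=False),
    Case(actual=False, ai_decision=False, human_decision=False),
    Case(actual=True,  ai_decision=True,  human_decision=True),
    Case(actual=False, ai_decision=True,  human_decision=False),
]

print("AI accuracy:   ", accuracy(trial, lambda c: c.ai_decision))
print("Human accuracy:", accuracy(trial, lambda c: c.human_decision))
```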

To reduce the impact of cognitive biases, use data science tools that: 


About the author

Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.

