Do You Need an AI Impact Statement?

Have you seen any of Joseph’s Machines’ viral YouTube videos demonstrating overcomplicated yet mesmerizing contraptions that do simple tasks? My favorite is a pizza-making machine powered by a toy train, a swinging sauce bottle, toppling dominoes, a clockwork ballerina, and a Ferris wheel.

Joseph’s contraptions are Rube Goldberg machines, named after the American cartoonist Rube Goldberg, whose cartoons often depicted devices that performed simple tasks in indirect, convoluted ways. Over the past century these machines have become part of popular culture, from the children’s game Mouse Trap to the movie Back to the Future. There are even contests to build fantastically complicated machines, complete with official rules. According to Guinness World Records, the largest Rube Goldberg machine consists of 412 steps and was achieved by Scandiweb in Riga, Latvia, on December 2, 2016.

While Rube Goldberg machines are a fun way to waste time, our reaction to bureaucracy tends to be quite the opposite. We sigh with frustration at overcomplicated, time-consuming, form-filling compliance processes forced upon us, especially when the underlying task and outcome are trivially simple by contrast.

How can we avoid AI impact statements becoming another administrative compliance burden?

Proportionality

AI has been making front-page news, and not always for the right reasons. High-profile AI failures have included sexist hiring algorithms and racist healthcare programs. But AI failures aren’t limited to ethical lapses. There are also business losses, such as underperforming investments, or even physical harm from dangerous healthcare advice.

Organizations don’t set out to fail. AI failures stem from unintended behaviors of AI systems, the inevitable result of weak or non-existent AI governance. The time for science experiments is over, and rigorous AI governance needs to become the norm. Like any other business or IT project, AI systems need to be managed to deliver ROI and operate reliably.

Calls for improved governance are becoming louder. Recently, UN High Commissioner for Human Rights Michelle Bachelet called for a “moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place.”

But not all AI poses serious risks. Some of the most common uses for AI, such as product recommendation systems, are relatively benign. For example, there is no material harm if an application recommends the wrong song to me, or a website tries to sell shampoo to a bald guy. It hardly makes sense that such an algorithm should be subject to the same governance processes as a use case that could cause serious harm.

An AI-driven organization is likely to deploy dozens or even hundreds of AI systems. Some systems deserve more attention and stricter governance than others. AI governance should be proportionate to the risk.

Start With a Short-Form Assessment

While a full AI impact statement will detail all the risks and mitigation controls in place, a more practical approach is to list all your planned and existing AI deployments and apply a short-form assessment to rank them from highest to lowest risk. Then deep dive into the high-risk use cases at the top of that list. The criteria below can drive that ranking (a scoring sketch follows the list).

Novelty: Things are more likely to go wrong when you are doing something unfamiliar. If your organization’s AI maturity is low or your team is inexperienced, it is more likely to make unintended mistakes. Watch out for projects where AI is being applied to new use cases, new products or business domains, or where it requires new technology.

Complexity: The more complex an AI project, the more potential failure points. Big bang AI projects rarely succeed. Split complex AI projects into smaller and simpler projects, each with its own deliverable. Flag use cases where the AI system will be making decisions that historically required human judgement or discretion, or the decision requires human interpretability. Take care when the operating environment is complex.

Decision-Making Autonomy: The lowest-risk AI systems operate with a human-in-the-loop, providing recommendations to a human employee or customer who makes the final decision. For example, this could be a system that alerts medical staff when a hospital patient has a high risk of sepsis. For use cases that need to operate at scale, human-in-the-loop is not practical, and the most common governance structure is human-over-the-loop, whereby humans sign off on AI system behaviors and authorize the AI system to make individual decisions without intervention so long as it operates within a policy framework. Sometimes a hybrid approach is used, whereby straightforward decisions are made by the AI system, while problematic decisions are triaged to a human decision-maker (see the triage sketch after these criteria). The highest-risk approach is reinforcement learning, whereby an AI system constantly updates what it knows and changes its behavior without any verification or sign-off from a human.

Sensitive Domains: Some business domains and use cases are subject to higher scrutiny than others. Greater care is required when handling sensitive data, such as personally identifiable information, data requiring national security clearances, or data that crosses national borders. Some use cases, such as recruitment, have legal and regulatory restrictions. Certain industries, such as financial services, are subject to greater regulatory scrutiny. You should also flag use cases that are politically sensitive or subject to intense public scrutiny, raise ethical concerns (such as fairness, privacy, or dishonesty), or could infringe on human rights.

Stakeholder Impact: A detailed AI impact assessment is required when there is a risk of severe impact to stakeholders and decisions are not reversible. The more severe the potential impact, the greater the need for a full assessment. Look out for AI use cases that could adversely affect a person’s physical health or materially impact their economic circumstances.
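
To make the hybrid autonomy approach concrete, here is a minimal sketch of confidence-based triage in Python. The threshold, class, and function names are illustrative assumptions, not part of any particular governance framework:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "ai" or "human"

# Hypothetical policy threshold: decisions the model is less
# confident about are routed to a person instead of being automated.
CONFIDENCE_THRESHOLD = 0.90

def triage(prediction: str, confidence: float) -> Decision:
    """Hybrid governance: the AI system finalizes straightforward
    decisions; problematic (low-confidence) ones go to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(outcome=prediction, decided_by="ai")
    # Below the policy threshold: escalate rather than decide autonomously.
    return Decision(outcome="pending human review", decided_by="human")

print(triage("approve", 0.97))  # finalized by the AI system
print(triage("approve", 0.62))  # escalated to a human reviewer
```

The single threshold here stands in for whatever policy framework the human sign-off defines; in practice it might be a set of rules covering confidence, decision type, and customer segment.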
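
To turn the five criteria above into the ranked list this section began with, a short-form assessment can be as simple as scoring each use case from 1 (low risk) to 5 (high risk) on each criterion and sorting by the total. The example projects and scores below are assumptions made up for illustration, not a standard rubric:

```python
from dataclasses import dataclass

CRITERIA = ("novelty", "complexity", "autonomy", "sensitivity", "impact")

@dataclass
class UseCase:
    name: str
    scores: dict  # criterion -> 1 (low risk) .. 5 (high risk)

    @property
    def total(self) -> int:
        return sum(self.scores[c] for c in CRITERIA)

# Hypothetical portfolio; the scores are illustrative only.
portfolio = [
    UseCase("Product recommendations",
            dict(novelty=1, complexity=2, autonomy=3, sensitivity=1, impact=1)),
    UseCase("Sepsis early-warning alerts",
            dict(novelty=3, complexity=4, autonomy=1, sensitivity=5, impact=5)),
    UseCase("Resume screening",
            dict(novelty=2, complexity=3, autonomy=4, sensitivity=5, impact=4)),
]

# Rank from highest to lowest risk, then deep dive into the top entries.
for uc in sorted(portfolio, key=lambda u: u.total, reverse=True):
    print(f"{uc.total:2d}  {uc.name}")
```

Running this puts the sepsis and recruitment use cases at the top and the recommender at the bottom, matching the intuition from the Proportionality section that a wrong song recommendation deserves far less governance than a clinical or hiring decision.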

Conclusion

To “boil the ocean” is to undertake an impossible task, or to make a job or project unnecessarily difficult. Will Rogers, the American humorist, is said to have coined the phrase during World War I, when he jokingly suggested boiling the oceans to deal with German U-boats.

There is no need to boil the ocean with a detailed AI impact assessment for each of the dozens of AI systems you have deployed. List your AI projects and flag the highest-risk ones, considering their novelty, complexity, autonomy, sensitivity, and impact.

The next post in this series will explain the starting point for an AI impact statement: a description of the use case goals and constraints.

About the author

Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government, and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro bono work to support cancer research.
