Trusted AI Cornerstones: Key Operational Factors

In an earlier post, I shared the four foundations of trusted performance in AI: data quality, accuracy, robustness and stability, and speed. Today I want to talk about another important aspect of trusted AI: the software and human oversight that facilitate its use. We think of this as the operational environment of an AI system.

To create an ideal operating environment for AI, you need compliance, security, humility, governance, and business rules. 

Meeting Regulatory Expectations

Before you put a model into production, you may first need to clear compliance hurdles. Industries such as banking and credit, insurance, healthcare and biomedicine, hiring and employment, and housing are often tightly regulated. Even digital advertising campaigns might have specific regulatory requirements.

To put your regulatory house in order, enlist stakeholders from legal to information security to your customer. Start by identifying potential compliance risks, then test each subsequent step of the project against those risks.

Typically, three areas require the most attention: model development, implementation, and use. Managing risk for any model involves understanding which monitoring and risk-mitigation procedures apply at each stage.

DataRobot can assist you in satisfying regulatory requirements by automatically generating comprehensive and customizable compliance documentation for any of its modeling approaches applied to your data.

Protecting Sensitive Data

AI and machine learning are fields rife with potential security issues. Training data might include revenue numbers, employee performance reviews, salaries and personal details, sales leads, client data, or patient records, so it is vital that this data be protected.

Transparency is a key consideration with regard to security. At one extreme, a model might be a black box, into which a user supplies data and receives a prediction without any insight into how the model reached that decision. At the other extreme, a white-box model might expose its entire inner workings.

In between, there are ways to share pertinent information, such as prediction intervals, which quantify the confidence of a prediction. Although this information can potentially expose some of the mechanisms of an otherwise secure model, it might also boost the user’s trust in and ability to interpret a prediction. In short, there are trade-offs to navigate when determining how much information to disclose to users.
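
As a concrete illustration, here is a minimal sketch of one common way to produce such intervals, quantile regression, using scikit-learn and synthetic data. This is a generic technique shown for illustration, not a description of any particular product's implementation:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)  # noisy synthetic target

# Fit one model per quantile: lower bound, median, and upper bound.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.05, 0.50, 0.95)
}

x_new = np.array([[4.2]])
lower, mid, upper = (models[q].predict(x_new)[0] for q in (0.05, 0.50, 0.95))
print(f"prediction {mid:.2f}, 90% interval [{lower:.2f}, {upper:.2f}]")

A wide interval is itself useful information: it tells the user which predictions deserve extra scrutiny.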

Knowing When to Trust a Model

Recognizing and admitting uncertainty is a major step in establishing trust. Think of it like deciding what to wear to an outdoor event. Is rain 40% likely? 60%? Such information can give you confidence that you've made the right apparel choice. Like a weather forecast, AI predictions are inherently probabilistic.

A model's predictions may be less certain when it confronts data significantly different from the data it was trained on. An incoming data point might be an outlier, or it might include a value the model has never seen before.
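
A minimal sketch of this kind of check, assuming pandas; the column names and thresholds here are entirely hypothetical:

import pandas as pd

# Hypothetical training data; in practice these statistics would be
# computed once at training time and stored alongside the model.
train = pd.DataFrame({"state": ["CA", "NY", "TX", "CA"],
                      "income": [52_000, 61_000, 48_000, 55_000]})
known_states = set(train["state"])
low, high = train["income"].quantile([0.01, 0.99])

def uncertainty_flags(row: pd.Series) -> list[str]:
    flags = []
    if row["state"] not in known_states:      # category never seen in training
        flags.append("unseen category: state")
    if not (low <= row["income"] <= high):    # numeric outlier vs. training range
        flags.append("outlier: income")
    return flags

print(uncertainty_flags(pd.Series({"state": "HI", "income": 250_000})))
# -> ['unseen category: state', 'outlier: income']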

Interventions to manage uncertainty in predictions vary widely. The least disruptive intervention is to simply log and monitor uncertain predictions, including their triggering conditions. A more disruptive but sometimes necessary measure is to tie errors to alerts that require attention or intervention from a human operator.
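
As an illustration, here is a minimal sketch of that tiered response; the thresholds and the commented-out escalation hook are hypothetical:

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("predictions")

def handle_prediction(pred: float, interval_width: float, row_id: str) -> None:
    if interval_width > 5.0:    # highly uncertain: require human attention
        log.warning("row %s: interval width %.1f, routing to reviewer",
                    row_id, interval_width)
        # notify_reviewer(row_id)  # hypothetical hook into an alerting system
    elif interval_width > 2.0:  # mildly uncertain: log and monitor
        log.info("row %s: uncertain prediction %.2f (width %.1f)",
                 row_id, pred, interval_width)

handle_prediction(pred=12.3, interval_width=6.4, row_id="A-1001")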

Deploying Good Governance

Governance refers to the human-machine infrastructure that oversees the development and operation of a machine learning model. It is fundamental to creating trusted AI.

AI governance involves monitoring, traceability, and version control to track errors in your system. Good documentation also makes it easier to retrain a model or update the process. Other important considerations for governing AI models include a broad perspective on accuracy, assessments of incoming scoring data, and a record of predictions that can be monitored for instability.
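
For instance, here is a minimal sketch of a traceable prediction record; the field names and version tag are hypothetical, but the point is that every prediction is tied to the exact model version that produced it:

import json
import uuid
from datetime import datetime, timezone

record = {
    "prediction_id": str(uuid.uuid4()),
    "model_version": "churn-model-2.3.1",  # hypothetical version tag
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "inputs": {"tenure_months": 18, "plan": "pro"},
    "prediction": 0.82,
}
print(json.dumps(record))  # in practice, append to a durable audit log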

Monitoring accuracy and data drift can help you know when it is time to retrain a model on newer data. But changes can happen suddenly. When they do, solid backups and built-in redundancies help protect your sensitive processes in the case of a black swan event or system outage.
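
One simple, widely used way to check a numeric feature for drift is a two-sample Kolmogorov-Smirnov test. This sketch assumes SciPy and synthetic data, and the 0.05 threshold is a common but arbitrary choice:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1_000)
incoming_feature = rng.normal(loc=0.5, scale=1.0, size=1_000)  # shifted: drift

stat, p_value = ks_2samp(training_feature, incoming_feature)
if p_value < 0.05:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2g}); consider retraining")

In production, you would run such checks on a schedule for every important feature, not just one.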

Deciding When and How to Deploy an AI Model

Lastly, you need to adapt your existing business rules and expectations to guide your implementation of an AI model. You know your business best. Any AI model is, first and foremost, a tool at your disposal. Revisit the model frequently to determine whether it continues to work well for you. Remember that events such as major holidays or the COVID-19 pandemic can send a model into untrustworthy territory.

Beyond predictions, you can choose to receive information on a model’s confidence and factors that influence its decisions. Like the standard practice with a credit score, a model can show how top features and their values influenced a prediction. Seeing those values confirmed—and getting some insight into the reasoning behind how the model used them—can go a long way to establishing trust in the prediction itself.
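
The post doesn't name a specific method, but one widely used open-source option for per-prediction feature attributions is SHAP. A minimal sketch on synthetic data:

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
contribs = explainer.shap_values(X[:1])[0]  # one row's feature contributions

for name, value, c in zip(["feature_0", "feature_1", "feature_2"], X[0], contribs):
    print(f"{name}={value:+.2f} contributed {c:+.2f} to the prediction")

Sorting these contributions by magnitude gives the "top features" view described above.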

Creating trusted AI systems is essential to developing a world where human ingenuity is enhanced by the speed and precision of AI and machine learning. Using the right operational guidelines can help ensure solid, trustworthy results.

About the author

Sarah Khatry

Applied Data Scientist, DataRobot

Sarah is an Applied Data Scientist on the Trusted AI team at DataRobot. Her work focuses on the ethical use of AI, particularly the creation of tools, frameworks, and approaches to support responsible but pragmatic AI stewardship, and the advancement of thought leadership and education on AI ethics.
