
‘Information gap’ between AI creators and policymakers needs to be resolved – report


An article posted by the World Economic Forum (WEF) has argued there is a ‘huge gap in understanding’ between policymakers and AI creators.

The article, authored by Adriana Bora, AI policy researcher and project manager at The Future Society, and David Alexandru Timis, outgoing curator at Brussels Hub, explores how to resolve accountability and trust-building issues with AI technology.

Bora and Timis note there is “a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s development and deployment cycle.” As a result, the two add, this governance “needs to be designed under continuous dialogue utilising multi-stakeholder and interdisciplinary methodologies and skills.”

Put simply, both sides need to speak the same language. Yet while AI creators have the information and understanding, the same cannot be said of regulators, the authors note.

“There is a limited number of policy experts who truly understand the full cycle of AI technology,” the article noted. “On the other hand, the technology providers lack clarity, and at times interest, in shaping AI policy with integrity by implementing ethics in their technological designs.”

Examples of unethical AI practice, or where inherent bias is built into systems, are legion. In July, MIT apologised for, and took offline, a dataset which trained AI models with misogynistic and racist tendencies. Google and Microsoft have also fessed up to errors with YouTube moderation and MSN News respectively.

Artificial intelligence technology in law enforcement has also been questioned. More than 1,000 researchers, academics and experts signed an open letter in June questioning an upcoming paper which claimed to be able to predict criminality from automated facial recognition. Separately, in the same month, the chief of Detroit Police admitted the force's AI-powered face recognition did not work the vast majority of the time.

Google has been under fire of late; the firing last week of Margaret Mitchell, who co-led the company's ethical AI team, has added to the negative publicity. Mitchell confirmed her dismissal on Twitter. A statement from Google to Reuters said the firing followed an investigation which found Mitchell had moved electronic files outside of the company.

In December, Google fired Timnit Gebru, another leading figure in ethical AI development, who claimed she was dismissed over an unpublished paper and an email critical of the company's practices. Mitchell had previously written an open letter detailing her 'concern' over that firing. Per an Axios report, the company made changes to 'how it handles issues around research, diversity and employee exits' following Gebru's dismissal. As this publication reported, Gebru's departure prompted other employees to leave, among them software engineer Vinesh Kannan and engineering director David Baker.

Bora and Timis emphasised the need for ‘ethics literacy’ and a ‘commitment to multidisciplinary research’ from the technology providers’ perspective.

“Through their training and during their careers, the technical teams behind AI developments are not methodically educated about the complexity of human social systems, how their products could negatively impact society, and how to embed ethics in their designs,” the article noted.

“The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time,” Bora and Timis added. “With increased investments in AI, technology companies are encouraged to identify the ethical consideration relevant to their products and transparently implement solutions before deploying them.”

This could, in theory, avert the hasty withdrawals and profuse apologies that follow when models behave unethically. Yet the researchers also noted that policymakers need to step up.

“It is only by familiarising themselves with AI and its potential benefits and risks that policymakers can draft sensible regulation that balances the development of AI within legal and ethical boundaries while leveraging its tremendous potential,” the article noted. “Knowledge building is critical both for developing smarter regulations when it comes to AI, for enabling policymakers to engage in dialogue with technology companies on an equal footing, and together set a framework of ethics and norms in which AI can innovate safely.”

Innovation is taking place with regard to solving algorithmic bias. In the UK, as this publication reported in November, the Centre for Data Ethics and Innovation (CDEI) has created a ‘roadmap’ to tackle the issue. The CDEI report focused on policing, recruitment, financial services, and local government, as the four sectors where algorithmic bias posed the biggest risk.

You can read the full WEF article here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
