A study by BCS, The Chartered Institute for IT, has found that the UK can set the “gold standard” in ethical artificial intelligence.
The UK – home to companies including DeepMind, Graphcore, Oxbotica, Darktrace, BenevolentAI, and others – is Europe’s leader in AI. However, the country cannot match the funding and support available to counterparts in countries such as the US and China.
Many experts have instead suggested that the UK should draw on its strengths in leading universities and institutions, diplomacy, and democratic values to become a world leader in creating AI that serves humanity.
Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT and a lead author of the report, said:
“The UK should set the ‘gold standard’ for professional and ethical AI, as a critical part of our economic recovery.
We all deserve to have understanding of, and confidence in, AI as it affects our lives over the coming years. To get there, the profession should be known as a go-to place for men and women from a diverse range of backgrounds, who reflect the needs of everyone they are engineering software for.
That might be credit scoring apps, cancer diagnoses based on training data, or software that decides if you get a job interview or not.”
The biases present in many current AI systems could exacerbate existing societal problems, including the wealth gap and discrimination based on race, gender, sexual orientation, age, and more.
“It’s about developing a highly skilled, ethical, and diverse workforce – and a political class – that understands AI well enough to deliver the right solutions for society,” explains Mitchell.
“That will take strong leadership from the government and access to digital skills training across the board.”
Public trust in AI has been damaged by high-profile missteps, including last summer’s exam grading crisis, when an algorithm was used to estimate students’ grades. A follow-up survey from YouGov – commissioned by BCS – found that 53 percent of UK adults had no faith in any organisation using algorithms to make judgements about them.
In May last year, the national press reported that code written by Professor Neil Ferguson and his team at Imperial College London – code that informed the decision to enter lockdown – was “totally unreliable”, further damaging public trust in software. Articles in the science journal Nature have since shown Professor Ferguson’s epidemiological code to be fit for purpose, but most people don’t read Nature and still believe the national press reports that the code was flawed.
The report found a large disparity in the competence and ethical practices of organisations using AI. Among its recommendations is that the government create a framework of standards that both public and private sector organisations must meet when adopting AI.
The UK government’s National Data Strategy states: “Used badly, data could harm people or communities, or have its overwhelming benefits overshadowed by public mistrust.”
BCS’ report, Priorities For The National AI Strategy, builds on the work of the AI Council Roadmap and the National Data Strategy. It has been published to complement the UK government’s plan, the final version of which is due later this year.
A full copy of BCS’ report can be found here (PDF).
(Photo by Ethan Wilkinson on Unsplash)