The EU’s AI rules will likely take over a year to be agreed

Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft EU regulations, companies found to have misused AI face fines of up to €30 million or six percent of their global turnover, whichever is greater. Critics have warned that the risk of such fines could drive AI investment away from Europe.
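
As a purely illustrative sketch of the “whichever is greater” rule (only the €30 million figure and the six percent rate come from the draft; the function name and the turnover figure below are hypothetical), the maximum penalty is simply the larger of the two amounts:

    # Illustrative only: the draft's proposed maximum fine is the greater of
    # EUR 30 million or six percent of a company's global turnover.
    def max_fine_eur(global_turnover_eur: float) -> float:
        return max(30_000_000, 0.06 * global_turnover_eur)

    # Hypothetical example: a firm with EUR 1 billion in global turnover would
    # face up to EUR 60 million, since six percent exceeds the EUR 30m floor.
    print(max_fine_eur(1_000_000_000))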

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric identification in public spaces for law enforcement.

Unacceptable-risk systems will face a blanket ban on deployment in the EU, while limited-risk systems will require only minimal oversight.

Organisations deploying high-risk AI systems would be required to put in place measures such as:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that the EU’s AI regulations will likely take more than another year to agree. The delay stems primarily from debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third-biggest in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 private UK “deep tech” technology firms, more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to see how approaches to AI regulation differ across Europe and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report


