The intelligence agency’s first-ever public report details how AI can be used “ethically” for cyber operations.
GCHQ (Government Communications Headquarters) is tasked with providing signals intelligence and information assurance to the government and armed forces of the United Kingdom and its allies.
Jeremy Fleming, Director of GCHQ, said:
“We need honest, mature conversations about the impact that new technologies could have on society.
This needs to happen while systems are being developed, not afterwards. And in doing so we must ensure that we protect our [citizens’] right to privacy and maximise the tremendous upsides inherent in the digital revolution.”
While the criminal potential of AI technologies receives plenty of coverage – increasing public fears – the ability to use AI to tackle some of the issues that have plagued humanity hasn’t received quite as much.
GCHQ’s paper highlights how AI can be used for:
- Mapping international networks that enable human, drugs, and weapons trafficking;
- Fact-checking and detecting deepfake media to tackle foreign state disinformation;
- Scouring chatrooms for evidence of grooming to prevent child sexual abuse;
- Analysing activity at scale to identify malicious software to protect the UK from cyberattacks.
The paper sets out how AI can be a powerful tool for good, helping to sift through increasingly vast amounts of data, but human analysts will remain indispensable in deciding what information should be acted upon. Fleming continued:
“AI, like so many technologies, offers great promise for society, prosperity, and security. Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people, and way of life.
It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cybersecurity.”
GCHQ believes it’s not yet possible to use AI to predict when someone has reached a point of radicalisation at which they might commit a terrorist offence. Many have raised concerns about such predictions enabling “pre-crime” arrests similar to those depicted in the film Minority Report.
AI will have a major impact on almost every area of life in the coming years, for better and worse, and the rules are yet to be written.
Ken Miller, CTO of Panintelligence, commented:
“GCHQ detailing how it will use AI fairly and transparently is a crucial step in the development of the technology and one that companies must follow – not just when it comes to tackling crime, but for all of AI’s uses that affect our lives. As a society, we are still somewhat undecided whether AI is a friend or foe, but ultimately it is just a tool that can be implemented however we wish.
Make no mistake, AI is here and it touches many aspects of your life already, and most likely has made decisions about you today. It is essential to build trust in the technology, and its implementation needs to be transparent so that everyone understands how it works, when it is used, and how it makes decisions. This will empower people to challenge AI decisions if they feel it necessary, and go some way to demystifying any stigma.
It will take some time before the public is completely comfortable with AI decision-making, but accountability and stricter regulation of how the technology will be deployed for public good will absolutely help that process.
We live in a world that is unfortunately full of human bias, but there is a real opportunity to remove these biases now. However, this is only possible if we train the models effectively, striving to use data without limitations.
We should shine a light on human behaviour when it displays prejudice, and seek to change opinions through discussion and education – we must do the same as we teach machines to ‘think’ for us.”
Much as there is an international order around the rules of warfare – chemical weapons cannot be used, and prisoners of war must be treated humanely – many argue, despite flagrant abuses of those rules in recent years, that similar rules are needed to govern what counts as acceptable conduct with AI.
With the release of this paper, GCHQ plans to begin setting out what this ethical framework may look like. Fleming said:
“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.
Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”
GCHQ also takes the opportunity to boast of how it’s supporting the rapidly growing AI sector in the UK.
Some of the ways GCHQ has, or will, support UK AI developments include:
- Setting up an industry-facing AI Lab in their Manchester office, dedicated to prototyping projects which help to keep the country safe;
- Mentoring and supporting start-ups based around GCHQ offices in London, Cheltenham, and Manchester through accelerator schemes;
- Supporting the creation of the Alan Turing Institute in 2015, the national institute for data science and artificial intelligence.
Last year, GCHQ commissioned a paper from the Royal United Services Institute – the world’s oldest think tank on international defence and security – which concluded that adversaries “will undoubtedly seek to use AI to attack the UK” and the country will need to use the technology to counter threats.
GCHQ’s full ‘Ethics of AI’ paper can be found here (PDF).