But drones have also highlighted a key vulnerability in Russia’s invasion, which is now entering its third week. Ukrainian forces have used a remotely operated Turkish-made drone called the TB2 to great effect, firing guided missiles at Russian missile launchers and vehicles. The paraglider-sized drone, which relies on a small ground crew, is slow and cannot defend itself, but it has proven effective against a surprisingly weak Russian air campaign.
This week, the Biden administration also said it would supply Ukraine with a small US-made loitering munition called Switchblade. This single-use drone, which comes equipped with explosives, cameras, and guidance systems, has some autonomous capabilities but relies on a person to make decisions about which targets to engage.
But Samuel Bendett, an adviser at the Center for Naval Analyses who studies the Russian military, questions whether Russia would unleash an AI-powered drone with advanced autonomy in such a chaotic environment, especially given how poorly coordinated the country’s overall air strategy seems to be. “The Russian military and its capabilities are now being severely tested in Ukraine,” he says. “If the [human] ground forces with all their sophisticated information gathering can’t really make sense of what’s happening on the ground, then how could a drone?”
Several other military experts question the purported capabilities of the KUB-BLA.
“The companies that produce these loitering drones talk up their autonomous features, but often the autonomy involves flight corrections and maneuvering to hit a target identified by a human operator, not autonomy in the way the international community would define an autonomous weapon,” says Michael Horowitz, a professor at the University of Pennsylvania who tracks military technology.
Despite such uncertainties, the issue of AI in weapons systems has become contentious of late because the technology is rapidly finding its way into many military systems, for example to help interpret input from sensors. The US military maintains that a person should always make lethal decisions, but it also opposes a ban on developing such systems.
To some, the appearance of the KUB-BLA shows that we are on a slippery slope toward increasing use of AI in weapons that will eventually remove humans from the equation.
“We’ll see even more proliferation of such lethal autonomous weapons unless more Western nations start supporting a ban on them,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, an organization that campaigns against such weapons.
Others, though, believe that the situation unfolding in Ukraine shows how difficult it will really be to use advanced AI and autonomy.
William Alberque, director of strategy, technology, and arms control at the International Institute for Strategic Studies, says that given the success Ukraine has had with the TB2, Russia is not ready to deploy more sophisticated technology. “We’re seeing Russian morons getting owned by a system that they should not be vulnerable to.”