
AI Autonomous Cars As Drug Mules For Narco Dealers Is A Bad Prescription




Author Lance Eliot considers whether AI self-driving cars could be used as narco mules in the illicit drug trade, and ponders ways to defeat the scheme. (Photo by the blowup on Unsplash)

By Lance Eliot, the AI Trends Insider

The number of ways to transport illegal drugs seems to be nearly endless. We all have heard about the use of airplanes to smuggle in illicit drugs. There are also tales aplenty about motorboats and sailboats loaded with banned narcotics that try to reach land.

Here’s a twist that you might not have considered. A recent news story described a narco mini-submarine that was scuttled in shallow waters after the three men operating the vessel opted to evacuate and escape as authorities were closing in on them. The vessel reportedly contained an estimated $100 million in cocaine. Speculation is that the crew might not have known for sure that the jig was up and chose to open the valves to sink the narco-sub as a precaution, hiding the three metric tons of illegal drugs (figuring they could always come back to retrieve the loot, and that they might save their own lives by being able to show the drug lord that the stuff was still intact and had not been siphoned off or pilfered).

I was asked by some readers whether we might ultimately have Autonomous Vehicles (AVs) that will be employed for criminal trafficking acts, doing so by removing the human element involved in driving or otherwise guiding a craft that contains an illicit drug shipment.

Sadly, yes, this can be expected and to some degree is already underway, including the nefarious use of autonomous submersibles, autonomous water surface craft such as sailboats and motorized ships, autonomous drones that fly in the air, and of course the use of ground-based autonomous transportation such as self-driving cars.

Not only will a particular mode of AV be used, whether in the air, in the water, or on the ground, but you can bet your bottom dollar that there will be devious efforts combining those avenues, a trifecta as it were.

Though we all prefer to think about technological innovations such as self-driving cars as being built and fielded for beneficial purposes and striving toward the good of humanity, there is no sense in hiding from the inevitable fact that these marvels will be stridently used for untoward aims too.

Bad people will do bad things, even with the greatest advances in AI.

Let’s consider why the use of self-driving cars would be alluring to drug traffickers and then ponder ways that this might be mitigated or defeated.

On a related note, some argue that no one should discuss these matters as it will merely give dreadful thieves some new ideas of what to do. This is a classic dilemma that confronts the computer field (and other realms) all the time. For example, in the cybersecurity realm, some try to suggest that research about cracking computer systems should not be published nor discussed at conferences. Keep it all under tight wraps, they say.

But the proverbial head-in-the-sand approach is fatally flawed in that the crooks will one way or another discover the latest break-in practices, and then we will all be caught flat-footed by not having properly prepared for that eventuality. Generally, within reasonable limits, it tends to make sense to bring these topics into the open and thus increase awareness overall, aiming to effectively handle criminal behavior.

Here, then, is today’s intriguing question: Will the advent of AI-based true self-driving cars potentially be used in adverse ways including becoming narco mules in the illicit drug trade?

Let’s unpack the matter and see.

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5. We don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Drug Trafficking Mules

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.

First, contemplate the presumed advantage of not having a human driver when attempting to transport illegal drugs.

When using human drivers, there is always the possibility that the driver will decide to abscond with the loot. They might drive the car to a secret rendezvous and sell the drugs to someone other than the intended receiver of the cache. Or maybe the driver does the big switch, whereby they replace the real drugs with some other fakery, and then appear to deliver the intended drugs to the designated destination.

Another possibility is that the driver gets waylaid by someone else who wishes to steal the drugs, getting killed or kidnapped in the process. Additionally, an alternative driver might take their place and either ditch the vehicle after removing the drugs or pretend to be the original driver and go to the delivery location as planned. There are lots of double-crossing tricks that can be played.

There is also the everyday kind of chance that a driver might do something unwise and out-of-step. Suppose the driver decides to get drunk and then smashes the car into a telephone pole, bringing forth the cops and getting stupidly caught. One supposes that a driver could be sober and nonetheless still get into a car accident, once again likely exposing the plot. It seems like we frequently hear reports of drivers who roll through a stop sign, ignoring the lawful rules of driving, and end up with a police officer pulling them over, leading to the discovery of tons of prohibited drugs in the vehicle.

In short, human drivers are a substantive problem for those in the business of illegal drug trafficking. You see, it is a lot harder than it seems to be a successful drug lord, though we ought not to shed a tear over such woes.

What can be done about those dastardly unreliable and double-crossing drivers? Replace them with AI.

Envision it this way. A self-driving car is stocked with forbidden drugs. The overseer then instructs the AI driving system to proceed on a journey to a particular destination. The AI dutifully starts driving the car and will indubitably arrive at the stipulated location. No human driver is involved.

Your initial reaction might be that the AI should be “smart enough” to realize that it is being used in a drug hauling operation, acting as a veritable drug mule.

Keep in mind that today’s AI is not sentient. We aren’t even close to achieving sentience. Furthermore, AI currently lacks any semblance of common-sense reasoning. All in all, if you are wishing that the AI will know right from wrong, that is a rather farfetched dream that regrettably does not yet exist and will be a long time coming before turning into reality (perhaps someday).

In this use case, the AI for a self-driving car is going to do as instructed, which would simply be the act of driving from point A to point B.

Plus, the contents housed within the vehicle are not something of significance to the AI.

It doesn’t especially matter whether people are inside the self-driving car, other than the need to try and maintain a smooth ride such as not taking tight turns if there are passengers on-board.

I’ve exhorted extensively that we are going to witness seemingly empty self-driving cars much of the time since there is likely to be a hotly competitive landscape of roaming self-driving cars. Fleet owners are hoping that their self-driving cars will be more likely to get chosen for ride-sharing than competing ones, and thus will need to keep their autonomous vehicles wandering in anticipation of being in the right place at the right time for a paid lift request.

Hence, the point is that you might have been thinking that surely an otherwise empty self-driving car (having hidden drugs) will be readily spotted, and someone would detain the vehicle. Nope. The cruising around of empty self-driving cars will become a customary practice. Today, we would all certainly stare in awe and utter amazement, but once thousands upon thousands of self-driving cars are on our streets and highways, seeing them zip back and forth with no one inside will be considered routine and ostensibly boring.

Admittedly, hard to imagine.

In any case, a self-driving car that is “empty” does not necessarily need to be truly bereft of anything inside it. Though there might not be people inside a self-driving car, there can be all manner of cargo or other items (for example, you might send your beloved pet dog over to the vet, alone inside a self-driving car).

A self-driving car could be filled with birthday gifts that you got for your best friend who lives across town, and you want the AI to drive those over to your friend’s house.

In theory, the distance doesn’t make a difference either.

Suppose you have some precious items such as jewels, and you think it unsafe to send them via regular mail or overnight courier from Los Angeles to your aunt in New York. Presumably, you could place them into a self-driving car and merely give the AI an address in New York City as the destination. Away the vehicle goes, on a cross-country trek, and the only stops needed would be to get fuel. Note that there are no stops for getting food, no need for rest breaks, and so on.

Okay, so we’ve established that self-driving cars are undeniably handy as a drug mule since there is no human driver needed and the AI will not necessarily suspect that anything is afoot. The AI will, without any presumed hesitation, drive to the desired destination. It would seem that we’ve removed any qualms about double-crossing and other issues of the driver doing something stupid while driving.

The expectation is that self-driving cars will drive strictly by the book, as it were, and never step out of line in terms of driving illegally. This, then, shoots down the chances of the self-driving car getting pulled over for running a stop sign or rushing through a red light.

You could hide the drugs somewhere inside the self-driving car. Maybe place the drugs into boxes and put those inside the normal interior of the vehicle, though perhaps that is rather chancy and might attract undue attention. Instead, the drugs could be placed into the trunk or within the structure of the vehicle. Remember the famous scene of (spoiler alert) Gene Hackman searching for drugs in the movie The French Connection?

The drugs could be completely out-of-sight while somewhere embedded within the self-driving car. One might view this as a handy precaution, in case somebody somehow decides to stop the self-driving car and take a quick look inside it.

That is a bit of a teaser.

Up until this point in the discussion, it sure seems like a self-driving car is a perfect way to transport illicit drugs. We need to consider how this might not be quite so perfect for a crime.

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/

Mitigating Factors Of The Mule Crime

Life is never easy, not even for lawbreakers. We’ll start with the perhaps biggest hurdle for someone using a self-driving car in this evil way. Do you trust the self-driving car?

I know that seems like an odd question. From a trust perspective, I am not referring to whether the AI will drive the car safely. We are all pretty much making a base assumption that the only way we’ll be allowing self-driving cars on our streets is due to a presumption that they will drive as safely as, or even more safely than, human drivers. That’s a given. Those are the table stakes for self-driving cars.

Here’s why a drug lord might be queasy about using a self-driving car.

AI is running the vehicle. This is essentially unseen mechanization. For a human driver, the overseer presumably knows who is doing the driving. There can be pressure brought to bear on that person to make sure they drive the drugs to the right locale. Furthermore, to some degree, one can assume that the driver will try to protect the shipment (assuming they are following the letter of their orders).

In the case of the AI driving system, it does not have any such equivalent motives nor similar pressure points. With a human driver, someone sworn to loyalty might veer off the path and do something the overseer dislikes. This is not quite the same for AI.

Whoever is the fleet operator of the AI driving system can generally opt to divert the self-driving car or in some manner seek to take over the driving aspects (kind of). This can be done via electronic communications to the self-driving car, such as via the use of OTA (Over-The-Air) messaging between the cloud of the fleet owner (or an automaker or self-driving tech firm) and the AI driving system.

Suppose that a self-driving car is nearing an area that has become flooded by a hurricane. The fleet owner (or similar) might send a message to all their self-driving cars to avoid getting mired in the flooded streets, which might include no longer being able to proceed to a designated destination in that vicinity. Perhaps the self-driving cars will be diverted to a waiting area, or come to a halt on a side street and wait for the flood to recede, or maybe the sender will be asked where the vehicle should go as an alternative destination.
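To make that diversion mechanism concrete, here is a minimal sketch in Python of how a fleet-level keep-out directive could be applied by a vehicle. The message fields, geofence math, and fallback actions are hypothetical assumptions made up for illustration, not any actual automaker’s or fleet operator’s OTA protocol.

```python
# Hypothetical sketch: a fleet operator pushes an OTA "avoid this area" directive
# and the vehicle decides whether it can still proceed to its destination.
# The schema, field names, and geofence math are illustrative assumptions only.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class AvoidZoneDirective:
    zone_lat: float   # center of the flooded area (degrees)
    zone_lon: float
    radius_km: float  # keep-out radius around the flood
    fallback: str     # e.g., "hold_on_side_street" or "return_to_depot"

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def apply_directive(dest_lat: float, dest_lon: float, d: AvoidZoneDirective) -> str:
    """Return the action the vehicle should take given the keep-out directive."""
    if distance_km(dest_lat, dest_lon, d.zone_lat, d.zone_lon) <= d.radius_km:
        return d.fallback  # destination lies inside the flooded keep-out zone
    return "proceed_to_destination"

# Example: a destination sitting inside a 5 km flood zone gets diverted.
flood = AvoidZoneDirective(zone_lat=29.76, zone_lon=-95.36,
                           radius_km=5.0, fallback="hold_on_side_street")
print(apply_directive(29.77, -95.37, flood))  # -> hold_on_side_street
```

The salient point for the drug-mule scenario is that a directive like this can override the originally requested destination without the sender’s consent.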

The point is that this kind of overarching control is something the overseer might be leery of.

Unless they somehow can get access to the master control point, there is always a chance that the self-driving car will not end up going where they presume the AI is going to take the vehicle. That’s a loose end, for sure. The same could be said of a human driver, but as mentioned earlier, a human driver can be more readily bullied.

Another concern is the traceability aspects.

Presumably, the AI driving system is keeping track of where it is and where it is going. This is likely to be shared with the fleet owner cloud system (or similar).

Also, the sensors of the self-driving car include video cameras, radar, LIDAR, ultrasonic, thermal, and other such devices, all of which are likely recording whatever they detect around the vehicle. In prior columns, I’ve touted that this provides a handy roving eye for some useful purposes, though it also raises serious privacy intrusion concerns.
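As an illustration of what that traceability might entail, here is a minimal sketch of the kind of trip record a fleet cloud could retain. The field names and structure are assumptions invented for the example, not any vendor’s actual telemetry schema.

```python
# Hypothetical sketch of a trip record that a fleet cloud might retain.
# Field names and structure are made-up assumptions, not a real vendor schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorSnapshot:
    timestamp: float      # seconds since the trip started
    lat: float            # vehicle position when the snapshot was taken
    lon: float
    camera_clip_id: str   # pointer to stored video of the surroundings
    lidar_frame_id: str   # pointer to the matching LIDAR sweep

@dataclass
class TripRecord:
    vehicle_id: str
    requested_by: str     # account or device that booked the trip
    origin: str
    destination: str
    snapshots: List[SensorSnapshot] = field(default_factory=list)

# Even a sparse record ties together a vehicle, a route, and a requesting
# account, which is exactly the kind of loose end a trafficker would worry about.
trip = TripRecord(vehicle_id="AV-042", requested_by="acct-9f3",
                  origin="warehouse district", destination="suburban address")
trip.snapshots.append(SensorSnapshot(0.0, 34.05, -118.24, "cam-0001", "lidar-0001"))
```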

It is conceivable that this tracing capability could be used to capture the crooks involved in undertaking the illegal drug transport effort. That being said, they might try various ways to avoid getting snagged in that manner.

Yet another issue for those bad actors is that without a human driver there is essentially no chance of a high-speed pursuit and escape. If the police ascertain that the self-driving car might have drugs, it will be a relatively easy matter to have the self-driving car brought to a stop, with no need for those wild and dangerous car chases.

Suppose too that someone else who is also a crook finds out about the self-driving car being a mule, and they want to steal the drugs. They could potentially wait along the route of the self-driving car, and when it comes to a stop sign or red light, attempt to break into the vehicle, perhaps switching off the engine and hauling away the deadened car or jumping inside to search for and then leap out with the drugs in-hand.

You might contend that the drug overseer could cope with those issues by putting their own crew inside the self-driving car to protect the hidden drugs within. Aha, if you did that, once again there are humans in the middle of the caper. As such, the tradeoff of having humans present versus no humans present rears its head again.

Splitting hairs, you could argue that having a non-driving crew is assuredly different from having a crew member who is doing the driving. Also, there is the complicating factor of whether the passengers are aware of what the vehicle is carrying, or whether they might be unwitting accomplices.

Yet another angle is the chance of someone hacking the AI to subvert what the AI is supposed to do. This might enable a cyber crook to merely alter the destination to their own preferred address. In theory, this is hopefully going to be very improbable since it also opens a can of worms for those of us who will be using self-driving cars on an ordinary everyday basis (see my column postings for details about the cybersecurity facets of self-driving cars, including robo-jacking).

Lots of permutations and combinations ensue.

Conclusion

Sometimes, people upon hearing about this drug mule scenario involving self-driving cars are quick to say that we ought to have the police or other authorities monitoring the AI driving systems. This would apparently somehow enable the authorities to identify these illicit trafficking activities.

Now that’s a whale of a Pandora’s box.

There is a slew of AI ethics considerations that come along with that kind of permitted routine monitoring. In an era that has all of us regularly using AI-based true self-driving cars, will we feel comfortable that our every trip and move is potentially being seen and tracked by our government?

I’ve previously emphasized that there are nation-states that are going to welcome the advent of self-driving cars for that very reason. It remains to be seen whether a free nation would find it justifiable as a potential means to detect or curtail the adverse uses of self-driving cars.

Time will tell.

Copyright 2021 Dr. Lance Eliot. This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website


