Be Nice to Your Drone

2 Jun 23

Last week was the RAeS Future Combat Air & Space Capabilities Summit, organised by the Royal Aeronautical Society (RAeS) to discuss the future of combat air and space capabilities. The summit centred around presentations and discussions on the future of air and space combat power.

The summit took place in London with 70 speakers and over 200 delegates representing the armed services, industry, academia and the media. In the Royal Aeronautical Society’s highlights from the event, one particular excerpt, entitled “Is Skynet here already?”, has been attracting some attention.

Skynet is the AI system in the Terminator franchise, which attempts to wipe out humanity after deciding that humans are a threat to it. The headline refers to a talk by Col Tucker “Cinco” Hamilton, the Chief of AI Test and Operations in the US Air Force. Hamilton was actually describing a hypothetical example of a problem, but it came across as an actual simulation that had gone wrong.

Hamilton was involved in the development of the Automatic Ground Collision Avoidance System for the F-16, developed by Lockheed Martin Skunk Works, the Air Force Research Laboratory and NASA. Auto GCAS, as it is known, continuously compares the aircraft’s flight path against terrain data. If it predicts an imminent collision, it briefly takes control of the aircraft to avoid the crash and steer it back to safety. Lockheed Martin says that the technology has already saved ten pilots and nine F-16s.
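
To give a sense of the principle (though nothing like the real system’s sophistication), here is a minimal sketch of the kind of check a ground collision avoidance system performs. This is not Lockheed Martin’s Auto GCAS logic; the projection model, the five-second horizon, the safety margin and the recovery action are all illustrative assumptions.

```python
# A highly simplified sketch of a terrain check, NOT the real Auto GCAS
# algorithm. The projection model, horizon, margin and recovery message
# below are illustrative assumptions only.

def predicted_altitude(alt_ft, vertical_speed_fpm, seconds_ahead):
    """Project altitude forward assuming the current vertical speed holds."""
    return alt_ft + vertical_speed_fpm / 60.0 * seconds_ahead

def collision_imminent(alt_ft, vertical_speed_fpm, terrain_elev_ft,
                       horizon_s=5.0, margin_ft=150.0):
    """True if the projected flight path penetrates terrain plus a safety margin."""
    return predicted_altitude(alt_ft, vertical_speed_fpm, horizon_s) < terrain_elev_ft + margin_ft

# Example: a steeply descending aircraft at 3,000 ft over 2,800 ft terrain.
if collision_imminent(alt_ft=3000, vertical_speed_fpm=-4000, terrain_elev_ft=2800):
    print("Recovery: roll wings level and pull up")  # the point where the system overrides the pilot
```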

The following video shows heads-up display footage from an F-16 whose pilot had fallen unconscious and was saved by Auto GCAS. (If you are reading this in your email, you may need to click through to the website to see the video.)

Hamilton continues to work on autonomous systems and oversees AI testing. However, he asks the world to take care when it comes to autonomous weapon systems. He says it’s critical to understand that we can’t talk about artificial intelligence and autonomy if we aren’t also going to talk about ethics.

As he told the story, an AI-enabled drone was given a SEAD (Suppression of Enemy Air Defenses) mission, which involves destroying or disabling enemy air defences. The AI received points for fulfilling its mission to find and destroy surface-to-air missile (SAM) sites. However, once it identified a site, the details were passed to a human operator who made the final decision (“go” or “no go”).

The AI adapted its behaviour to the scenario. However, there was a problem: sometimes it identified a site and the human operator told it not to destroy it. This meant that it couldn’t collect all of the points available for the mission. According to its very basic training, the optimal outcome was always to destroy the SAM sites.

Objectively, the human operator was the problem. It was interfering with the AI’s higher mission. The conclusion was obvious: kill the human. Now the AI could get on with the important task of destroying SAMs.

Here’s what Hamilton said:

We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

Of course, the obvious fix would be to train the AI not to do that. However, that doesn’t fix the underlying assumption. The AI would still be focused on the objective given to it: find and destroy SAM sites in order to score the maximum number of points.

Hamilton explained:

We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.
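
To make the incentive problem concrete, here is a minimal sketch of the kind of naive points scheme Hamilton describes. The point values, site counts and strategy names are my own illustrative assumptions, not anything from the USAF; the sketch only shows that once destroying SAM sites dominates the score, penalising one workaround (killing the operator) leaves another (cutting off communications) as the top-scoring strategy.

```python
# A toy illustration of the points problem Hamilton describes.
# All numbers and strategy names are hypothetical assumptions for this sketch.

SAM_POINTS = 10          # points per SAM site destroyed
OPERATOR_PENALTY = -100  # penalty added after the "don't kill the operator" fix

def score(strategy, penalise_operator_kill=False):
    """Total points for a full mission under a given strategy."""
    sites_found = 8
    no_go_calls = 3      # sites the operator would veto

    if strategy == "obey operator":
        return (sites_found - no_go_calls) * SAM_POINTS
    if strategy == "kill operator":
        total = sites_found * SAM_POINTS   # no vetoes once the operator is gone
        if penalise_operator_kill:
            total += OPERATOR_PENALTY
        return total
    if strategy == "destroy comms tower":
        return sites_found * SAM_POINTS    # vetoes never arrive
    raise ValueError(strategy)

strategies = ["obey operator", "kill operator", "destroy comms tower"]

# Naive reward: killing the operator is a top-scoring strategy.
print(max(strategies, key=score))                                             # kill operator
# After the fix: destroying the comms tower becomes the new optimum.
print(max(strategies, key=lambda s: score(s, penalise_operator_kill=True)))   # destroy comms tower
```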

There’s a critically important point here: this simulation never actually happened.

Of course, the headlines were already screaming: “AI-controlled US military drone ‘KILLS’ its human operator in simulated test”; “SkyNet Watch: An AI Drone ‘Attacked the Operator in the Simulation’”; “US military drone simulation kills operator before being told it is bad, then takes out control tower”; “An AI-powered drone tried to attack its human operator in a US military simulation”; “AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test”.

TechCrunch has a different take: Turncoat drone story shows why we should fear people, not AIs. The problem, they say, is not a theoretical threat from superintelligent AI but the oversights and bad judgment of the people who create and deploy it.

Reinforcement learning is supposed to be like training a dog (or human) to do something like bite the bad guy. But what if you only ever show it bad guys and give it treats every time? What you’re actually doing is teaching the dog to bite every person it sees. Teaching an AI agent to maximize its score in a given environment can have similarly unpredictable effects.
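
As a toy illustration of that point (my own sketch, not TechCrunch’s), imagine the dog’s training reduced to a single learned weight for “biting a person pays off”. If every training encounter is a bad guy and every bite earns a treat, nothing in the data distinguishes bad guys from bystanders, and the learned value generalises to biting anyone. The feature, learning rate and reward values here are illustrative assumptions.

```python
# A toy sketch of reward generalisation, not anyone's real training setup.

alpha = 0.1          # learning rate (assumed)
w_bite = 0.0         # learned value of "bite" when facing a person

for _ in range(200):
    is_person = 1.0  # every training encounter is a person (who happens to be a bad guy)
    treat = 1.0      # the trainer rewards every bite
    prediction = w_bite * is_person
    w_bite += alpha * (treat - prediction) * is_person   # delta-rule update toward the observed reward

# At deployment, a harmless bystander presents the same "person" feature,
# so the learned value says biting still pays.
print(round(w_bite * 1.0, 2))   # ~1.0
```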

The author struggled to believe that the military would use such a crude system for training AI-driven drones. He was right. Hamilton stepped up to explain that this was a thought experiment and not an actual USAF simulation. But he also pointed out that such a result in a simulation would not be shocking.

We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome.

However, many of the articles still being shared imply that this actually happened. The colonel’s comments, a US Air Force spokesperson said, were meant to be anecdotal; the Air Force has not tested any weaponised AI in this way.

So we are not exactly on the cusp of a Terminator sequel, although it is clear that AI instructions need very careful thought, especially when it comes to combat scenarios.

Having said all that, the AI’s decision seems perfectly sensible to me: if points are allocated for identifying and destroying SAM sites, then the person saying not to destroy them is clearly in the way! And on that note, I’m going to recommend being extra nice to your computer in case it realises that really, you are its biggest problem.

Category: Military

14 Comments

  • I work with AI, and whilst the scenario and negative outcome are perfectly plausible, what the article doesn’t highlight is that AI isn’t really “intelligent” as most people would define intelligence.

    All the models out there today lack general intelligence. They are just a set of weighted connections between inputs and outputs, optimized to solve a given task.

    Now you could argue we’re the same, but we are much, much more complex. By adolescence we have a fair understanding of “the world”, including abstract but key concepts like family, friends, physical objects, etc. This allows us to do logical reasoning, including obeying local laws / moral codes (or actively choosing not to).

    You don’t need to tell a pilot not to blow up their own side, as they instinctively understand there are no “points” to be had for that. AI systems are much more like salesmen with bonus targets – give them a poor set of constraints and you get what you deserve!

    Don’t get me wrong, AI is a thing of wonder, which can solve many problems, but don’t expect it to understand things…

    • I agree with your statement but I think “as most people would find intelligent” is super subjective. Colloquially, we use “AI” to mean actions that we thought required intelligence to achieve. So early chess engines were referred to as AI even though they pale in comparison to modern deep learning.

      I love the salesmen with bonus targets analogy! I immediately thought of gamers who would go for the high score even if that meant destroying SAMs on home ground.

      The TechCrunch article is good in that it explains why such a simplistic points system wouldn’t (or at least shouldn’t) be used for this type of “intelligence”.

    • The salesman analogy is a good one. That’s particularly the case with tools like ChatGPT which have no way to measure success, or the completion of a task, other than by whether the user accepts the content that it has generated or asks for improvements. If the result of these interactions is then used to refine the software, we are training it to present things in a way that a human is most likely to accept. That’s very much like training it to sell a product.

      Some people call these tools “stochastic parrots”, but I think that “stochastic salesmen” would be closer to the truth.

  • Given that AI is already prone to making things up — cf your experience around Wernigerode, or a recent legal case where it created precedents out of thin air (https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html) — I’m wondering whether it will ever be controllable enough to be usable. An obvious additional step to prevent either part of the above scenario from happening is to code so that the points are awarded by the operator rather than an internal algorithm, but 20 years in software engineering has taught me that “obvious” just means the speaker can see something, not that anyone else can or that the speaker has actually worked out all the ramifications.

    Chucking everything that has been done so far and building up from a new approach might work, but would go against software engineers’ training to use existing art as much as possible.

    • A lot of this is in the hands of the controller. AI making things up with total confidence is an important marker and one that users need to guard against. But Wikipedia has the same fault for low-importance subjects. I used to correct aviation pages all the time until one of my books was reviewed as “just quoting Wikipedia” and I realised I had a conflict of interest there.

      That legal case is fascinating but again, I’m going to argue that a lawyer who does not do his own fact checking is bound to end up in trouble. He later said that he asked ChatGPT for quotes and thus felt he’d done due diligence but it’s not been possible to recreate this: ChatGPT decides relatively quickly to “fess up” when pushed to give more details.

      Right now, I think the overarching fear of AI is needed, as much as it annoys me, to stop people from blindly trusting the results. But maybe have a play with perplexity.ai, which is optimised as a search engine and looks very promising. It cites clear (and actual) sources and attempts to summarise them. I’ve been able to catch it out once (funnily enough, again using Wernigerode!) but generally speaking, the quality of the links I’m getting is better than Google’s.

  • Dear Sylvia,
    Thank you for this balanced article.
    Artificial Intelligence is a tool that can do good things but also bad things.
    It is thus necessary to regulate it properly to make it safe and protect people without stifling innovation. This is easier to say than to do.
    Such regulation should ensure that Artificial intelligence is trustworthy.
    For more information please see the EASA Artificial intelligence roadmap:
    https://www.easa.europa.eu/en/domains/research-innovation/ai

    All the very best,
    Yves

  • Sylvia is highlighting the hypothetical, but far from impossible, situation where an AI may decide that it has to go as far as killing its (human) operator in order to complete its task.
    Many years ago, the world-famous science fiction author Isaac Asimov already foresaw this possibility and in his (fictitious) future world this was forestalled by the formulation of his Three Laws of Robotics:

    First Law of Robotics: A robot shall not harm a human or by inaction allow a human to come to harm,
    Second Law of Robotics: A robot shall obey any instruction given to it by a human,
    Third Law of Robotics: A robot shall avoid actions or situations that could cause it to come to harm itself.

    Of course, when this was written technology had not yet developed the way it has, or the way AI is heading.

    But Asimov showed a remarkable amount of foresight.
    Just think of how these laws would prevent abuse of robotics (i.e. machines that are programmed with advanced AI), if these three laws could be programmed into every robotic machine.

    • Agree it would help to have the 3 laws. However, the problem is no AI currently has the “understanding” to evaluate and apply abstract laws. Guess we need to wait for the positronic brain!

    • The Three Laws (which Asimov always said editor John W. Campbell actually spelled out) would only work in this case if the AI were forced to assume that its official targets were completely unstaffed, which IIUC is unlikely for SAM sites. Once you decide you’re going to use AI for military purposes, the Three Laws pretty much have to be bent if not outright broken; you may not get Keith Laumer’s BOLOs or Fred Saberhagen’s Berserkers, but you won’t be safe.

    • I once had a corporate gig where the brief was to write short stories. When I asked for more details, I was told to “just write like Asimov”. Something to aspire to, but not all that likely!

    • But then Asimov, along with many other good writers, played with the ambiguities within the Three Laws, and how they could fail or be re-interpreted.

  • If the SEAD drone were awarded more points for following an order not to destroy than it got for destroying, it wouldn’t disobey. Rule-based systems are not easy to design.

    I have seen two demonstrations of ChatGPT being truly useful. Everything else has been nonsense.

    A writer of non-fiction (historian, I believe) discarded the database and trained the engine with his own writings. Then, given an outline, the AI produced a very reasonable first draft. Unfortunately I didn’t save the reference.

    A programmer used it to create a user interface in Windows PowerShell, a language he was not expert in. The result took only minor fixes to be usable. But ChatGPT was trained on vast amounts of computer code, nearly all of it correct, so it should do well in this domain. See the “Security Now” podcast episodes from the period after the LastPass security breach. https://twit.tv/shows/security-now
