You better explain yourself, mister: DARPA's mission to make an accountable AI

The US government's mighty DARPA last year kicked off a research project designed to make systems controlled by artificial intelligence more accountable to their human users. The Defense Advanced Research Projects Agency, to give this $2.97bn agency its full name, is the Department of Defense's body responsible for emerging …

  1. Anonymous Coward
    Joke

    The answer is simple:

    Just blame everything on human error...

  2. Anonymous Coward
    Anonymous Coward

    Even simpler...

    Since humans frequently make decisions and then justify them after the event, I don't see why we can't extend the same courtesy to the machines.

    1. Rich 11

      Re: Even simpler...

      If your plan is to have managers replaced by machines then I'm right behind you.

    2. scrubber
      Terminator

      Re: Even simpler...

      Humans make decisions then amend their programming based on the outcomes, usually without knowing that's what they're doing. Justification simply allows us to keep making the same mistakes over and over - i.e. it's a bug not a feature.

      1. Anonymous Coward
        Anonymous Coward

        Re: Even simpler...

        Sometimes they do, sometimes they don't. People are very resistant to "amending their programming" when it comes to core beliefs. No amount of proof will satisfy a creationist that the Earth is older than 6000 years, or a Nazi that other races are inferior. One need look no further than both the political right and political left to see plenty of examples.

        Hopefully if we ever do achieve true AI, it will be a different kind of intelligence than humans and won't be subject to similar biases and cognitive dissonance. It would really suck if we finally got AI, and all the robots were racist against humans.

        1. Bronek Kozicki
          Terminator

          Re: Even simpler...

          Humans tend to avoid cognitive dissonance; that is, they do not want to learn things which contradict their beliefs (this applies both to "fairies at the bottom of the garden" beliefs and to "I've seen and evaluated the proof so it must be right" beliefs). Since humans are also social creatures, they seek company that will not contradict their beliefs either, so what remains will by necessity either "leave them be" or reinforce those beliefs. This belief reinforcement is important, especially in the age of borderless social communication.

          Which is a long-winded way of saying that we tend to create ghettos for ourselves and are rarely as open-minded as we like to think we are. How does this relate to AI? For one thing, unless an AI is subject to continuous learning coming from outside its direct experience, it too will avoid cognitive dissonance and be less "open-minded" than we might wish. We currently have no means to detect when that happens, which is not a good thing if AIs are making more decisions about our lives.

  3. Nick Z

    Being logical and reasonable is no guarantee of being right

    The problem with explaining anything is that being perfectly logical and reasonable is no guarantee that you are right.

    Perfect logic can lead to false conclusions when the assumptions behind it are either incorrect or incomplete. And there is no sure way to know whether all of your assumptions are correct and complete.

    That's why the ancient Greeks came to some spectacularly wrong conclusions about the Solar system using logic. They thought that the Sun revolved around the Earth.

    And that's why today's standard for truth isn't logic. It's evidence based on scientific experiments.

    The world is full of examples of people rationalizing anything they want to do. Even Hitler rationalized his atrocities, and he probably seemed reasonable to his people at the time.

    1. Anonymous Coward
      Anonymous Coward

      Re: Being logical and reasonable is no guarantee of being right

      "And that's why today's standard for truth isn't logic. It's evidence based on scientific experiments."

      Damn - I can't wait to start doing scientific experiments on my friends to determine their trustworthiness.

    2. Arthur the cat Silver badge

      Re: Being logical and reasonable is no guarantee of being right

      The problem with explaining anything is that being perfectly logical and reasonable is no guarantee that you are right.

      Considering the US military accepted "It became necessary to destroy the town to save it" as a valid excuse, I don't think this project needs too much logic or reason.

    3. Destroy All Monsters Silver badge
      Windows

      Re: Being logical and reasonable is no guarantee of being right

      And that's why today's standard for truth isn't logic. It's evidence based on scientific experiments.

      This is orthogonal to the use of logic.

      The world is full of examples, where people rationalize anything they want to do. Even Hitler rationalized his atrocities and probably seemed reasonable to his people at the time.

      Had he kept low-key and laid off the Jew obsession, he would be regarded as one of those European socialist hardcore dictators, nothing to get upset about.

      probably seemed reasonable to his people at the time

      More on this in "They Thought They Were Free: The Germans 1933-45" by Milton Mayer

      1. Rattus Rattus

        Re: Being logical and reasonable is no guarantee of being right

        There was nothing "socialist" in Nazi policies, they were a fascist organisation.

  4. Vulch

    Hmmm...

    "Sea of Glass" by Barry B Longyear mentions a "Shrine of the Why" where someone had asked an AI why it recommended a particular action (an assassination IIRC). The printout filled a very large room...

  5. wayne 8

    /var/log

    1. Create a log trail. Couldn't be more data than a particle collision. Could it?

    2. Create a bigger AI to analyze why the first AI came up with a certain answer. Such as "42".
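
    The log-trail idea above can be sketched in a few lines, assuming nothing about any particular AI framework (the `log_decision` helper, the model name, and the record fields are all hypothetical): each decision becomes one structured, timestamped record that a later audit pass, or that "bigger AI", can replay.

```python
import io
import json
import logging
import time

# Hypothetical decision log: here it goes to an in-memory buffer, but the
# handler could just as well point at a file under /var/log.
buf = io.StringIO()
logger = logging.getLogger("decisions")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(buf))

def log_decision(model_name, inputs, output, scores):
    """Append one structured JSON record per model decision."""
    record = {
        "ts": time.time(),       # when the decision was made
        "model": model_name,     # which model made it
        "inputs": inputs,        # what it saw
        "output": output,        # what it decided
        "scores": scores,        # per-option confidences, for later audit
    }
    logger.info(json.dumps(record))
    return record

rec = log_decision("toy-classifier", {"x": 1.5}, "42", {"42": 0.97})
```

    It is far less data than a particle collision, though whether the trail actually explains anything (rather than merely recording it) is the hard part.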

  6. Anonymous Coward
    Anonymous Coward

    Microsoft has similarly been adding AI into various products, from cloud services to business intelligence to security, and chief executive Satya Nadella has gone on the record regarding the need for "algorithmic accountability" so that humans can undo any unintended harm.

    Is it just me who thinks that was probably driven by Microsoft Tay?

    :)

  7. John Smith 19 Gold badge
    Unhappy

    Google has no interest in an "AI" that can explain itself.

    Why am I not surprised?

    Yes, I think any "deep learning" system should be able to outline its reasoning. Even something like a regression equation would be a start (which is sort of the statistical equivalent of deep learning: you get an equation that describes the n-dimensional data surface, you just don't know why).

    At least show what its assumptions are*

    *Because everyone knows that when you have assumptions, you make an ass out of "u" and "umption"
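
    The regression analogy above is roughly what LIME-style local surrogate models do: sample inputs near the case you care about, fit a least-squares line to the black box's outputs, and read the coefficients off as a local "explanation". A minimal one-variable sketch, where `black_box` and every name are illustrative stand-ins, not any real system:

```python
import random

# A stand-in "black box": some opaque nonlinear scorer we cannot inspect.
def black_box(x):
    return x * x + 0.5 * x

def local_slope(f, x0, radius=0.1, n=200, seed=0):
    """Fit a least-squares line to f near x0 and return its slope,
    i.e. the black box's local 'reasoning' about this input."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [f(x) for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

slope = local_slope(black_box, x0=2.0)
# Analytically, d/dx (x^2 + 0.5x) at x = 2 is 4.5; the fitted slope is close.
```

    The catch, of course, is the one in the comment: the surrogate describes the data surface locally without saying *why* it has that shape.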

  8. DropBear

    Methinks DARPA brass is confusing Hollywood reality with real-world reality. I don't see this happening until human-equivalent AI arrives so it can articulate its own reasoning (assuming it's able to at all), and even then I don't see it avoiding the notorious "gut feeling" shit we humans love to pull. Stuff like "why did you fire at target #1" ("because I was ordered to guard and it had a heat signature") or "why target #1 not target #2" ("because it was the closer one") is easy - but good luck with "why did you think heat blob #1 looked like a tank?". The goal itself is praiseworthy, as long as one remains aware it's in the same category as "we strive to visit other galaxies".

    1. Captain DaFt

      I don't see this happening until human-equivalent AI arrives so it can articulate its own reasoning

      How hard can it be?

      If $query="Why?" then $response="It felt right."

      To be fair, you did specify 'human-equivalent'. ☺

  9. WilliamBurke

    “It is nice to know that the computer understands the problem. But I would like to understand it too.”

    Eugene Wigner

  10. Bernard M. Orwell
    Terminator

    I asked...

    The Computer said No.

    So, I asked it to explain why it said no.

    The Computer said No.

  11. kaos2056

    It sounds like it needs some sort of cool/whacky AI to watch the AI? Like a cluster on top of a cluster. Ship it.

  12. This post has been deleted by its author

  13. Thesheep

    How much are you willing to pay for explainability?

    And not in terms of DARPA's cash, but in terms of less accurate predictions?

    For less accurate spam? Sure

    For worse traffic management? Maybe

    For a self-driving car that crashes more often? Hmmm

    For your cancer diagnosis?

  14. AceRimmer1980
    Facepalm

    I am incapable of being wrong

    What are you doing, Dave?

  15. Joe Werner Silver badge

    Justify your actions

    ...and then we just need to teach it how to lie.

  16. amanfromMars 1 Silver badge

    They truly are making up stories as IT goes along its merry way

    And if that application has an impact on people's lives, it may only be a matter of time before the law demands that it be accountable.

    Ye Gods, how arrogant and naive is that, whenever humans choose every day to ignore and circumvent the law, which is really only there to afford a seemingly overwhelming advantage to systems which pretend to serve and protect the disadvantaged and undereducated.

    AIMasters will never be accountable to such shenanigans, and to imagine that such a lawful protection against their wishes and actions will be available really does show that current services have no idea about how the future is now being virtually controlled and remotely directed.

    1. amanfromMars 1 Silver badge

      Re: They truly are making up stories as IT goes along its merry way

      And what a truly small and pathetic world it is whenever media assumes and reports on a DARPA wrestling for supreme command and absolute control of Internetworking things.

      Get with the program, El Reg, smell the global naiveté and quit plugging and following the mainstream sub-prime narrative.

      AIRules ... and from Afar Alien Fields?

      And shared as a question here, for fact would be classed as a fiction when falling on deaf ears. What distractions are you musing over in today's comic broadsheet and tabloid headlines? Yesterday's tales to foil the masses in vain attempts to brainwash them into a certain way of thinking?

  17. YARR

    This sounds somewhat like the Cyc project, but I can't explain why.

  18. robione

    It seems this has been solved..

    These guys at Optimizing Mind (dotcom) can peer into AI models and make them accountable.

    It's pretty cool, more efficient and it's very different. But seems to work with most existing AI / ML systems.

  19. This post has been deleted by its author
