US government use of AI is shoddy and failing citizens – because no one knows how it works

New York University's AI Now Institute, a research hub investigating the wider social impacts of machine learning algorithms, has published a report critiquing how the US government uses the technology. The report, emitted this week, is based around a series of case studies discussed during a workshop held in June earlier this …

  1. Anonymous Coward

    re. no one knows how it works

    well, I suppose AI does know how it works!

    1. H in The Hague

      Re: re. no one knows how it works

      "well, I suppose AI does know how it works!"

      I don't think it does - that's the crux, it can't explain the reasoning behind decisions. And I get the impression that often there's no reasoning/intelligence, just statistical analysis. That can be a useful tool for making statements about a population, but not for making statements about individuals. Furthermore, using inappropriate training data is a clear example of garbage in, garbage out. I thought that most of us would have learned enough at secondary school to appreciate both the statistical and IT flaws, but I'm probably naive.

      Incidentally, today's populists seem to complain a lot about "unaccountable bureaucrats" (who actually implement rules set by the legislature) but not about "unaccountable AI" (which implements opaque rules, if any). Why?

  2. Primus Secundus Tertius

    Naturally obscure

    Nobody really knows how human intelligence works, either. There is a good case for not employing dodgy wetware.

    1. Cuddles

      Re: Naturally obscure

      "Nobody really knows how human intelligence works, either."

      Yes they do; in this case "how it works" does not refer to the basic functioning at the cellular level, but rather to the ability to show the reasoning behind reaching a particular conclusion. Showing your working is drilled into humans from primary school onwards; you don't need to break the brain down to its component quarks to get that sort of information. The problem with machine learning is that, unlike humans, we actually have essentially perfect understanding of how the hardware and software function, but the systems are generally incapable of explaining their reasoning by design.
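      The distinction above can be sketched in a few lines of Python. This is a toy illustration (the rule thresholds and weights are made up, not from any real system): a rule-based decision can report *why* it decided; a trained weight vector can only report *what* it decided.

```python
def rule_based_decision(income, debts):
    """Interpretable: every outcome is tied to a named rule."""
    if debts > income:
        return "deny", "rule: debts exceed income"
    if income >= 30000:
        return "approve", "rule: income >= 30000"
    return "deny", "rule: income below threshold"


def opaque_decision(income, debts, weights=(0.00004, -0.00009, -0.2)):
    """Opaque: imagine the weights came out of training; the score
    carries no human-readable justification for the individual case."""
    score = weights[0] * income + weights[1] * debts + weights[2]
    return ("approve" if score > 0 else "deny"), f"score={score:.3f}"


print(rule_based_decision(40000, 5000))  # decision plus its stated reason
print(opaque_decision(40000, 5000))      # decision plus a bare number
```

      Both functions are trivially inspectable here, of course; the point is that even with full access to the second one's weights, the "reason" for any single decision is just arithmetic, which is the situation the comment describes.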

  3. Eddy Ito

    I thought it was shoddy because the last three lines of code are:

    1850 PRINT "Ask the white guy in the corner office"

    1860 GOTO 10

    1870 END

  4. Keven E

    Designed obscure

    "The lawsuit allowed an expert to pry into some parts of the system, which was deemed impenetrable."

    "...has recommended that external auditing could help companies work out [how] to weed out bugs and assess how effective systems are before they’re rolled out to reduce harm."

    If those, too, are pre-deemed impenetrable, will they NOT be rolled out until self-audits can be fully performed? At what point do the (presumably) self-appointed auditors (experts) give up and deem them impenetrable? Some actuary is gonna set that point where ROI and anticipated lawsuit losses offset in the company's favor. My use of the word "favor" there is *generous.

  5. Anonymous Coward

    How it works

    There's a nice article in Quanta Magazine that has an explanation of how it works for Deep Neural Nets. There are links to a presentation from 2017 and related papers. I've been wandering around this since forever and it matches my intuition, for what that is worth.

    New Theory Cracks Open the Black Box of Deep Learning

    1. Adrian 4

      Re: How it works

      From that article: "... said that as a general principle of learning, it 'somehow smells right.'"

      Mechanical and electrical engineers point at software engineers and laugh: there isn't any engineering, they say, just guesswork and testing. Nowadays, just guesswork. And they have a point.

      But if 'somehow smells right' is what passes for verification, I think the deep learning folks have taken it to a whole new level.

      I'm also unsure that it's reasonable to describe deep learning systems as 'so successful'. Are they? They've been successfully marketed recently, but I haven't seen too much evidence that they're actually useful. Are you sure they're not just really good at confirming their authors' prejudices?

      1. LucreLout

        Re: How it works

        "Mechanical and electrical engineers point at software engineers and laugh: there isn't any engineering, they say, just guesswork and testing. Nowadays, just guesswork. And they have a point."

        Not really, no. There are plenty of fundamental software engineering practices which, if followed, will definitively result in higher quality, more maintainable, less buggy code. The "hobbyists" misrepresenting themselves as software engineers are akin to my presenting myself as an actual engineer because I can build a garden shed. Unfortunately, because software engineering lacks an industry regulator, nobody can stop the hobbyists showing up and claiming parity.

  6. Anonymous Coward

    "...failing citizens"

    Let's just set the record straight: AI isn't being developed for the benefit of the citizens - that would just be a by-product.

    Come to think of it, the same goes for the government...
