re. no one knows how it works
well, I suppose AI does know how it works!
New York University's AI Now Institute, a research hub investigating the wider social impacts of machine learning algorithms, has published a report critiquing how the US government uses the technology. The report, emitted this week, is based around a series of case studies discussed during a workshop held in June earlier this …
"well, I suppose AI does know how it works!"
I don't think it does - that's the crux: it can't explain the reasoning behind its decisions. And I get the impression that often there's no reasoning/intelligence at all, just statistical analysis. That can be a useful tool for making statements about a population, but not for making statements about individuals. Furthermore, using inappropriate training data is a clear example of garbage in, garbage out. I thought most of us would have learned enough at secondary school to appreciate both the statistical and the IT flaws, but I'm probably naive.
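To make the "population vs individual" and garbage-in-garbage-out points concrete, here's a deliberately toy sketch (all data and names made up): a "model" that learns nothing but the base rate of its training sample will confidently hand every individual the population-level answer, and a skewed training set skews every decision.

```python
# Toy illustration with hypothetical data: a "classifier" that only
# learns the most common label in its training sample.
from collections import Counter

def train_majority_classifier(labels):
    """Learn nothing but the most frequent label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Training data drawn from one unrepresentative sub-population
# (the garbage in).
skewed_training = ["risky"] * 90 + ["safe"] * 10
model_prediction = train_majority_classifier(skewed_training)

# Every individual now gets the population-level answer, regardless of
# their own circumstances (the garbage out).
print(model_prediction)  # "risky", for everyone assessed
```

Real systems are more sophisticated, of course, but the failure mode is the same: a statistic that is defensible about the training population gets applied as a verdict on each individual.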
Incidentally, today's populists seem to complain a lot about "unaccountable bureaucrats" (who actually implement rules set by the legislature) but not about "unaccountable AI" (which implements opaque rules, if any). Why?
"Nobody really knows how human intelligence works, either."
Yes they do; in this case "how it works" does not refer to the basic functioning at the cellular level, but rather to the ability to show the reasoning behind reaching a particular conclusion. Showing your working is drilled into humans from primary school onwards; you don't need to break the brain down to its component quarks to get that sort of information. The problem with machine learning is that, unlike with humans, we actually have essentially perfect understanding of how the hardware and software function, yet the systems are generally incapable of explaining their reasoning, by design.
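The contrast above can be sketched in a few lines. Everything here is hypothetical (the loan-decision scenario, the thresholds, the weights): a hand-written rule can report every step of its reasoning, while a trained model is just learned numbers that yield a score with no human-readable justification attached.

```python
# A hypothetical loan-decision rule written by a human: every step of
# the reasoning can be reported back alongside the decision.
def human_rule(income, debt):
    reasons = []
    if income < 20_000:
        reasons.append("income below 20k threshold")
    if debt > income * 0.5:
        reasons.append("debt exceeds half of income")
    return ("deny" if reasons else "approve", reasons)

# A made-up "trained" model: just weights, assumed to come from some
# fitting process. It produces a score, but the weights carry no
# human-readable justification.
weights = [0.00003, -0.00011]

def learned_model(income, debt):
    score = weights[0] * income + weights[1] * debt
    return "approve" if score > 0.4 else "deny"

print(human_rule(15_000, 9_000))    # decision plus its reasons
print(learned_model(15_000, 9_000)) # decision with no reasons at all
```

A real deep net has millions of such weights rather than two, which only makes the "show your working" problem worse, not better.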
"The lawsuit allowed an expert to pry into some parts of the system, which was deemed impenetrable."
"...has recommended that external auditing could help companies work out how to weed out bugs and assess how effective systems are before they’re rolled out to reduce harm."
If those, too, are pre-deemed impenetrable, will they NOT be rolled out until self-audits can be fully performed? At what point do the (presumably) self-appointed auditors (experts) give up and deem them impenetrable? Some actuary is gonna set that point where ROI and anticipated lost lawsuits offset in the company's favor. Me using the word "favor" there is being *generous*.
There's a nice article in Quanta Magazine that has an explanation of how it works for deep neural nets. There are links to a presentation from 2017 and related papers. I've been wandering around this since forever and it matches my intuition, for what that is worth.
From that article: "… said that as a general principle of learning, it 'somehow smells right'."
Mechanical and electrical engineers point at software engineers and laugh: there isn't any engineering, they say, just guesswork and testing. Nowadays, just guesswork. And they have a point.
But if 'somehow smells right' is what passes for verification, I think the deep learning folks have taken it to a whole new level.
I'm also unsure that it's reasonable to describe deep learning systems as 'so successful'. Are they? They've been successfully marketed recently, but I haven't seen too much evidence that they're actually useful. Are you sure they're not just really good at confirming their authors' prejudices?
"Mechanical and electrical engineers point at software engineers and laugh: there isn't any engineering, they say, just guesswork and testing. Nowadays, just guesswork. And they have a point."
Not really, no. There are plenty of fundamental software engineering practices which, if followed, will definitively result in higher-quality, more maintainable, less buggy code. The "hobbyists" misrepresenting themselves as software engineers are akin to my presenting myself as an actual engineer because I can build a garden shed. Unfortunately, because software engineering lacks an industry regulator, nobody can stop the hobbyists showing up and claiming parity.