The main obstacle
The problem with neural nets (and deep learning) is that once they have been trained, we don't know what's going on inside them.
Indeed it is.
I'd say that it is the biggest obstacle to using these tools.
Whereas we can usually explain (to ourselves and others) the reasons for a decision, and even when we don't consciously know why and say as much, that is still an explanation of a kind.
And even an "I don't know why" explanation can, by analysing the context in which the decision was made, eventually become clear enough to let us make yet another decision based on what it meant to the person who made it.
Yes, I also had to read that a few times to see if it made sense.
Having read the article, I do not think this is possible, and the AI may well learn, or be taught, to lie.
i.e. hide or distort information for whatever purpose
I have the distinct feeling that all this can very well be our undoing.
But, as usual, it is all set up to make money, so everyone involved is going full steam ahead without considering the consequences.
"The development of full artificial intelligence could spell the end of the human race."
Stephen Hawking 1942-2018
We should take heed before it is too late.
Confirmation bias, too
I think we can safely assume that these models will soon be trained on image sets classified by other AI models, exponentially amplifying any bias...
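A toy simulation illustrates the feedback loop (the numbers and labeler here are hypothetical, not from the article): if each generation of models is trained on labels produced by the previous generation, a small per-generation labeling bias compounds instead of averaging out.

```python
import random

def biased_labeler(labels, bias):
    # A labeler that mislabels class 0 as class 1 with probability `bias`.
    # The bias rate is an illustrative assumption, not a measured value.
    return [1 if (y == 0 and random.random() < bias) else y for y in labels]

def relabel_generations(labels, bias_per_gen, generations):
    # Each generation is "trained" on the previous generation's labels,
    # so the same small bias is applied again and again and compounds.
    for _ in range(generations):
        labels = biased_labeler(labels, bias_per_gen)
    return labels

random.seed(0)
truth = [0] * 1000                 # ground truth: everything is class 0
gen5 = relabel_generations(truth, bias_per_gen=0.05, generations=5)

# Fraction of labels flipped after 5 generations: roughly 1 - 0.95**5,
# i.e. around 23%, far more than the 5% a single pass would introduce.
print(sum(gen5) / len(truth))
```

With a 5% bias per generation, the error does not stay at 5%: after five generations roughly a quarter of the labels are wrong, which is the compounding the comment above is pointing at.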
Of course, as is often the case for denounced AI flaws, the same can be observed with the flesh version of AI: NS (Natural Stupidity).