Hands on with neural-network toolkit LIME: Come now, you sourpuss. You've got some explaining to do

Deep learning has become the go-to "AI" technique for image recognition and classification. It has reached a stage where a programmer doesn't even have to create their own models, thanks to a large number available off the shelf, pre-trained and ready for download. Training these models is essentially an optimisation exercise …
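
For readers who want to try the toolkit themselves, the workflow the article walks through amounts to loading an off-the-shelf pre-trained classifier and asking LIME to explain one of its predictions. The sketch below is an illustration only, not the article's exact code: the model choice, the file name "cat.jpg" and the parameter values are assumptions, and it presumes the lime, torch, torchvision, numpy and Pillow packages are installed.

    # Minimal sketch: explain a pre-trained image classifier's prediction with LIME.
    import numpy as np
    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms
    from lime import lime_image

    # Off-the-shelf, pre-trained model - no training required on our side.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def classifier_fn(images):
        """Take a batch of HxWx3 uint8 arrays, return class probabilities."""
        batch = torch.stack([preprocess(Image.fromarray(img)) for img in images])
        with torch.no_grad():
            return F.softmax(model(batch), dim=1).numpy()

    # Hypothetical input image, resized to the network's expected resolution.
    image = np.array(Image.open("cat.jpg").convert("RGB").resize((224, 224)))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=3, hide_color=0, num_samples=1000)

    # Superpixels that most supported the top predicted label.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)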

  1. oiseau
    FAIL

    The main obstacle

    Hello:

    The problem with neural nets (and deep learning) is that once they have been trained, we don't know what's going on inside them.

    Indeed it is.

    I'd say that it is the fundamental obstacle to using these tools.

    We, by contrast, can usually explain (to ourselves and others) the reasons for a decision we make; even when we don't really know them (consciously) and say as much, that admission is itself an explanation.

    And an "I don't know why" explanation can eventually, by analysing the context in which the decision was made, become a sufficiently clear one, letting us make a further decision based on what that explanation was and what it meant to the person who gave it.

    Yes, I also had to read that a few times to see if it made sense.

    Having read the article, I do not think such explanations are really achievable, and the AI code may well learn, or be taught, to lie.

    i.e. hide or distort information for whatever purpose.

    I have the distinct feeling that all this may very well be our undoing.

    But, as usual, it is all set up to make money, so everyone involved is going full steam ahead without considering the consequences.

    "The development of full artificial intelligence could spell the end of the human race."

    Stephen Hawking 1942-2018

    We should take heed before it is too late.

    Cheers,

    A.

  2. ElReg!comments!Pierre

    Confirmation bias, too

    I think we can safely assume that these models will soon be trained on image sets classified by other AI models, exponentially amplifying any bias...

    Of course, as is often the case with the AI flaws being denounced, the same can be observed in the flesh-and-blood version of AI: NS (Natural Stupidity).

  3. ElReg!comments!Pierre

    Who tests the testing tool?

    In an xkcd-esque musing, I'm now considering a model that would use an arbitrarily weighted combination of Fourier transforms and metadata to classify images, just to confuse LIME users (a rough sketch follows below).
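
    For the record, a toy version of that joke classifier might look something like the sketch below. It is purely illustrative: the feature weights, the metadata fields and the random "dataset" are all made up, and it assumes numpy and scikit-learn are installed.

        # Toy sketch: classify images from an arbitrarily weighted mix of
        # Fourier-transform features and metadata. All weights, metadata
        # fields and data here are illustrative, not from the article.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        def features(image, metadata, fft_weight=0.7, meta_weight=0.3):
            """Combine low-frequency FFT magnitudes with a metadata vector."""
            spectrum = np.abs(np.fft.fft2(image))    # 2-D Fourier transform of the image
            low_freq = spectrum[:8, :8].ravel()      # keep a few low-frequency coefficients
            return np.concatenate([fft_weight * low_freq, meta_weight * metadata])

        # Fake dataset: 64x64 greyscale images, each with a 3-field metadata vector.
        images = rng.random((200, 64, 64))
        metadata = rng.random((200, 3))
        labels = rng.integers(0, 2, size=200)

        X = np.array([features(img, md) for img, md in zip(images, metadata)])
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        print(clf.score(X, labels))    # training accuracy of the toy model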
