Evolving a VM that runs on a neural net.

This topic was created by John Smith 19.

  1. John Smith 19

    Since McCulloch & Pitts in the 1940s, people have accepted that the brain is a very large number of (relatively) simple elements, each with a great many connections (up to about 10,000) leading into or out of it (contrast that with roughly 10 for a single gate on a microprocessor chip), arranged in an unknown number of layers.

    Human "Learning" is theorized to be the process of raising or lowering the weightings of the signals on each of these connections.

    "Deep" learning seems to mimic this structure.

    But humans don't learn this way, normally. No one sits reading a book thinking "I must raise the weightings of the neural cluster just down and to the right of my left eyeball," or even knows where that information will be stored (in any real sense).

    Instead we operate at levels of abstraction so far above that level that we are not even conscious it exists (unless you have some knowledge of neurobiology to begin with).

    In IT terms that sounds like a "Virtual machine" rather than a system running on the "bare metal."

    IOW, all existing systems are (essentially) written in the equivalent of NN "assembler." At best their data structures are static, like arrays in 1950s FORTRAN.
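
    Loosely (this mapping is my own, for illustration), the two levels look like this in code. Today's systems live entirely at the first level; the "VM" idea is that something like the second level could exist *within* the weights and firing patterns:

        import numpy as np

        # The "assembler" level: a block of weights whose shape is fixed
        # up front, like a 1950s FORTRAN array declared with DIMENSION.
        layer_weights = np.zeros((784, 128))   # sizes chosen arbitrarily

        # The hypothetical "VM" level: a structure that grows and links
        # at run time. On the VM-on-an-NN idea, something like this would
        # have to be encoded *in* the firing patterns, not declared
        # alongside the network.
        knowledge = {"Tommy": {"broke": ["the window"]}}
        knowledge["Tommy"]["broke"].append("the vase")   # grows dynamically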

    This can't be an original thought, but every time I Google it, all I get is how to run an NN simulation inside a VM, with one exception, and that was in a fairly narrow problem domain. However, I'm assuming the answer is already yes, as that's what we are (unless someone out there really can manipulate their NN directly). SF has Langford's "death parrot" and Neal Stephenson's "Snow Crash."

    There is a precedent for this in Conway's "Game of Life," with the self-propelling structure called the "glider" and the discovery of a "glider gun" structure that makes and launches gliders across the cell array. I'm still not clear whether this was "discovered" by random fiddling or whether someone systematically designed it (IOW, did they have a deep understanding of the problem in the first place?).
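
    For anyone who hasn't played with it, here's a minimal Life implementation in Python (the rules and the glider pattern are the standard ones; the wrap-around grid and the rest is just my sketch). Run it and the glider reappears one cell down and one cell right every four generations:

        import numpy as np

        def step(grid):
            """One generation of Conway's Life: a live cell survives with
            2 or 3 live neighbours; a dead cell is born with exactly 3."""
            neighbours = sum(
                np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

        grid = np.zeros((10, 10), dtype=int)
        for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # the glider
            grid[y, x] = 1

        for _ in range(4):
            grid = step(grid)   # after 4 generations: same shape, shifted (+1, +1)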

    I'm curious about 3 things.

    1) Who first thought of this idea of a VM running on the human NN? It has to be older than 1992 (when Snow Crash was published).

    2)If "deep learning" systems are already starting to evolve their own internal representations, or are they still too simple to have this ability?

    3) Deeply abstract, but neuro-linguistic programming (NLP) deals with identifying and changing human behavior using a variety of techniques. Can it be applied to deep learning systems, or are they not "deep" enough?

    Note: I'm not talking about the structure of the networks (that's basically fixed); I'm talking about the patterns of firing and the weightings evolving over time to create a higher-level representation of the information. A dynamic data structure inside the NN architecture.

    Human beings process facts. Human beings process sounds. Human beings process images.

    They don't process connection weightings on the NN inside their own heads.

    So what's between the photons spelling out "Tommy broke the window" hitting your eyeballs and you thinking "Ahh, Tommy did it"?
