Google’s Inceptionism: Cutting Through the Hype


News about recent Google research has made splashy headlines around the internet. The Google researchers call the result "inceptionism", and the result has generated all kinds of hype about dreaming computers. Given the nature of PlaneTalk, I'd be remiss not to comment on this finding and try to offer a small voice of sanity (ironic, no?) amongst all the extravagant and somewhat silly information being bandied about.

First, Some Sources
If you haven’t heard about Google’s Inceptionism, here are a few links.

This is the blog where I first found the news about this work:
Google Neural Network Produces Psychedelic Imagery | Big Think

Here is a Guardian newspaper article:
Yes, androids do dream of electric sheep | Technology | The Guardian

Here is a reasonable post by David Miller that links to the original articles on arXiv:
Google’s Inceptionism

Finally, here’s a link to an animation illustrating “inceptionism”

Artificial Neural Networks
What the Google researchers are calling "inceptionism" refers to some recent results they obtained studying artificial neural networks (ANNs). In a previous post, I mentioned how NVIDIA GPUs are furthering computational studies. I linked to Jen-Hsun Huang's keynote talk from the NVIDIA 2015 GPU Technology Conference, in which he explains "deep learning" in ANNs.

Deep learning is a neural network technology. Neural nets have been around since the 1940s, and their genesis is associated with the names of McCulloch and Pitts. The idea is pretty simple. Imagine you have two nodes of a network linked together, and the connection strength between the two nodes can change. How can it change? Pretty much any way you can imagine. It can go from off to on. It can vary from 0% to 100% and anything in between. It could go up and down like a sine wave if you want.
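If you like to see things spelled out in code, here is a toy sketch of that idea: a single connection whose strength grows when the two nodes it joins are active together (a Hebbian-flavored rule). Every number in it is my own arbitrary choice for illustration, not any particular published model.

```python
# Toy sketch: one connection between node A and node B whose strength changes
# with activity. Purely illustrative; all numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
weight = 0.1          # initial connection strength between the two nodes
rate = 0.05           # how fast the connection changes

for _ in range(20):
    a = rng.random()          # activity of node A
    b = weight * a            # node B's activity depends on the connection
    weight += rate * a * b    # strengthen the link when both nodes are active

print(f"connection strength after some activity: {weight:.3f}")
```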

ANNs are a way of modeling networks inspired by what we know about how neurons interact with each other. We know very well today that the synaptic connections between two neurons are not fixed, but can vary, either increasing or decreasing in strength. This general topic is called "plasticity" in the neurosciences, and, like most things in modern biology, it is an alphabet soup of acronyms for the molecules known to cause these changes. One also hears terms like "long-term potentiation", "long-term depression", and so on, in this context.

The idea that connections between neurons are plastic inspired a whole field of mathematics (or computer science? not sure…) that today is called ANNs. Over the decades since McCulloch and Pitts invented the general idea, there has been a healthy feedback between the mathematical studies of ANNs and the neurosciences. For example, an application of ANNs with which I am familiar was invented by Edmund Rolls, and it gave insight into how the hippocampus might function to form declarative memories.

In addition to helping out in the neurosciences, ANNs have their own unique role in computer science. They can be used for all manner of applications quite independent of their use in the neurosciences. For example, they can be used for medical diagnosis, all sorts of pattern recognition, and other general computing tasks.

Google’s work occurred in the context of using ANNs as a means of visual pattern recognition—image classification—as described in the NVIDIA talk linked above.

Layers
I first learned about ANNs about fifteen years ago. The general idea is you hook a bunch of nodes together in some predefined way to form a layer of nodes. In the old days, there were three layers: an input layer, a middle layer, and an output layer. Then you fed some type of information into the network, a process that was and still is called "training" the network. The information could be images that you want to classify, or it could be patient medical parameters that you want to classify in terms of some disease outcome, or almost anything else you can think of.

A classic three layer neural network.

What is amazing about ANNs is they can be trained to partition the input information into categories that are meaningful to us humans. They seem to do this in a fashion reminiscent of how we classify objects in our minds, but one must guard against anthropomorphizing the computer.

Again, back in the old days, how this worked was somewhat of a mystery. An ANN stores the connection weights amongst its nodes as matrices of numbers. During training, all these numbers change until they come to an equilibrium and don't change much with further training. At that point, the ANN could perform some pattern recognition or classification task. The mystery was in how the pattern recognition task was encoded in the connection weight numbers.
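For readers who want to see the old-style picture spelled out, here is a minimal sketch of a three-layer network (input, middle, output) whose two weight matrices get nudged during training until they settle down and the outputs approximate the targets. The toy data, the learning rate, and every other detail are my own arbitrary choices for illustration, not anything from the Google work.

```python
# Minimal three-layer network (input -> middle -> output), trained until the
# weight matrices settle down. Purely illustrative; all details are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # toy targets (XOR)

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)   # input -> middle weights
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)   # middle -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)             # forward pass through the middle layer
    out = sigmoid(h @ W2 + b2)           # forward pass to the output layer
    err_out = (out - y) * out * (1 - out)          # backpropagate the error
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out; b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_hid; b1 -= 0.5 * err_hid.sum(axis=0)

print(np.round(out, 2))   # once the weights settle, outputs approximate the targets
```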

Deep Learning
Nowadays, with "deep learning", one can construct ANNs with 10, 20, or more layers. These ANNs can perform complex pattern recognition tasks. Again, if you watch Jen-Hsun Huang's talk, you can see examples of a deep learning ANN doing image classification.

A deep learning artificial neural network.

I went off about how cool this is in my previous post. In large part, this is now possible thanks to the massively parallel computation made possible by NVIDIA GPUs and other equivalent technologies.
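To see how little the recipe changes when you go "deep", here is a hypothetical sketch (using the PyTorch library, my choice, not anything specified in the Google papers) of the same kind of network with twenty middle layers instead of one.

```python
# A "deep" network is the same idea as the three-layer net, just with many
# more middle layers stacked between input and output. Illustrative only.
import torch.nn as nn

layers = [nn.Linear(784, 256), nn.ReLU()]        # input layer (e.g., 28x28 images)
for _ in range(20):                              # twenty middle ("hidden") layers
    layers += [nn.Linear(256, 256), nn.ReLU()]
layers += [nn.Linear(256, 10)]                   # output layer (e.g., 10 categories)

deep_net = nn.Sequential(*layers)
print(sum(p.numel() for p in deep_net.parameters()), "trainable connection weights")
```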

A Real Advance
What the Google researchers did was study the information processing of the layers in one of these "deep learning" ANNs that had been trained to do image classification. The way they did this was to feed an image into the already-trained ANN and then adjust the image so as to give a chosen layer a disproportionately large effect on the result. This has the effect of exaggerating whatever that layer is doing in the process of pattern recognition.
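As best I can tell from the published descriptions, the trick boils down to running "gradient ascent" on the image itself: nudge the pixels so that a chosen layer's activations grow. Below is a rough sketch of that idea; the pretrained model, the particular layer, the step size, and everything else are my own assumptions, and the actual Google code does considerably more (multiple image scales, jitter, regularization, and so on).

```python
# Rough sketch of exaggerating what one layer "sees": adjust the image so the
# chosen layer's response grows. Not Google's code; details are assumptions.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)                       # only the image will change

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # noise, or load a photo

grabbed = {}
hook = model.inception4c.register_forward_hook(          # an arbitrary middle layer
    lambda mod, inp, out: grabbed.update(act=out))

for step in range(100):
    model(image)
    loss = grabbed["act"].norm()      # how strongly that layer is responding
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)  # gradient ascent
        image.grad.zero_()
```

Run long enough, a loop like this turns the starting image into textures and shapes that the chosen layer responds to strongly, which is where the weird imagery comes from.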

What they found is that the various layers abstract different types of information. As you can see in the image above, closer to the input side, the ANN recognizes generic things like line segments and line orientations, and stuff like that. Closer to the output side, the ANN abstracts higher-order features like eyes, faces, and buildings: more complex shapes that are composites of the lower-order shapes.

The reason this is an advance is that, as with anything coded into a computer, we can analyze every facet of the system. Compare this to, for example, teaching a rat how to recognize something. We simply cannot dissect the rat to fully understand all the changes made in the brain in association with the learning. Thus, what the Google research team has learned can give some insight into the fine-grained details of how ANNs abstract information, and this may in turn help inspire brain research.

Not that Big of a Deal
Now, to put all of this in some context. First, the way the ANNs abstracted information has long been suspected to be the case in the neurosciences. David Marr was one of the first researchers in vision to postulate a hierarchy of increasing abstraction in our visual system. This was back around 1980. Marr didn't come up with this out of the blue, but based it on the Nobel Prize-winning work of Hubel and Wiesel, who were amongst the first neuroscientists to demonstrate experimentally how the visual system of animals works. They showed that neurons in the lower-order visual cortices react not to whole objects, but only to line orientations and directions. This was back in the 1960s.

Therefore, it has been known in some detail for decades that the brain constructs vision from simple primitive elements and pieces these together into higher-order structures. Given this, what the Google group showed was not that big of a deal. Again, what is useful about the Google result is that it can be fully dissected to reveal mathematical patterns that cannot be measured at all in animals. This is the main reason "deep learning" is a big deal in general: it will help us isolate the mathematical patterns that underlie accurate pattern recognition, which we can then use to go look and see if a similar thing is occurring in real living brains or not.

This is the important aspect of the work. Unfortunately, the popular press is, well, the popular press. They are doing their usual thing of spreading hype and sensationalism instead of reporting the real news.

If It Smells Like…
So this brings us to the BS of all the inceptionism reporting that is making its way around the internet at the moment.

First, computers do not dream. The very concept makes no sense. Our brain cycles between three radically different states on a 24-hour basis: waking, non-REM sleep, and REM sleep. Our brain literally works in three distinct modes. Computers only have two modes: on and off. And I am not talking about binary coding. I mean literally that the computer can be turned on or off. Those are its two global states. When it is on, it works in the most boringly stereotyped fashion one can imagine, pushing gazillions of minute electrical charges through human-designed circuits.

It’s not as if there is some mysterious activity going on in these circuits when we are not looking that corresponds to the computer dreaming. This is just plain crazy talk. Coming from me, that is pretty strong criticism!

In short, all this stuff about computers dreaming is complete bullshit. It literally makes no sense.

Second, a lot of the hype going on right now about the "bizarre images" reminds me of, oh, when fractals came on the scene, when chaos theory came on the scene, when those funny "magic eye" stereograms came out, and so on. There are always idiots out there who think the new thing is the be-all and end-all.

Eventually the hype will die down and what we will be left with, with regard to the images themselves, is a new technique in computer art. It will play out and eventually become stale.  And the regular scientific stuff will go on, unhyped, as it always does.

Link to Dreaming and Psychedelics
There are a couple of places where there is some reason for excitement. The images are certainly reminiscent of what Alan Hobson called "dream bizarreness". The imagery is also reminiscent of a class of psychedelic-drug-induced hallucinations. There are a couple of downsides to all this worth mentioning, and then I will close out on an upbeat note.

It's Not the First Time
There is a prior connection between neural nets and dreaming that garnered some hype in its day. Francis Crick, famous as co-discoverer of the structure of DNA, teamed up with a fellow named Graeme Mitchison to offer a theory of dreaming based on neural nets. The idea was called "reverse learning" and was based on how ANNs are trained. The training process refines the connection weights. When a network is only partially trained, its outputs overlap and blend, very much like the images Google has produced. Crick and Mitchison noticed this kind of blending too, and proposed that dreams are a form of "reverse learning" analogous to what goes on during the training of ANNs and the refinement of the connection weights.

I have never liked, nor been convinced by, this theory. It is the type of idea invented by people who have no direct experience in the dream world. For one, the main behavior we humans display that most resembles "reverse learning" is practice. When we practice something, whether playing the piano or memorizing class notes for an exam, we are refining the connection weights of networks in our brain. Prior to the refinement brought about by practice (i.e., learning), we too confuse inputs and outputs and mess up playing the piano or make mistakes on the exam.

Furthermore, the link between dreaming and memory is complicated (see here). The Crick and Mitchison hypothesis tramples over these subtleties like a bull in a china shop.

Then Came The Materialists
Of course, some people are jumping on the Google result as proof that materialism is alive and well. At least one person out there is pooh-poohing mystical and occult stuff. This author says:

“If computers hallucinate as awesomely as this. It suggests the human brain doesn’t need any external help generating the self-transforming machine elves and other crazy visions often attributed to aliens or inner worlds.”

Computers hallucinating? Come on. Please. Can a less careful sentence be crafted?  How the heck does a computer hallucinate? Did they vaporize the LSD and blow it on the CPU?  Perhaps the CPU was doped with LSD? Ugh…please…

It is we who look at the images and see in them something resembling our hallucinations, just as was done with fractals in the past.

It’s a sign of the times when I have to say the following:

Computers are not conscious, Folks.

Alan Watts Foresaw This
It is certainly interesting that the mixing of images resembles some dream experiences and some psychedelic-induced experiences. The key word here, however, is "some". By no means does this new technology address all the known phenomenology of either dreaming or psychedelics. And of course, it is silent on the million-dollar question of the qualia problem.

A much more intelligent discussion (than that quoted above) of how to apply the Google finds to the psychedelic drug experience can be found here.

The imagery reminds me of a favorite quote from Alan Watts (from his Joyous Cosmology) about psychedelic-drug induced hallucinations that I used at my Kundalini & LSD web site:

"Closed-eye fantasies in this world seem sometimes to be revelations of the secret workings of the brain, of the associative and patterning processes, the ordering systems which carry out all our sensing and thinking."

I think the results of the Google work give some credence to what Alan Watts says here. It is for precisely this reason that I used the Google image on the cover of Chapter 19 of the Yogic View of Consciousness. The work certainly promises to give us additional insight into the "associative and patterning processes" used by the brain.

Hopefully this post cuts through some of the silliness being said out there and provides a perspective on this incremental but useful development from the Google labs.

And no, it doesn’t mean we don’t have a soul.


8 thoughts on "Google's Inceptionism: Cutting Through the Hype"

  1. PeterJ

    “In short, all this stuff about computers dreaming is complete bullshit. It literally makes no sense.”

    Nice to read something sane on the topic. But then, who in the field is going to send this result to the funding agency?

    • Hi Peter!
      Nice to hear from you and thanks for the comment.

      It’s the classic issue of form vs. consciousness. You can’t get consciousness from form. Forms mold, shape, or give expression to consciousness but cannot create consciousness. Unfortunately, every time there is some new breakthrough in understanding patterns and form, we will have to sit through this kind of silly hype from people who don’t know better. The irony is that the forms are patterns within consciousness. There is a great line from an old 1970s Yes song (called “Yours is no disgrace“) that seems appropriate here:

      “Silly human, silly human race…”

  2. Reblogged this on The Dream Well and commented:
    You may have come across some of the recent hype about Google’s Inceptionism and the claim that this means computers can dream. Here is an interesting article addressing some of the reasons why this is currently nonsensical…

    • Dear Amy
That is very kind of you to reblog. Thank you so much. And thank you for stopping by and leaving the kind comment. Your site looks very interesting too. I will be sure to check it out. Thank you again,

      Best wishes,

      Don

  3. Great stuff as always Don. I’m so happy you take the time to write about stuff that most of us can simply sense is nonsense, but don’t have the training and knowledge to articulate.

    This is priceless (I’m still laughing):

    “Computers hallucinating? Come on. Please. Can a less careful sentence be crafted? How the heck does a computer hallucinate? Did they vaporize the LSD and blow it on the CPU? Perhaps the CPU was doped with LSD? Ugh…please…”

  4. Hi Ptero9

    So great to hear from you! Hope you have been well! He-he, thanks for appreciating my snide remarks! :p

    We’ll have to catch up on things soon! Take care in the meantime!

    Best wishes,

    Don
