NVIDIA GPU Computing is Out of Control!


[Image: NVIDIA's GPU architecture roadmap]

I’m interrupting the Yogic View of Consciousness series to recommend that people interested in computer technology check out Jen-Hsun Huang’s keynote talk from this year’s NVIDIA 2015 GPU Technology Conference.  His talk sheds light on the most up-to-date technology in computer “perception”.  Somehow, I manage to tie this in to raja yoga.


I’ll start out by saying that I don’t work for NVIDIA, and this isn’t a promotion for them or anything like that.  Sorry, I’m just a fanboy.

As most of you know, at my day job I am a scientist.  As such, I do scientific computing and take a keen interest in developments on this front.  NVIDIA’s GPU Technology Conference has become an annual treat for me, because this area of chip development is in the early stages of exponential growth and is already having massive effects across the board in science and technology.

Jen-Hsun Huang’s Keynote talk starts at this link.  I strongly recommend watching his whole talk because this year he focuses on so-called “deep learning”.  Deep learning is a type of neural net technology and he does a great job of explaining how it works and what kinds of applications benefit from it.

For those of you who don’t know, a new kind of computational revolution has been going on for the past decade.  It goes by names like GPU Computing, Massively Parallel Computing, or similar-sounding terms.  It has its origins in the fancy video cards made by NVIDIA.  These cards were (and still are!) designed to make playing computer games as fast as possible.  The NVIDIA graphics chips are parallel processing devices.  Unlike Intel’s multicore chips, which are now up to 8 cores (like having 8 CPUs in one), the NVIDIA graphics chips have up to 3,000 cores in them (3,072 to be exact for the new Titan X)!

A decade ago, plus or minus, some scientists realized they could re-purpose the NVIDIA chips in these graphics cards to do scientific computing instead of calculating pixels for video games.  Soon thereafter, NVIDIA got wind of this and, recognizing a potential new market, became a major driving force in developing this technology for scientific applications.

I have to say, this is a story of the best of what capitalism can offer, because whole new markets were cracked open that affect everything from how outstanding computer graphics now look in Hollywood movies, to new ways of doing oil and gas exploration, to new computational ways of designing drugs for treating diseases, among many other things.

It’s interesting to watch Jen-Hsun Huang because he is so obviously a salesman, but at the same time he is an incredible advocate for science (and no slouch when it comes to knowing the science), which is a pretty rare combination in the same person.

NVIDIA ended up creating the CUDA programming platform, which allows their graphics cards to be used as general-purpose parallel computers.  This has been going on for a decade now, and it’s safe to say that a lot of advances in many fields of science and technology have come about because these chips can speed up calculations by 10 to 100 times, or even more.  The graphic above shows NVIDIA’s “roadmap” for their GPU architecture, and it is interesting to note they drew the chips on an exponential curve.

Anyone who programs can appreciate what this means.  If something previously took an hour to compute, it now takes from 6 minutes (10X) down to 36 seconds (100X).  Practically speaking, this means you can get a heck of a lot more done.  It also gives you the freedom to experiment with code, since you don’t have to wait so long to get your results.  No matter how you look at it, it’s just plain good.  So, as you can tell, I’m both a user and a huge fan of the CUDA technology.
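To give a flavor of what CUDA code looks like, here is a minimal sketch of my own (a toy example, not anything from the keynote, and it assumes you have an NVIDIA card plus the CUDA toolkit's nvcc compiler installed).  The kernel applies the same little piece of arithmetic to a million array elements, and the GPU farms that work out across thousands of threads at once:

// saxpy.cu -- toy example: y = a*x + y over a million elements, one GPU thread per element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Each thread computes its own global index and handles exactly one element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                        // about one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));     // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;   // enough blocks to cover n
    saxpy<<<blocks, threadsPerBlock>>>(n, 3.0f, x, y);          // launch thousands of threads
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    printf("y[0] = %.1f\n", y[0]);                // expect 5.0 = 3*1 + 2
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Compile it with “nvcc saxpy.cu -o saxpy” and run it.  The calculation itself is trivial; the point is the triple-angle-bracket launch, which hands that one small function to thousands of threads in parallel.  That is exactly the kind of workload the 3,072 cores of a Titan X are built to chew through.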

Deep Learning
Almost everyone has heard of deep learning, even if only indirectly.  In 2011, IBM did a big publicity stunt on the TV game show Jeopardy! with its “Watson” computer, which was based on deep learning network technology.

In his talk this year, Jen-Hsun explains, in almost layman’s terms, how the deep learning computations work.  This is very interesting.  The approach is modeled on how we know vision works in the brain, as enunciated in the neurosciences back in the 1970s by David Marr.  Vision starts with a whole bunch of (relatively speaking) simple things and combines them into increasingly complex things, until you get to the level of organization that makes up our first-person perceptions of faces, trees, houses, and so on.

What Jen-Hsun Huang describes is the convergence of the computer science for designing deep learning networks with the raw computing power of the NVIDIA GPUs, and how this is, right now, producing computers that are approaching our ability to visually identify complex objects.
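To make the “simple things combining into complex things” idea concrete, here is another toy sketch of my own (it has nothing to do with NVIDIA’s actual deep learning libraries, and the numbers in it are made up purely for illustration).  Each output neuron in a layer sums up weighted evidence from the layer below and keeps only the positive part, a so-called ReLU.  Stack a few of these layers and the simple features at the bottom get combined into more complex features higher up:

// layers.cu -- toy forward pass of a two-layer network: each layer builds
// more complex features out of the simpler features of the layer below.
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per output neuron: weighted sum of inputs, then ReLU.
__global__ void dense_relu(int in_dim, int out_dim,
                           const float *W, const float *x, float *y)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j < out_dim) {
        float s = 0.0f;
        for (int k = 0; k < in_dim; ++k)
            s += W[j * in_dim + k] * x[k];        // combine the simpler features below
        y[j] = s > 0.0f ? s : 0.0f;               // ReLU: keep only the positive "evidence"
    }
}

int main()
{
    const int d0 = 8, d1 = 4, d2 = 2;             // tiny layer sizes, just for illustration
    float *x, *h, *out, *W1, *W2;
    cudaMallocManaged(&x,   d0 * sizeof(float));
    cudaMallocManaged(&h,   d1 * sizeof(float));
    cudaMallocManaged(&out, d2 * sizeof(float));
    cudaMallocManaged(&W1,  d1 * d0 * sizeof(float));
    cudaMallocManaged(&W2,  d2 * d1 * sizeof(float));

    // Made-up inputs and weights (a real network learns its weights from data).
    for (int i = 0; i < d0; ++i)      x[i]  = 0.5f;
    for (int i = 0; i < d1 * d0; ++i) W1[i] = 0.1f;
    for (int i = 0; i < d2 * d1; ++i) W2[i] = 0.2f;

    dense_relu<<<1, 32>>>(d0, d1, W1, x, h);      // layer 1: simple -> intermediate features
    dense_relu<<<1, 32>>>(d1, d2, W2, h, out);    // layer 2: intermediate -> complex features
    cudaDeviceSynchronize();

    printf("outputs: %.3f %.3f\n", out[0], out[1]);
    cudaFree(x); cudaFree(h); cudaFree(out); cudaFree(W1); cudaFree(W2);
    return 0;
}

A real deep learning network has convolutional layers, biases, and millions of weights learned from data rather than my made-up constants, but the principle is the same: layer upon layer of simple combinations.  That is also why thousands of GPU cores are such a good fit for the job.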

Now, I was careful to say “identify” and not “perceive” because computers do not perceive.  They are so-called “inert matter”, without consciousness (although the atoms the computers are made of have consciousness…but I digress).   Instead, computers are mechanical, mindless beasts that do what they are told.  Their advantage is, of course, that they can do these repetitive mechanical tasks amazingly fast.  In the talk he describes machines coming in the next couple of years that will do 28 teraflops; that is 28 trillion floating-point operations per second.  All this in a machine you can plug into a wall socket and lift with your own hands.  The rate of advancement is simply amazing.

Aside from my enthusiasm for this technology, it also ties back to the general theme of PlaneTalk.  The ability of computers to perform pattern recognition the way we humans do actually proves the point that yoga has been making for centuries about the distinction between the gunas and consciousness.

The gunas, as I explained in Part 11 of The Yogic View of Consciousness, are the yogic equivalent of “dead matter”.  They are the forms and processes of Nature we see and experience around us.  What Jen-Hsun’s talk shows is that we can now program computers to do almost the same thing the gunas do when it comes to pattern recognition.  In fact, the activity of the computers is just another example of the gunas in action.

Of course, the computers require our consciousness to give meaning to their outputs.  This makes very clear the distinction between form and consciousness that has been the core idea in yoga for all these millennia.  Again, we see the West discovering the ancient Indian truths in its own fashion.

Hey! What do you know?! I was actually able to tie this back into PlaneTalk stuff after all!

There you go, raja yoga mixed with brand-new, state-of-the-art GPU computing.  Only here, Folks, only here.

See you next time for the next installment of The Yogic View of Consciousness.
