DANCING NEBULA

When the gods dance...

Sunday, October 7, 2012

The vast gulf between current technology and theoretical singularity

An interesting pair of news posts caught my eye this week, and they’re worth presenting for general discussion. First, VentureBeat has an interview with futurologist Ray Kurzweil, who made waves in 2005 with his book The Singularity Is Near. In it, Kurzweil posits that we’re approaching a point at which human intelligence will begin to evolve in ways we cannot predict.

The assumption is that our superintelligent computers (or brains) will allow us to effectively reinvent what being human means. In our present state, we are, by definition, incapable of understanding what human society would look like after such a shift.

Meanwhile, Google is working to put its neural network technology to work on different sorts of problems. This past summer, the company taught its network to recognize a cat by showing it YouTube videos. Specifically, it fed 16,000 processors enough cat videos that the network learned how to “see” a cat without human intervention. Total visual accuracy, according to the initial paper, is about 16%. The latest announcement is about applying similar strategies to language processing, and to how computers can “learn” to understand the specifics of human speech.

Kurzweil, as you can see in the video at the bottom, is a persuasive speaker, and Google’s success with teaching a network to recognize cats really is impressive. Reading stories like these, however, I come away skeptical. It’s not that I doubt the individual achievements, or that they can be improved upon; it’s that focusing on specific achievements ignores the greater problem: we have no idea how to build a brain.

Kurzweil uses advances in scanning resolution and genetic engineering together as proof that at some point, we’ll be able to either program cell structures to do the things we want far more effectively than we can currently, or that we’ll simply be able to build mechanical analogs. On some scale, this is probably true. The nematode worm Caenorhabditis elegans has 302 neurons. We could build a neural network (or neural network analog) with 302 nodes fairly easily — Google’s neural node structure is far more complex than that.
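To make the point concrete, here is a minimal sketch in Python/NumPy of a 302-node recurrent network, matching C. elegans’s neuron count. The weights here are random placeholders, not the worm’s actual connectome — the point is only that simulating this many nodes is trivial on commodity hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 302  # C. elegans has 302 neurons

# Random synaptic weight matrix: W[i, j] is the connection from node j to node i.
# (Illustrative only -- the real connectome is sparse and highly specific.)
W = rng.normal(scale=0.1, size=(N, N))
state = rng.normal(size=N)

def step(state, W):
    """One synchronous update: weighted input passed through a tanh nonlinearity."""
    return np.tanh(W @ state)

for _ in range(10):
    state = step(state, W)

print(state.shape)  # (302,)
```

The hard part, of course, is not the node count but the wiring — which is exactly the gap the random matrix above papers over.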

Unfortunately, just having nodes isn’t enough. The human brain has an estimated 100 billion neurons and 100 trillion synapses. Different neurons are designed for different tasks and they respond to different stimuli. They respond to and release an incredibly complex series of neurotransmitters, the functions of which we don’t entirely understand. It’s not enough to say “Yes, the brain is complex” — the brain is complex in ways that dwarf the best CPUs we can build, and it does its work while consuming an average of 20W.
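A quick back-of-the-envelope calculation with the figures above shows how stark those numbers are. The once-per-second signaling rate is an illustrative assumption, not a measured value:

```python
neurons = 100e9      # ~100 billion neurons (figure from the text)
synapses = 100e12    # ~100 trillion synapses
power_watts = 20.0   # ~20 W average power draw

synapses_per_neuron = synapses / neurons
print(synapses_per_neuron)  # 1000.0 -- roughly a thousand synapses per neuron

# Assumption: if every synapse signaled just once per second,
# the energy budget per synaptic event would be:
events_per_second = synapses * 1.0
joules_per_event = power_watts / events_per_second
print(joules_per_event)  # 2e-13 J, i.e. ~0.2 picojoules per event
```

No CPU we can build comes anywhere near that kind of energy efficiency per operation.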

This is where Moore’s Law is typically trotted out, but it’s a wretchedly terrible comparison. Scientists have already demonstrated transistors as small as 10 atoms wide, while your average neuron is between 4 and 100 microns across. If groups of transistors were equivalent to neural networks, building a brain would be no problem. But it’s not that simple. We don’t know how to build synapse networks at anything like the appropriate densities, and we don’t even know whether consciousness is an emergent property of sufficiently dense neural structures at all.
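The size figures above can be put side by side. The 0.25 nm atomic spacing below is an assumed round number for silicon, used only to turn “10 atoms” into a length:

```python
# Rough scale comparison using the figures from the text.
# Assumption: silicon atomic spacing of roughly 0.25 nm, so a
# 10-atom transistor feature is on the order of 2.5 nm across.
atom_spacing_nm = 0.25
transistor_nm = 10 * atom_spacing_nm   # ~2.5 nm
neuron_small_nm = 4e3                  # 4 microns, in nm
neuron_large_nm = 100e3                # 100 microns, in nm

print(neuron_small_nm / transistor_nm)  # 1600.0 -- a small neuron is ~1,600x wider
print(neuron_large_nm / transistor_nm)  # 40000.0 -- a large one, ~40,000x wider
```

In raw feature size, transistors win by orders of magnitude — which is precisely why size is the wrong axis of comparison.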

Self-driving cars (an example Kurzweil mentions) are a sophisticated application of refined models, meshed with sensor networks on the vehicle and additional positional data gathered from orbit. They’re an example of how being able to gather more information and correlate that information more quickly allows us to create a better program — but they aren’t smart. Our best neural networks are single-task predictors that gather information at a glacial pace compared to the brain.

The idea that we’ll strike some sort of tipping point within the next 33 years seems farcical. Thirty-three years ago, scientists were well aware that genetic engineering, molecular biology, and cell phone-like devices might all be possible. Fast-forward to today, and we have cell phones. We have better neural networks, certainly. We can dump far more data down the pipe, access it more quickly, and process the results — but creating human or superhuman intelligences? No way.
