ACCELER8OR

Jul 22 2011

Is The Singularity Near Or Far? It’s A Software Problem

""){ ?> By Valkyrie Ice



When I first read The Singularity is Near by Kurzweil, it struck me that something seemed curiously “missing” from his predictions. At the time, I merely put it on the back burner as a question that needed more data to answer. Well, recently, it’s been brought up again by David Linden in his article “The Singularity is Far”.

What’s missing is a clear connection between “complete understanding of the mechanics of the brain” and how this “enables uploading and Matrix level VR.” As David points out, merely knowing how the brain functions at the mechanical level, even if we know how each and every atom and molecule behaves, and where every single neuron goes, does not equal the ability to reprogram the brain at will to create VR, nor does it necessarily translate into the ability to “upload” a consciousness to a computer.

I tend to agree with David that Ray’s timeline might be overly optimistic, though for completely different reasons. Why? Because software does not equal hardware!

David discusses a variety of technical hurdles that would need to be overcome for nanomachines to function as Kurzweil describes, but these are all really engineering issues that will be solved in one manner or another. We may or may not actually see them fixed by the timeline Kurzweil predicts, but with the advances we are making with stem cells, biological programming of single-cell organisms, and even graphene-based electronics, I don’t doubt that we will find a means to non-destructively explore the brain, and even to interface with some basic functions. I also see many possible ways to provide immersive VR without ever having to achieve the kind of technology Ray predicts. I don’t even doubt that we’ll be able to interface with a variety of “cybernetic” devices via thought alone, including artificial limbs which can be wired into the nervous system and provide sensory data like “touch.”

But knowing how to replicate a signal from a nerve and knowing precisely what that signal means to that individual might not be the same thing. Every human brain has a distinct synaptic map and distinct signaling patterns. I’m not as confident that merely knowing the structure of a brain will enable us to translate its patterns of electrical impulses as easily as Kurzweil seems to think. We might learn how to send signals to devices long before we learn how to send signals back from those devices well enough to allow “two-way” communication beyond simple motor control, much less complete replication of consciousness or the complete control of inputs needed for “Matrix VR.” That gap could persist long after we can mechanically reproduce a human brain in simulation.

Does my perception of green equal yours? Is there a distinct “firing pattern” that is identical among all humans and translates as “green,” or does every human have a distinct “signature” which would make “green” for me show up as “pink” for you? Will there be distinct signals that must be “decoded” for each and every individual, or does every human conform to one of who knows how many “synaptic signal groups”? Can a machine “read minds,” or would a machine fine-tuned to me receive only gibberish if you tried to use it?
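To make the worry concrete, here is a toy sketch (Python with NumPy, entirely synthetic numbers, nothing derived from real neural data): if each brain encoded “green” with its own arbitrary signature, a decoder calibrated on one person would work for that person and fail for anyone else. The subjects, channels, and “recordings” below are all invented for illustration.

# Toy illustration with synthetic data: each hypothetical subject gets a
# private, random firing "signature" per percept, so a decoder calibrated
# on one subject reads another subject's signals as gibberish.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 64

def make_subject():
    # A subject is just a mapping from percept label to a private pattern.
    return {label: rng.normal(size=N_CHANNELS) for label in ("green", "pink")}

def record(subject, label, noise=0.3):
    # One noisy "recording" of the subject experiencing the percept.
    return subject[label] + rng.normal(scale=noise, size=N_CHANNELS)

def calibrate(subject, trials=20):
    # Average repeated recordings into per-percept templates
    # (a nearest-centroid decoder).
    return {label: np.mean([record(subject, label) for _ in range(trials)], axis=0)
            for label in subject}

def decode(templates, signal):
    # Pick the label whose template is closest to the observed signal.
    return min(templates, key=lambda lbl: np.linalg.norm(signal - templates[lbl]))

alice, bob = make_subject(), make_subject()
alice_decoder = calibrate(alice)
print(decode(alice_decoder, record(alice, "green")))   # reliably "green"
print(decode(alice_decoder, record(bob, "green")))     # essentially a coin flip

Nothing here says real brains actually work this way; the sketch only shows why per-individual calibration, rather than a universal codebook, may have to be the default assumption.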

The human mind is adaptable. We’ve already proven that it can adapt to different points of view in VR, and even learn to use previously unknown abilities, like a robotic “third arm.” The question is whether this adaptability will enable us to use highly sophisticated BCIs even if those BCIs cannot actually “read” our thoughts: we may simply learn methods of sending signals the machine can understand, while our minds remain “black boxes,” impenetrable to the machine despite all our knowledge of the brain’s hardware.

This is the question I think Ray glosses over. Mere simulation of the hardware might not even begin to touch the “hard problem” that will slow uploading. I don’t doubt we will eventually find an answer, but to do so, we first have to ask the question, and it’s one I don’t think Ray’s asked.

  • By Rick Moss, July 22, 2011 @ 3:10 pm

    I agree that VR-level experiences aren’t going to happen in Kurzweil’s time frame. But I can imagine simpler approaches — assuming the BCI hardware is worked out — that can get a lot done with relatively little data. In my book, Ebocloud, the scientists expose the subjects to sensory stimuli, such as the image of a flower and the aroma associated with the same. Via the BCI, they record what fires off in that individual’s brain, isolating key indicators. This is, of course, highly speculative, but the concept here is based on the fact that you can look at five dots on a piece of paper and recognize the letter “K” — only a small set of data points for something that, in the brain, involves the firing of thousands (I presume) of neurons. Once those key indicators are determined for that individual, it would be a matter of feeding back only those relatively few data points to trigger the image or aroma — in other words, the brain fills in the rest.

    So if you know in advance that you want your software program to trigger certain images, sounds, memories, etc., you would first “learn” how that individual forms them in their brain, process that data, isolate the key data points, and customize the software for that individual accordingly.
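A rough sketch of the calibration Rick describes, with invented stand-ins: record_response() substitutes for whatever the BCI would actually return, and the “key indicators” are approximated here as the few channels that best separate a stimulus from a resting baseline.

import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS = 128
TOP_K = 5  # like the five dots that are enough to suggest a "K"

def record_response(stimulus, trials=10):
    # Placeholder for recording the subject's responses to a stimulus via BCI.
    signature = rng.normal(size=N_CHANNELS)   # this subject's pattern for the stimulus
    return np.stack([signature + rng.normal(scale=0.5, size=N_CHANNELS)
                     for _ in range(trials)])

def isolate_key_indicators(responses, baseline):
    # Keep only the few channels that most strongly distinguish the stimulus
    # from rest; these channel/value pairs are the "key indicators."
    contrast = np.abs(responses.mean(axis=0) - baseline.mean(axis=0))
    channels = np.argsort(contrast)[-TOP_K:]
    return {int(ch): float(responses.mean(axis=0)[ch]) for ch in channels}

baseline = record_response("rest")
flower = isolate_key_indicators(record_response("flower image + aroma"), baseline)

# Playback would then stimulate only these few channels and let the brain
# "fill in the rest."
print(flower)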

  • By Mark Bruce, July 24, 2011 @ 4:30 am

    I tend to agree with Rick to some extent. If you have the technology to measure brain states at an adequate resolution, you should be able to get around individual biases in pattern & signalling representations for different thoughts & sensory stimuli by training the software in your device: expose the brain to real stimuli and record the signal produced. Then, when inputting virtual stimuli, the first session simply runs the device through an evolution process that ensures the device’s outputs (inputs to the brain) induce brain states identical to those produced by real stimuli.

    However, I tend to think that this could be done relatively quickly for standard sensory inputs. And if the user was unsatisfied with a particular result or type of qualia produced during the training session they would have the means to tweak and modify the input – moving it up or down some scale for example.
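A minimal sketch of the kind of “evolution process” Mark suggests, again with invented stand-ins: brain_response() is an opaque, subject-specific mapping from device output to measured brain state, and the loop is a simple hill climb rather than any particular published method.

import numpy as np

rng = np.random.default_rng(2)
STATE_DIM = 32

def brain_response(device_output):
    # Stand-in for the measured brain state induced by a given device output.
    # The device never sees this function, only the states it produces.
    return np.tanh(1.3 * device_output + 0.2)

# Brain state recorded while the subject experienced the real stimulus.
target_state = brain_response(rng.normal(size=STATE_DIM))

def evolve_output(target, generations=300, step=0.1):
    # Mutate the device output, keeping any mutation that brings the induced
    # brain state closer to the state evoked by the real stimulus.
    output = np.zeros(STATE_DIM)
    error = np.linalg.norm(brain_response(output) - target)
    for _ in range(generations):
        candidate = output + rng.normal(scale=step, size=STATE_DIM)
        candidate_error = np.linalg.norm(brain_response(candidate) - target)
        if candidate_error < error:
            output, error = candidate, candidate_error
    return output, error

calibrated_output, residual = evolve_output(target_state)
print(f"remaining mismatch after calibration: {residual:.3f}")

A user-facing “tweak,” as Mark describes, could then nudge the calibrated output along some scale and re-run a few generations until the qualia feel right.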

  • By ENKI-2, July 25, 2011 @ 6:01 am

    We already have plenty of BCI tech, and while most of it is not bidirectional, there is some for each direction. Prosthetic limbs controlled by nerves left over from the original amputated limbs are old hat, having been around for decades. Likewise with vision — there has been a fairly functional mechanism for converting signals from a pinhole CCD camera into a format that can be piped into the optic nerve (used on at least one human subject) for decades as well. These things are still quite dangerous, but I suspect the progression towards wide acceptance will be closer to that of cochlear implants (on the order of decades) than that of the steam engine (roughly two thousand years). Already, invasive BCI (like BrainGate) is commercially profitable and noninvasive BCI (mostly based on EEG — things like the NIA) is arguably ‘acceptable’ if still somewhat fringe. None of these things depend upon nanofingers or sudden jumps in CPU speed or sudden breakthroughs in AI tech, though they would probably benefit from enough of an improved understanding of the brain to simplify implanting procedures (and in the case of the vision system, avoid causing tonic-clonic seizures, which last I heard it sometimes still does).
