If you’ve read The Singularity Is Near, you probably remember that perhaps the most essential premise Ray Kurzweil relies on to claim that we will experience a singularity is that brain scans will become non-invasive and fine-grained enough to allow us to reverse engineer a mind, even if at first we can only do so by directly copying all of the circuitry.
Well, check this out.
HA HA HA HA!!! If you copy all the circuitry of a computer, will you have the slightest clue about the programs you might install on it? Or about how to program it?
NO WAY, BRO!
You’re right, and I missed an important point: it’s also about being able to run the brain in debug mode, actively scanning which neurons are transmitting information, and in which directions, while the brain is still running. And that’s exactly how programmers figure out complex code.
There is another questionable assumption about the brain: that it can be simulated at all. We build only digital computers, having given up on analog ones long ago, while the brain is in fact an analog computer. A very complex analog computer cannot be understood unless we start from simple ones, and those do not exist.
Except that we’ve already proven we can completely simulate analog neurological systems, such as the brains of insects, lobsters, and slugs, with digital technology, so the analog/digital divide isn’t an issue. It just raises the number of computational components a bit (as in, it takes several digital transistors to simulate a single analog neuron).
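To make that concrete, here is a minimal Python sketch of what simulating an analog neuron with digital hardware amounts to: the continuous membrane voltage is stepped forward in small discrete time increments using a leaky integrate-and-fire model. The model and every parameter value below are illustrative assumptions, not figures from any of the insect or lobster work mentioned above.

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Membrane voltage trace and spike times (ms, mV) for an input current trace."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of the analog dynamics dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:              # threshold crossing: the neuron "fires"
            spike_times.append(step * dt)
            v = v_reset                # and its voltage is reset
        voltages.append(v)
    return np.array(voltages), spike_times

# 100 ms of constant input current (made-up units); the neuron spikes repeatedly.
volts, spikes = simulate_lif(np.full(1000, 20.0))
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```

The digital machine only ever sees discrete numbers and discrete time steps, which is exactly the "several transistors per neuron" overhead being described.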
I’m afraid the analog/digital divide might still be an issue in the case of the human brain. The first problem is that an analog signal is far more complex in its significance (only recently have we succeeded in processing the human voice, and even then not entirely). The sampling rate and the sheer multitude of signals that need to be processed are absolutely staggering.
We have not yet discovered the laws governing this complexity. Did you know that not even the memory problem is solved? Some neurologists believe that part of human memory might reside in signals stored in the brain by resonance.
And even after a total simulation of the brain’s hardware, the problem of debug mode will still exist.
Ah, when you mention voice you’re starting to head into an area of computer science that I’ve actually done quite a bit of work in. I’m afraid you’re misunderstanding something here. Sure, we don’t understand everything key about the human voice and still have trouble with it (as even the human brain only does a probabilistic reconstruction of sound), but we’ve been able to perfectly record and transmit analog audio digitally for quite some time. We only need to achieve that level of copying to cross the analog-to-digital divide, which is why it’s not an issue.
You are right, but the key-signal conversion problem is inescapable. The problem is this: in an analog-to-digital conversion, you can miss a key signal entirely if its wavelength is shorter than the sampling interval can resolve, that is, if its frequency is above the Nyquist limit of half the sampling rate.
That’s true, but sampling can be done at arbitrarily high resolutions; it just requires more effort. My point is that while simulating an analog system to achieve AI may require incredible amounts of effort just to maintain the required fidelity, the analog nature of biological systems poses no fundamental problem that we don’t know how to solve, regardless of the other aspects of biological systems we still have to overcome.
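Here’s a toy Python illustration of the sampling point we’re going back and forth about (the frequencies and rates are arbitrary example values, nothing neurological): a component above half the sampling rate doesn’t just come through blurry, it aliases into a different frequency entirely, while sampling fast enough recovers it exactly.

```python
import numpy as np

def dominant_freq(signal_freq, sample_rate, duration=2.0):
    """Sample a pure sine wave and return the strongest frequency the samples show."""
    t = np.arange(0, duration, 1.0 / sample_rate)
    samples = np.sin(2 * np.pi * signal_freq * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

true_freq = 7.0  # Hz, the "key signal" we care about

# Sampled at 100 Hz (Nyquist limit 50 Hz), the 7 Hz component comes through intact.
print(dominant_freq(true_freq, sample_rate=100.0))  # -> 7.0
# Sampled at 10 Hz (Nyquist limit 5 Hz), it aliases and shows up as 3 Hz instead.
print(dominant_freq(true_freq, sample_rate=10.0))   # -> 3.0
```

The fix is exactly the one described above: sample faster than twice the highest frequency you care about, which costs effort but nothing fundamental.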
Whether or not the analog aspect of the brain ends up raising fundamental problems, there is another fundamental aspect of the brain that is very different from our computers, and that is parallelism. Parallel computers are very different from what we program now. Parallel + analog might be an awesome combination to analyze.
Yes, I certainly agree that parallelism is one of the biggest fundamental challenges we have to make more headway on. Fortunately, the theoretical folks are quite a bit further ahead on the problem than the rest of the field, because in general, we suck at doing things in parallel on computers.
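To give a rough sense of what the parallel part looks like in simulation (a sketch only, with the population size and the leak/input model invented for illustration), here a whole population of toy neurons is advanced in single data-parallel steps instead of being looped over one at a time, which is the shape of computation a brain simulator would need to spread across many cores or machines.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1_000_000
dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, -65.0, -50.0, -70.0

voltages = np.full(n_neurons, v_rest)          # the state of every neuron at once
inputs = rng.normal(20.0, 5.0, n_neurons)      # arbitrary per-neuron input current

total_spikes = 0
for _ in range(200):                           # 200 steps of 0.1 ms = 20 ms
    # One time step for ALL neurons in a single data-parallel operation; the
    # per-neuron loop is implicit and could be spread across cores or a GPU.
    voltages += dt * (-(voltages - v_rest) + inputs) / tau
    fired = voltages >= v_thresh               # which neurons crossed threshold
    total_spikes += int(fired.sum())
    voltages[fired] = v_reset                  # reset the ones that spiked

print(f"{total_spikes} spikes from {n_neurons} neurons in 20 ms of simulated time")
```

The catch, and the reason parallelism stays hard, is that once these neurons have to exchange spikes with each other at every step, the updates stop being this embarrassingly parallel and the usual coordination headaches of parallel programming show up.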