The Future of Computer Technology

Discussion in 'Hardware' started by aphexcoil, Dec 21, 2002.

  1. Sounds like stuff from Ken MacLeod's "Fall Revolution" sci-fi series, which, among other things, describes a future struggle between human beings and "post-humans" who have uploaded themselves into virtual forms but undergo a catastrophic event also referred to as "the Singularity" - a sort of mental black hole of thinking so fast and so advanced that it collapses into itself. Not my favorite sci-fi - though MacLeod's writing has improved tremendously in recent years, in my opinion - but it still has some interesting ideas.

    Greg Egan's work, especially DIASPORA, offers a more intellectually intensive treatment of similar ideas (and many others): His work sometimes reads more like intellectual experimentation of the FLATLAND variety, but manages to make you feel like you understand quantum mechanics, string theory, and so on about as well as you're going to without devoting your life to the subjects.

    METAPLANETARY by Tony Daniel was a lot more fun for me - one of my favorites of the last few years: Daniel proposes a far future in which civilization is pervaded by what he calls "grist," a nano-technological quantum computing medium/material that can be utilized for a wide range of purposes - from embodying useful objects to offering faster-than-light computation and communication. The denizens of this civilization range from recognizably "normal," if augmented, organic human individuals, to beings who exist as Large Arrays of Personalities (LAPs) living multiple lives at once, to others who exist as massive space ships mainly residing in the far reaches of the solar system, and to still others who exist as virtual entities. The central conflict concerns a depraved LAP whose efforts to subsume the entire civilization under his own dictatorial consciousness involve a genocidal war against the virtuals.

    Peter Watts is a relatively new author whose nihilistic/apocalyptic near-future scenarios come with extensive notes describing their sources (including Penrose/Hameroff). His two books STARFISH and MAELSTROM are also among my favorites. He's especially concerned with ecological matters, but the future internet, besieged by quasi-sentient self-propagating programs and patrolled by artificial intelligences developed in cultured brain matter, plays a central role in the stories.

    One of John Barnes' favorite ideas has to do with the breaching of the borderline between human consciousness and machine consciousness. In MOTHER OF STORMS - a world-spanning disaster novel - he describes individuals whose brains are infected and improved by omni-valent optimizing programs. Eventually, these individuals find their organic bodies a hindrance to their growth in virtual space, and leave them behind. The next-generation internet is also critical to the story. In another series he's been developing, self-replicating emergently sentient ideological "memes" increasingly infect, occupy, augment, and effectively take over human brains, radically altering the course of future history.

    Bruce Sterling, who's well-known to cyberpunk fans, proposes in DISTRACTION that the next technological revolution will be in the connection between biotechnology and cognitive science - though the book is at least as much concerned with a hilarious satire of American politics and culture, as seen ca. 2048, some time after a lost economic war with the Chinese (America having been devastated overnight by the sudden termination of intellectual property rights and the placement of all software in the public domain).
     
    #21     Dec 28, 2002

  2. I believe you are talking about their research on tubules within the brain. Neurons alone shouldn't be able to account for the vast abilities of the human mind.

    Also, there is still that problem of "consciousness" that will never be understood. How does consciousness come about within the brain? Why do I experience colors and sounds when, underneath it all, it is merely the firing of neurons within my head?
     
    #22     Dec 28, 2002
  3. Though Penrose, Hameroff, et al., have spent a lot of time examining the possibility that microtubules within neurons may play a critical role in cognition, through some theoretical quantum mechanical and/or cellular-automaton process (a la Wolfram), this material remains controversial and speculative. (Fib traders may consider the deal sealed, however, when they learn that said microtubules are arranged in the familiar sunflower/golden ratio pattern...) Penrose's arguments against simplistic quantitative descriptions of "brain power" stand on their own.

    Try

    http://www.consciousness.arizona.edu/hameroff/

    and

    http://www.amazon.com/exec/obidos/A...1100897/sr=2-3/ref=sr_2_3/102-6651467-0963309

    The nature of perception, or at least its initial mechanics, is better understood - how the optic nerves assemble "pictures" of the world, etc. - though how precisely the eventual images are received, stored, altered, and retrieved within our minds is obviously another question (or set of questions).

    We began a discussion on these topics on some Darkhorse-inspired thread a few months ago. There's much more to be said, but it would tend to stray far outside the bounds of a "hardware" thread. (This would be an excellent chit-chat thread, though.) I'm now arguably three times guilty of going far off-topic, so I'll refrain from going any further here.
     
    #23     Dec 28, 2002
  4. Although I agree that we do not yet know how the brain works, we need not resort to mystical or quantum mechanical mechanisms to explain intelligence and consciousness. People like Penrose fall into the trap of assuming that because the functioning of the brain is highly complex, the component neurons must be complex (akin to arguing that because a Pentium processor has complex behavior, its component transistors must be complex).

    Yet if you read research into basic neurobiology and the functioning of the nervous systems of simpler organisms, you find that it's all reasonably explicable. Moreover, one can successfully simulate the functioning of these simpler neural systems on a computer with an artificial neural network (e.g., learn to recognize patterns, self-organize for coordinated walking/running, etc.). This raises the question of the actual neurological differences between "lower" forms of life and "higher" forms of life.
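
    To make the "simulate on a computer" point concrete, here is a minimal sketch in Python of a single artificial neuron (a perceptron) learning to recognize a simple pattern. Everything in it - the data set, the learning rate, the function names - is made up purely for illustration and isn't taken from any of the research mentioned above:

        # A single artificial "neuron" learning the logical-OR pattern from examples.
        import random

        def train_perceptron(samples, epochs=50, lr=0.1):
            """samples: list of (inputs, target) pairs with target in {0, 1}."""
            n_inputs = len(samples[0][0])
            weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
            bias = 0.0
            for _ in range(epochs):
                for inputs, target in samples:
                    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                    output = 1 if activation > 0 else 0
                    error = target - output
                    # Nudge each connection in the direction that reduces the error.
                    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                    bias += lr * error
            return weights, bias

        data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR pattern
        weights, bias = train_perceptron(data)
        for inputs, target in data:
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            print(inputs, "->", output, "(expected", str(target) + ")")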

    It seems more likely that the "magic" of the human brain lies in the quantity, rather than the quality, of its component neurons. This hypothesis becomes even more likely when one realizes that the traits of intelligence and consciousness are not black-or-white and are not unique to the human species. Monkeys, dogs, elephants, birds, dolphins, etc. all seem to exhibit some level of so-called human mental traits, albeit at lower levels of functionality. Thus it is far more likely that the human brain is special only for its amount of intelligence and consciousness, rather than for the fact of exhibiting intelligence and consciousness at all.

    If you accept the argument that human neurons are not qualitatively different from the neurons of a wide range of other animals, then there is the possibility of a computer with the mental horsepower of the human brain. Estimating the needed computational power is only a matter of estimating the computational power of individual neurons and the number of these neurons and their connections. I'm not saying that building an artificial brain will be easy or will occur in the near future, only that it is not as infeasible as it first appears.
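
    Just to show how such an estimate gets assembled, here is a back-of-envelope sketch. The figures are the rough orders of magnitude commonly quoted (about 10^11 neurons, on the order of 10^4 synapses per neuron, firing at up to ~100 Hz), treated as assumptions rather than measurements, with each synaptic event counted as one "operation":

        # Back-of-envelope estimate: neurons x synapses x firing rate.
        neurons = 1e11              # ~100 billion neurons (rough figure)
        synapses_per_neuron = 1e4   # ~10,000 connections each (upper-range assumption)
        peak_firing_rate_hz = 100   # peak firing rate per neuron (assumption)

        ops_per_second = neurons * synapses_per_neuron * peak_firing_rate_hz
        print(f"Estimated raw synaptic operations per second: {ops_per_second:.0e}")

    With these numbers the estimate comes out around 10^17 "operations" per second, which illustrates the method more than it settles the question.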

    Cheers,
    Traden4Alpha
     
    #24     Dec 30, 2002
  5. miniTrdr

    If you accept that argument, then if you could simply make a backup copy of the pathways and restore it to a computer/another brain, would you get the same person?


    This is reminding me of Douglas Hofstadter's book 'The Mind's I'.
     
    #25     Dec 30, 2002
  6. TGregg

    For years I've wished that we uploaded OSes to some sort of EEPROM so when we fire up a computer, the OS is already in core, and just copied to RAM, rather than all this loading from disk crap. My computer would be up by the time my monitor warmed up :D.

    And, a few days ago, I got out an old 16 MHz 386 running DOS. It booted wayyyyy faster than my Win XP on a 2.26 GHz Intel with RAID 0 and a 533 MHz bus. Sigh.
     
    #26     Dec 30, 2002
  7. I'm not convinced that neural networks and other forays into AI have successfully emulated the functioning of the brains of "lower" animals: I'll grant that there's been some limited success in imitating certain functions, but no one, to my knowledge, has yet developed a computer that independently navigates in space as well as a fruit fly. If you are aware of an accomplishment on this order, please feel free to share...

    I'm also not ready to accept the several other assumptions you're willing to make, and I'm far from convinced that at this point there's any valid basis in cognitive science for making an estimate of human cerebral computational power in the manner you describe, or, for that matter, for assessing its meaning. I will concede that it's possible that coming up with such an estimate might be as simple as quantifying connections and so on, and I wouldn't even be surprised to learn that, in terms of raw computing power (i.e., the quantity of mathematical operations performable per unit of time), the result would grossly overestimate the true capacity of the brain - especially considering the likelihood that, even in a normal, healthy brain, large masses of neurons and synapses are redundant, dysfunctional, or never used. That said, I believe the exercise would still tell us little about the brain's true operational capacities.
     
    #27     Dec 30, 2002
  8. KymerFye,

    Hmmmm.... an artificial fruit fly.... I suspect that one would need an Apple computer for that project.:D

    But you are probably right about the challenges of simulating a fly. The visual system of the fly is pretty sophisticated. It runs at a frame rate of about 400 Hz in houseflies and uses a 3-stage pipeline architecture. Flies also have a neat flight control system that includes a gyroscopic stabilization system that lets them hover far better than bees or wasps can. Very tricky. I wonder if some of the DARPA programs to build bird-sized surveillance aircraft are looking at this?
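
    For what a staged pipeline like that might look like in the abstract, here is a toy Python sketch. The stage names, the 3-pixel "frames," and the thresholds are invented placeholders for illustration, not a model of real fly neurobiology:

        # Toy 3-stage processing pipeline: sense -> detect motion -> decide.
        def photoreceptors(frames):
            for frame in frames:
                yield [level / 255.0 for level in frame]        # stage 1: normalize light levels

        def motion_detection(normalized_frames):
            previous = None
            for frame in normalized_frames:
                if previous is not None:
                    yield [abs(a - b) for a, b in zip(frame, previous)]  # stage 2: frame-to-frame change
                previous = frame

        def flight_control(motion_frames):
            for motion in motion_frames:
                yield "turn" if sum(motion) > 0.5 else "hold"   # stage 3: crude steering decision

        frames = [[10, 20, 30], [10, 20, 200], [10, 20, 200]]   # three toy 3-pixel "frames"
        for command in flight_control(motion_detection(photoreceptors(frames))):
            print(command)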

    As you point out, the computational efficiency of a digitally hosted brain might also be quite poor. Complex floating-point algorithms may be necessary for emulating the neurons but are very costly computationally. It's akin to simulating a Pentium processor on a Pentium processor -- the calculation of a single clock cycle generates some 60 MB of state data and takes forever (this is one reason why the Itanium was so late, and why Intel's approach of designing ever more complex chips is bound to fail).

    Getting back to the topic of the thread, the interesting issue is whether computers will become more intelligent and able to operate with common sense in the everyday world. This may not require emulating the human brain, any more than airplanes have to flap their wings like a bird in order to fly. What is clear is that intelligence requires learning. Whether computers can best learn with neural networks, genetic algorithms, or classifier systems is not known at this date.
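
    For anyone curious what one of those approaches even looks like, here is a toy genetic algorithm in Python. The target pattern, population size, and mutation rate are arbitrary choices for the sake of the example; the point is just the select/crossover/mutate loop:

        # Toy genetic algorithm: a population of bit strings "evolves" toward a target pattern.
        import random

        TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # arbitrary pattern to be learned

        def fitness(individual):
            return sum(1 for a, b in zip(individual, TARGET) if a == b)

        def mutate(individual, rate=0.05):
            return [bit ^ 1 if random.random() < rate else bit for bit in individual]

        def crossover(a, b):
            point = random.randrange(1, len(a))
            return a[:point] + b[point:]

        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
        for _ in range(100):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                break
            parents = population[:10]              # selection: keep the fittest third
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(len(population) - len(parents))]
            population = parents + children

        best = max(population, key=fitness)
        print("best individual:", best, "fitness:", fitness(best), "/", len(TARGET))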

    This evolution of computer intelligence will take time. Even if one creates a computer that matches the human brain in some fashion for innate ability to learn and act intelligently, it will take that computer 20 years of interaction with the real world to reach an adult level of intelligence.

    Happy New Year,
    Traden4Alpha
     
    #28     Dec 30, 2002
  9. Indubitably - and that's just in the publicly available material.


    One might argue that the evolution of an adult-level human intelligence rests not on 20 years of interaction with the real world, but on something closer to a few billion years of the same.

    Evidence and persuasive theoretical work suggest, at least to me, that, from the individual neuron all the way to thinking itself, every element and process of the human brain is imbued with and encodes the same biological imperatives that operate throughout nature.

    And, even if we could, would we really want to create a new, independent, potentially superior intelligence that necessarily possessed an insatiable will to survive and to reproduce?

    Though AI efforts may lead to interesting and useful results, I tend to think that we'll go further, and faster, by continuing to extend and augment our own capacities rather than by attempting to reproduce them in some other form. Whether, as some writers have suggested, we may someday be able to "culture" artificial intelligences from organic material (even human brain matter, as in the Watts novels mentioned above) raises other questions. In any event, I think we're already becoming cyborgs, and that it'll be a very, very long time, if ever, before pure AIs will be able to catch up to the (machine-aided) NIs.
     
    #29     Dec 30, 2002
  10. nitro

    Penrose does no such thing. He simply gives a line of reasoning (using a fascinating argument based on Gödel's Theorem) that mathematicians nearly always solve problems by essentially "jumping to the conclusion" in one swift stroke, and therefore are doing a "non-computable" type of reasoning.

    The reason he brings QM into the fray is essentially the Feynman path integral, which states that particles explore an "infinite" number of paths instantaneously before "choosing" one. If the brain "computes" using QM principles, it too may have access to an "infinitude" of answers...
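
    For reference, the standard way of writing that sum-over-paths amplitude (in LaTeX notation) is:

        \langle x_f, t_f \mid x_i, t_i \rangle = \int \mathcal{D}[x(t)] \, e^{\, i S[x(t)] / \hbar}

    where the integral runs over every path x(t) from x_i to x_f and S[x(t)] is the classical action of each path.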

    I have comments on much of the other things you said above, as well as what a bunch of others have said, but I don't have the energy to go into it...

    nitro
     
    #30     Dec 30, 2002