Is a technological singularity ever going to happen?

Discussion in 'The Guru's Pub' started by Cybermancer, Nov 4, 2009.

  1. Denial

    Denial Ancient Guru

    Messages:
    12,656
    Likes Received:
    1,878
    GPU:
    EVGA 1080Ti
    Probably. Eventually they'll move on to different computing forms like organic or quantum in which case it's definitely going to speed up by quite a bit. People like to say a human brain will never be surpassed, but what if scientists create a human brain? It's definitely not outside the realms of reality anymore.
     
  2. Leafblower

    Leafblower Ancient Guru

    Messages:
    1,782
    Likes Received:
    0
    GPU:
    MSI GTX 760 SLI
    I don't think it's possible unless they learn feelings.
     
  3. Risco

    Risco Master Guru

    Messages:
    485
    Likes Received:
    0
    GPU:
    GTX 650M

There is a replacement for fossil fuel out there, but it won't be released, as the global economy relies so heavily on oil that it would collapse without it.
     
  4. Denial

    Denial Ancient Guru

    Messages:
    12,656
    Likes Received:
    1,878
    GPU:
    EVGA 1080Ti
It won't be released because there is no good way to store energy. That's the bottom line; there is no conspiracy or anything. Batteries suck at what they do. They need to improve, and until they do, fossil fuels are here to stay.
     

  5. IcE

    IcE Don Snow Staff Member

    Messages:
    10,693
    Likes Received:
    73
    GPU:
    Zotac GTX 1070 Mini
Obviously we know how to stop climate change: stop burning fossil fuels. However, it's not that simple. You can't just magically fix things without consequences. The world economy comes first, before anything else.
     
  6. nvlddmkm

    nvlddmkm Banned

    Messages:
    4,188
    Likes Received:
    0
    GPU:
    EVGA GTX285
    They can create a human brain. The same way we all can. I once heard during a lecture on Artificial Intelligence, a professor said, "If you want to create an intelligence, find a girl, fall in love, and have a child."
     
  7. Denial

    Denial Ancient Guru

    Messages:
    12,656
    Likes Received:
    1,878
    GPU:
    EVGA 1080Ti
I meant a programmable organism. I mean, if you can engineer an ultra-intelligent biological entity, what do you need a computer for? Just pass calculations through the entity and get answers.

They're going to have to do something soon anyway; we're getting awfully close to 9-11 nm, where you run into quantum phenomena that break our entire model of computing.
     
  8. nvlddmkm

    nvlddmkm Banned

    Messages:
    4,188
    Likes Received:
    0
    GPU:
    EVGA GTX285
    Maybe those quantum effects can be used in a new architecture? Then we roll on with technological advancement in a new light. Or would it be a particle? ;)
     
  9. KrAzYeTy

    KrAzYeTy Master Guru

    Messages:
    428
    Likes Received:
    0
    GPU:
    MSI 570GTX SLI
Just a quick thought, but since most technological advances are based on potential profit or gain, I don't see companies releasing new tech before the old has even gotten a foothold in society. If money and economics were not in the picture, then I could see this as a reality. ex: Star Trek :D
     
  10. Corrupt^

    Corrupt^ Ancient Guru

    Messages:
    6,913
    Likes Received:
    279
    GPU:
    ASUS 1080GTX STRIX
    By that time... not that unlikely tbh :p
     

  11. nvlddmkm

    nvlddmkm Banned

    Messages:
    4,188
    Likes Received:
    0
    GPU:
    EVGA GTX285
It may seem this way, but necessity is the mother of invention. It does seem that modern society and its lustful greed (makes me f-ing sick) have turned everything into a profitable venture. But still, at the root of it all is human ingenuity in the face of overcoming some ostensible limit, technologically speaking.
     
  12. Glidefan

    Glidefan Don Booze Staff Member

    Messages:
    12,505
    Likes Received:
    46
    GPU:
    GTX 1070 | 8600M GS
    Has anyone mentioned cyborgs yet? Or augmented humans?
Where we will be able to get data directly from sensory organs other than our own.
(though I'm guessing that Peltier cooling will be needed for the brain :p)
     
  13. nvlddmkm

    nvlddmkm Banned

    Messages:
    4,188
    Likes Received:
    0
    GPU:
    EVGA GTX285
    Perhaps a "female" can be made that never gets sore and comes with a mute button?
     
  14. Chouji

    Chouji Ancient Guru

    Messages:
    5,636
    Likes Received:
    2
    GPU:
    19 inch flat screen
    Ghost in the shell guys... Ghost in the shell.

    Technology will never become superior to humans, simply because we will incorporate technology into ourselves. There will simply be technologically infused superior humans.

Machines can do some things better than humans, and vice versa.
The superior will be the one that can do both.
Machines can never lie or have emotions. Only programs which calculate and give an emotion-like expression. You can program a robot with self-preservation, but not fear.
Fear is illogical; computers can only do logical.

    An AI will never surpass a technologically infused human. (With an exception at the bottom)
    Humans strive to increase technology, make computers faster, larger storage capacities.
    But a computer-ai will never have that ambition, at least not naturally.

The difference is time. A computer doesn't care if it takes 1 minute or 1 week to calculate a new formula. Humans, on the other hand, are impatient and demand more work to be done in less time, which appeals to our mortality. Computers are immortal, but only because they have no awareness. And a computer that knows it is turned on doesn't count.
The point to this: a computer would see no reason to upgrade itself as long as it can get the job done; it won't matter how long it takes.

    Humans will always be able to understand the design of computer devices.
    Same reason as stated before, we use computers to assist in creating better computers, well once we incorporate that technology into ourselves, we will have the ability to "keep up."

Ghost in the Shell. ~_^


The X series games from Egosoft have a particular race inside the game: the Xenon, an AGI, Artificial General Intelligence.
An intelligence that learns on its own, with the ability to be aware of its surroundings and to make the best use of them.
It does not absorb the technology of the other in-game races, but will rather only upgrade itself to meet a particular goal.
If it needs to mine an asteroid but currently has no means to do so, it will develop the tools required to meet that goal.
This is the only realistic worst possible outcome for AI:
one that can set its own goals and calculate the means to meet them. If by some glitch its goal is to destroy humanity, or all life, it will evolve and expand to meet that goal.


Long story short: we can develop AI that can make better computers or technology, and we can make AI that can find and solve its own problems.
We just better be damn sure not to mix them.
If a computer has the ability and desire to surpass humanity (if we give it that ability), it will surpass us, and we will never be able to match it.
    The machine will never be superior to humans as a species, but that doesn't mean it won't be able to destroy us.
     
    Last edited: Nov 5, 2009
  15. acon

    acon Member Guru

    Messages:
    112
    Likes Received:
    0
    GPU:
    eVga/8800gts/512

you do know that's fake, right?

    http://www.snopes.com/inboxer/hoaxes/computer.asp
     

  16. Cybermancer

    Cybermancer Don Quixote

    Messages:
    13,795
    Likes Received:
    0
    GPU:
    BFG GTX260OC (192 SP)
    I don't think I am forgetting anything. I just wasn't asked this (indirect) question before. Obviously I can't write a single post covering every aspect of this vast field either. :nerd:

Actually, I think I already addressed most points in my post above, but to extend on that a bit, I'll try to explain what I read about, as far as I can remember it correctly. If I understood you correctly, you're basically saying that with current CPUs we're not able to create or simulate a software-based AGI? Because we humans are better at pattern recognition, for example?

The CPU is only one potential substrate that a software AGI might run on. I agree with you that current CPUs probably aren't sufficient to meet the criterion of being as massively parallel as the human brain is, for example, but it's pretty obvious that we are already starting to create more parallel CPUs. There have been remarkable advances in creating better, narrow AIs, but quite a few (iirc) of those lead back to a better understanding of how our own brain works. This also seems to be a key strategy to create AGI: "reverse-engineering" the human brain. We now know that the human brain consists of several hundred different regions performing specific tasks. About twenty of those areas are already understood well enough to create functioning and sound models of them. These models can be recreated as software and run on a computer. This is not some kind of science fiction but real, hard, well-funded science.

Until a few years ago (potentially decades, iirc) medicine was not subject to changes in technology. Now that we use technology to understand (and diagnose) the human body, specifically the human brain in this case, it's subject to technology's progress. MRI and other computer-aided scanning methods of the human brain are steadily improving, and the data collected through them is growing exponentially. As scanning methods improve and the bandwidth of this information collection grows, so does our understanding of the different regions: how they function, how they correlate, etc. With this information we can create models like this one: http://img199.imageshack.us/img199/6828/chart23.gif
Eventually we will have created models of all the different regions. When this point is going to be reached nobody can say for sure, but we can certainly make (thoughtful) predictions about it, and currently they point to a timeframe around 2030-2035.

You might argue that the human brain is way too complex for us to ever understand how it works, but this is not really the point, as far as I understood. No human brain equals another. They are all wired differently, but similarly enough to function the same. The brain is derived and built according to the information contained in the human genome, which consists of roughly eight hundred million bytes of information. Removing its (massive) inherent redundancy, estimates are that about 12 megabytes of that information are used to build the human brain out of the genome. This is possible because the DNA doesn't specify where each and every cell has to be in the brain but only describes a rough (wiring) pattern of the different regions. Therefore abstract models providing the same input/output as a region are going to simulate it very well, as existing ones already prove.
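The eight-hundred-million-bytes figure is easy to sanity-check yourself. As a back-of-the-envelope sketch (assuming roughly 3.2 billion base pairs, each one of four bases and therefore 2 bits of raw information; the 12 MB compressed figure is a separate published estimate and can't be derived this simply):

```python
# Back-of-the-envelope: raw information content of the human genome.
# Assumption: ~3.2 billion base pairs; each base is one of 4 (A, C, G, T),
# so each base pair carries 2 bits of raw (uncompressed) information.
base_pairs = 3.2e9
bits = base_pairs * 2      # 2 bits per base pair
raw_bytes = bits / 8       # 8 bits per byte
print(f"~{raw_bytes / 1e6:.0f} million bytes")  # ~800 million bytes
```

That lines up with the "roughly eight hundred million bytes" above; the drop to ~12 MB after removing redundancy is what makes the "rough wiring pattern" interpretation plausible.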
That is an excellent point, Chouji. Of course we're going to transcend biology and enhance ourselves. Artificial hearts, limbs, and lesser-known but nevertheless working things like computer chips bridging damaged areas of the brain and countering a very specific form of Alzheimer's disease are already a reality. An artificial pancreas is being worked on, too, for example. (Again: although we don't know what each and every cell in the pancreas does, we do know how it works in general and have a working model of it, helping us to create a system mimicking the insulin delivery of a normal pancreas: http://wwwp.medtronic.com/Newsroom/NewsReleaseDetails.do?itemId=1244301588079&lang=en_US)
This really is a philosophical question, though, Chouji. How do you define consciousness, and how do you objectively prove that I am conscious and not simply a piece of software posting in this forum, for example? You might be able to give a subjective answer and tell me what you think about me being conscious or not, but there is, at least currently, no objective test for consciousness. Once we have created an AGI, it will be able to convince you through argument that it is conscious. Just like I am able to.

You can believe me: I'm conscious, Chouji. You probably just thought, of course he is, I've seen him post in this forum for 2 years now and I'm fairly certain that he's a real human and not some advanced chatterbot only mimicking a human. Prove it, though. The answer is that you simply can't. Even the photos I posted in the faces of Guru3D thread could be fake. How do you know they're not? What if in 20 years a piece of software behaves and posts exactly like I am right now? How will you know that it's not conscious? The same goes, of course, for emotions...

    Btw: my real name is Shodan. :nerd:
     
  17. Wicky

    Wicky Maha Guru

    Messages:
    1,092
    Likes Received:
    0
    GPU:
    Sapphire Radeon 5850
The Three Laws come shipped with a built-in fundamental flaw, as shown in the movie "I, Robot". Sooner or later, robots will find out that a main occupation of humans is harming each other: waging wars, destroying the nature our health depends on, working dangerous jobs, or even injecting a lethal poison into criminals who are sentenced to death. Such a robot would try to rescue the criminal from the electric chair, or better yet take control over humans in order to prevent us from destroying nature. An AI with those 3 laws would only have one goal: to take over the world and put us all in a padded cell with 3 warm meals daily.
     
  18. nvlddmkm

    nvlddmkm Banned

    Messages:
    4,188
    Likes Received:
    0
    GPU:
    EVGA GTX285
Too many coalescing ideas intertwined within interlacing contexts and meanings gave it away. Plus, the self-questioning indicated a self-awareness that cannot be re-created by a non-sentient being.

    Was that a good answer, Roberto? :D
     
  19. Cybermancer

    Cybermancer Don Quixote

    Messages:
    13,795
    Likes Received:
    0
    GPU:
    BFG GTX260OC (192 SP)
    Theoretically yes, practically no. :nerd:

Who says that a piece of software can't argue like that? Who says that it won't be sentient? Who says that it can't question itself? Indicating isn't proving, either.
     
  20. nutyo

    nutyo Ancient Guru

    Messages:
    4,589
    Likes Received:
    0
    GPU:
    Sapphire Vapor-X HD5870
The rate of technological advancement will never outpace our ability to comprehend it until we take ourselves completely out of the creation process. Once we have computers that invent their own successors, we may hit a technological singularity. But I doubt we would ever take ourselves completely out of the process, so we will be the limiting factor.

It isn't really a flaw. As an extension to Asimov's Three Laws, the Zeroth Law was created: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. We may not like it, but it'll be good for us. :p
     
