Having been interested in science fiction (specifically cyberpunk) and technology since the mid-to-late '80s, I just recently stumbled across a phenomenon called the "technological singularity," which is currently being discussed both in sci-fi literature and by world-renowned futurists, scientists, and thinkers. In short: the ongoing exponential (not linear!) growth of technology, described for example by the law of accelerating returns, is expected to lead to a point at which new technology is invented, released, applied, and used faster than we are able to follow or, at least currently, comprehend. Now, the mind-boggling and maybe scary part about this so-called technological singularity is that it might not be as far off in the future as you think - it could arrive as early as 2050.

A little bit of background info:

Wikipedia definition of the term: http://en.wikipedia.org/wiki/Technological_singularity
The Law of Accelerating Returns: http://www.kurzweilai.net/articles/art0134.html?printable=1
Video: Ray Kurzweil presentation - http://www.youtube.com/watch?v=43zo82W7aPI (Ray Kurzweil talks about the technological singularity at the Google headquarters in Mountain View, CA.)
Video: Interview with Vernor Vinge - http://singinst.org/media/singularitysummit2008/vernorvinge-bobpisani (Vernor Vinge introduces the concept of the singularity and explains why he believes it will happen before 2030.)
Video: Ray Kurzweil and John Horgan debate - http://singinst.org/media/singularitysummit2008/raykurzweiljohnhorgan (Ray Kurzweil and John Horgan debate whether a singularity is near or far.)
Singularity Summit - http://www.singularitysummit.com/ (A gathering of thinkers exploring the rising impact of science and technology on society.)
Singularity Hub - http://singularityhub.com/ (A blog about current news in the fields of nanotechnology, robotics, genetics, AGI, etc.)
"The Singularity Is Near" by Ray Kurzweil (2005 New York Times bestseller, http://singularity.com/) - a book describing the phenomenon of a technological singularity and what could lead to it.
"Accelerando" by Charles Stross (2005 science fiction novel, http://en.wikipedia.org/wiki/Accelerando_(novel)) - a collection of futuristic short stories describing the events before, during, and after a technological singularity.
"Rainbows End" by Vernor Vinge (2006 science fiction novel, http://en.wikipedia.org/wiki/Rainbows_End) - a futuristic sci-fi novel taking place in 2025, just a few years before a potential technological singularity.

=> Do you think something like this could really happen?
=> Had you ever heard of it before?
=> What are your thoughts about this (potential) phenomenon?
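To make the "exponential, not linear" point concrete, here is a tiny Python sketch comparing the two growth modes. The two-year doubling period and the starting values are arbitrary assumptions for illustration, not figures from Kurzweil's essay.

```python
# Toy comparison of linear vs exponential growth. The doubling
# period (2 years) and starting capability (1.0) are assumptions
# for the sketch, not data from the law of accelerating returns.

def linear_growth(start, step, years):
    """Capability that adds a fixed amount per year."""
    return start + step * years

def exponential_growth(start, doubling_years, years):
    """Capability that doubles every `doubling_years` years."""
    return start * 2 ** (years / doubling_years)

for years in (10, 20, 40):
    lin = linear_growth(1.0, 1.0, years)
    exp = exponential_growth(1.0, 2.0, years)
    print(f"after {years:2d} years: linear {lin:>6.0f}x, exponential {exp:>10.0f}x")
```

After 40 years the linear curve has gained roughly 40x while the doubling curve has gained over a million-fold - which is the basic intuition behind the singularity argument.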
Back in the '60s they said that by the year 2000 we would be living on the moon and flying around in flying cars. Has that happened? Well, apart from two or three people with flying cars... no, it hasn't. We are almost at 2010, and we don't even know how to stop climate change or how to move away from our use of fossil fuels. If anything, we are going to go back in time... not forward. The advance of technology is going to be used solely for environmental purposes and trying to save the planet. We still haven't figured out how the Egyptians built the pyramids - and look at the technological advancements we have made. Technology won't ever overtake us, and you know why? Because a human mind created it. And where there is a mind to create, there is a mind to equal or improve on it. Let's not forget, technology is a human creation. Its very movements and orders are dictated by us.
Oh god, I just had to write a 1300-word essay about John Horgan for one of my CS classes. I hate that guy. I personally don't believe in it. If some computer became self-learning, you could just pull the plug if you wanted to. Not much of a singularity then, is it? Unless the machine learns how to use air molecules as its source of energy >_>
-> Yes, it could happen, but the people predicting it with graphs like that one are full of it. Exponential returns are a side effect of our current technologies, not a law inherent to all of science. It's very likely that we will shortly (probably within twenty years) enter a period where computational power per unit of area stagnates. Even if that occurs, interdisciplinary integration and ubiquitous computing will still make significant gains during that time. Biology especially is starting to heat up.
-> I hear about it a few times per month.
-> I consider the singularity to be a form of religion. I'm glad folks have found hope in something, but I see this thing becoming a business/academic cult in the not-so-distant future if it keeps raking in the dough.
Today, AI assists processor designers in building new CPUs and planning out circuit diagrams to conform with certain laws of microwave amplification. I would not be surprised if technology accelerates to the point where humans are no longer able to comprehend the building of such devices and other computers play a much larger role in their construction. But I think people get the wrong idea about artificial intelligence: it's the artificial nature that deserves real consideration. It's only as intelligent as we make it or deem it to be, and the scope of its intelligence is also defined by rules, whereas there are no rules that govern human thought (at least not strict ones). If you make an intelligence whose sole purpose is to plan and map out processor architectures, then I really wouldn't consider it a threat to human life as we know it, since processor design is all it will and can ever do; any extra functionality would limit its speed and be of great concern to the people responsible for optimising the AI itself.
No, I believe it's impossible. The way computers are made now, they can only be as smart as the person programming them, and likely not even that smart. They are fundamentally different, and as such there is no possible way for them to surpass us or even match us.
Yes, the singularity is going to happen. We already broke Moore's law this year - supposedly the transistor count is now tripling every... I don't really remember the time period, but I know that Moore acknowledged that hardware manufacturers are speeding up the growth in transistor count, so he had to revise his law or something. Also, we underestimate the future of computers, and this is the perfect example to prove it. So Skynet is imminent.
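For reference, Moore's law is usually stated as a doubling of transistor count roughly every two years, not a tripling. A quick back-of-the-envelope Python sketch (the 1971 Intel 4004 starting point of ~2,300 transistors is a well-known data point; the strict two-year doubling is an idealization):

```python
# Idealized Moore's law: transistor count doubles every ~2 years,
# starting from the Intel 4004 (~2,300 transistors, 1971).

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Extrapolated transistor count for a flagship chip in `year`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

print(round(transistors(2009)))  # ~1.2 billion
```

That extrapolation lands around 1.2 billion transistors for a 2009 chip, which is in the right ballpark for high-end processors of the time - doubling alone is already staggering without any tripling.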
Nope, it will never ever happen. Think about it: all computers are is transistors, silicon, metal and plastic. Without power they will not turn on, and without HUMANS to write code, a computer isn't even operational. In the end, all computers are is transistors, silicon, metal and plastic.
Kinda like humans, huh? In theory it should be possible to build a brain out of transistors - the brain is merely a mess of different logic gates and wires that happens to work, in my opinion.
I think that people will never find out what gives that spark of life needed to really create something human-like or superior, unless they make a hybrid of some sort.
Raw computational power might be increasing exponentially, but AI - a computer that thinks, learns and grows - is going nowhere, IMO. I think it's all very laughable; a dog can do better tricks. Expert systems are a different thing entirely, but they will never be better than the expertise of the designers and experts providing the input. Creativity will always have a degree of transcendence over the thing being created.
It's inevitable... but it's probably going to take a long time. Most likely, in my view, computers will not look anything close to what they are today. Maybe they will be organic, or maybe sub-atomic, or maybe something we can't imagine right now. But at some point humans will put something together that learns on its own, develops a personality, and so on... It's just a matter of time.
Already happened to me... I believe it's called getting older. Humanity will probably have reduced itself to Stone Age levels or wiped itself out completely before this can happen, though. Cheery enough for a Wednesday? ;-)
Actually, a scientist managed to construct a tiny brain from rat cells and then connected it to a flight simulator. After some crashes, it managed to fly straight. As for the "humans are needed for apps/code" argument: right now we already have applications that compile code on the fly depending on parameters, so the beginnings at least are already here. A simulated brain is close, and the material used could be silicon or perhaps flesh.
Those were just unfounded dreams, not based on any kind of underlying facts. On the other hand, if we wanted to, we could already be flying around in cars, as some home inventors have proven with a few crude examples. There's just no current need for such a thing and therefore no resulting pressure to create it.

Current computers are already a lot faster than us at most of the things we built them for. So it is definitely possible to create something faster than the human mind, if that's what you're aiming at. Narrow AI is already being used in various fields: controlling bank transfers, checking for fraud, Bayesian spam filters, genetic algorithms, autonomous missile target detection, flying airplanes, etc.

Another example is Deep Fritz. In 1997, Deep Blue was able to beat the then-reigning world champion Garry Kasparov in chess. This was possible through sheer brute force: calculating hundreds of millions of positions per second in so-called move-countermove trees, while Kasparov (when asked) said he only considers about one move per second. Until 1996, Kasparov was able to beat Deep Blue because of the great pattern recognition capability of the human brain, which computers lacked. Deep Fritz is a software advancement that takes advantage of this kind of pattern recognition: it has only about 1.3% of the computing power of Deep Blue, but its software uses a pruning algorithm - yet it's able to achieve the same results as Deep Blue.

It doesn't hold for all of science, but it does hold for technology, which is spreading into more and more scientific and economic fields as they adopt it. It actually holds true in quite a few fields without them being subject to any tabletop physics experiment rules.
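The pruning idea can be illustrated with a generic minimax search using alpha-beta pruning. This is a toy sketch over a hand-built game tree, not Deep Fritz's actual search or evaluation code:

```python
# Generic minimax with alpha-beta pruning over a toy game tree:
# nested lists are internal nodes, numbers are leaf scores.
# This sketches the kind of pruning chess engines use; it is NOT
# Deep Fritz's actual search or evaluation code.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):
        return node                        # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                  # cutoff: we avoid this line
            break
    return value

# The maximizer picks between two min-nodes: min(3, 5) = 3 vs min(6, 9) = 6.
print(alphabeta([[3, 5], [6, 9]]))  # prints 6
```

The cutoffs let the search skip branches that provably cannot affect the result, which is why a program with a fraction of the raw speed can match a brute-force one.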
I think this is somehow fascinating:

http://img213.imageshack.us/img213/7695/paradigmshiftsfrr15even.jpg
http://img263.imageshack.us/img263/7941/internetbackbonebandwid.jpg
http://img263.imageshack.us/img263/7571/magneticdatastorage.jpg
http://img171.imageshack.us/img171/2986/randomaccessmemory.jpg
http://img410.imageshack.us/img410/543/exponentialgrowthkurzwe.gif

Intel has already predicted that Moore's Law is going to hold true until 2022, based on their current research and proofs of concept. Once we encounter a limitation in a technology (like the shrinking of transistors on an integrated circuit), we will use that technology to move on to the next paradigm. Most likely this next paradigm is going to be three-dimensional molecular computing. Again, Intel has already said that they have working prototypes in their labs and that these will most likely be introduced to the market before 2022.

What makes you think it's a religion?

What you're talking about is narrow AI. Currently, scientists are working on building AGI - artificial general intelligence. There are multiple areas in which we are (as in your example) using AI to support and help us today. A lot of these advances in narrow AI are based on models created by reverse engineering areas of our own brain and figuring out how they work. What speaks against combining those models into one? The amazing thing is that once something gets mastered by artificial intelligence, it often gets dismissed as not being AI at all.

I'm both fascinated and scared about what the future might hold for us, because I don't think we can stop this kind of progress - our own economic laws of competition and survival of the fittest clearly point in this direction, imo. (Your argument that AI is only as smart/good as we create it to be is not true, afaik. Genetic algorithms would be one example.)
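To illustrate the genetic-algorithms point - solutions emerging from selection and mutation rather than from explicit design - here is a minimal toy GA in Python. The "OneMax" problem (evolve a bit string toward all 1s) and all parameters here are illustrative assumptions:

```python
import random

# Minimal genetic algorithm for "OneMax": evolve a bit string
# toward all 1s. A toy sketch of solutions emerging from
# selection + mutation rather than being explicitly coded.

random.seed(0)  # deterministic run for the example

def fitness(genome):
    """Count of 1-bits: the (toy) quality measure we select for."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(length=20, pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break                          # perfect genome found
        parents = pop[: pop_size // 2]     # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - 1)]
        pop = [pop[0]] + children          # keep the best (elitism)
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of 20 bits set")
```

Nobody writes down the winning bit string; it emerges from selection pressure - a very small-scale version of the claim that a system can end up better at a task than anything explicitly coded into it.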
Asimov already solved it - all we have to do is make sure the Three Laws are in place (using "robots" and "AI" interchangeably):

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Problem solved, lol... Really though, science fiction (and other creative thinkers) have thought up a lot of different outcomes in regard to AI, robots, etc. My guess, though, is that what will happen in reality will be mostly different, yet similar.
I think that, given the nature of human ingenuity, we are the singularity - albeit a dynamic one. We are the source of technology and its governing factor as well. This is a great topic!!
Cybermancer, I think you are forgetting the difference between a human brain and a CPU. A CPU is exponentially faster at doing math. However, when it comes to critical thinking and pattern recognition, the human brain is far superior. Sure, a computer might crack some of the easier patterns extremely fast, but it can't solve the most complex ones. Humans are needed. Computers will have to change in the way they function to ever truly surpass us.