Our understanding of how things work increases every year. This increased understanding has led to ever-improving technologies. When improved technology increases our ability to learn, the resulting acceleration of our intelligence approaches infinity.
Humans have an upper limit on the size and speed of their brains. Not so for machines. If machines can be programmed to learn, then machines can create smarter machines. The smarter machine could then create an even smarter machine, and so on. The result eventually leads to an intelligence that could undoubtedly solve all our problems. Global warming, disease, famine, and warfare could all be cured by such an "infinite" intelligence.
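To see why this feedback loop is claimed to run away rather than just progress steadily, here is a toy numerical sketch. It is purely my own illustration; the improvement rule and the numbers in it are assumptions for demonstration, not anything measured or taken from Kurzweil's book.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# "Intelligence" is an abstract number; each generation designs the next
# with an improvement factor that grows with its own capability.

def recursive_self_improvement(generations=10, intelligence=1.0):
    history = [intelligence]
    for _ in range(generations):
        # Assumed rule: smarter designers make proportionally bigger improvements.
        improvement_factor = 1.0 + 0.1 * intelligence
        intelligence *= improvement_factor
        history.append(intelligence)
    return history

if __name__ == "__main__":
    for generation, level in enumerate(recursive_self_improvement()):
        print(f"generation {generation}: relative intelligence {level:,.1f}")
```

With a fixed improvement factor the curve would be merely exponential; letting the factor depend on current intelligence is what produces the explosive, snowballing growth the Singularity argument depends on.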
These concepts and other mind-boggling ideas were presented at the Singularity Summit at Stanford University last week. The first speaker was Ray Kurzweil, whose recent 672-page book, The Singularity Is Near: When Humans Transcend Biology, explains a concept known as the "singularity".
If you aren't familiar with the concept of the Singularity, here is the elevator pitch:
Sometime in the next few years or decades, humanity will become capable of surpassing the upper limit on intelligence that has held since the rise of the human species. We will become capable of technologically creating smarter-than-human intelligence, perhaps through enhancement of the human brain, direct links between computers and the brain, or Artificial Intelligence. This event is called the "Singularity" by analogy with the singularity at the center of a black hole - just as our current model of physics breaks down when it attempts to describe the center of a black hole, our model of the future breaks down once the future contains smarter-than-human minds. Since technology is the product of cognition, the Singularity is an effect that snowballs once it occurs - the first smart minds can create smarter minds, and smarter minds can produce still smarter minds. —Singularity Institute for Artificial Intelligence
Douglas Hofstadter followed Kurzweil, offering his critique of the Singularity. Hofstadter, professor of Cognitive Science and Computer Science and adjunct professor of History and Philosophy of Science, Philosophy, Comparative Literature, and Psychology at Indiana University, and the author of Gödel, Escher, Bach: An Eternal Golden Braid, doesn't buy into the whole Singularity vision.
I strongly recommend exploring this "Singularity" concept. I first came across it several years ago when I went to "Ask Jeeves" with my question "What is the purpose of life?" Jeeves recommended contributing to the "seed program" effort to create a "learning how to learn program" that would ensure that when machines became super intelligent they would still take care of humans.
Well, I don't think it would be possible for computers to learn more than the human race as a whole, because computers were created by humans and so can only reach the maximum intelligence of the human race, even if they can learn.
My sig!!! - Life is way too short, so if you really want to jump off the bridge that no one else even dares to jump off, then go ahead. As long as you enjoy life, it doesn't really matter.
If you are interested in the "singularity" concept, you should check out the Singularity Institute for Artificial Intelligence blog. There are links to podcasts and a 20-page PDF from the 2007 Singularity Summit.
Computers already have the processing power of the human brain. However, as long as we don't understand the concept of intelligence, we will not be able to create it artificially.
A metaphor for the current state of science: men covering themselves with feathers and jumping off towers in the hope that they could fly. If you don't understand the principles of flight (lift, center of gravity, etc.), you will never be able to create a flying machine.
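As a rough sanity check on the claim above that computers already match the brain's processing power, here is a back-of-the-envelope comparison. Both figures are contested, order-of-magnitude assumptions: Kurzweil's book uses roughly 10^16 calculations per second for the brain, and mid-2000s top supercomputers were in the hundreds of teraflops.

```python
# Back-of-the-envelope comparison; both figures are rough, debated estimates.
BRAIN_OPS_PER_SEC = 1e16     # Kurzweil-style estimate of brain operations per second
SUPERCOMPUTER_FLOPS = 3e14   # order of magnitude of a mid-2000s top supercomputer

ratio = BRAIN_OPS_PER_SEC / SUPERCOMPUTER_FLOPS
print(f"Brain estimate is about {ratio:.0f}x the machine estimate")
```

Whether the numbers support the claim depends entirely on which estimates you accept.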
@ARTiFactor: (Technological) Singularity is completely based on evolution theory. And evolution theory has been scientifically proven to be incorrect. Facts: micro-evolution (variety) is proven. However, macro-evolution has never been proven - it has even been invalidated, because:
Degeneration of DNA is proven, and by that, 'improving DNA' (Singularity) is disproven.
An example: some flowers decided, during their evolution, to become completely dependent on bees for their reproduction. Oops, a problem: bees would not evolve for hundreds of millions of years. Please, stay alive for a little while...
(And if those flowers evolved later - after the bee was invented - why did they give up their successful reproduction method for 100 percent dependence on the little bee? That would argue against evolution.)
We really need to reconsider the current state of AI.
Menno,
I don't know about evolution theory being incorrect, but you might be interested in the "Intelligent Design" developments within my Synthetic biology post.
Well, if computers could think for themselves, then there is no limit to what they could accomplish... or destroy. However, accomplishing self-thinking, or creating a robotic consciousness, would mean giving it emotion, rendering it as good as a human, but one that you could almost use as a slave to constantly think and get smarter. But would this be wrong because it has a consciousness? Also, "global warming" is a hoax. There are no proven results of it within the timeframe of human existence (not more than a few thousand years). Global warming won't be noticeable for another few million, if not BILLION, years. By that time we won't even need Earth; we will be so far advanced that it isn't even thinkable. So who even cares about global warming? Al Gore? Oh, but that's because he's making MILLIONS OF DOLLARS off of it.
Let's hope this never happens. Why would we need computers to equal human thinking when we have humans to do that? What's the point, really? I mean, I am glad I have a computer and access to the world's information, but I think this should be it. I already have cloud security to worry about; I don't need a computer competing with me in any way...
I think the whole purpose of this idea is to suggest a computer that can think for itself and in turn find connections that we haven't thought of yet. To me, that would be a great asset to humanity. For example, if it could figure out a way to use magnetism that we haven't thought of yet... The possibilities would be endless.
Computers are improving at an amazing rate, operating at ever faster processing speeds and with larger memories. The software is becoming more and more complex and able to handle a vast array of tasks. But when all is said and done, it's still just a machine performing a task it has been designed to do; it doesn't actually come up with any new ideas of its own or do any thinking.
Val - cfd specialist