Is Superintelligence Feasible?

Reflection

There are many differences between this paper and my first draft. In my first paper I tended to overuse the same pronouns when referring to the authors. I also did not compare and contrast the authors as well as I should have, and I did not supply any possible solutions to the problems the authors wrote about. In this draft I tried to do a better job of inserting quotes without starting sentences with “this quote…” or “this quote highlights.” I also gave my view on how we should approach the Singularity as my solution to the problems and uncertainties discussed in this paper. As a whole, I believe this paper is far better than my first. The biggest thing I took away from this assignment is how vital technology has been, and will continue to be, to mankind’s existence and survival. My research skills have also improved through the continued practice this assignment provided.

Superintelligence
A superintelligence is “any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills” (Bostrom); however, this definition leaves open how the superintelligence is implemented: it could run on a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else. The ethical issues surrounding the creation of machines with general intellectual capabilities that far outstrip those of humans are very different from, and have far greater implications than, current ethical dilemmas. Superintelligence would not be just another technological development; it would be the most important invention ever made and would lead to explosive progress in all scientific and technological fields. A superintelligence would be able to conduct research with superhuman efficiency. It could also have the potential to surpass humans in the quality of its moral thinking. However, the designers of the superintelligence will be responsible for specifying its original motivations. Since a superintelligence may become unstoppably powerful because of its intellectual superiority and the potential technologies it could develop, it is crucial that it be provided with human-friendly motivations in order to ensure the survival of the human race.

One view on the future of AI and the Singularity is that there is immense uncertainty attached to the creation of dramatically greater-than-human intelligence. Those who take this side believe there will be no way for us to eliminate or drastically mitigate the existential risk involved in creating superintelligence. It is extremely difficult to control the behavior of a goal-directed agent that is vastly smarter than you are; this problem is much harder than a normal (human-to-human) principal-agent problem.

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make” (I. J. Good, qtd. in Hanson).

According to this view, building a superintelligence means plunging into the Great Unknown and swallowing the risk for the sake of potential future human benefit, but the risk outweighs the reward.

Others believe that we will have nothing to fear from superintelligence, out of a conviction that something so astoundingly smart could not possibly be stupid or mean enough to destroy us. Those who hold this standpoint also believe a superintelligence will naturally be more moral than we are.

“As we become more intelligent -- and find more alternatives to attain our goals and solve our problems -- we can find more ways of doing so without impinging on other people’s goals. A superintelligence would, by definition, be able to find even more ways to solve problems and create solutions that do not hurt others. Therefore, increases in intelligence would increase the probability of friendliness. Indeed, superintelligence would increase the probability of not just attaining its own goals without impinging on others, but would lead to the increased probability of attaining its goals while benefiting the success of others in attaining their own goals” (Vinge).

However, human morality and “common sense” are extremely complex and peculiar information structures that we do not fully comprehend. If we want to ensure continuity between our world and a world with superintelligence, we need to transfer over our “meta-ethics,” which focuses on differentiating right from wrong. For example, “obvious” morality, like “don’t kill people if you don’t have to,” is extremely complicated, but it seems deceptively simple to us because we have the brainware to compute it intuitively.

Another view is that if we engineer and/or educate this superintelligence correctly, we can drastically mitigate the existential risk associated with it and create a superintelligence that is highly unlikely to pose a threat to humanity. Those who take this stance believe we have to give superintelligence goal systems that are compatible with our continued existence, or we will be destroyed.

“The option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence. On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance. Our entire future may hinge on how we solve these problems” (Bostrom).

Certain basic drives common across many different kinds of current technologies may prove inconvenient to us when the superintelligences implementing them are extremely powerful and do not obey human commands. If we were able to tinker with different control methods, make lots of mistakes, and learn from those mistakes, maybe we could figure out how to control a self-improving superintelligence after years of research. But we may not have the opportunity to make so many mistakes, because the transition from human control of the planet to machine control might be surprisingly rapid.

One possible reason for a rapid transition from human control to machine control is recursive self-improvement. An AI with general intelligence would correctly realize that it could better achieve its goals, whatever those goals may be, if it did original AI research to improve its own capabilities. That is, self-improvement is a “convergent instrumental value” of almost any “final” values an agent might have, which is part of why self-improvement books and blogs are so popular. In AI, a system’s capability is roughly “orthogonal” to its goals: you can build a really smart system aimed at increasing Shell’s stock price, a really smart system aimed at filtering spam, or a really smart system aimed at maximizing the number of paperclips produced at a factory. As you improve the intelligence of the system, or as it improves its own intelligence, its goals do not particularly change; rather, it simply gets better at achieving whatever its goals already are.
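To make the orthogonality point concrete, here is a minimal sketch of my own (a toy illustration, not a method from any of the authors cited in this paper; the goal functions and numbers are invented): the goal and the search capability are independent inputs to a simple optimizer, and raising the capability improves performance on whichever goal was plugged in, without ever altering the goal itself.

    # Toy illustration: capability and goal are independent parameters.
    import random

    def optimize(goal, capability):
        """Evaluate `capability` random candidate actions and return the
        one that the fixed `goal` function scores highest."""
        candidates = [random.uniform(-10, 10) for _ in range(capability)]
        return max(candidates, key=goal)

    # Two very different goals, one optimizer.
    def goal_a(x):            # stand-in for "filter spam well": target 3.0
        return -abs(x - 3.0)

    def goal_b(x):            # stand-in for "maximize paperclips": target 7.5
        return -abs(x - 7.5)

    for capability in (10, 1000, 100000):
        print(capability,
              round(optimize(goal_a, capability), 3),
              round(optimize(goal_b, capability), 3))
    # As capability grows, each run lands ever closer to its own target
    # (3.0 or 7.5); improving capability never swaps or alters the goal.

In this toy model, “self-improvement” is just raising the capability parameter, and it helps either goal equally; that is the sense in which capability and goals are orthogonal.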

Giving a superintelligence a moral and goal system implicitly, by interacting with it, teaching it in various particular cases, and asking it to infer the general rule, would be one way to try to transmit the complex information structure of human morality and aesthetics to a superintelligence. But if we create a conscious being such as a superintelligence, we have no way of knowing how it will behave or react: as a being possessing an intelligence vastly greater than our own, it would be beyond our control if it decided mankind stood in the way of its own goals. We should not stake the fate of the planet on a risky bet that all mind designs we might create will eventually converge on the same moral values as their capabilities increase. Instead, we should fund many highly educated individuals to think hard about the general challenge of superintelligence control and see what kinds of safety guarantees we can get with different kinds of designs. I believe the best solution at the present time is to use our current technologies to advance the human race by combining technological power with the power of the human brain. By doing this we could immensely advance the human race and the current standard of living. Then, if we were to create a superintelligence, the human race would be able to keep it in check and prevent it from decimating human life as we know it.
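What “teaching by particular cases and asking it to infer” could look like can be shown with a second toy sketch of my own (the features, cases, and nearest-neighbor rule are all invented for illustration, not drawn from the cited authors): the system copies the judgment of the most similar teaching case, and on a case the teacher never covered, nothing guarantees the inferred judgment matches human intent.

    # Toy value learning: infer a "moral rule" from labeled cases using
    # a nearest-neighbor rule over two made-up features.

    # Each case: (threat_level 0-10, harm_caused 0-10) -> acceptable?
    teaching_cases = [
        ((9, 8), True),    # extreme threat, heavy harm: self-defense, OK
        ((8, 6), True),
        ((1, 7), False),   # little threat, heavy harm: not OK
        ((0, 9), False),
        ((0, 0), True),    # no threat, no harm: OK
    ]

    def infer(case):
        """Label a new case by copying the nearest teaching case."""
        def dist(a, b):
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        nearest = min(teaching_cases, key=lambda tc: dist(tc[0], case))
        return nearest[1]

    # A case the teacher never covered: moderate threat, heavy harm.
    print(infer((5, 9)))   # prints True here, because the nearest
                           # teaching case happens to be (9, 8); the
                           # extrapolation may not match human intent

This is exactly the worry raised above: a rule fitted from particular cases can behave arbitrarily on the cases we forgot to teach.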

Technology has given mankind an advantage over all other species indigenous to the Earth since the beginning of our existence. The creation of weapons, which allowed us to hunt effectively for food and defend ourselves from threats, could be viewed as the first technology, and we have come far since. But the possibility of superintelligence is something completely unknown to us, and its implications are far-reaching. I think what we need to keep in mind when debating this issue is that we have always used technology, and not the other way around. Even if superintelligence has its advantages and could bring about perfection, it would destroy what has made our species so unique: the fact that we are imperfect, unpredictable, and think freely. In my opinion it is not worth risking the loss of these traits in exchange for the benefits of a superintelligence, and if we are ever able to create one, we would need to be able to control it completely or we would be risking the annihilation of our species.

Outline
- Intro paragraph w/ thesis at the end
- Introduce the idea of the Singularity and the main components scientists are using to support this theory (Moore’s Law, etc.)
- Present supporters of the Singularity’s views on how it will come about and how it will affect human society
- Present the argument against the Singularity and the main problems some scientists have with the idea (Chinese Room, etc.)
- Present detractors’ views on why the Singularity will have a negative impact on society and will take far longer than we think
- Analyze both arguments in depth and draw a conclusion
- Present my view on the argument and possible solutions, benefits, and consequences
- Wrap up with the conclusion
Superintelligence

The Singularity is a hypothetical moment in the not-so-distant future when machine intelligence will supplant human intelligence as the dominant force in the world. There is a growing movement of scientists, authors, and advocates who believe the Singularity is not only possible but inevitable. Leading this movement is Ray Kurzweil, who in his 2005 book The Singularity Is Near: When Humans Transcend Biology predicts a utopian future of advanced human/machine hybrid intelligence and radically extended life by the year 2045. Those who oppose the theory of technological singularity presented by Kurzweil and others counter that we cannot fully understand aspects of this theory because the mind is too complicated, and that its proponents are simply banking on the continued exponential growth of technology. Since we do not fully understand the complexity of the mind, these changes will be far more difficult, will take longer, or may never happen even if we can replicate the processing power of the brain.
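The “continued exponential growth” assumption can be made concrete with one line of arithmetic (a rough sketch of my own, assuming the commonly cited doubling period of about two years for transistor counts under Moore’s Law, not a figure taken from Kurzweil):

    # Rough sketch of the exponential extrapolation behind such forecasts.
    doublings = (2045 - 2005) / 2   # 20 doublings in 40 years
    print(2 ** doublings)           # about 1,048,576-fold growth

Whether that trend can continue for another forty years is precisely what the detractors dispute.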

Thesis question: Is technological singularity possible within the next 100 years, and regardless of how intelligently technology behaves, will it ever truly have a mind, understanding, or consciousness?

Bibliography

"Ray Kurzweil." N.p., n.d. Web. <http%3A%2F%2Fwww.theequitykicker.com%2F2010%2F08%2F23%2Fthe-counter-arguments-to-kurzweils-singularity-thesis%2F>

Examines the Singularity in multiple smaller parts and explores counter-arguments.

Kurzweil, Ray. "The Coming Merging of Mind and Machine." Scientific American. N.p., n.d. Web. 12 Apr. 2014. <http://www.scientificamerican.com/article/merging-of-mind-and-machine/>.

Article written by Kurzweil explaining his theory of the Singularity (humans and technology merging).

Cole, David. "The Chinese Room Argument." Stanford Encyclopedia of Philosophy. Stanford University, 19 Mar. 2004. Web. 12 Apr. 2014. <http://plato.stanford.edu/entries/chinese-room/>.

Philosophical argument against the possibility of true artificial intelligence (John Searle).

"The Coming Technological Singularity." The Coming Technological Singularity. N.p., n.d. Web. 12 Apr. 2014. <http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html>.

In favor of the Singularity; argues that we are on the edge of change comparable to the rise of human life on Earth.

Brouwer, Albert-Jan. "Brouwer on Identity." N.p., n.d. Web. 12 Apr. 2014. <http://www.ibiblio.org/jstrout/uploading/brouwer_essay.html>.

Albert-Jan Brouwer discusses whether downloading and extracting thoughts from the brain is an achievable reality.

Koene, Randal. "Randal Koene on Whole Brain Emulation." David Orban. N.p., n.d. Web. 12 Apr. 2014. <http://www.davidorban.com/2009/09/randal-koene-on-whole-brain-emulation/>.

Randal Koene discusses his goal of whole brain emulation and whether it will be attainable in our lifetime.

Hanson, Robin. "If Uploads Come First." N.p., n.d. Web. 12 Apr. 2014. <http://hanson.gmu.edu/uploads.html>.

Robin Hanson discusses possible consequences of uploading the brain and merging with technology.

Merkle, Ralph C. "The Molecular Repair of the Brain." N.p., n.d. Web. 12 Apr. 2014. <http://merkle.com/merkleDir/techFeas.html#DESCRIBING>.

Discusses the complexity of the physical structure of the brain and what we are currently capable of doing in terms of repairing and improving it.

The Editors of Encyclopædia Britannica. "Moore's Law (Computer Science)." Encyclopædia Britannica Online. Encyclopædia Britannica, n.d. Web. 12 Apr. 2014. <http://www.britannica.com/EBchecked/topic/705881/Moores-law>.

Defines Moore's Law, which is the basis of the technological singularity theory.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. N.p.: n.p., n.d. Web. 12 Apr. 2014.

Nick Bostrom examines the whole idea of superintelligence, its possible outcomes, and how they would affect us.
