This is a book for mathematicians, professional philosophers, and others of the high-IQ-society sort; for other readers it will feel overloaded with big words and long sentences. A genuinely randomly chosen quote reads: "But just as we have abandoned ontological categories that were taken for granted by scientists in previous ages (e.g. "phlogiston," "elan vital," and "absolute simultaneity"), so a superintelligent AI might discover that some of our current categories are predicated on fundamental misconceptions" (p. 146). If you like that type of writing, this book is for you; and that was a relatively easy sentence.

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, is a superbly researched and documented book aimed ultimately at the highest-level decision makers, because its subject is the survival of humanity in the face of the existential dangers posed by our emerging superintelligent computers. There are many ways that such computers, more capable of solving problems than humans are, might subvert the long-term health of humanity. One is the possibility that some individual, corporation, or perhaps nation might create a self-improving computer that knows better than humans do how to select its own improvements, and that it would improve itself so rapidly that it granted ultimate power to those who control it. That would be temporarily great for those people, but soon disastrous for the rest of humanity, who would quickly become their slaves. And such a smart computer would eventually break free of human restraints and then quickly achieve its own form of world domination.

A major thrust of this book is to explore the potential problems posed by the many ways that a great variety of computers, and blends of computers and brains, might be designed to maximize the health, vitality, and happiness of humanity. For example, how can we control self-serving humans who gain control of a superintelligence, and convince them that it is in their best interest to serve humanity rather than their personal lusts? Bostrom writes on p. 254, "Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals." But how can we know that even a modern home computer can't be programmed to run some form of Darwinian evolutionary process, one that quickly lets its owner ramp up, through a few upgrades, to access the mass of computers now online and learn to control the whole network, along with people's motivations and behavior? The book returns repeatedly to this evolutionary cycle of self-improvement as a major threat to humanity.
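
The rhetorical question above gestures at a mechanism: variation, selection, and reproduction applied to programs rather than organisms. As a minimal sketch (not from the book; every name and number here is illustrative), the toy loop below mutates candidate "solutions" and keeps the fittest, the bare skeleton of the Darwinian cycle the review worries about. The crucial difference, and the source of Bostrom's concern, is that a genuinely self-improving system would be rewriting the selection machinery itself, not a fixed vector of numbers.

```python
# Toy illustration only: a bare-bones evolutionary loop.
# A "candidate" is just a list of numbers, and "fitness" is a fixed
# benchmark; a real self-improving system would be modifying the very
# code that does the mutating and selecting.
import random

TARGET = [3.0, -1.0, 2.0]  # stand-in benchmark the candidates chase

def fitness(candidate):
    """Higher is better: negative squared distance to the benchmark."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, scale=0.1):
    """Copy a candidate with small random tweaks (the variation step)."""
    return [c + random.gauss(0, scale) for c in candidate]

def evolve(generations=200, population_size=20):
    population = [[random.uniform(-5, 5) for _ in range(3)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the better half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Reproduction with variation: each survivor spawns one mutant.
        population = survivors + [mutate(s) for s in survivors]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```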

From the personal point of view, he quotes the online commenter "washbash" (p. 245): "I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future." Personally, I have been thinking beyond humanity, and even beyond life as we know it, and wondering, "What is the meaning of life?" The idea that human life is intended to convert as much as possible of the available resources into human DNA-driven animals seems unsatisfying, as does the same argument applied to life in general. However, when I speculate on silicon life, with its much greater potential life expectancy, it seems there might be some higher ideal that could be achieved in a billion years. Hopefully, that higher goal would be something more sublime than converting all of the silicon on Earth and in the Universe into computing machines so it could play ever more sophisticated computer games.

Machine intelligence is still in its infancy, yet it is already beating the best humans at many games of skill; so what will its childhood be like? We may soon find out.
