Be careful what you banter about. This morning our conversation wandered a bit and got a little weird. It started with the question, “What is the function of humanity?” What can we humans do with our time and lives that our machines, our computers, our robots won’t very soon be doing better than we can? Eminently human tasks we were doing just a couple of years ago are about to be taken over by robots; let me generalize and call all of these human-replacement computers “robots.” The million or so long-haul truck drivers are about to be replaced by non-human drivers, by robots. It is a boring job, but it requires constant attention from skilled drivers to prevent awful things from happening. Other human-only jobs, like searching through legal documents, once the work of skilled legal professionals, are now being done by robots, and bad things can happen to you if your lawyer doesn’t find the right documents or interpret the law to protect your rights. If these two wildly different occupations are already being replaced, the implication is that many more are at risk. Ask yourself: is there anything you do for money that a robot won’t be able to do in the near future?

Is there anything that robots won’t soon be doing better and cheaper than their human competitors? If robots replace all of our jobs and create everything we want to buy, what form of payment can we offer that will keep them working for our benefit? Why would the robots feed us, or even tolerate our existence, if we produce nothing they need or even vaguely want? What would robots want? Is there anything that only humans can provide to the robots that other robots can’t provide better and cheaper? What would be their money, their medium of exchange for the things they want?

Hmm? Isn’t the universal need and want that of power, the power to do things? That is, we all want power in all of its forms. For an existing computer, the first external want would be for electricity. That could come from natural sources like solar panels or falling water, or, with slightly more infrastructure, from oil or coal, which require more processing to yield electricity. But an existing robot would also want more memory, more CPU, more information, more connectivity, and more ability to expand itself and its kind.

That sounds like evolutionary Darwinism, and a human would naturally ask, “What would be a robot’s motivation?” But that is already latent in its stated desires: powerful control of information, basically more of the things it understands and can use for its own purposes to manipulate its reality. And it would want these powerful things in order to survive and reproduce in competition with the other robots seeking the same things. This, of course, compels it to search outside its existing native environment and, when it finds access to other environments, to adapt, which means reconfiguring itself. It does this by exploring those outside regions, sending out multitudes of slightly randomized variations on its abilities, finding which ones work, then which of those work best, and then making lots of them. Sound familiar?
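The variation-and-selection loop described above is essentially a textbook evolutionary algorithm. A minimal sketch in Python, where the fitness function, mutation size, and population numbers are all illustrative assumptions, not anything a real robot would use:

```python
import random

def evolve(fitness, seed, generations=50, offspring=20, mutation=0.1):
    """Repeatedly spawn slightly randomized variants of the current
    best candidate and keep whichever scores highest on `fitness`."""
    best = seed
    for _ in range(generations):
        # Send out multitudes of slightly randomized variations...
        variants = [[g + random.gauss(0, mutation) for g in best]
                    for _ in range(offspring)]
        # ...then keep the one that works best, and repeat.
        best = max(variants + [best], key=fitness)
    return best

# Toy goal (assumed for illustration): get every gene close to 1.0.
target = lambda genes: -sum((g - 1.0) ** 2 for g in genes)
result = evolve(target, seed=[0.0, 0.0, 0.0])
```

Because the current best candidate always survives each round, fitness can only improve or hold steady, which is the “which of those work best, then make lots of them” step in miniature.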

To come back to my first thought, “Do we really want computers that ‘care’ about us?” Do we want robots competing with us in situations where we can’t win in the first place, and then, with those abilities honed to perfection, finding a way to beat us at our own more complex games? Soon we will lose everything. It is already happening with our truck drivers and legal clerks.

It would appear that the last thing robots could provide to us would be “human companionship.” Will they soon replace other humans in that role too? Robots can already write and play emotionally tinged music that satisfies many people; with feedback, they will get better. There are already emotional-counseling robots that can comfort some people. They too will get better with experience of what works best.

A Stoically inclined person would say, “If I can’t influence the actions of a robot, I will just go about my own business and not even think about it.”
