You may have heard of Isaac Asimov’s “Three Laws of Robotics.” The first law says, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” While this was intended for the conceptual robots of science fiction literature, machines with a level of functionality approaching the appearance of free will, we observe it quite well when it comes to industrial robots and other equipment. We place fences around them, build physical limits and limit switches into their designs, require power lockout switches and procedures so that people can safely approach them for servicing, and attach alarms and cut-off switches to the gates one would use to get close to them. All this is regulated by a government agency (OSHA) that some detest and fight, mostly because the required safety measures add cost, or because their sense of self and control is somehow offended: “I don’t want government (or anyone) telling me what I can and can’t do.” The net result, however, is that very, very few people have been injured or killed by robots, despite the large numbers of them employed in industry. In this case, following the First Law of Robotics has been a huge success. When it comes to other human creations, or “technologies,” we aren’t as convinced of the need, not as clear on the similarity of, say, our financial institutions to robots, and not as willing to accept, create, or manage the controls we need for our safety. Wasn’t the recession of 2008-9, at bottom, a huge failure of controls over what are essentially financial technologies – paper “robots” created to accumulate and manage wealth?
It has been a long time since I last heard of a worker killed by a robot. A man was struck and killed by a robot at a Michigan casting plant in 1979, and another was killed in 1981 when a robot he was working on pushed him into a grinding machine. That few of us remember, or ever heard of, these deaths is a testament to the success of the government agencies and regulations set up to protect workers from such accidents. These were not the only incidents, but the number of people harmed by robots has been vanishingly small.
Technology is more than just “techie” inventions and robots. For the purpose of this article I’d like to broadly define “technology” as including such human-invented systems as the complex of financial and banking processes and structures we deal with daily. It is important to recognize that there are far greater complexities and variations in the systems that make up our economies than in the visible machines we use, to the point that those systems are becoming harder and harder to understand.
Understanding technology in a holistic way has become extremely difficult. It was my personal observation as a computer engineer in the 1980s and ’90s that, while I had gained an almost holistic understanding of how computers and their communication networks worked, by 2000 I was having trouble finding anyone who understood the entire range of technologies involved in the internet. From long experience and broad-based training, I can follow the whole chain: how electron movements in transistor junctions (which often populate single integrated circuits by the millions) represent binary codes, how those binary codes are organized into higher-level codes and computer languages, how the languages are used to create working programs, and how it all works together with wired and wireless data transmission systems so that when you enter a web address in a little box, you see the image connected to that address. Over the years, however, I have found fewer and fewer people who grasp the span of all that technology, which is understandable given the exploding complexity of our computerized world. I have met geniuses who master particular levels of it, but few who know how to build a computer from scratch. Specialization has put that out of reach, and there is a direct parallel with our social, economic, and governmental systems.
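The layering described above, from bits up to human-readable text, can be glimpsed even in a few lines of Python. This is only a toy sketch of the idea (the variable names and the three-letter sample string are my own, not anything from the essay): the same piece of information viewed at several representational levels of the stack.

```python
# A toy illustration of the abstraction stack: one piece of
# information viewed at several representational layers.

text = "www"                                       # human-readable text
code_points = [ord(c) for c in text]               # numeric character codes
binary = [format(n, "08b") for n in code_points]   # the underlying binary codes
raw_bytes = text.encode("ascii")                   # the bytes a network would carry

print(code_points)   # [119, 119, 119]
print(binary)        # ['01110111', '01110111', '01110111']
print(raw_bytes)     # b'www'
```

Each line is a different lens on the same data; the specialist at one layer can work productively without ever looking at the layers below, which is exactly how holistic understanding gets lost.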
Invisible “robots” such as those in financial systems need safety regulations, too. We easily understand the value of regulation, and of agencies to do the regulating, when it comes to technologies we can see and readily understand. Many people fail, however, to see and address the danger in human creations that are hard to put in a picture. Professor Dan Gilbert spoke about this in 2008, arguing that we have evolved to deal very effectively with threats like a person trying to hit us with a stick, but have not yet reached the point where we respond to complex, faceless things that threaten us with great future harm. Our recent, severe recession resulted from greed and the sub-optimization of our financial systems, in which harming others became a source of gain for organizations and individuals, combined with lax “safety” standards and regulations; stronger ones could have prevented it.
Our future depends on evolving ourselves and regulating all of our technologies. All of the technology we’ve developed in the past century or two, including socioeconomic technologies like corporations, will need to obey the First Law of Robotics – harm no humans – if we are to do well in the next century. Can enough people be helped to an understanding of our most dangerous technological issues? Can enough people learn the value of regulation in keeping our complex technologies from causing us serious harm, so that we might avoid such scenarios?
We can’t wait for evolution to “smarten” us if we wish to avoid global-scale cataclysms and widespread but often slow-approaching disasters in this century. We don’t have a million years for our brains to evolve better abilities to cope with faceless, unemotional, future threats that don’t readily contrast with the rest of our experience and perception.
Is the next century a turning point in our evolution? Throughout time, as evolution proceeded in a herky-jerky manner, the more highly evolved members of a species adapted more quickly and took action to save themselves while others did not, changing the gene pool and leaving the species “smarter” thereafter and better equipped to handle similar problems. Evolution can be envisioned as a process of biological learning, and this very process has been adopted in the brains of more highly evolved and complex life forms. A cat, for example, will learn to stay away from fast-moving human feet and rocking chairs after a few bad experiences with them, and cats have evolved the techniques of hanging around humans, rubbing our ankles, and purring, because these get our attention and make us reciprocate with food.
Can we evolve ourselves? Now we humans are faced with faceless, complex, and hard-to-understand problems that threaten to kill off many of us and leave the rest with a far lower standard of living than we enjoy today. Can we learn – essentially evolving ourselves – fast enough to avoid ever-worsening and increasingly widespread disasters caused by our own success? I hope to stick around long enough to see what happens as I attempt to be part of the solution and reduce the part I play in creating the problem.
What can we do to control our technologies and reduce or eliminate our future problems? Education is of primary importance. The first gap we must bridge is in the understanding of the rapidly growing mass of humans on the planet. This requires education, since the ability to read and understand can, for most people, be achieved through training. Is a good education available to most humans on the planet, or only to a minority? Where it is lacking, the countries with the knowledge and economic power to help can make it happen. More educated people may better understand the need for regulatory systems that balance our need for a sustainable future with our needs for food, shelter, and social well-being.
Once educated, can we share and evolve our understanding? Second, we need to use that education to spread the understanding that the planet can provide only so many resources before they start to run out, and that we can easily overpopulate the planet beyond its capacity to provide for us – as we appear to have already done. We can spread the understanding that people who worry about having children around to help them in old age will have large families if they fear their children may not outlive them. Addressing that requires stabilizing economies and political systems (increasing our safety) as well as protecting populations from large-scale disasters, whether epidemics or shortages. Providing the knowledge and technologies that support basic family planning is another key way to curb population growth – after all, the population explosion is, in the final analysis, all about family size.
In the end, we must educate ourselves and others, and regulate our systems by applying the First Law of Robotics, to achieve a sustainable world and a good quality of life in the future. If the technologies we have invented, including financial, industrial, and agricultural systems, are not properly regulated to be safe and to serve the good of all, some of us will thrive temporarily, but many will suffer and die. It is up to us to educate ourselves, think critically about what we learn, and take action by requiring our governments (originally created to protect us) to regulate the dauntingly complex systems by which we now live, and to do so in a way that is ever harder to subvert for individual or minority-group gain. Please ask for and support funding for education and family planning, and regulation of the systems that provide us the most wonderful and abundant lifestyles in history. Otherwise we will suffer a decline and a series of increasingly severe disasters I don’t like to contemplate.
As always, I welcome your comments — Tim