by: Owen Gunden
The robots are coming! And it's going to be awesome.
Robots mean we get to automate away a lot of our problems. I like automating. It frees us from tedious work. I envision a future in which all the unpleasant work is done by robots, and we are free to spend our time on other pursuits: art, family, intellectual exploration, heck, whatever we can dream up!
However, as our society becomes more automated, we need a stricter sense of ethics. There are three reasons this is more important than ever: automation scales, automation is fast, and automation can be powerful.
Whatever kind of automated system we create, we're going to get lots of... whatever it does. So we'd better think hard and make sure that whatever it does is something we want to be a part of.
A fine example of this is factory farming. As we increase the efficiency and automation of raising and slaughtering animals for food, we have created an environment that would repulse even the strongest stomach. Almost everyone would agree that there's got to be a better way, and many are starting to search for that way. Some want to back off from the capitalist progression towards increased efficiency (and automation!) in where our food comes from. Others (like me) think that's a losing battle, and that the factory farm is simply showing us, in concentrated fashion, what necessarily goes into a meat-based meal.
Whatever we automate tends to get faster and faster at whatever it does. This means if we screw up the design, we could screw things up really fast -- without time to intervene manually to fix it.
The perfect example of this is in finance. I used to work at a high-frequency trading company, where we wrote programs whose task was to exploit tiny market inefficiencies as fast as possible. We, together with other companies like us, automated away the work of many thousands of market traders. In 2010 there was a "flash crash" which was widely attributed to some funny interaction of -- or bug in -- robotic trading machines. The flash crash was a mini market crash that took place so quickly it would have been difficult for a person to evaluate the situation and react thoughtfully. Fortunately, most firms and exchanges have had the forethought to engineer failsafe checks into their robots that do things like halt trading when something unexpected occurs.
All this is simply to say that as we automate and increase the speed of our financial system, we are going to get results faster -- so whatever direction it's headed in had better be a good one. Once again I don't oppose the automation per se, but I think it provides us a good opportunity to look deeply at our entire financial system and ask if it's on solid footing. Because if it's not, we're bound to find out soon, and we may not have time to react.
In my third example, automated killing robots (a.k.a. "drones"), the relationship between the ethics of our decisions and the reality on the ground is perhaps the most direct.
We have been automating war for a long time. Warmakers often get access to new technologies first, because it sure is better to be the team that has airplanes than the team that doesn't. Until now, though, we've always had to risk losing lives when we went to war. Now, we don't have to risk anything except a few (expensive) hunks of metal and computer parts.
Whether or not you think drones are a good idea, it's unlikely that they are going away anytime soon. On the contrary, China and other nations are working overtime to create their own drone fleets to match ours.
The concept of drone wars isn't all bad -- if both sides are fighting with drones then nobody gets hurt, after all. What automation brings to the table is, once again, the necessity to look really hard at the ethics of each decision that goes into the drone war. Is this war truly being fought for just reasons? It's easier than ever for a military to go to war when it doesn't have any weeping mothers and body bags to answer for. At least, not on the winning side. And perhaps it seems far-fetched to some in the U.S. right now, but in some parts of the world there is always the possibility that drones might be used to scare the populace of a country into supporting one leader or another.
Ultimately, somebody has to write the rules for how the robots act. In the case of drones, perhaps it's constitutional law that ultimately governs the robots. In the case of markets, it's financial companies, exchanges, and possibly regulators. And in the case of factory farming, it's ultimately the consumer who decides what the history of their meal is going to be. In all cases, automation acutely brings to a head the need to look hard at the direction the automation is going and make sure it's a world we want to live in. In a real sense, we need to be "careful what we wish for" because we are getting more of it, faster, every day.
It's either that, or we may have to fire the robots. And I, for one, am looking forward to robotopia!