Who wants to be the guinea pig?

Robots trained with Machine Learning are entering our lives. How willing are we to risk our lives training them?

The software revolution is in full swing. Machine learning algorithms take on more and more jobs: from chatbots answering telecom customers to software warning of problems with railroad switches. Just as many people got accustomed to conventional software, they must now get familiar with this new breed.

Where coders used to solve problems by defining every possible scenario and giving the software distinct instructions for each, this new kind of software takes problem solving away from the software engineer. The engineer is no longer responsible for thinking of every possible scenario, nor for finding the right branch of the decision tree. In the new world, algorithms use training data – pictures for image recognition, machine-breakdown data to detect malfunctions – together with input features to learn for themselves when to ring the alarm and how to identify cancer in medical records.

Most of the time, training the algorithms only costs money: hiring people to tag pictures as “nude”, or collecting and preparing data for algorithms to comb through. But as robots enter the real world, training can cost lives. As an excellent Bloomberg article (here) describes, we will often stand at a crossroads, asking ourselves how many casualties we cause by introducing AI robots – and how many we cause by delaying their introduction.

Self-driving cars are the obvious case: How many lives do we save by introducing cars that avoid human mistakes? How many lives does it cost to introduce machines that make mistakes no human driver would have made? The same question will loom once we introduce robots in other fields: military robots active on the battlefield, surgery robots that work autonomously, care robots that lift elderly people out of bed. As Boston Dynamics is already delivering its famous robot dog Spot to selected early adopters (here), the moral questions around ML robots become more pressing.

Machine learning works with probabilities. So ML-powered cars will never avoid all accidents; they will merely avoid most accidents with a probability of, say, 99.99 percent. They will lower the number of fatal accidents in the long run – no question about that. But on the way to perfection via learning, we have to accept that these cars will fail, like a baby that falls many times before it can walk into its father's arms for the first time.

Complicating things: the fatal accidents that ML-powered cars cause are most likely ones that human drivers would have been able to avoid. That is the pattern we have seen so far. Given these difficulties, we will most likely see ML robots introduced in different societies at different speeds. In the end, it will be a test of how open a society is to innovation and the risks that come with it.
