Scientists Want to Regulate AI Now Before We End Up Like The Terminator Movies
As a kid, the Terminator movie scared the living daylights out of me. Other movies were scary, sure, but you could always shoot the bad guys, torch Pennywise, and so on. Terminators – not so much. And who really has a vat of super-hot, metal-melting goodness just lying around the house? A mega machine press to turn a Terminator into a pancake? Yeah, I don’t have one of those either. Then the next Terminator movies came out, and even as an adult, in the back of my mind, they are still really scary. As technology moves forward in crazy leaps, you cannot tell me that you’ve never thought: “One day, we’re going to create robots that will kill us.” Apparently, scientists are worried about the same thing.
In fact, the one and only Elon Musk recently brought up this very issue. He said that insane robots that hate us and want to ‘kill all humans’ could easily rampage through cities across the world one day. How do we stop them? His solution is to begin placing regulations on AI now. While Musk admits that the idea of Terminator robots roaming our streets tomorrow is far-fetched, he isn’t the only one saying that we need to have some AI regulation in place now.
Other Experts Agree on Regulations
Manuela Veloso, an AI expert from Carnegie Mellon, doesn’t think that we’re anywhere close to T-1000s busting through our doors. However, she does agree that there need to be regulations on AI products that go out to consumers in the mass market. She believes that there should be some type of regulation on AI, just like there is on milk from a factory. Veloso points out that any new type of milk would have to be approved by the FDA (Food and Drug Administration) before it went out to the public.
“I think the research, before it becomes a product, you can experiment, you can research or anything, otherwise we’ll never advance the discoveries of AI,” Veloso says.
Some Experts Want More
Others in the field agree that AI should be regulated, but not just when it hits the mass market. Chris Brahm, of Bain & Company, believes that AI should be regulated from the point at which it does jobs that are regulated for humans. Take banking machines like ATMs, for example.
Brahm states that as a society, we’ve decided that there are certain things humans do that need to be regulated to protect people. Since we already do that, he asks, why aren’t we regulating machines that already make these decisions? To me, it’s a fair question, especially since I assumed we already did this.
But Who Would Regulate AI?
Although it seems that the majority of both experts and researchers agree that there need to be regulations, the question is: who will do the regulating? There is currently no governmental body completely dedicated to regulating how AI is vetted, and it could be a while before there is one. Then again, it could also be a long time before we have Transformer-type AI fighting it out in the streets.
The problem is that the technology is already here and being put out into the mass market. Self-driving cars, for example, are already out there, driving the streets in some areas. Yet only recently was a House panel put together to discuss regulations for self-driving cars.
Moving Into the Future of AI
Veloso looks at regulation like this: “Generating and enforcing such regulations can be very hard, but we can take it as a challenge.”
Musk’s fears about Terminators crushing all humans might be the stuff movies are made of, but his worries about regulation are real. If the government cannot implement regulations that keep pace with the rate at which AI is growing, then we are in real trouble as a society that can’t get enough of the latest technology.