Stop Calling it AI

Written on August 14th, 2019 by Cody Snider
The term AI gets bounced around a lot alongside terms like machine learning. It may feel like a new, insidious entity taking over your digital life, creeping into your social media, cameras, and mobile devices. Well, I’m here to tell you that a machine that thinks the way you and I do is a long way off, and you have little to worry about.
First, we’ll take a look at what people think AI is. It conjures images of machines in film that have sparked our interest and imagination: VIKI from I, Robot; HAL from 2001: A Space Odyssey; Rosie from The Jetsons; Data from Star Trek: The Next Generation.
In fiction, each of these automatons thinks and feels like a normal human would. They are capable of taking in new information and adapting. Hell, they’re even better at it without emotions getting in the way. And what happens when we have too many of them, or one that is too powerful? The Matrix.
It’s very easy for a machine to blindly repeat what it knows. Existing systems like Google’s search and Amazon’s recommendation system do a great job with this. They know what you have bought or searched for, and what others who bought or searched for those same things went on to buy or search for. It’s not particularly intelligent; it’s just making sense of a large amount of behavioral data.
If you had a small bar and grill and knew that most customers who came in and had the burger were probably going to have a drink as well, wouldn’t you offer a beer and burger special? Are you some mastermind that has analyzed so many burger eaters that you know a beer might be an enticing addition to the meal? By making that menu, you are already demonstrating the level of intelligence most “AI” is capable of.
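That burger-and-beer intuition is exactly what these systems encode. Here is a minimal sketch of the co-occurrence counting behind "people who bought X also bought Y" (the baskets below are invented for illustration; real recommenders add scale and normalization, not insight):

```python
from collections import Counter
from itertools import permutations

# Hypothetical purchase histories -- made-up data for illustration.
baskets = [
    {"burger", "beer"},
    {"burger", "beer", "fries"},
    {"burger", "soda"},
    {"salad", "water"},
]

# Count how often each ordered pair of items lands in the same basket.
co_occurrence = Counter()
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_occurrence[(a, b)] += 1

def recommend(item, top_n=1):
    """Items most often bought alongside `item` -- no intelligence, just counting."""
    scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("burger"))  # → ['beer']
```

That's the whole trick: tally what happened together, then surface the biggest tally.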
Recognition of purchase and decision-making patterns existed long before the internet. It may be done faster now, but it is still an old idea.
About 15 years ago, I had a strong interest in the Daisy and Billy chatbots. These were programs that would attempt to put together responses based on the training information provided to them. I had the lofty goal of feeding all major religious texts into one and asking it who god was. Don’t get too excited: all attempts failed miserably. They spat out absolute nonsense.
Part of the problem (pointed out by my ex, who thought the exercise a fool’s errand) was the inability to synthesize new information from gathered information. While it could repeat what I fed into it, it didn’t actually render any new information itself. It was parroting back what I told it.
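A toy bigram chain makes the failure mode concrete: it can only stitch together word sequences it has already seen, never synthesize an answer. (This is a simplified stand-in for how bots like Daisy and Billy generated text, not their actual code; the training sentence is invented.)

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words that followed it in the training text."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain, start, length=8):
    """Walk the chain, picking a random observed successor each step -- pure parroting."""
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

chain = train("in the beginning the word was with the word")
print(babble(chain, "in"))
```

Every word it emits came straight from the corpus. Ask it who god is and the best it can do is remix the sentences you typed in.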
That’s another big milestone we have yet to achieve. Machines don’t actually learn and draw a conclusion. They take the information in and render the aggregation we tell them to render.
Granted, a lot of the hurdles in that area deal with NLP (natural language processing). How do we translate the words we use to describe things into something a machine can work with? It’s very tricky business, but that’s the topic of another post.
The idea of a learning machine can be applied to many non-linguistic tasks. Imagine a machine that understood how to make stock picks based on trends, or how to warn tsunami-prone countries of danger months before a disaster.
There is no real intelligence to it. There’s a complex set of rules that applies weights and conditions here and there. But, honest to god, that’s about it. We really don’t have machines predicting things we weren’t already working on; they’re just helping us make predictions faster.
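"Weights and conditions" is not an exaggeration. Strip away the framework and a trained model scores an input roughly like this (the feature names, weights, and threshold below are hand-waved inventions, not a real trading model):

```python
# Hypothetical stock-trend features with made-up "learned" weights.
weights = {"price_momentum": 0.6, "volume_spike": 0.3, "news_sentiment": 0.1}
threshold = 0.5

def predict(features):
    """A weighted sum and a cutoff -- the skeleton of most 'AI' predictions."""
    score = sum(weights[name] * value for name, value in features.items())
    return "buy" if score > threshold else "hold"

print(predict({"price_momentum": 0.9, "volume_spike": 0.4, "news_sentiment": 0.2}))
# → buy (score 0.68 > 0.5)
```

Training adjusts the numbers; the machinery stays this dumb. The "intelligence" was the humans who chose the features and labeled the data.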
The hype around terms like “machine learning” and “AI” is largely a rebranding of “statistics” and “general programming logic”. It’s a long way from the scary AI you envision from sci-fi. At best, it makes cancer research faster. At worst, it spends a lot of research money on AWS.
End of the day, it’s so far from being a boogeyman that you should refocus on things that matter, like global warming or overpopulation.