In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.
As it currently stands, the vast majority of AI advancements and applications you hear about refer to a category of algorithms known as machine learning. These algorithms use statistics to find patterns in massive amounts of data. They then use those patterns to make predictions about things like what shows you might like on Netflix, what you're saying when you speak to Alexa, or whether you have cancer based on your MRI.
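To make "using statistics to find patterns" concrete, here is a deliberately toy sketch (illustrative only, and vastly simpler than anything Netflix or Alexa actually runs): given a handful of data points, the program fits a straight line through them and then uses that line to predict a value it has never seen. The numbers and variable names are invented for illustration.

```python
# Toy "machine learning": fit a line to data with plain statistics,
# then use the learned pattern to predict an unseen value.
xs = [1, 2, 3, 4, 5]    # hypothetical input, e.g. hours of viewing history
ys = [2, 4, 6, 8, 10]   # hypothetical output, e.g. episodes finished

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope and intercept -- no magic, just arithmetic.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Apply the learned pattern to a new input."""
    return slope * x + intercept

print(predict(6))  # the pattern generalizes to unseen data: 12.0
```

Real systems do essentially this with millions of parameters and far messier data, but the core idea is the same: extract a statistical pattern from examples, then reuse it on new inputs.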
Machine learning, and its subset deep learning (basically machine learning on steroids), is incredibly powerful. It is the basis of many major breakthroughs, including facial recognition, hyper-realistic photo and voice synthesis, and AlphaGo, the program that beat the best human player in the complex game of Go. But it is also just a tiny fraction of what AI could be.
The grand idea is to develop something resembling human intelligence, which is often referred to as “artificial general intelligence,” or “AGI.” Some experts believe that machine learning and deep learning will eventually get us to AGI with enough data, but most would agree there are big missing pieces and it’s still a long way off. AI may have mastered Go, but in other ways it is still much dumber than a toddler.
In that sense, AI is also aspirational, and its definition is constantly evolving. What would have been considered AI in the past may not be considered AI today.
Because of this, the boundaries of AI can get really confusing, and the term often gets mangled to include any kind of algorithm or computer program. We can thank Silicon Valley for constantly inflating the capabilities of AI for its own convenience. (Cough, Mark Zuckerberg, cough.)
To clear things up, I drew you this flowchart on the back of an envelope so you can work out whether something is using AI or not.