It’s been over six decades since computer scientist John McCarthy first coined the term artificial intelligence (AI), and today the technology is more ingrained in modern society than ever. From virtual assistants to news articles, millions of people now interact with AI on a daily basis without even knowing it. AI is moderating our social media feeds and marketplaces. It’s helping us find the perfect hotel or even the side table that would complement that sofa we’ve been eyeing.
Despite this, there are still many misconceptions about AI. In fact, for many people the technology remains shrouded in mystery: disconcerting, fascinating, and confusing all at once. With this in mind, I’m going to outline and debunk 5 common myths about AI and share what this fabled technology is really about.
1. AI is all robots.
Thanks to Hollywood, the term AI may conjure up images of talking humanoid robots. In reality, only some advanced robots have AI incorporated into their operating systems, and powering these robots is just a fraction of what AI can actually do.
AI can be incorporated into many machines like drones, cars, or smartphones. Most importantly, it can enhance standard computers and software.
Researchers at the University of Nottingham used AI to analyze the standard medical data in electronic health records and accurately predicted heart attacks and strokes in 355 more cases than doctors using the traditional system alone. The researchers didn’t need any more information than what is routinely collected for patients (e.g., lab results, demographic information, and medical history) and stored in an electronic medical record. Yet, with AI, they were able to analyze this data more effectively, potentially helping doctors with a notoriously difficult task.
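To make the idea concrete, here is a minimal sketch of how a risk model could be trained on the kind of routine, tabular patient data described above. The file name, column names, and model choice are assumptions for illustration; this is not the Nottingham team’s actual pipeline.

```python
# Minimal sketch: training a risk model on routine, tabular patient data.
# The CSV file and column names are hypothetical and assumed to be
# numerically encoded; real EHR data needs careful cleaning and encoding.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Fields already captured in a typical electronic medical record.
records = pd.read_csv("patient_records.csv")  # hypothetical export
features = records[["age", "sex", "systolic_bp", "cholesterol",
                    "smoker", "diabetic", "bmi"]]
labels = records["cardiovascular_event_10yr"]  # 1 = event occurred

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Check how well the model ranks at-risk patients on held-out records.
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point is not the specific algorithm: the same data doctors already collect, fed through a model like this, can surface risk patterns that are hard to spot by hand.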
So, while robots can certainly incorporate AI, most of us will be able to reap the benefits of AI with the technology we already have: laptops and desktops.
2. AI can do everything.
AI can do many things, but as I explained here, when we talk about AI we are generally referring to narrow AI. Narrow AI is a computational system that seeks to mimic the way the human brain works but focuses on only one particular task.
Since the goal of AI is to replicate human intelligence, its potential is limitless. So far, it has learned to beat humans at complex strategy games like chess and Go. With the help of developers, AI has even written a well-received novel. However, as one Oxford researcher noted, AI isn’t “magic,” and it can’t solve all of humanity’s problems. AI is still very much a developing technology, and we are still many years away from general AI, i.e., AI that can do everything humans can.
So, while AI can drive cars, learn and understand human language (as virtual assistants do), identify credit card fraud, and make market predictions, most of the technology is still in its infancy. As such, it is up to businesses to do their due diligence when determining which AI solutions are worth investing in and to keep their expectations realistic.
3. It’s not ready for action.
On the other hand, because AI is still a developing science, many people underestimate its abilities. Many business stakeholders are unaware that AI is ready to help them drive real revenue and improve their workflows.
Computer vision, for instance, has already shown itself to be more than ready for implementation and well worth investing in. Social networks like 9Gag and Momio have used Clarifai’s computer vision technology to better moderate the content uploaded to their platforms and protect their users from harmful images and videos.
Meanwhile, i-Nside, a world leader in endoscopic technology, developed a small device on Clarifai’s computer vision platform that can be attached to any smartphone and used to identify and diagnose ear diseases. Computer vision is helping people find dates, find a home, and even decorate. It’s changing the way we shop and even making the world more inclusive and accessible for people with disabilities. So, while much of AI is still in its infancy, computer vision is already making strides across multiple verticals.
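As a rough illustration of how a moderation workflow like the one above fits into a platform, here is a small, hypothetical sketch. `classify_image`, the labels, and the thresholds are all assumptions standing in for whatever hosted vision model a platform calls; Clarifai’s actual client library and endpoints are not shown.

```python
# Hypothetical moderation flow: route each upload through an image
# classifier and hold anything flagged as unsafe for human review.
from typing import Dict

UNSAFE_THRESHOLD = 0.85  # assumed confidence cutoff, tuned per platform


def classify_image(image_bytes: bytes) -> Dict[str, float]:
    """Placeholder: return label -> confidence scores from a vision model."""
    raise NotImplementedError("Call your image-recognition service here.")


def moderate_upload(image_bytes: bytes) -> str:
    scores = classify_image(image_bytes)
    unsafe_score = max(scores.get("nsfw", 0.0), scores.get("violence", 0.0))
    if unsafe_score >= UNSAFE_THRESHOLD:
        return "blocked"             # auto-reject clear violations
    if unsafe_score >= 0.5:
        return "needs_human_review"  # borderline cases go to moderators
    return "approved"                # publish automatically
```

The design choice worth noting is the middle tier: rather than automating every decision, borderline uploads are escalated to human moderators, which is how platforms keep people in the loop.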
4. AI is going to replace humans.
While AI may replace jobs, it will not replace humans. Computer vision, for instance, has assisted rather than replaced human workers by taking over rote tasks like content moderation and tagging, and doing them more efficiently. This, in turn, allowed companies like 9Gag and Photobucket to move their moderators into jobs that are more customer-facing and rewarding. As our founder and CEO Matt Zeiler put it, “We automate tasks that are not leveraging humans to the best of their abilities.”
AI will also create jobs, such as machine learning researchers, data scientists (to collect data), and engineers (to build the infrastructure around AI). “The second order will also be jobs created when industries completely change,” Matt said. “For example, when cars were created we needed to install traffic lights to control the drivers. As cars become automated, do we need the same infrastructure for traffic lights? How will that change, and who will work on that transition to a traffic lightless world?” Humans.
5. AI is going to destroy the world.
In addition to job automation, one of the other major fears people have about AI is the potential “AI Apocalypse.” The story goes that once AI matches human intelligence (i.e., general AI), it will quickly surpass us, consuming information and learning faster than even the brightest human minds. If knowledge is power, computers will be the most powerful beings on the planet, but they will also still be machines, lacking a moral compass to guide their decisions.
This, it’s believed, is what potentially puts humanity in harm’s way: where our existence conflicts with AI’s goals, the fear goes, AI will simply view humans as an obstacle to be eliminated. It’s a compelling narrative, but it leaves out a crucial detail: AI is still very much in human hands. General AI remains theoretical, and AI is still under human control.
“It's 100% correct that computers don't have a moral compass. That is exactly why you shouldn't fear. They are being programmed by humans to do something, just like your word processor. It's, therefore, up to the programmers to have the system learn tasks and apply their knowledge in areas that are of benefit to the world.” - Matt Zeiler, Founder and CEO of Clarifai.
This belief that AI is autonomous is actually dangerous in itself. This technology is advancing and becoming more ingrained in our society, and it’s vital that we are all aware that humans are responsible for the actions of the technology. Whether or not AI becomes our “big bad” is wholly dependent on how we train it. That means ensuring diversity in its development to remove bias from the technology.
This applies both to how companies hire and to how they train their AI. It also means being intentional about using AI’s capabilities for the good of humanity. Here at Clarifai, one of our most prominent filters when determining whether to pursue a challenge is to ask ourselves, “How are we changing the world for the better?” With the wrong data and development, AI can certainly be a dangerous weapon, even where this is unintended. Still, here at Clarifai, we’ve seen how our technology has been used to build apps for humanity’s benefit, from helping us with recycling to finding missing persons after major disasters. As our CEO puts it:
“There is more good in the world than bad. Together with our customers we want to push the limits of what is possible with AI and do it to better mankind.”
Just starting to research AI? Download our glossary below for some important terms to help you as you decide how AI can benefit you!