May 14, 2021

What is Artificial Intelligence?

Learn the basics of what AI is

There is no question that humans are intelligent creatures by nature, and we use our intelligence to perform numerous tasks. But can we create machines (computers) that are capable of doing the same? This is a question that today's top computer scientists, data scientists and engineers are trying to answer.

 

The result is artificial intelligence (AI): machines capable of emulating human intelligence to perform tasks. One can only imagine the potential that AI holds, considering that it is free from human constraints such as limited scalability, the difficulty of transferring skills from one person to another, or even the need for sleep.

 

While we may be decades away from AI that closely resembles the way people think, machine learning and deep learning are bringing us closer with each breakthrough. Modern AI is already being used across a wide range of industries. Across automotive, healthcare, retail, e-commerce, finance and more, companies and people are using AI to make tasks easier and create business capabilities that were never possible before. 

 

In this overview, we'll discuss everything you need to know about AI, from its definition and the types of AI to machine learning and specific examples of how it is used.


What do we mean when we say AI?

The fact is that intelligence is difficult to define, and artificial intelligence encompasses many different technologies and approaches. But in the broadest sense, AI is the capability of machines to simulate human intelligence – to think and act the way that humans do.

 

However, human intelligence is so much more than just thinking and acting. Learning is a huge part of it. This means that for any artificial intelligence to be considered "true" AI, it also needs to be able to learn. After that, it needs to apply what it has learned (by generalizing the knowledge) to solve new problems. After all, humans learn from experience to make better decisions.


A Brief Overview of AI Through the Years

AI has had a long and interesting history to get to where it is today. Its evolution can be traced through the following landmark developments:

  • 1950, Alan Turing published the seminal paper "Computing Machinery and Intelligence." In this paper, he proposed what is now known as the Turing Test, a way to judge whether a machine can exhibit intelligent behavior indistinguishable from that of a human.

  • 1956, John McCarthy, a computer and cognitive scientist, coined the term artificial intelligence at a Dartmouth College summer workshop. He defined it as "the science and engineering of making intelligent machines."

  • 1958, McCarthy introduced the world to LISP. This is a flexible and expressive programming language that quickly became popular in the AI programming community.

  • 1956 to 1974, AI research led to the development of reasoning-as-search approaches and means-ends analysis algorithms. These allowed machines to work through complex mathematical problems as well as process natural language.

  • 1980 to 1987, systems capable of using reasoning and logic were developed, giving rise to decision support systems. While their reasoning could be described as complex, these systems were still heavily constrained, since their algorithms provided no way for the systems to evolve their decision-making capabilities.

  • 1993 to 2003, AI researchers made major advances with neural networks: layers of artificial neurons (nodes) modeled on the way a human brain works. Neural networks can process large amounts of data, identify complex patterns and solve nuanced problems by themselves.

  • 2013 to present, Matt Zeiler drove groundbreaking research in computer vision alongside renowned machine learning experts Geoff Hinton and Yann LeCun, work that has propelled image recognition from theory to real-world applications. Deep learning is fueling the development of AI. It's a machine learning method that takes advantage of neural networks to help machines mimic the way humans learn, discerning complex patterns by recognizing and categorizing simpler ones.


Types of AI

Artificial intelligence is usually grouped into the following two broad categories:

AI Applications or "Narrow" AI

AI applications are capable of simulating human intelligence in a limited capacity. They can learn, or be taught, how to perform a handful of specific tasks as efficiently as possible. In fact, a narrow AI excels when it is performing a single task, despite being heavily constrained.

 

AI applications are what we use every day. One of the biggest examples is digital personal assistants like Siri and Alexa. These systems use speech recognition and natural language processing to determine what task you need them to perform.

 

Artificial General Intelligence (AGI) or "Full" AI

AGI is the type of AI that computer scientists dream of creating. It can simulate human intelligence in the truest sense with the ability to adapt to a multitude of scenarios and solve almost any problem that it encounters. This AI can drive a car, go shopping, balance an accounting book or even fly an airplane.

 

Currently, this type of AI exists only in fiction. Good examples of AGI include Data from Star Trek, HAL 9000 from 2001: A Space Odyssey and JARVIS from Iron Man. AI analysts believe we are decades away from achieving this level of AI.

 

If we manage to create AGI, some predict that it will give rise to "superintelligence": AI that surpasses human intelligence on every imaginable level, including the creative and emotional ones. A common worry is that a superintelligence could come to see itself as superior to humans and try to dominate us. Luckily, this seems like a problem for the distant future.

 


Machine Learning

Machines can be taught to perform tasks through explicit programming. They can also learn by themselves, without a programmer spelling out every step, through machine learning. All we have to do is feed the machine enough data, and it will figure out how to perform the task by analyzing that data, learning from it and making predictions.

 

Machine learning is a key component in many of the complex AI systems that we see today.

 

Elements of Machine Learning

For machine learning to take place, the following elements need to be available (a short code sketch tying them together follows this list):

  • Data set: Data is what fuels machine learning – no machine learning project can get off the ground without it. This can be any type of data, whether visual, audio, text and/or numerical data that the machine can analyze for decision-making purposes.

  • Models: To a computer, everything is ultimately a set of ones and zeros (binary), whether it is an image or a sound. A model is how a real-world process is represented to a machine in code so that the machine can recognize patterns and learn. To show what it has learned, the machine makes predictions; for example, it can tell whether an image (the data) shows a tree.

  • Tasks: This is what needs to be achieved based on the data set.

  • Algorithms: Simply put, an algorithm takes the data and transforms it into a model by using computational methods.

  • Loss Function: A loss function provides a way to measure how closely a model's predictions fit the real-world process. It is a way to separate the viable models from the non-viable ones.

  • Evaluation: This involves feeding the model data it has never seen before and checking whether it makes the right predictions. This allows us to determine the accuracy of the model.
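
To make these elements concrete, here is a minimal sketch of how they fit together, assuming the scikit-learn library is available; the tiny data set and its feature values are invented purely for illustration:

```python
# A minimal sketch tying the elements above together, assuming the
# scikit-learn library; the tiny data set and its feature values are
# invented purely for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Data set: feature vectors plus labels (1 = "tree", 0 = "not a tree").
X = [[8.0, 3.1], [7.5, 2.9], [1.2, 0.4], [0.9, 0.6], [7.9, 3.3], [1.1, 0.5]]
y = [1, 1, 0, 0, 1, 0]

# Task: predict the correct label for examples the machine has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

# Algorithm + loss function: fitting logistic regression turns the data into
# a model by searching for the parameters that minimize a loss (log loss).
model = LogisticRegression().fit(X_train, y_train)

# Evaluation: feed the model held-out data and measure its accuracy.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```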

 

Types of Machine Learning

Machine learning is usually grouped into the following three categories (contrasted in the sketch after this list):

  • Supervised learning: This is where the data being analyzed by the algorithm is labeled. Each label is the value the system is trained to predict, even on data it has never seen before. For example, the data might be an image labeled as a shoe, or a recording labeled with a particular phrase.

  • Unsupervised learning: Unsupervised learning does away with the labels; instead, the algorithm analyzes the data, looks for similar patterns within the data set and groups like examples together. For example, houses with the same number of bathrooms, or shirts of the same color.

  • Reinforcement learning: This is where the algorithm takes a trial and error approach to learning. It "looks" at the data and tries a bunch of different scenarios to determine the best course of action that will maximize rewards and minimize risk.
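
Here is a minimal sketch contrasting the first two categories, again assuming scikit-learn; reinforcement learning is omitted because it needs an interactive environment to supply rewards rather than a fixed data set, and every data value below is invented for illustration:

```python
# Supervised vs. unsupervised learning in miniature, assuming scikit-learn;
# all feature values are made up for illustration.
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Supervised: every example carries a label the model learns to predict.
shirts = [[200, 30], [210, 25], [20, 240], [15, 235]]  # made-up color features
labels = ["red", "red", "blue", "blue"]
clf = KNeighborsClassifier(n_neighbors=1).fit(shirts, labels)
print(clf.predict([[205, 28]]))  # -> ['red']

# Unsupervised: no labels; the algorithm groups similar examples by itself.
houses = [[2, 1100], [2, 1050], [4, 2600], [4, 2700]]  # bathrooms, sq. feet
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(houses))
```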


Deep Learning

Deep learning is one technique within machine learning. Where machine learning in general uses algorithms to analyze and learn from data, deep learning structures those algorithms into interconnected layers known as neural networks.

 

Basically, each layer feeds what it has learned into the next for a deeper understanding and analysis, allowing the AI to learn independently and make complex decisions with little to no human help. Because of deep learning, we have virtual assistants, chatbots, computer vision, speech recognition and image recognition.

 

An example of deep learning is the convolutional neural network (CNN or ConvNet). Since neural networks are arranged in ways that mimic the neurons of the human brain, the "neurons" of a CNN are organized like those of the visual cortex, the region of the brain responsible for processing visual stimuli. This is why convolutional neural networks are usually used for image analysis.
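
As an illustration, here is a minimal CNN sketch, assuming the PyTorch library; the layer sizes are arbitrary choices, not a recommended architecture:

```python
# A minimal convolutional network sketch, assuming PyTorch; the layer sizes
# are arbitrary choices for illustration, not a production architecture.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolutional layers pick out simple patterns (edges, textures)...
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # ...and a final linear layer combines them into one score per class.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One fake 32x32 RGB image in, ten class scores out.
print(TinyCNN()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```

Stacking convolution and pooling layers is what lets the network assemble complex patterns out of simple ones, echoing the layered learning described above.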

 


Why Artificial Intelligence is Important

Artificial intelligence has numerous benefits, and these extend to virtually every industry. Here are a few of them.

Making Existing Products More Intelligent

Artificial intelligence is not something that is sold to the public as a standalone product. Rather, it's integrated into existing products to make them more intelligent. One area where this is taking off today is the Internet of Things (IoT). This is a network of interconnected smart devices that constantly communicate to anticipate human needs with little human intervention.


For example, when you're driving home, your phone will communicate to your garage that it's in range so that the door opens when you arrive. At the same time, the phone will set the thermostat to a comfortable temperature based on the weather report it "checked" on the internet. And once the phone senses you're inside your home, it'll turn on your favorite relaxation music or the TV so you can watch the news.
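
As a toy illustration of the kind of rule an IoT hub might run, here is a short sketch in which every device action and threshold is hypothetical:

```python
# A toy sketch of an IoT automation rule; every device action and threshold
# here is hypothetical, invented purely for illustration.
GARAGE_RANGE_METERS = 100

def on_phone_location(distance_to_home_m: float, indoors: bool) -> list[str]:
    """Translate a phone's location update into home-automation actions."""
    actions = []
    if distance_to_home_m <= GARAGE_RANGE_METERS:
        actions.append("open_garage_door")
        actions.append("set_thermostat_from_weather_forecast")
    if indoors:
        actions.append("play_relaxation_music")
    return actions

print(on_phone_location(distance_to_home_m=80.0, indoors=False))
# -> ['open_garage_door', 'set_thermostat_from_weather_forecast']
```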

 

As AI improves, IoT will become better at anticipating your needs, to the point that human intervention becomes completely unnecessary.

Automating Repetitive Tasks

Humans can only perform a task for so long before it becomes boring due to repetition. Thankfully, this is not a problem for machines.

 

Whether it is checking a document for errors, billing multiple clients for completed projects or sending customer onboarding emails, AI is taking mundane tasks away from humans. This allows people to focus on high-level tasks that machines can't do, at least not yet.

Speeding Up Decision-Making

Humans can only actively process so much information at a given time. Many factors, such as stress or physical fatigue, affect our ability to make timely decisions. AI is free from all this and focuses only on what it is programmed to do.

 

You can see this in action when you play computer chess on your Windows machine. While you may be off your game, a computer never has that problem: it will make the best move available every time, especially when set to a high difficulty.
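
As an illustration of how a chess program picks its move, here is a toy sketch of minimax, the classic game-tree search behind many computer chess opponents; this tiny, generic version and its game tree are invented for the example:

```python
# A toy sketch of minimax game-tree search; the game tree below is invented
# purely for illustration.
def minimax(node, maximizing):
    """Return the best score reachable from this node with perfect play."""
    if isinstance(node, (int, float)):  # leaf: how good the final position is
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each list is a choice point; the computer patiently evaluates every line,
# assuming the opponent replies with their own best move.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # -> 3
```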

Reducing Human Error

Not only do our levels of physical and emotional distress affect our decision-making process, but they can also cause us to make mistakes. As long as a system has a human component, mistakes are inevitable. A well-programmed machine, however, is far less prone to them.

 

For example, the use of predictive AI has helped reduce human error in forecasting the weather. Businesses and brands have used the same kind of AI to identify the customers most likely to buy from them.

Always Available

There is a reason why businesses have office hours, and call centers take shifts. Humans require rest and need at least six to seven hours of sleep every night to recharge for the next day. Not to mention the other breaks they take during the day, since people can't work nonstop for more than four to six hours.

 

On the other hand, machines don't need rest. Whether it is lunchtime or 2 am, Alexa will always be available to process your queries. It's also why businesses use chatbots to help customers after business hours.

 

Uses of Artificial Intelligence In Everyday Life

You may not realize it, but AI is already a big part of everyday life. Here are some examples of how AI is used today:

  • Digital Personal Assistants (DPA): These days, our smartphones have become our very own DPAs. They can answer our questions or even help us plan our entire day.

  • Film: Film directors are using AI to help make casting decisions by analyzing an actor's past performances, popularity and compatibility with the script. Speaking of scripts, AI is also being used to speed up the scriptwriting process by analyzing numerous scripts to help produce a workable and unique one.

  • Cybersecurity: Cybersecurity experts are using AI to stop cyberattacks before they happen by feeding it data that allows it to recognize the patterns that signal an attack.

  • Search engines: Google and other search engines constantly monitor your online behavior to bump the websites you're most likely to visit to the top of your search results.

  • Online shopping: Amazon, eBay and other online marketplaces, which are essentially search engines for online shoppers, analyze your past purchases and search history to recommend products that you're most likely to buy.

  • Advertising: AI is helping marketing agencies analyze large sets of data, such as trends, consumer behavior, ad spend and sales. The insights extracted from that analysis help them create effective ad campaigns that deliver the best return on investment (ROI) while minimizing costs.

  • Cars: We're still on the road to inventing self-driving cars. However, AI is used in current cars for navigation and improving the safety of passengers. For example, cars are outfitted with sensors that can detect and alert the driver of possible danger.

  • Banking: AI is being used by banks and other financial institutions to identify fraudulent activity that can easily be missed by human eyes.

  • Healthcare: The health sector has seen the rise of personal healthcare assistants. These devices help patients by recommending what they should eat, how much exercise they need and when to take their medication based on their individualized medical profile.

  • Manufacturing: AI-powered predictive maintenance is helping manufacturers forecast machine failures. This way, they can get ahead of the problem and make repairs before costly downtime occurs.