September 13, 2021

What Is Edge Computing in AI?


A brief introduction to Edge AI on local devices

Back in 2017, there was something unusual about the announcement of the Apple iPhone X. Beyond the all-glass front, the missing physical home button, and the yearly surprise announcement that this was the best iPhone they'd ever made, the new processor carried the curious name "A11 Bionic."

 

The A11 Bionic included a processor called the "Neural Engine" dedicated to tasks such as face and speech recognition. As most of this type of analysis is done using deep neural networks, Apple further dubbed the processor "Bionic" to reflect how the new chip uses technology inspired by biological brains. It doesn't hurt that it also makes a pretty sweet marketing term.

 

Every iPhone since 2017 has included one of these "Bionic" systems-on-a-chip (SoCs), a telling sign that something as compact as a smartphone has become increasingly powerful for machine learning tasks.

 


Your Device, Your Voice Assistant

 

What was the motivation for adding voice and image recognition to the iPhone's SoC? If you've ever used Siri, Apple's voice assistant, you may have run into occasional problems where, instead of responding to your command, she says something along the lines of "Please wait a moment..." This is because, at present, Siri relies on cloud processing of voice data, and if she is unable to connect to Apple's servers over the internet, that's where the party ends. This is due to change very soon, however, as this fall's release of iOS 15 will switch Siri to processing your voice commands completely on the device itself.

 

For voice assistants such as Siri, Amazon’s Alexa, Google Assistant, or Microsoft’s Cortana, on-device processing brings a host of benefits:

Reduced latency, since the data doesn’t have to travel over the internet to be processed, which matters even more with wearable technologies

Less use of bandwidth, which can translate to cheaper internet bills

Better privacy, as the processing is all done locally and not on someone else’s computer

 

The Natural Language Processing (NLP) functionality on these smart assistants is sometimes designed as a hybrid edge-and-cloud solution known as “fog computing” because it sits at the “edge of the cloud”. In these systems, some data is processed locally, while more complex requests are handled in the cloud.
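As a rough sketch of how such a hybrid split might look (the intent list, the routing check, and the cloud endpoint below are hypothetical placeholders for illustration, not any vendor's actual API):

```python
import requests  # used only for the hypothetical cloud fallback

# Hypothetical set of intents the on-device model can handle by itself; in a
# real assistant this would be a compact neural network shipped with the device.
LOCAL_INTENTS = {"set a timer", "turn on the lights", "play music"}

CLOUD_NLP_URL = "https://example.com/nlp"  # placeholder endpoint


def handle_command(text: str) -> str:
    """Route a voice command: simple intents stay on-device, the rest go to the cloud."""
    if text.lower() in LOCAL_INTENTS:
        # Fast path: no network round trip, works offline, audio never leaves the device.
        return f"Handled locally: {text}"
    # Slow path: fall back to a larger cloud model for open-ended requests.
    response = requests.post(CLOUD_NLP_URL, json={"query": text}, timeout=2.0)
    return response.json()["reply"]


if __name__ == "__main__":
    print(handle_command("set a timer"))
```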

 

Edge AI Rises in Popularity

 

Edge AI refers to the kind of on-device inference processing that we are seeing on smartphones, and it is inspired by a more general trend towards “edge computing”. Data is processed on the same device that produces it, or at most on a nearby computer. Edge AI means there’s no reliance on distant cloud servers or other remote computing nodes, allowing the AI to work faster and respond more accurately to time-sensitive events.

 

For example, a factory might collect data from its local devices and submit it to a computer in the same building on the same network. This reduces delay significantly and allows certain types of use cases to scale more efficiently.
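A minimal sketch of that pattern, assuming a hypothetical gateway address on the factory's local network (the IP address, port, and sensor payload below are made up for illustration):

```python
import requests

# Hypothetical on-premises gateway on the same LAN as the sensors; nothing here
# leaves the building, so round trips stay in the low milliseconds.
EDGE_GATEWAY_URL = "http://192.168.1.50:8080/ingest"

reading = {
    "machine_id": "press-07",        # made-up identifier
    "vibration_mm_s": 4.2,           # made-up sensor value
    "temperature_c": 71.5,
}

# Instead of posting to a distant cloud API, the reading goes to the nearby node,
# which can run inference and raise an alert on the factory floor immediately.
resp = requests.post(EDGE_GATEWAY_URL, json=reading, timeout=0.5)
print(resp.status_code)
```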

 

Why AI Needs to Be on the Edge

 

Some amount of latency or lag is fine in certain scenarios, such as analyzing large volumes of data in batches. But for other applications, critical decisions need to be made within very tight time budgets, sometimes less than a second. The additional delay imposed by network latency is often unacceptable, and in some cases the information may already be invalid by the time it has been processed and has traveled back through the network.
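To make that concrete, here is a back-of-the-envelope comparison. The latency figures and the budget below are illustrative assumptions, not measurements:

```python
# Illustrative latency budget for a decision that must land within 100 ms.
BUDGET_MS = 100

# Assumed numbers, purely for illustration.
cloud_round_trip_ms = 120   # device -> data center -> device over the public internet
cloud_inference_ms = 30
edge_inference_ms = 40      # a smaller model running on the device or a nearby node

cloud_total = cloud_round_trip_ms + cloud_inference_ms
edge_total = edge_inference_ms

print(f"Cloud path: {cloud_total} ms ({'over' if cloud_total > BUDGET_MS else 'within'} budget)")
print(f"Edge path:  {edge_total} ms ({'over' if edge_total > BUDGET_MS else 'within'} budget)")
# The cloud path also assumes a healthy connection; on a congested or dropped
# link, the round trip alone can blow past the budget entirely.
```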

 

It’s not just about industrial applications either. Self-driving cars are another example of AI technology that needs local processing. And if you have an Alexa-powered Echo in your home, you’ve already seen another noteworthy use case in action: as described earlier, these smart assistants process some data locally while sending more complex requests to the cloud.

 

Video Cameras Driving Edge AI

 

Smart assistants gather data by using microphones, but as edge AI technology matures, it is expected that many advanced use cases will be collecting input data through video cameras.

 

By integrating AI modules directly into the camera hardware, processing can be done in real time with a significant reduction in network traffic. Edge AI can make video AI solutions more responsive, affordable and scalable than they have ever been before. Computer vision models can even be optimized specifically for edge devices, so that they require less on-device memory.
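One common way to do that is post-training quantization. The sketch below uses TensorFlow Lite's converter on a small stand-in Keras model; the model itself is a toy placeholder, and a real deployment would start from a trained detection or classification network:

```python
import tensorflow as tf

# Toy stand-in for a trained computer vision model; a real pipeline would load
# an actual trained network here (e.g. a MobileNet-class classifier).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Post-training quantization: store weights in a smaller representation so the
# model takes less memory on the edge device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```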

 

The CCTV market is already very active and is projected to grow by almost 100% over the next five years, a trend no doubt driven by the needs of the industrial sector.


Enterprise Use Cases for Edge AI

 

Edge AI solutions for video are prominent in the industrial and private security sectors. Video enables improved manufacturing controls, perimeter security, equipment maintenance and more.

 

Inspection: Inspections can be automated, performed faster and with greater reliability. 

Quality Control: Quality control standards can be scaled and consistently enforced. The impact of human error due to fatigue can be reduced.

Automated Building Inspections: Buildings can be inspected from close up with a cell phone, or from the air by using a drone.

Precision Agriculture: Farmers can monitor their fields more closely, and each stage of the growing operation can be controlled with greater precision, leading to better yields and improved stability. Farmers can also split their attention between multiple fields more easily without having to micromanage specific aspects of their production. Video surveillance and monitoring can happen both on the ground (for example in tractors and other heavy machines) and from the air. 

Predictive Maintenance: Edge devices can monitor equipment for signs of degradation and prevent costly downtime (a simple sketch of this idea follows this list). 

Facial Authentication: Restricting physical access without the need for outdated solutions like passwords or access cards.

Remote Location Monitoring: Locations can be monitored from one centralized spot, reducing the need for a company to invest in its monitoring workforce and capabilities. With the right detection and recognition systems, the responsibilities of human operators may be reduced to responding to the occasional alert and verifying it.

Workplace Safety: Workplaces can be monitored and potential safety violations can be identified immediately. Improving compliance can have a direct impact on overall workplace safety. This applies both to problems caused by human error, as well as equipment failure that nobody has noticed yet.
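As a toy illustration of the predictive maintenance case, here is how a single edge device might flag degradation locally. The readings, window size, and threshold below are made-up values, not tuned parameters:

```python
from collections import deque

# Hypothetical vibration readings (mm/s) streamed from a machine-mounted sensor.
# A real deployment would read these from the device's sensor bus or gateway.
readings = [2.1, 2.3, 2.2, 2.4, 2.2, 2.3, 5.8, 6.1, 6.4]

WINDOW = 5          # number of recent readings to average over
THRESHOLD = 1.5     # alert if a reading exceeds 1.5x the recent average

window = deque(maxlen=WINDOW)
for value in readings:
    if len(window) == WINDOW and value > THRESHOLD * (sum(window) / WINDOW):
        # Everything above runs on the edge device; only the alert needs the network.
        print(f"ALERT: vibration {value} mm/s is well above the recent baseline")
    window.append(value)
```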

 

Public Sector Use Cases for Edge AI

 

Law enforcement, healthcare, utilities and transportation are just a few examples of public sector verticals that are likely to see significant benefits from the use of edge AI.

Environmental Scanning and Inspection: Floods, fires and other natural disasters can be identified early to minimize their destructive power. 

Drone (UAV) Inspections: Drones can process their feeds locally without transmitting them to a ground network. This can facilitate autonomous navigation and specific use cases, like searching for objects or people.

Smart Cities: Smart cities are no longer a sci-fi concept, and have been actively evolving in some parts of the world. Citizens enjoy improved connectivity, safety and comfort thanks to the use of advanced monitoring and predictive systems. We’re still actively researching the possibilities and limits of this field, but we’ve already come quite far. Smart homes can integrate very easily into a connected, smart city, and this is already happening in some places. 


Industry Specific Use Cases for Edge AI

 

Certain industries have also adopted edge AI video solutions for their own needs with varying degrees of success. Some of the more successful examples include power and energy, transportation and retail. 

Power and Energy: Advanced monitoring systems are now at the heart of many energy facilities. This doesn’t end with the actual production, but also spills over into areas like security, which are just as important for this sector. In fact, the energy industry has been one of the major adopters of AI-driven video technology, according to recent reports, and it looks like this trend is on the rise as well.

Transportation and Traffic: Self-driving cars are perhaps one of the most notable examples of this technology’s use in the transportation sector. Cars rely heavily on video feeds processed locally for this technology to work. It doesn’t end there though; the transportation infrastructure also leverages similar solutions heavily for monitoring and controlling traffic on public roads.

Retail: It should be no surprise that the retail sector is also an active adopter of this technology; its applications in the field are numerous. Stores are adopting edge AI for inventory management, customer care and security. AI is becoming a major component of loss prevention strategies in the sector.


 

Conclusion

Edge AI is already making an impact on our everyday lives, and use of this technology is likely to become more widespread as the industry matures. By integrating AI modules directly into local device hardware, video AI technology will become more responsive, affordable and scalable. While many of the best-known edge AI technologies today work primarily with audio, we are likely to see significant growth in specialized edge AI technologies designed to operate efficiently on local video devices.