
The Future of Artificial Intelligence

Written by Scott Nelson

Intelligent Edge: Is AI Ready to Play a Role?

Three of the most common, buzzy phrases in the 2019 technology forecasts are edge compute, machine learning (ML) and artificial intelligence (AI). There’s good reason for that. There is no question that all three are real trends that will have dramatic impacts on how the Internet of Things (IoT) is used to change our world. But a critical reader will be uncomfortable that these terms are often used interchangeably.

I know I am.

Edge compute, ML and AI are three very distinct technologies, and they play very different roles for those designing products and systems that are, or will become, part of the IoT stack. If you are an embedded systems engineer, my guess is that you react in one of two ways to someone combining AI and the IoT edge in the same sentence: rolling your eyes or wincing in horror. Let’s break down the buzzwords and see if we can better prepare for what we are told is inevitable.

Edge compute
Edge computing is a distributed computing paradigm in which computation is largely or completely performed on distributed device nodes known as smart devices or edge devices as opposed to primarily taking place in a centralized cloud environment. – Wikipedia

The IoT stack has three levels of compute: Cloud, Aggregation Edge and the Physical Edge. Figure 1 shows where each is deployed and offers a rough scale. The importance of understanding edge compute is that it is the resource used for any edge-application deployment and it is the platform upon which the tools of machine learning and AI may be deployed. As such, edge designers must think about both the compute capabilities they have in the functional blocks of an edge device as well as the compute capability that they should design in for future proofing.

Figure 1  The IoT stack has three layers of compute: cloud, aggregation edge (gateways) and the physical edge (connected sensors and devices).

For example, a cellular modem today has both RF and MCU processors. Forward-thinking designers have implemented code emulators—for example Python and MicroPython—within those processors and partitioned the memory to protect the functional firmware of the modem. The result is a functional computing block with a programmable engine. Applications at this scale include protocol translators, business logic and auto-configurators just to name a few. This physical edge compute capability enables interoperability, security and edge analytics.
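As a sketch of the kind of application described above, a protocol translator running in the modem's embedded Python engine might look like the following. The legacy frame format, field names and function name here are invented for illustration, not taken from any real modem firmware:

```python
# Hypothetical protocol translator for a modem's embedded Python engine.
# The legacy CSV frame format and the JSON field names are invented for
# illustration; a real deployment would match its own devices and cloud API.
import json

def translate_frame(raw: bytes) -> str:
    """Convert a legacy sensor frame such as b"23.5,1013,47"
    (temperature, pressure, humidity) into JSON for upload."""
    temp, pressure, humidity = (float(v) for v in raw.decode().split(","))
    return json.dumps({
        "temperature_c": temp,
        "pressure_hpa": pressure,
        "humidity_pct": humidity,
    })
```

Because the engine is partitioned from the modem's functional firmware, logic like this can be updated in the field without touching the RF stack.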


At the next level up, the scale increases to the point where full Linux containers can deploy PC-class applications. Virtualization becomes practical at this level, and architects can look at moving cloud compute applications down to it to take advantage of reduced data latency as well as reduced data transport costs. Systems designers leveraging Aggregation Edge compute are looking at full, closed-loop control over the local wireless networks.
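To make the transport-cost argument concrete, here is a minimal, purely illustrative sketch of the kind of computation an architect might push down to this tier: a window of raw sensor samples is collapsed locally so that only a compact summary record crosses the metered uplink.

```python
# Illustrative only: collapse a window of raw sensor samples at the
# aggregation edge so only summary statistics cross the metered uplink.
from statistics import mean, pstdev

def summarize(samples):
    """Reduce one window of raw readings to a small summary record."""
    return {
        "n": len(samples),
        "mean": mean(samples),
        "stdev": pstdev(samples),
        "min": min(samples),
        "max": max(samples),
    }
```

Shipping five numbers instead of thousands of raw samples is exactly the latency and cost trade the paragraph above describes.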

Machine Learning
Machine learning (ML) is the study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task. Machine learning algorithms build a mathematical model of sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to perform the task. – Wikipedia

Training vs. learning: that is the question when we compare machine learning with artificial intelligence. In my book, machine learning is not true learning; it is training. The algorithms on which it is based are deterministic within the constraints of the digital twins used to train them. Machine learning is very powerful for optimization and adaptation to known key parameters precisely because of its ability to be trained.

Training drives performance and optimization, whether training people or machines. As such, machine learning and digital twins are already proliferating. Digital twins can be as simple as a linear correlation algorithm and as complex as a fully parameterized emulation of a machine, including end of life. Machine learning and machine training will allow operations to “see around the corner” of time, but it still doesn’t reach the threshold of true learning.
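The simplest digital twin mentioned above, a linear correlation, can be sketched in a few lines. This is a generic ordinary-least-squares fit, offered only to ground the idea that "training" means fitting a deterministic model to sample data:

```python
# A digital twin "as simple as a linear correlation algorithm":
# fit y ~ a*x + b by ordinary least squares, then predict with it.
def fit_linear_twin(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b  # the trained "twin"

# Training data from a (hypothetical) machine that behaves as y = 2x + 1.
twin = fit_linear_twin([0, 1, 2, 3], [1, 3, 5, 7])
```

The twin will interpolate and extrapolate exactly as its training data dictates; it cannot question the linear model it was given, which is the author's point about training versus learning.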

Artificial Intelligence
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving.” – Wikipedia

The key difference between machine learning and artificial intelligence lies in that distinction between training and learning. AI-based systems will learn outside the statistical and physical model constraints of machine-learning digital twins. AI-based systems have the cognitive capacity to question a model and experiment with it. AI-based systems can learn from mistakes.

That is the rub. Most control applications cannot afford mistakes. AI can work in the consumer preference and recommendation domain because the consequences of its mistakes are effectively virtual. I watched a show I didn’t like on Netflix the other night. I lost an hour of my time, but I mitigated those consequences by working during the show. AI in consumer Internet applications may have financial consequences, but it will not crash trains, planes or automobiles.

AI, therefore, has the opportunity to get traction in areas where the learning environment can be made safe or constrained to performance levels that do not cause harm. Comfort and productivity applications are excellent candidates here. Another valuable IoT application is device behavior in the security context. AI algorithms can learn about and predict behavior which, if applied to edge devices, can help identify both physical performance issues—calibration for example—as well as security breaches.
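A minimal sketch of that behavior-learning idea, assuming nothing about any particular product: learn a baseline from readings observed during normal operation, then flag readings that drift far from it, which could indicate calibration loss or a security breach. The statistics and threshold here are illustrative, not a production detector:

```python
# Sketch: learn a baseline of "normal" device readings, then flag outliers.
# The z-score threshold is illustrative; real systems would use richer models.
from statistics import mean, pstdev

def make_anomaly_detector(baseline, threshold=3.0):
    """Return a function that flags readings far from the learned baseline."""
    mu, sigma = mean(baseline), pstdev(baseline)
    def is_anomalous(reading):
        return abs(reading - mu) > threshold * sigma
    return is_anomalous
```

Because a false alarm here triggers an inspection rather than an actuation, this is the kind of safe, constrained environment in which such learning can gain traction.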

The Future
Moore’s Law has made edge compute ubiquitous. There are two edges when we talk about edge compute—the physical edge and aggregation edge. Virtualization will happen at both and across the two, and this virtualization will drive the value of IoT deployments. The better we understand the physical world, machines and systems, the better we can “teach” machines to learn and perform better, and that is what machine learning is all about: training and optimization, not the unknown.


Artificial intelligence is the most powerful of these three trends but also the most uncertain and least understood. Designers and control engineers can feel safe with machine learning because of the determinism and constraints of their models (digital twins) and algorithms. AI, on the other hand, will remain in the cloud for a while, because experimenting in the virtual world of data is safer than in the physical world of the IoT.

There is a reason this discussion is still somewhat philosophical, in the same way quantum mechanics was a philosophical discussion between Bohr and Einstein in the early 1900s. We cannot yet see how AI systems will transform our world, and that is a bit scary—at least in the context of 2019.


Published in Circuit Cellar Magazine Issue 343 • February 2019 — Get a PDF of the Issue


Scott Nelson is Chief Product Officer at Digi International. For more than 25 years he has led product development and entrepreneurial business growth as both a technology and a business leader. Scott was formerly CEO/CTO of Reuleaux Technology, where he helped companies in both Silicon Valley and the Minneapolis–St. Paul area with strategy and new business development in the Internet of Things (IoT). After beginning his career at Honeywell’s corporate R&D center, he spent the next 15 years at Logic PD as CTO and EVP.
As a technology evangelist, Scott is connected to Silicon Valley and is currently a member of multiple tech start-up advisory boards and of a leading start-up accelerator, the Alchemist Accelerator. He holds a Ph.D. in applied and engineering physics from Cornell University, a doctoral minor in business administration from the Samuel Johnson School of Management at Cornell University, and a B.A. in physics and mathematics from St. Olaf College.