Research & Design Hub Tech Trends

Embedded Vision Systems Adopt AI and IoT Tech

Written by Jeff Child

Smarter Machine Vision

As vision systems evolve, they’re leveraging all the latest technology trends in embedded computing. Box-level system solutions are keeping pace with AI-level processing, IoT functionality and advanced camera connectivity.

Machine vision has come a long way from the simpler days of cameras attached to frame grabber boards—all arranged along an industrial production line. While the basic concepts are the same, emerging embedded systems technologies such as Artificial Intelligence (AI), deep learning, the Internet-of-Things (IoT) and cloud computing have all opened up new possibilities for machine vision system developers.


To keep pace, companies that once focused only on box-level machine vision systems are now moving toward AI-based edge computing systems that provide all the needed interfacing for machine vision, while also adding new levels of compute performance to process imaging in real time and over remote network configurations.

AI IN MACHINE VISION
ADLINK Technology appears to be moving in this direction of applying deep learning and AI to machine vision. The company has a number of products, listed as "preliminary" at present, that provide AI machine vision solutions. These systems are designed to be "plug and play" (PnP) so that machine vision system developers can evolve their existing applications to AI enablement right away, with no need to replace existing hardware.

Acquired production images are a common critical data type that’s particularly suited to AI analytics. The capture, processing and management process does, however, present numerous challenges, making it costly, complicated and time-consuming to implement, says ADLINK. A typical system comprises cameras connected to a frame grabber module in a host computer, with images acquired on the line and processed by specific vision software. Processing and management of the necessarily massive amount of captured data for machine learning requires a VPU/GPU integrated vision solution or a new heterogeneous platform.


An example from this ADLINK AI Machine Vision product line is the EOS-i6000-M Series, which is designed for deep-learning inference AI GigE vision systems (Figure 1). The pre-installed AI development component reduces testing and integration time, saves personnel costs and speeds up time to market. The box-level system has passed an extreme validation process to provide high reliability in power consumption, thermal control and compatibility. This alleviates any need to worry about the integration of hardware and software. The compact EOS-i6000-M Series supports up to four Intel Movidius Myriad VPUs, making the system well suited for object classification and detection applications with deep learning.

FIGURE 1 – The EOS-i6000-M Series is designed for deep-learning inference AI GigE vision systems. The compact EOS-i6000-M Series supports up to four Intel Movidius Myriad VPUs, making the system well suited for object classification and detection applications with deep learning.
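ADLINK doesn't publish the inference code here, but as a rough illustration of what object classification on a Myriad VPU typically looks like, the Python sketch below uses Intel's OpenVINO runtime. The model file, the 224x224 input size and the "MYRIAD" device target are assumptions for illustration, not vendor-supplied code.

# Minimal OpenVINO classification sketch targeting a Myriad VPU ("MYRIAD" device).
# Model path, input size and class labels are placeholders for illustration only.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("classifier.xml")        # IR model produced by the Model Optimizer
compiled = core.compile_model(model, "MYRIAD")   # use "CPU" or "GPU" if no VPU is present
output_layer = compiled.output(0)

frame = cv2.imread("part_under_test.jpg")
blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

scores = compiled([blob])[output_layer]          # run inference on the VPU
print("Predicted class:", int(np.argmax(scores)))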

CLOUD CONNECTIVITY
In December, ADLINK Technology announced it was teaming up with Intel and Amazon Web Services (AWS) to simplify AI at the edge for machine vision. The jointly integrated solution provides an Amazon SageMaker-built machine learning model optimized by and deployed with the Intel Distribution of OpenVINO toolkit, the ADLINK Edge software suite and certification on AWS Greengrass.

Dubbed ADLINK AI at the Edge, the solution automates edge-computing processes so that system developers can focus on developing applications without needing advanced knowledge of data science and machine learning models. Intel’s distribution of the OpenVINO toolkit optimizes deep learning workloads across Intel architecture—including accelerators—and streamlines deployments from the edge to the cloud. Meanwhile, Amazon SageMaker provides a fully managed service that covers the entire machine learning workflow. AWS Greengrass extends AWS to edge devices so they can act locally on the data they generate, while still using the cloud for management, analytics and durable storage. The ADLINK Data River software does translation between devices and applications to enable a vendor-neutral ecosystem to work seamlessly together. The ADLINK Edge software suite builds a set of deployable applications to communicate with end-points, devices or applications. Those end-points, devices or applications can then publish and/or subscribe to data topics on the ADLINK Data River.
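The announcement doesn't include Data River or Greengrass code itself. As a generic illustration of the publish/subscribe data-topic pattern described above, the following Python sketch uses the open-source paho-mqtt client as a stand-in; the broker address, topic name and payload format are placeholder assumptions, not ADLINK's or AWS's actual interfaces.

# Generic publish/subscribe sketch illustrating the edge-to-cloud data-topic pattern.
# Uses paho-mqtt as a stand-in; this is NOT the ADLINK Data River or AWS Greengrass API.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "edge-gateway.local"   # placeholder broker address
TOPIC = "factory/line1/vision"  # placeholder data topic

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)

while True:
    result = {"timestamp": time.time(), "defect_detected": False, "confidence": 0.97}
    client.publish(TOPIC, json.dumps(result), qos=1)   # push an inference result to the topic
    time.sleep(1.0)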

ADLINK says it has worked on multiple industrial use cases that benefit from AI at the edge, including a smart pallet solution that makes packages and pallets themselves intelligent so they can detect where they’re supposed to be and when they’re supposed to be there, in real-time. Additional use cases include object detection modeling for object picking functions or worker safety, such as identifying product defects on conveyor systems or worker impediments in manufacturing environments, and equipment failure predictions to reduce machine downtime and increase productivity. AI at the Edge software capabilities can be fully optimized on certified ADLINK devices, including its NEON industrial smart camera, EOS vision system, and deep learning accelerator card and GigE frame grabber with Intel Movidius Myriad X VPU.


VISION IN THE IoT ERA
Aaeon’s approach to machine vision is a comprehensive one. Aside from offering vision controllers, Aaeon cooperates with many different partners, including camera vendors such as Basler, FLIR, IDS and Baumer. It also works with machine vision software such as MVTec’s MERLIC and HALCON. Aaeon’s line of Box PCs is compatible with Camera Link and CoaXPress frame grabber cards.

Like those of other machine vision technology makers, many of Aaeon’s latest box-level systems have evolved to meet the new requirements of the IoT era. This means supporting features such as Power-over-Ethernet (PoE) and IoT gateway functionality. Exemplifying those trends, in March 2019, Aaeon announced the BOXER-6405U (Figure 2), a turn-key rugged embedded PC built to be flexible and adaptable to a wide range of Industry 4.0 applications, including machine vision, AI edge computing and industrial IoT gateway duty.

FIGURE 2 – The BOXER-6405U is a turn-key rugged embedded PC built to be flexible and adaptable to a wide range of Industry 4.0 applications, including machine vision, AI edge computing and industrial IoT gateway.

The BOXER-6405U is built to be a go-anywhere, work-anywhere solution. Its rugged design gives it a wide operating temperature range from -20°C to 50°C. Its palm-sized form factor, only 37mm thick, allows it to squeeze into tight operating spaces, and its wide voltage input range of 9V to 24V allows it to easily integrate with industrial power sources. The BOXER-6405U even comes with wall-mount brackets to ensure it’s ready to install wherever you need it.

The BOXER-6405U features an Intel N3350 or N4200 processor with 2GB of DDR3L-1600 memory and up to 32GB of eMMC storage onboard. The BOXER-6405U comes with four USB 3.0 ports and two Intel i211AT Gigabit Ethernet ports, suited for machine vision applications. The BOXER-6405U has two internal expansion bays, one full-size MiniCard and one half-size MiniCard, supporting a wide range of options including AI modules such as Aaeon’s own AI Core X with Intel Movidius Myriad X VPU. With support for Wi-Fi or 4G LTE cards, the BOXER-6405U can also be used for remote edge computing or as an industrial IoT gateway.

GPU-BASED SYSTEM
Also following the AI path of embedded vision evolution, in August 2019, Aaeon rolled out the BOXER-8170AI, a computing solution featuring the NVIDIA Jetson TX2. Equipped with four PoE LAN ports and four USB 3.0 ports, the BOXER-8170AI provides stability and flexibility for AI edge networks.

The BOXER-8170AI features the NVIDIA Jetson TX2’s 6-core processor design, which pairs a dual-core Denver 2 and a quad-core Arm Cortex-A57 in a single SoC. This design, together with the module’s 256 CUDA cores, provides the speed and performance to power AI edge solutions. The BOXER-8170AI comes with 8GB of LPDDR4 memory and 32GB of eMMC storage onboard, and supports AI frameworks such as TensorFlow and Caffe, as well as AI inference software from developers and customers.
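The BOXER-8170AI ships without application code, so as a rough illustration of the kind of TensorFlow inference workload such a system might run, the sketch below loads a trained model and classifies a single frame. The model path, input shape and output interpretation are placeholder assumptions, not vendor-supplied code.

# Minimal TensorFlow inference sketch of the kind of workload a Jetson TX2 box might run.
# The SavedModel path ("defect_classifier") and 224x224 input shape are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("defect_classifier")    # a previously trained model
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in for a camera frame
probs = model.predict(frame)
print("Defect probability:", float(probs[0][0]))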

The BOXER-8170AI is built to provide flexibility and stability for AI edge networks. It is designed with four PoE LAN ports, each with its own dedicated controller chip. This allows higher bandwidth and stability for each port, letting PoE cameras operate individually on dedicated connections. The BOXER-8170AI supports a maximum output of 60W for up to four PoE cameras, making it well suited for a range of AI solutions incorporating PoE cameras, such as smart retail, virtual fences and access control.
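Aaeon doesn't supply capture code with the announcement. The sketch below is a generic illustration of how four independently connected PoE cameras might be serviced, one capture thread per camera, using OpenCV; the RTSP URLs are placeholders, and GigE Vision cameras would instead use a GenICam-based SDK.

# Generic multi-camera capture sketch: one thread per PoE camera on its dedicated port.
# RTSP URLs are placeholders; not Aaeon sample code.
import threading
import cv2

CAMERA_URLS = [
    "rtsp://192.168.1.101/stream",  # placeholder addresses, one per PoE port
    "rtsp://192.168.1.102/stream",
    "rtsp://192.168.1.103/stream",
    "rtsp://192.168.1.104/stream",
]

def capture_loop(url):
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # hand the frame off to an inference queue here
    cap.release()

threads = [threading.Thread(target=capture_loop, args=(u,), daemon=True) for u in CAMERA_URLS]
for t in threads:
    t.start()
for t in threads:
    t.join()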

The BOXER-8170AI is designed with a wealth of I/O features including four USB 3.0 ports, allowing for additional cameras or devices to be connected to the system. The BOXER-8170AI also features two COM ports for easy integration into industrial systems, two HDMI ports, and remote ON/OFF. The BOXER-8170AI is built to easily connect to networks with a Gigabit LAN port and two antenna ports to connect to wireless networks or act as an AIoT gateway. The BOXER-8170AI also features an SD Card slot and USB OTG for easy maintenance.

Built for the harsh environments of industrial systems, the BOXER-8170AI features a fanless design and an all-aluminum chassis, protecting the system from dust, vibration and other hazards. The BOXER-8170AI is capable of operating in temperatures from -20°C to 50°C and features a wide input voltage range of 12VDC to 24VDC. With a compact height of only 48mm, it can fit into almost any convenient space needed to power AI edge applications. According to Aaeon, the BOXER-8170AI was designed to provide system developers with a stable platform for PoE-connected cameras and AI solutions.

RUGGED PASCAL-BASED SYSTEM
Axiomtek is another company whose machine vision product lines are morphing toward AI-based solutions. Exemplifying that trend, in November the company announced its AIE500-901-FL (Figure 3), an advanced embedded system for edge AI computing and deep learning applications. The embedded system employs an NVIDIA Jetson TX2 module, which has a powerful 64-bit Arm Cortex-A57 processor, an NVIDIA Pascal GPU with 256 CUDA cores and 8GB of 128-bit LPDDR4 memory. To withstand the rigors of day-to-day operation, the fanless AIE500-901-FL has an extended operating temperature range of -30°C to +60°C and, thanks to its strong construction, tolerates vibration of up to 3Grms.

FIGURE 3 – The AIE500-901-FL is an advanced embedded system for edge AI computing and deep learning applications. The embedded system employs an NVIDIA Jetson TX2 module. The unit is specifically designed for video analysis, object classification, computer vision, quality control and more.

The unit is specifically designed for video analysis, object classification, computer vision, quality control and more. The AIE500-901-FL comes with 32GB eMMC onboard and is equipped with one M.2 Key M 2280 SSD slot with PCIe and SATA signal and one Micro SD slot for massive data processing and AI applications. The system has one full-size PCI Express Mini Card slot and one SIM slot for 3G/4G, GPS, Wi-Fi and Bluetooth connections.

The AIE500-901-FL provides a rich set of I/O interfaces, including one HDMI 2.0 port, two USB 3.1 Gen1 ports, two GbE LAN ports, two COM ports or two CAN ports, one OTG port and one MicroSD port. It also offers one reset button, one power button, one recovery switch and four SMA-type antenna openings. To meet industrial and automation requirements, the reliable edge AI system has a 12VDC or 24VDC power input.

The AIE500-901-FL comes bundled with NVIDIA JetPack 4.2.1 to help developers jump-start their development environment on the NVIDIA Jetson platform and minimize the work needed to develop AI-related applications.

LTE-CONNECTED SYSTEMS
Connectivity is another area that’s seen technology advancement in the machine vision realm. Using wireless interfacing with Wi-Fi or Bluetooth has become more commonplace, and IP cameras connected over a wireless or wired network are no longer considered exotic. Going a step further, Imago Technologies released its Arm-based VisionBox Daytona computer, which adds LTE connectivity, in May 2019 (Figure 4). The capability means that remote machine vision systems can use LTE so that “questionable decisions” can be remotely “validated by the expert through image analysis,” according to Imago.

FIGURE 4 – The Arm-based VisionBox Daytona computer adds LTE connectivity. The LTE radio, along with the 802.11ac Wi-Fi built into the NVIDIA Jetson TX2 module that powers the system, makes image and program transfer easier.

The LTE radio, along with the 802.11ac Wi-Fi built into the NVIDIA Jetson TX2 module that powers the system, also makes image and program transfer easier. “Applications could, for example, make use of the 4G connection by enabling a forklift truck with an installed VisionBox to immediately contact the logistics infrastructure,” says Imago.

The VisionBox Daytona is based on NVIDIA’s hexa-core Jetson TX2 module. This provides the Daytona with its AI functionality via the CUDA acceleration libraries running on its 256-core Pascal GPU, for applications such as deep learning, hyperspectral imaging and computing 3D images. Options include the original 8GB LPDDR4/32GB eMMC 5.1 Jetson TX2 model or the newer 4GB/16GB version. Only the 8GB version has onboard Wi-Fi. Like all the company’s VisionBox models, the Daytona provides an unnamed FPGA with real-time I/O controller capabilities. The FPGA is supported with Imago’s RTCC SDK.

The VisionBox Daytona also offers an external SD slot and an internal M.2 slot for solid-state drives (SSDs). At the heart of the system is a pair of GbE-based GigE Vision camera ports with Power-over-Ethernet and 2x Trigger-over-Ethernet camera triggers to match. Antenna mounts are available for both Wi-Fi and LTE. The system is further equipped with a standard GbE port, a DisplayPort, 2x USB 3.0 ports and an encoder interface based on RS-422. There are also 8x inputs and 8x outputs with opto-isolation for 24V. Like the GigE camera triggers, the serial and DIO connections are controlled by Imago’s real-time FPGA.

The system’s 163mm x 163mm x 48mm footprint expands to 210mm in one dimension with the addition of a mounting plate. The 1.3kg computer has a 20VDC to 28VDC input and runs at 7W (idle) or 24W (typical). Power can be supplied to PoE-powered devices such as cameras. The fanless system has a heatsink and can tolerate a 0°C to 50°C range.

SMARTER CAMERAS
In embedded vision systems, a parallel trend to the evolution of box-level systems is the notion of adding more embedded intelligence to the camera subsystem. Along those lines, Allied Vision Technologies makes a line of embedded and machine vision cameras. The Alvium 1500 family of cameras combines the advantages of classic machine vision cameras with the advantages of embedded sensor modules, says Allied Vision.

Just last month (January), the company released the latest member of the 1500 family, the Alvium 1500 C-210 (Figure 5). The new camera offers a resolution of 2.1Mpixels and thus fills the gap between the 1.2Mpixel Alvium 1500 C-120 and the 5Mpixel Alvium 1500 C-500. The camera combines Full HD resolution with high frame rates (up to 118 frames per second).

FIGURE 5 – The Alvium 1500 C-210 camera offers a resolution of 2.1Mpixels and thus fills the gap between the 1.2Mpixel Alvium 1500 C-120 and the 5Mpixel Alvium 1500 C-500. The camera combines a Full HD resolution with high frame rates (up to 118 frames per second).

Together with the new CSI-2 camera model, Allied Vision released Vimba MIPI CSI-2 drivers for NVIDIA Jetson SoMs (System on Modules). The drivers support all current and future Alvium camera modules with a MIPI CSI-2 interface, no matter which sensor the camera module uses. With minimal development effort, various cameras can be tested with different sensors, diverse resolution variants of a system can be developed or existing systems can be upgraded to the latest sensors.

That not only saves time, but also significantly reduces development costs. Comprehensive documentation and support further facilitate system integration and simplify prototyping. Drivers for NVIDIA’s Jetson TX2, Jetson AGX Xavier and Jetson Nano SoMs are available on GitHub.

The Alvium 1500 series is well suited as a camera for easy hardware and software integration in embedded applications. All models are equipped with a MIPI CSI-2 interface, which is particularly suitable for embedded vision applications as it can address the dedicated hardware of the embedded boards. The Alvium 1500 Series offers an essential feature set. Software integration can be done via Video4Linux2 (V4L2), GStreamer, OpenCV or direct register access.
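Allied Vision's own Vimba drivers and samples live on GitHub. As a generic illustration of the V4L2/GStreamer/OpenCV route mentioned above, the Python sketch below opens a CSI-2 camera exposed as /dev/video0 through a GStreamer pipeline from OpenCV; the pipeline elements and resolution are assumptions that depend on the driver and sensor in use.

# Opening a MIPI CSI-2 camera exposed as a V4L2 node through GStreamer from OpenCV.
# Assumes OpenCV was built with GStreamer support; pipeline details depend on the camera driver.
import cv2

pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1920,height=1080 ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
if ok:
    print("Captured frame:", frame.shape)   # e.g. (1080, 1920, 3)
cap.release()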

In addition to the NVIDIA Jetson driver, even more open-source drivers for selected processor architectures are provided for V4L2 support, enabling easy integration and fast go-to-market on the system integrator side. The image pre-processing functionalities can be configured directly on the Image Signal Processor (ISP) in the camera to save CPU load on the host side.

For its part, E-con Systems has rolled out its own steady stream of intelligent cameras over the past several months. Most recently, in December, E-con launched a 5Mpixel MIPI camera for the NVIDIA Jetson Nano developer kit. The e-CAM50_CUNANO is a 5.0MP, 2-lane MIPI CSI-2, fixed-focus color camera. The camera is based on the 1/2.5″ AR0521 CMOS image sensor from ON Semiconductor and has a built-in ISP.

According to the company, this powerful ISP helps bring out the best image quality from the sensor, making the camera well suited for the next generation of AI devices. The camera can be directly connected to the camera connector (J13) on the NVIDIA Jetson Nano developer kit. The e-CAM50_CUNANO is exposed as a standard V4L2 device, and customers can use the V4L2 APIs to access and control the camera.
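E-con Systems provides its own sample applications; the snippet below is simply a generic illustration of opening a V4L2 camera node from Python with OpenCV and setting a couple of properties. The device index, resolution and control values are placeholder assumptions, not E-con's sample code.

# Generic V4L2 access sketch: open the camera node and adjust a couple of controls.
# Device index, resolution and exposure setting are placeholders.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)          # /dev/video0 exposed by the CSI-2 driver
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2592)          # 5MP sensor: 2592 x 1944
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1944)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)           # control IDs and values vary by driver

ok, frame = cap.read()
if ok:
    cv2.imwrite("snapshot.png", frame)
cap.release()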

TWO-CAMERA SMART SYSTEM
Many embedded machine vision applications need to blend several other kinds of sensor input besides just vision. Meeting such needs, in December, Teledyne DALSA, a Teledyne Technologies company, announced its newest smart camera system, called VICORE (Figure 6).

This flexible system provides high performance for inspection applications using traditional 2D, thermal and 3D imaging, or a combination thereof. Its small, book-style format consumes minimal cabinet space and provides convenient, front-accessible connections for cameras, I/O and system components. This includes a dedicated industrial Ethernet port that offers efficient communication with complementary factory devices using EtherNet/IP or PROFINET. VICORE can be set up and deployed as a standalone system, with an attached HDMI display and keyboard, or as a remotely accessible networked device through its LAN port.

VICORE provides a choice of embedded application software. New users, or users of Teledyne smart camera technology, can be up and running in minutes with its iNspect software, says the company. For users that need additional flexibility or customization, Teledyne’s flagship Sherlock 7 software meets those needs. For users looking to measure height features using 3D profile sensors, Teledyne’s new Sherlock 8 software provides a solution. Sherlock 8 expands on Sherlock 7 capabilities and offers improved ease of use.

The company says the VICORE system is well suited for a variety of applications, including dual-camera applications, low-cost high-resolution (up to 25 Mpixel) applications, thermal applications using Teledyne Calibir cameras, 3D applications using Teledyne’s Z-Trak profile sensors or surface applications using Genie Nano with multi-segment lighting (Shape from Shading).

As with almost every segment of embedded systems today, machine vision is embracing promising new technologies such as AI, IoT and the cloud. That trend will continue as high-performance AI-level computing becomes ever more affordable and mainstream.

RESOURCES
Aaeon | www.aaeon.com
ADLINK Technology | www.adlinktech.com
Allied Vision Technologies | www.alliedvision.com
Axiomtek | us.axiomtek.com
E-con Systems | www.e-consystems.com
Imago Technologies | www.imago-technologies.com
Teledyne DALSA | www.teledynedalsa.com/mv

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • FEBRUARY 2020 #355
