Tech Trends

Intelligent Vision Systems Leverage Embedded Tech

Written by Jeff Child

Smart Cameras, Software and Box-Level Systems

Vision systems have evolved from simple camera-computer setups to sophisticated implementations that leverage IoT connectivity, smart cameras and AI technology.

  • What’s happening in intelligent vision system technology?

  • Industrial AI smart cameras

  • Short-wave infrared cameras 

  • Deep-learning cameras

  • Line scan cameras 

  • 3D robotics cameras

  • AI runtime learning software

  • Visual inference technology for the IoT edge

  • Box-level machine vision systems

  • IoT face-detection systems

  • ADLINK Technology’s NEON-2000-JNX industrial AI smart camera

  • Allied Vision’s Goldeye SWIR camera

  • Imago Technologies’ VisionAI smart camera

  • Teledyne Imaging’s Linea Lite family of line scan cameras

  • eCapture’s LifeSense G53 3D smart camera

  • Teledyne DALSA’s 2021-07 edition Sapera Vision Software

  • IOTech’s visual inference solutions for the IoT edge

  • OnLogic’s Helix 500

  • Axiomtek’s MVS100-323-FL machine vision system

  • Vecow’s EAC-2000 Series

  • Maxim Integrated and Xailient’s IoT Face Detection system

Vision systems represent one of the most dynamic areas of embedded system design today. Over the years, the term “machine vision system” has been supplanted by the shorter “vision system”—a trend fueled by the launch of the industry publication Vision Systems Design in the mid-1990s. Fast forward to today, and vision systems are embracing all the leading advanced embedded technologies—including GPUs, artificial intelligence, IoT, box-level solutions and more.

To keep pace with system designers’ demands, technology suppliers have spent the past 12 months rolling out new solutions for today’s vision system market. There seem to be two tracks of technology advancement. On one hand, camera manufacturers are packing more intelligence into the camera systems themselves. On the other, box-level systems have emerged with high-performance AI processing and rich sets of camera interface technologies. These two trends overlap for many system developers, who are embracing both more camera intelligence and more integrated box-level solutions. Meanwhile, sophisticated AI-based vision software is giving system developers powerful tools for their system designs.

GPU-BASED SOLUTION

GPUs like NVIDIA’s Jetson family of modules offer a high-performance solution for vision system implementations. With that in mind, in May ADLINK Technology launched the NEON-2000-JNX series, which the company claims is the industry’s first industrial AI smart camera to integrate the NVIDIA Jetson Xavier NX module. The camera’s high performance, small form factor and ease of development provide an AI vision solution for manufacturing, logistics, retail, service, agriculture, smart city, healthcare, life sciences and other edge applications, says ADLINK. The camera is an all-in-one solution, eliminating the traditional need for complex integration of the image sensor module, cables and AI box PC.

The NEON-2000-JNX series comes with all necessary components and an optimized OS already integrated and well validated. The unit supports six sensor configurations in total, from 1.2MP to 8MP, to deliver raw data and complete image detail for machine vision, including four image sensors developed with Basler. Two MIPI image sensors reduce CPU loading and support a higher operating temperature range. An embedded image signal processor (ISP) provides enhanced and environment-adaptive imaging to improve AI accuracy. The NEON-2000-JNX’s integration overcomes EMC/ESD/vibration/thermal problems, interface incompatibility, image drops caused by faulty camera and OS settings and other common reliability issues.

The NEON-2000-JNX series AI smart camera is pre-installed with ADLINK’s new edge vision analytics software, the EVA SDK (Edge Vision Analytics Software Development Kit) (Figure 1). The solution provides a wide selection of field-ready application plug-ins and ADLINK-optimized AI models, which guarantee AI vision quality and simplify building AI vision applications with limited coding required. A preview function makes verification of the AI inference flow and results quick and intuitive. According to ADLINK, the smart camera enables AI developers, even newcomers to AI, to focus on the application and training and build a proof of concept in as few as two weeks.

Figure 1
The NEON-2000-JNX series is an industrial AI smart camera that integrates the NVIDIA Jetson Xavier NX module. The camera is pre-installed with ADLINK’s new edge vision analytics software, EVA SDK.
MULTI-SPECTRUM CAMERA

Cameras that support multiple wavelengths of light—such as visible light and infrared—provide a powerful solution for many types of machine vision applications. Along just those lines, in July Allied Vision released its Goldeye short-wave infrared (SWIR) models with Sony IMX990 and Sony IMX991 SenSWIR sensors.

Allied Vision is among the first camera manufacturers to integrate Sony’s SenSWIR InGaAs sensors in its Goldeye SWIR camera series, making the cameras sensitive in both the visible and SWIR spectrums. The Goldeye G-030 is equipped with the Sony IMX991, and the 1.3MP Goldeye G-130 (Figure 2) is outfitted with the Sony IMX990 sensor, both utilizing Sony’s SenSWIR technology. Both camera models are available with a GigE Vision interface and integrated single-stage thermoelectric sensor cooling (TEC1). Models with a Camera Link interface are planned for release in Q4/2021.

Figure 2
Supporting both visible and short-wave infrared (SWIR) spectrums, the Goldeye G-030 is equipped with the Sony IMX991, and the 1.3MP Goldeye G-130 (shown) is outfitted with the Sony IMX990 sensor, both utilizing Sony’s SenSWIR technology.

The new sensors are based on Sony’s SenSWIR technology, whose InGaAs sensor architecture delivers a leap in pixel size and image homogeneity while enabling image acquisition in the visible and short-wave infrared ranges (400nm to 1,700nm) with high quantum efficiency. This is expected to expand the application possibilities of Goldeye SWIR cameras for the spectral analysis of objects and increase the precision of details detected, all enabled by a small pixel size of only 5µm.

The Goldeye G-030 features the 0.25″ VGA sensor, the IMX991, which provides a frame rate of 234fps at a resolution of 656×520 pixels. The Goldeye G-130 with the IMX990 1.3MP SXGA sensor (1280×1024 pixels) offers a maximum frame rate of 94fps. Both new camera models feature a robust, compact and fanless design optimized for industrial applications. The integrated single-stage sensor cooling (TEC1) and several integrated image correction functions contribute significantly to the cameras’ excellent image quality. In addition, comprehensive I/O and GenICam standard-compliant feature control options considerably simplify system integration.

DEEP LEARNING CAMERA

Deep learning AI is a key technology for today’s machine vision, and the latest trend is to embed that deep learning technology in the camera system itself. For example, in May Imago Technologies introduced VisionAI, a smart camera that combines an intelligent area scan camera with the Google Coral processor. If deep learning models are to be used in industry, they require appropriate hardware with industry-suitable interfaces, says Imago. That means that, above all, embedded solutions are needed that can be adapted quickly and without great effort to the task at hand. The VisionAI features a 5MP resolution camera, quad-core Arm processor and Google Coral Accelerator (Figure 3).

Figure 3
The VisionAI features a 5MP resolution camera, quad-core Arm processor and Google Coral Accelerator.

With the freely programmable VisionAI, processing applications from the fields of AI, deep learning and machine learning can be easily implemented. With its integrated Google Edge TPU, the inference system supports the TensorFlow Lite and AutoML Vision Edge frameworks. This makes it well suited for tasks such as pattern recognition, classification, anomaly or defect detection in inspection applications, code reading and many other custom applications. With the SDK and sample programs, system developers don’t have to deal with image acquisition, I/O handling or other basic functions, and can concentrate fully on developing the actual image processing solution.

Complementing this, VisionAI users also have the flexibility to develop their own image processing applications based on Halcon, C++ or Python, incorporating any libraries or their own source code. The VisionAI’s free programmability makes it extremely flexible in selecting the appropriate software and allows complete access to the hardware, giving users full control over the design of their embedded solution.
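
As a rough illustration of that workflow, here is a minimal sketch of running a TensorFlow Lite model through the Coral Edge TPU delegate, which is the mechanism Coral-based devices use. The model filename, input image and output handling are assumptions for illustration, not VisionAI specifics.

```python
# Minimal Edge TPU inference sketch using the TensorFlow Lite runtime.
# Assumes a model already compiled for the Edge TPU and a test image on
# disk; the filenames are placeholders.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# The Edge TPU delegate dispatches supported ops to the Coral accelerator.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # assumed filename
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the capture to the model's expected input shape (e.g., 224x224).
_, height, width, _ = inp["shape"]
image = Image.open("frame.png").convert("RGB").resize((width, height))
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(image), 0))

interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("top class:", int(np.argmax(scores)), "score:", float(scores.max()))
```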

2K AND 4K LINE SCAN CAMERAS

Line scan cameras are a staple in machine vision, particularly for industrial inspection applications. For example, in May Teledyne Imaging launched the Linea Lite family of line scan cameras built for a wide range of machine vision applications. The new Linea Lite cameras feature a 45% smaller footprint than the original Linea. Based on a new proprietary CMOS image sensor from Teledyne Imaging, the family expands on the success of the original series of low-cost, high-value Linea line scan cameras.

Designed to suit many applications, the Linea Lite offers vision system developers a choice between a “high full well” mode and a “high responsivity” mode via easy-to-configure gain settings. The cameras are available in 2k and 4k resolutions, in monochrome and bilinear color (Figure 4). Linea Lite has all the essential line scan features, including multiple regions of interest, programmable coefficient sets, precision time protocol (PTP) and TurboDrive. With a GigE interface and power over Ethernet (PoE), Linea Lite is a good fit for applications such as secondary battery inspection, optical sorting, printed materials inspection, packaging inspection and more.

Figure 4
Available in 2k and 4k resolutions, Linea Lite has all the essential line scan features, including multiple regions of interest, programmable coefficient sets, precision time protocol (PTP) and TurboDrive.

The camera features a bilinear color architecture that maximizes color fidelity and image quality, while still providing the high speed and throughput made possible by Teledyne DALSA’s proprietary TurboDrive technology. TurboDrive enables the Linea Lite to deliver its full image quality at line rates up to 50kHz sustained (and up to 64kHz in burst mode) with no changes to your GigE network.
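
Some back-of-the-envelope arithmetic shows why TurboDrive matters here. Assuming 8-bit monochrome pixels, a 4k camera at the full 50kHz sustained line rate produces more raw data than a single GigE link can carry:

```python
# Back-of-the-envelope line scan bandwidth check (8-bit mono pixels assumed).
line_width_px = 4096     # 4k resolution
line_rate_hz = 50_000    # 50kHz sustained line rate
bytes_per_px = 1         # 8-bit monochrome

raw_rate = line_width_px * line_rate_hz * bytes_per_px   # bytes per second
gige_usable = 115e6      # ~115MB/s practical GigE Vision payload

print(f"raw: {raw_rate / 1e6:.0f}MB/s vs usable GigE: {gige_usable / 1e6:.0f}MB/s")
# raw: 205MB/s vs usable GigE: 115MB/s -- the raw stream would not fit, so
# TurboDrive's pixel encoding is what lets the full rate cross one GigE link.
```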

3D ROBOTICS CAMERA

Robot-mounted cameras represent a whole sub-category of vision system design. Because robots are mobile, intelligent 3D camera technology is critical for robotic vision systems. In August, eCapture introduced what it claims is the smallest form factor stereoscopic 3D depth-sensing camera. The new LifeSense G53 measures 50mm × 14.9mm × 20mm and is designed for depth capture and object tracking for industrial, robotics and other applications driven by AI (Figure 5). Over the next quarter, eCapture plans to introduce a full range of depth map cameras to address the growing need for stereo imaging equipment.

Figure 5
The LifeSense G53 is designed for depth capture and object tracking for industrial, robotics and other applications driven by AI. The G53 provides a 50-degree field of view (FoV) and includes two mono sensor pairs for various resolutions of stereo, mono and depth disparity/distance map output via USB.

The G53 provides a 50-degree field of view (FoV) and includes two mono sensor pairs for various resolutions of stereo, mono and depth disparity/distance map output via USB. The camera is well suited for development of robots, automated guided vehicles (AGVs) and autonomous mobile robots (AMRs), goods-to-person (G2P) delivery, as well as fast-motion depth capture, says the company.

eCapture camera solutions are based on the company’s eYs3D stereo vision processing solutions. The eYs3D vision processor can compute the stereo depth map data and reduces the burden on the host CPU/GPU, allowing for higher performance and lower power solutions. Synchronized frame data from both cameras allows development of SLAM algorithms.

This very small form factor camera offers a clean depth map output and requires only minimal host computing support. It is suitable for a wide range of depth applications in fast moving systems with excellent indoor and outdoor depth output performance. The eCapture LifeSense Depth Camera G53 includes the eCapture SDK supporting Windows, Linux, and Android OS environments. A variety of programming language support and wrapper APIs are also available.
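
For a sense of what the eYs3D processor is offloading, the sketch below computes a depth map from a stereo pair with OpenCV’s semi-global block matcher. This is a generic illustration, not the eCapture SDK, and the focal length and baseline are placeholder values that would normally come from calibration.

```python
# Generic stereo depth sketch with OpenCV (not the eCapture/eYs3D SDK):
# depth = focal_length_px * baseline_m / disparity_px
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed filenames
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; disparities come back in 1/16-pixel units.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

FOCAL_PX = 700.0    # placeholder focal length in pixels (from calibration)
BASELINE_M = 0.03   # placeholder stereo baseline in meters

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
print("median scene depth:", float(np.median(depth_m[valid])), "m")
```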

AI FOR RUNTIME LEARNING

AI software is becoming more of a priority in today’s smart vision systems. Feeding such needs, in August Teledyne DALSA announced the 2021-07 edition of its Sapera Vision Software. The software provides image acquisition, control, image processing and AI functions for designing, developing and deploying high-performance machine vision applications.

The new upgrades to the Sapera Vision Software include enhancements to its AI training graphical tool Astrocyte and the image processing and AI libraries tool Sapera Processing (Figure 6). The software is well suited for applications such as surface inspection on metal plates, location and identification of hardware parts, detection and segmentation of vehicles and noise reduction on x-ray medical images.

Figure 6
The new upgrades to the Sapera Vision Software include enhancements to its AI training graphical tool Astrocyte and the image processing and AI libraries tool Sapera Processing.

New features in this release include a new continual classification algorithm that allows users to pre-train a classifier in Astrocyte and then perform further training at runtime in Sapera Processing. There is also a new anomaly detection algorithm that is more robust in locating defects while providing the ability to generate output heatmaps. Heatmaps at runtime are very useful for obtaining the location and shape of defects without the need for graphical annotations at training, says the company.
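
As a generic illustration of that last point (deliberately not the Sapera Processing API), an output heatmap can be thresholded and its connected hot regions reported as defect candidates:

```python
# Generic sketch: turn an anomaly heatmap into defect bounding boxes.
# Illustrative only -- not the Sapera Processing API.
import cv2
import numpy as np

heatmap = np.load("heatmap.npy")  # assumed HxW array of anomaly scores

# Normalize to 8-bit and keep only the strongly anomalous pixels.
norm = cv2.normalize(heatmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(norm, 200, 255, cv2.THRESH_BINARY)

# Each connected region of hot pixels becomes one defect candidate.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"defect candidate at ({x},{y}) size {w}x{h}")
```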

The new software version also enables live acquisition for dataset creation. When creating a dataset in Astrocyte, you can now acquire live video from a camera and generate a series of files automatically prior to training. During acquisition, images are prepared for training—adjusted for size and aspect ratio—before being saved to disk.
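
That grab-adjust-save loop is easy to picture in code. The sketch below is a generic stand-in, not Astrocyte itself; the camera index, square target size and output paths are all assumptions.

```python
# Generic live dataset capture sketch (not the Astrocyte tool): grab frames,
# fix size and aspect ratio by center-cropping, and save them for training.
import os
import cv2

TARGET = 512                 # assumed training input size in pixels
os.makedirs("dataset", exist_ok=True)
cap = cv2.VideoCapture(0)    # assumed camera index

for i in range(100):         # collect an assumed 100 samples
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    side = min(h, w)         # center-crop to square to normalize aspect ratio
    y0, x0 = (h - side) // 2, (w - side) // 2
    crop = frame[y0:y0 + side, x0:x0 + side]
    cv2.imwrite(f"dataset/img_{i:04d}.png", cv2.resize(crop, (TARGET, TARGET)))

cap.release()
```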

VISUAL INFERENCE FOR IoT EDGE

Gone are the days when machine vision was exclusively a contained process within a facility or production line. The IoT phenomenon has shown the benefits of connecting systems and acting on acquired data over the Internet, and vision systems are embracing those advantages too. With that in mind, in September IOTech partnered with Lotus Labs to deliver AI and visual inference solutions at the IoT edge. The partnership enables IOTech to integrate Lotus Labs’ computer vision technology into its edge software solutions.

This combination provides functionality that is especially useful for companies building intelligent solutions across vertical use cases, says IOTech. These include loss prevention in retail, crowd management in entertainment venues, manufacturing component fault detection, COVID safe-distancing management and smart safety systems within industrial plants.

The integrated solution will enable the data from conventional sensors and OT endpoints to be combined with the results from the latest AI and video inference technologies to provide a much more accurate real-time operational picture and make smarter decisions from the fusion of data. IOTech has pilot programs underway at major sporting venues and anticipates soon deploying AI and visual inference solutions for a number of these.

Lotus Labs provides visual inference through Padmé, its AI platform. Padmé will be integrated with IOTech’s Edge Xpert to offer a comprehensive solution for computer vision at the edge. The solution will support a range of use cases, including people counting, predictive maintenance, product quality checking and theft detection. All of these increase in accuracy through AI and video inference.

IOTech’s Edge Xpert edge computing platform is supported by a pluggable open architecture for computer vision that allows users to run their AI algorithms and vision models at the edge (Figure 7). Edge Xpert enables users to easily control camera devices, collect video streams and automatically apply AI and vision inference right at the edge. The platform supports deploying models that can include object detection, classification and recognition. It passes the inference results to other services for real-time decision making.
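
As a rough sketch of that hand-off (not Edge Xpert’s own API), inference results at the edge are commonly published as small JSON messages over a lightweight transport such as MQTT, where other services subscribe and act on them. The broker address, topic and field names below are assumptions.

```python
# Generic sketch: publish a vision inference result for downstream services.
# Plain MQTT stands in for the platform's own transport; the broker address,
# topic and field names are all assumptions.
import json
import time
from paho.mqtt import publish

detection = {
    "camera": "dock-door-3",          # assumed device name
    "label": "person",
    "confidence": 0.92,
    "bbox": [120, 48, 200, 310],      # x, y, width, height in pixels
    "timestamp": time.time(),
}

# One-shot publish; a real deployment would keep a persistent connection.
publish.single(
    "edge/vision/detections",         # assumed topic
    json.dumps(detection),
    hostname="broker.local",          # assumed edge broker
)
```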

Figure 7
The Edge Xpert edge computing platform is supported by a pluggable open architecture for computer vision that allows users to run their AI algorithms and vision models at the edge.
BOX-LEVEL SOLUTIONS

Box-level systems have become a popular choice for many of today’s vision system designs. Box-level embedded PCs can provide the AI processing, network connectivity and camera interfaces required for advanced vision system implementations. As an example, in July OnLogic announced that leading machine vision specialist Artemis Vision is using OnLogic computers to build solutions for quality control inspection, dimensioning and smart logistics.

Artemis Vision is using OnLogic computers for a number of projects, including the RaPTr (Rapid Pallet Tracker), a smart logistics solution used to automatically scan and track barcodes on pallets as they pass through warehouse checkpoints like dock doors (Figure 8). RaPTr drastically improves fulfillment speed and accuracy by eliminating slow and error-prone manual scanning. The system is built on OnLogic’s Helix 500 industrial computer platform.

Figure 8
The RaPTr (Rapid Pallet Tracker) is a smart logistics solution used to automatically scan and track barcodes on pallets as they pass through warehouse checkpoints like dock doors. The system is built on OnLogic’s Helix 500 industrial computer platform (shown).

Another Artemis Vision solution enables automatic dimensioning of flooring tiles. The embedded OnLogic computer receives, calibrates, and mosaics images from four high-resolution cameras for consistent measurement of varying sizes and materials, providing reliable data and product quality control for the end customer.

OnLogic industrial computers are also used by Artemis Vision for a device that automates the inspection of woven and braided products. The solution helps avoid missed quality issues, eliminate the need for manual monitoring and inspection and reduce the scrap rates that result from catching defects too late. The dimensioning and quality inspection solutions both leverage the performance and flexibility of the OnLogic Karbon 700 rugged computer.

REAL-TIME VISION I/O

In another box-level solution example, earlier this year Axiomtek introduced its MVS100-323-FL, an ultra-compact fanless machine vision system with real-time vision I/O and camera interfaces. Its vision I/O includes trigger input/output, LED lighting control and isolated DIO. With support for two independent IEEE 802.3af GbE LAN (PoE) ports for connection to industrial cameras, the vision system is specially designed for automated optical inspection (AOI), label presence inspection, optical character recognition (OCR) and defect inspection.

The MVS100-323-FL embeds the Intel Atom x5-E3940 processor. The two IEEE 802.3af GbE LAN ports ensure independent bandwidth capacity and power supply for GigE cameras. Through software integration, its integrated lighting controller supports both strobe mode and trigger mode, and can drive various types of LEDs (Figure 9). Moreover, the ultra-compact vision system has real-time vision I/O for camera triggering to ensure high-quality image capture. A set of isolated digital I/O channels allows the vision system to control different kinds of devices, such as robotics and pneumatic actuators for object sorting. To ensure reliable and stable performance in harsh environments, the IP40-rated MVS100-323-FL supports a wide operating temperature range of -10°C to +55°C.

Figure 9
The MVS100-323-FL is an ultra-compact fanless machine vision system with real-time vision I/O and camera interfaces. Its vision I/O includes trigger input/output, LED lighting control and isolated DIO.

Machine vision inspection plays an important role in quality control in the manufacturing industry. For object inspection applications, the timing correlation between proximity sensor input, camera trigger output and illumination actuation control is crucial, says Axiomtek. The MVS100-323-FL can operate under sequential control and be synchronized with two cameras. Meanwhile, the machine vision system is equipped with a high-speed trigger function and flexible lighting control, achieving seamless interoperability between cameras and vision devices through the integrated real-time vision I/O.
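
The ordering Axiomtek describes is straightforward to sketch. In the outline below, every I/O helper is hypothetical, standing in for whatever vendor vision I/O calls a real system would use, and the 500µs illumination settling delay is an assumed value.

```python
# Hypothetical trigger-sequencing sketch. None of these I/O helpers are a
# real API -- they stand in for the vendor's vision I/O library calls.
import time

def wait_for_proximity_sensor():
    """Hypothetical: block until the proximity sensor reports a part."""

def set_strobe(on: bool):
    """Hypothetical: drive the LED lighting control output."""

def trigger_camera():
    """Hypothetical: pulse the camera trigger line."""

while True:
    wait_for_proximity_sensor()   # 1. part arrives at the inspection point
    set_strobe(True)              # 2. illumination on
    time.sleep(0.0005)            # 3. assumed ~500us for light to stabilize
    trigger_camera()              # 4. expose while the strobe is active
    set_strobe(False)             # 5. illumination off until the next part
```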

The MVS100-323-FL comes with DDR3L SO-DIMM sockets for up to 8GB of system memory. It offers flexible I/O expandability, including one GbE LAN port with an Intel Ethernet Controller I211-AT, one RS-232/422/485 port, two USB 3.2 Gen1 ports, one HDMI port, one VGA port and one optional internal USB connector. It has a full-size PCI Express Mini Card slot that supports mSATA signals for WLAN/WWAN/mSATA modules. The machine vision system offers enhanced security through its Trusted Platform Module 2.0 (TPM 2.0).

GPU-BASED EMBEDDED SYSTEM

Using NVIDIA Jetson-based GPU technology for AI vision is happening at the box level too. In July, Vecow announced the EAC-2000 Series, a fanless embedded system. Powered by the NVIDIA Jetson Xavier NX module, the EAC-2000 supports operating temperatures from -25°C to 70°C and 9V to 50V wide-range DC input, along with GMSL camera interfaces linked through FAKRA-Z connectors (Figure 10). According to Vecow, the EAC-2000 Series brings small size and easy deployment to AI vision and industrial applications including traffic vision, intelligent surveillance, auto optical inspection, smart factory, AMR/AGV and other AIoT/Industry 4.0 applications.

Figure 10
Powered by an NVIDIA Jetson Xavier NX module, the EAC-2000 supports operating temperatures from -25°C to 70°C and 9V to 50V wide-range DC input, along with GMSL camera interfaces linked through FAKRA-Z connectors.

The Vecow EAC-2000 is based on the new NVIDIA Jetson Xavier NX module, which provides more than 10x the performance of its widely adopted predecessor, the NVIDIA Jetson TX2. Supporting 4x GMSL automotive cameras via rugged FAKRA-Z connectors, the EAC-2000 is well suited for industrial and outdoor environments. The EAC-2000 is further equipped with 4x GigE LAN ports, including 2x PoE+ to simplify cable installations and deployments, and 1x CAN bus for fast and robust communication between vehicles.

The EAC-2000 Series is equipped with an external microSD slot for up to 128GB plus an M.2 M-key 2280 socket for NVMe SSDs. There is also an M.2 B-key 3042/3052 slot with a nano-SIM slot for optional 4G or 5G modules and an M.2 E-key 2230 for an optional Wi-Fi/BT module. Six antennas are available.

FACE DETECTION SYSTEMS

Vision systems aren’t only in the factory space. Face detection is an emerging vision-based technology. As an example, in July Maxim Integrated and Xailient announced that Maxim Integrated’s MAX78000 low-power neural-network microcontroller (MCU) detects and localizes faces in video and images using Xailient’s proprietary Detectum neural network (Figure 11). Xailient’s neural network consumes 250 times less energy (just 280µJ per inference) than conventional embedded solutions, and at 12ms per inference, the network performs in real time, faster than the most efficient face-detection solution available for the edge.

Figure 11
The MAX78000 low-power neural-network MCU detects and localizes faces in video and images using Xailient’s proprietary Detectum neural network, which consumes 250 times less energy than conventional embedded solutions at just 12ms per inference.

Battery-powered AI systems that require face detection, such as home cameras, industrial grade smart security cameras and retail solutions, require a low-power solution to provide the longest possible operation between charges, says Maxim. In addition to supporting standalone applications, Maxim Integrated’s MCU paired with Xailient’s neural network improves overall power efficiency and battery life of hybrid edge/cloud applications that employ a low-power “listening” mode, which then awakens more complex systems when a face is detected.
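
That hybrid pattern is simple to sketch: a lightweight detector watches every frame and only wakes the heavier pipeline on a hit. In the stand-in below, OpenCV’s stock Haar cascade plays the role of the low-power detector (it is not Xailient’s Detectum network), and the camera index and wake action are assumptions.

```python
# Generic wake-on-face sketch. OpenCV's Haar cascade is a stand-in for a
# low-power detector such as Detectum; the wake action here is just a print.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)  # assumed camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Hand off to the power-hungry stage (recognition, cloud upload, ...).
        print(f"face at ({x},{y}) size {w}x{h} -- waking main pipeline")

cap.release()
```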

Xailient’s Detectum neural network includes focus, zoom and visual wake-word technologies to detect and localize faces in video and images at 76x faster rates than conventional software solutions, with similar or better accuracy. In addition, the flexible network can be extended to applications other than facial recognition, such as livestock inventory and monitoring, parking spot occupancy, inventory levels and more. CC

RESOURCES
ADLINK Technology | www.adlinktech.com
Allied Vision Technologies | www.alliedvision.com
Axiomtek | www.axiomtek.com
eCapture | www.ecapturecamera.com
Imago Technologies | www.imago-technologies.com
IOTech | www.iotechsys.com
Maxim Integrated | www.maximintegrated.com
OnLogic | www.onlogic.com
Teledyne DALSA | www.teledynedalsa.com/mv
Vecow | www.vecow.com

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • OCTOBER 2021 #375
