Rugged Computers Run Linux on Jetson TX2 and Xavier

By Eric Brown

Aitech, which has been producing embedded Linux-driven systems for military/aerospace and rugged industrial applications since at least 2004, announced that Concurrent Real-Time’s hardened RedHawk Linux RTOS will be available on two Linux-ready embedded systems based on the Nvidia Jetson TX2 module. With RedHawk Linux standing in for the default Nvidia Linux4Tegra stack, the military-grade A176 Cyclone and the recently released, industrial-focused A177 Twister systems can “enhance real-time computing for mission-critical applications,” says Aitech.


MIL/AERO focused A176 Cyclone (left) and new A177 Twister
Here, we’ll take a closer look at the A177 Twister, which was announced in October as a video-capture-focused variant of the similar, MIL/AERO-targeted A176 Cyclone. Both of these “SWaP-optimized” (size, weight and power) supercomputers are members of Aitech’s family of GPGPU RediBuilt computers, which also includes PowerPC- and Intel Core-based systems.

We’ll also briefly examine the EV178 Development System for the Nvidia Xavier-based A178 Thunder, which was revealed at Embedded World. The A178 Thunder targets MIL/AERO applications, as well as autonomous vehicles and other uses (see below).

Both the A177 Twister and A176 Cyclone systems deploy the Arm-based Jetson TX2 module in a rugged, small form factor (SFF) design. The TX2 module features 2x high-end “Denver 2” cores and 4x Cortex-A57 cores. There’s also a 256-core Pascal GPU with CUDA libraries for running AI and machine learning algorithms.


 
A177 Twister (left) and Jetson TX2
The TX2 module is further equipped with 8GB LPDDR4 and 32GB eMMC 5.1. Other rugged TX2-based systems include Axiomtek’s eBOX800-900-FL.

The RedHawk Linux RTOS distribution, which was announced in 2005, is based on Red Hat Linux and the security-focused SELinux. RedHawk offers a hardened real-time Linux kernel with ultra-low latency and high determinism. Other features include support for multi-core architectures and x86 and ARM64 target platforms.
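To illustrate the scheduling jitter that a hardened real-time kernel such as RedHawk is designed to bound, the sketch below measures how far each wake-up of a periodic task overshoots its deadline on a stock, general-purpose kernel. The period and iteration counts are hypothetical illustrations, not RedHawk benchmarks.

```python
# Sketch: measure periodic wake-up jitter on a stock kernel.
# On a general-purpose kernel this jitter can vary widely; a
# hardened real-time kernel aims to keep it tightly bounded.
# (Illustrative only -- not a RedHawk benchmark.)
import time

def measure_jitter(period_s=0.01, iterations=50):
    """Sleep until each periodic deadline and record how far each
    wake-up overshoots the requested deadline (the jitter)."""
    overshoots = []
    deadline = time.monotonic()
    for _ in range(iterations):
        deadline += period_s
        time.sleep(max(0.0, deadline - time.monotonic()))
        overshoots.append(time.monotonic() - deadline)
    return overshoots

if __name__ == "__main__":
    jitter = measure_jitter()
    print(f"worst-case overshoot: {max(jitter) * 1e6:.0f} us")
```

On a desktop kernel the worst-case overshoot can swing by orders of magnitude under load; determinism here means guaranteeing an upper bound on that number.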

The RedHawk BSP also includes “NightStar” GUI debugging and analysis tools, which were announced with the initial RedHawk distro. NightStar supports hot patching “and provides a complete graphical view of multithreaded applications and their interaction with the Linux kernel,” says Concurrent Real-Time.

A177 Twister

The A177 Twister leverages the Jetson TX2 and its “CUDA and deep learning acceleration capabilities to easily handle the complex computational requirements needed in embedded systems that are managing multiple data and video streams,” says Aitech. The system is optimized for video capture, processing, and overlays.
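Since video overlays are one of the system's headline tasks, the sketch below shows the per-pixel alpha-blending math at the heart of an overlay operation, which a GPU like the TX2's Pascal applies across millions of pixels in parallel. The pixel values and alpha are hypothetical illustrations.

```python
# Sketch: per-pixel alpha blending, the core operation behind a
# video overlay such as HUD symbology composited onto captured
# video. Pixel values and alpha are hypothetical examples.

def blend_pixel(overlay, background, alpha):
    """Standard 'over' compositing for one RGB pixel.
    alpha=1.0 shows only the overlay, 0.0 only the background."""
    return tuple(
        round(alpha * o + (1.0 - alpha) * b)
        for o, b in zip(overlay, background)
    )

if __name__ == "__main__":
    hud_green = (0, 255, 0)       # overlay symbology pixel
    video_gray = (100, 100, 100)  # captured video pixel
    print(blend_pixel(hud_green, video_gray, 0.5))  # → (50, 178, 50)
```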


A177 Twister
The A177 Twister supports applications “including robotics, automation and optical inspection systems in industrial facilities, as well as autonomous aircraft and ground environments,” says Aitech. Other applications include security and surveillance, mining and excavating computers, complex marine and boating applications, and agricultural machinery.

The 148 x 148 x 63mm A177 Twister is protected against ingress per IP67. The fanless system weighs 2.2 lbs. (just under 1 kg) and supports an operating range of -20 to 65°C.

The Jetson TX2 module supplies 8GB LPDDR4 and 32GB eMMC 5.1. The A177 Twister adds a microSD slot with optional preconfigured card, as well as an optional “Mini-SATA SSD with Quick Erase and Secure Erase support.”

The system shares many features with the A176 Cyclone, with the major difference being that it adds optional WiFi-ac and Bluetooth 4.1, as well as support for simultaneous capture of up to 8x RS-170A (NTSC/PAL) composite video channels at full frame rates. It also has lower ruggedization levels and a narrower 6-24V input range, compared to the Cyclone’s 11-36V, among other differences.


 
A177 Twister block diagram (left) and I/O specs
As shown in the spec sheet above, you can purchase the Twister with or without the 8x composite inputs and/or a 1x SDI input with up to 1080/60 H.264 encoding. There’s also a choice of composite or SDI frame grabbers, both, or none at all. The one SKU that offers all of the above sacrifices the single USB 3.0 port.

Standard features include USB 2.0, HDMI, composite input, GbE, 2x RS-232 (one for debug/console), 2x CAN, and 4x single-ended discrete I/O. Most of these interfaces are bundled into rugged military-style composite I/O ports.

Power consumption is typically 8-10W with a maximum of 17W. The system also provides reverse polarity and EMC protections, hardware accelerated AES encryption/decryption, temperature sensors, elapsed time recorder, and dynamic voltage and frequency scaling.

EV178 Development System for A178 Thunder

Aitech revealed the A178 Thunder computer at Embedded World. The company recently followed up with a formal announcement and product page for an EV178 Development System that helps unlock the computer for early customers.


 
EV178 Development System for A178 Thunder (left) and Jetson AGX Xavier
Built around Nvidia’s high-end Jetson AGX Xavier module, the compact, Linux-driven A178 Thunder “is the most advanced solution for video and signal processing, deep-learning accelerated, for the next generation of autonomous vehicles, surveillance and targeting systems, EW systems, and many other applications,” says Aitech. The EV178 Development System for A178 Thunder processes at up to 11 TFLOPS (tera floating-point operations per second) and 22 TOPS (tera operations per second), says Aitech.
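To put the 11 TFLOPS / 22 TOPS peak figures in perspective, the back-of-envelope sketch below estimates single-inference time for a network of a given size. The 5-billion-operation network and the 30% utilization factor are hypothetical assumptions; real workloads run below peak throughput.

```python
# Back-of-envelope: how long might one inference take at the
# A178 Thunder's quoted peak rates? The network size and the
# utilization factor are hypothetical assumptions.

def inference_time_us(network_ops, peak_ops_per_s, utilization=0.3):
    """Estimated time for one inference, assuming only a fraction
    of peak throughput is actually achieved."""
    return network_ops / (peak_ops_per_s * utilization) * 1e6

PEAK_INT8_OPS = 22e12   # 22 TOPS (INT8), per Aitech
PEAK_FP16_OPS = 11e12   # 11 TFLOPS (FP16), per Aitech

if __name__ == "__main__":
    net = 5e9  # hypothetical 5-billion-operation network
    print(f"INT8: {inference_time_us(net, PEAK_INT8_OPS):.0f} us")
    print(f"FP16: {inference_time_us(net, PEAK_FP16_OPS):.0f} us")
```

Note that the quoted INT8 rate is exactly twice the FP16 rate, a common ratio on hardware that packs two 8-bit operations per 16-bit lane.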

The Jetson AGX Xavier has greater than 10x the energy efficiency and more than 20x the performance of the Jetson TX2, claims Nvidia. The 105 x 87 x 16mm Xavier module features 8x ARMv8.2 cores and a high-end, 512-core Nvidia Volta GPU with 64 tensor cores, plus 2x Nvidia Deep Learning Accelerator (DLA) — also called NVDLA — engines. The module is also equipped with a 7-way VLIW vision chip, as well as 16GB of 256-bit LPDDR4 RAM and 32GB of eMMC 5.1.
EV178 Development System for A178 Thunder

Preliminary specs for the EV178 Development System for A178 Thunder include:

  • Nvidia Jetson AGX Xavier module
  • 4x simultaneous SDI (SD/HD) video capture channels
  • 8x simultaneous Composite (RS-170A [NTSC]/PAL) video capture channels
  • Gigabit Ethernet
  • HDMI output
  • USB 3.0
  • UART Serial
  • Discretes
  • Pre-installed Linux OS, drivers, and test applications
  • Cables and external power supply

Further information

Concurrent’s RedHawk Linux RTOS appears to be available now as an optional build for the A177 Twister and earlier A176 Cyclone, both of which appear to be available with undisclosed pricing. No ship date was announced for the EV178 Development System for A178 Thunder. More information may be found in Aitech’s RedHawk Linux announcement, as well as the A177 Twister product page. More on the A178 Thunder may be found in the EV178 Development System for A178 Thunder announcement and product page.

This article originally appeared on LinuxGizmos.com on March 18.

Aitech | www.rugged.com

Chip-Level Solutions Feed AI Needs

Embedded Supercomputing

Gone are the days when supercomputing meant big, rack-based systems in an air-conditioned room. Today, embedded processors, FPGAs and GPUs are able to perform AI and machine learning operations, enabling new types of local decision making in embedded systems.

By Jeff Child, Editor-in-Chief

Embedded computing technology has evolved well past the point where complete system functionality on a single chip is remarkable. Today, the levels of compute performance and parallel processing on an IC mean that what were once supercomputing levels of capability can now be implemented in chip-level solutions.

While supercomputing has become a generalized term, what system developers are really interested in is crafting artificial intelligence, machine learning and neural networking using today’s embedded processing. Supplying the technology for these efforts are the makers of leading-edge embedded processors, FPGAs and GPUs. In these tasks, GPUs are being used for “general-purpose computing on GPUs,” a technique also known as GPGPU computing.
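The pattern GPGPU computing exploits can be sketched in a few lines: the same small kernel is applied independently to every data element, so thousands of GPU cores can execute it concurrently. Plain Python stands in here for a real CUDA or OpenCL kernel launch; the function names and values are illustrative.

```python
# Sketch of the data-parallel pattern behind GPGPU computing:
# one small kernel applied independently to every element.
# A serial loop stands in for the parallel hardware launch.

def saxpy_kernel(a, x_i, y_i):
    """One 'thread' of the classic SAXPY operation: a*x + y."""
    return a * x_i + y_i

def launch(kernel, a, x, y):
    """Serial stand-in for a parallel kernel launch over N elements.
    On a GPU, each (x_i, y_i) pair would map to its own thread."""
    return [kernel(a, xi, yi) for xi, yi in zip(x, y)]

if __name__ == "__main__":
    result = launch(saxpy_kernel, 2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0])
    print(result)  # → [12.0, 14.0, 16.0]
```

Because each element is independent, throughput scales with core count, which is why a 512-core GPU can vastly outrun a handful of CPU cores on this class of workload.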

With all that in mind, embedded processor, GPU and FPGA companies have rolled out a variety of solutions over the last 12 months, aimed at performing AI, machine learning and other advanced computing functions for several demanding embedded system application segments.

FPGAs Take AI Focus

Back in March, FPGA vendor Xilinx announced its plans to launch a new FPGA product category it calls its adaptive compute acceleration platform (ACAP). Following up on that, in October the company unveiled Versal—the first of its ACAP implementations. Versal ACAPs combine scalar processing engines, adaptable hardware engines and intelligent engines with advanced memory and interfacing technologies to provide heterogeneous acceleration for any application. But even more importantly, according to Xilinx, the Versal ACAP’s hardware and software can be programmed and optimized by software developers, data scientists and hardware developers alike. This is enabled by a host of tools, software, libraries, IP, middleware and frameworks that facilitate industry-standard design flows.

Built on TSMC’s 7-nm FinFET process technology, the Versal portfolio combines software programmability with domain-specific hardware acceleration and adaptability. The portfolio includes six series of devices architected to deliver scalability and AI inference capabilities for a host of applications across different markets, from cloud to networking to wireless communications to edge computing and endpoints.

The portfolio includes the Versal Prime series, Premium series and HBM series, which are designed to deliver high performance, connectivity, bandwidth, and integration for the most demanding applications. It also includes the AI Core series, AI Edge series and AI RF series, which feature the AI Engine (Figure 1). The AI Engine is a new hardware block designed to address the emerging need for low-latency AI inference for a wide variety of applications and also supports advanced DSP implementations for applications like wireless and radar.

Figure 1
Xilinx’s AI Engine is a new hardware block designed to address the emerging need for low-latency AI inference for a wide variety of applications. It also supports advanced DSP implementations for applications like wireless and radar.

It is tightly coupled with the Versal Adaptable Hardware Engines to enable whole application acceleration, meaning that both the hardware and software can be tuned to ensure maximum performance and efficiency. The portfolio debuts with the Versal Prime series, delivering broad applicability across multiple markets and the Versal AI Core series, delivering an estimated 8x AI inference performance boost compared to industry-leading GPUs, according to Xilinx.

Low-Power AI Solution

Following the AI trend, back in May Lattice Semiconductor unveiled Lattice sensAI, a technology stack that combines modular hardware kits, neural network IP cores, software tools, reference designs and custom design services. In September the company unveiled expanded features of the sensAI stack designed for developers of flexible machine learning inferencing in consumer and industrial IoT applications. Building on the ultra-low power (1 mW to 1 W) focus of the sensAI stack, Lattice released new IP cores, reference designs, demos and hardware development kits that provide scalable performance and power for always-on, on-device AI applications.

Embedded system developers can build a variety of solutions enabled by sensAI. They can build stand-alone iCE40 UltraPlus/ECP5 FPGA-based, always-on, integrated solutions with latency, security and form factor benefits. Alternatively, they can use the iCE40 UltraPlus as an always-on processor that detects key phrases or objects and wakes up a high-performance AP SoC/ASIC for further analytics only when required, reducing overall system power consumption. Finally, they can use the scalable performance/power benefits of the ECP5 for neural network acceleration, along with I/O flexibility to seamlessly interface with on-board legacy devices, including sensors and low-end MCUs, for system control.

Figure 2
Human face detection application example. The iCE40 UltraPlus enables AI with an always-on image sensor while consuming less than 1 mW of active power.

Updates to the sensAI stack include a new CNN (convolutional neural network) Compact Accelerator IP core for improved accuracy on iCE40 UltraPlus FPGAs and an enhanced CNN Accelerator IP core for improved performance on ECP5 FPGAs. Software tools include an updated neural network compiler tool with improved ease-of-use and both Caffe and TensorFlow support for iCE40 UltraPlus FPGAs. Also provided are reference designs and demos enabling human presence detection and hand gesture recognition (Figure 2). New iCE40 UltraPlus development platform support includes a Himax HM01B0 UPduino shield and the DPControl iCEVision board. …
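The multiply-accumulate workload these CNN accelerator cores implement in FPGA fabric can be shown with a tiny pure-Python 2D convolution ('valid' padding, stride 1). The 4x4 image and 3x3 vertical-edge kernel are hypothetical examples.

```python
# Sketch: the multiply-accumulate core of a 2D convolution, the
# operation a CNN accelerator implements in parallel hardware.
# 'Valid' padding, stride 1; image and kernel are hypothetical.

def conv2d(image, kernel):
    """Slide `kernel` over `image`, summing elementwise products
    at each position (one MAC chain per output pixel)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

if __name__ == "__main__":
    img = [[0, 0, 0, 1]] * 4      # flat region, then a step
    edge = [[-1, 0, 1]] * 3       # vertical edge-detect kernel
    print(conv2d(img, edge))      # → [[0, 3], [0, 3]]
```

The output responds only where the vertical edge sits; an FPGA accelerator's advantage is computing many such MAC chains concurrently within a tight power budget.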

Read the full article in the December issue of Circuit Cellar (#341)

Don’t miss out on upcoming issues of Circuit Cellar. Subscribe today!

Note: We’ve made the October 2017 issue of Circuit Cellar available as a free sample issue. In it, you’ll find a rich variety of the kinds of articles and information that exemplify a typical issue of the current magazine.