Deadline Extended to June 22 — Vote Now!

UPDATE: We’ve extended our 2018 reader survey on open-spec Linux/Android hacker boards through this Friday, June 22. Vote now!

Circuit Cellar’s sister website LinuxGizmos.com has launched its fourth annual reader survey of open-spec, Linux- or Android-ready single board computers priced under $200. In coordination with Linux.com, LinuxGizmos has identified 116 SBCs that fit its requirements, up from 98 boards in its June 2017 survey.

Vote for your favorites from LG’s freshly updated catalog of 116 sub-$200, hacker-friendly SBCs that run Linux or Android, and you could win one of 15 prizes.

Check out LinuxGizmos’ freshly updated summaries of 116 SBCs, as well as its spreadsheet that compares key features of all the boards.

Explore this great collection of Linux SBC information. To find out how to participate in the survey, and be entered to win a free board, click here:

GO HERE TO TAKE THE SURVEY AND VOTE


Linux Still Rules IoT, Says Survey, with Raspbian Leading the Way

By Eric Brown

The Eclipse Foundation’s Eclipse IoT Working Group has released the results of its IoT Developer Survey 2018, which surveyed 502 Eclipse developers between January and March 2018. While the sample size is fairly low—LinuxGizmos’ own 2017 Hacker Board survey had 1,705 respondents—and although the IoT technologies covered here extend beyond embedded tech into the cloud, the results sync up pretty well with 2017 surveys of embedded developers from VDC Research and AspenCore (EETimes/Embedded). In short, Linux rules in Internet of Things development, but FreeRTOS is coming on fast. In addition, Amazon Web Services (AWS) is the leading cloud service for IoT.

 Eclipse IoT Developer Survey 2018 results for OS usage (top) and yearly variations for non-Linux platforms (bottom)
(Source: Eclipse Foundation)

When asked what operating systems were used for IoT, a total of 71.8% of the Eclipse survey respondents listed Linux, including Android and Android Things (see farther below). The next highest total was for Windows at 23%, a slight decrease from last year.

The open source, MCU-focused FreeRTOS advanced to 20%. Last December, the FreeRTOS project received major backing from Amazon. In fact, the Eclipse Foundation calls it an “acquisition.” This is never an entirely correct term when referring to a truly open source project such as FreeRTOS, but as with Samsung’s stewardship of Tizen, it appears to be essentially true.

Amazon collaborated with FreeRTOS technical leaders in spinning a new Amazon FreeRTOS variant linked to AWS IoT and AWS Greengrass. The significance of Amazon’s stake in FreeRTOS was one of the reasons Microsoft launched its Linux-based Azure Sphere secure IoT SoC platform, according to a VDC Research analyst.

The growth of FreeRTOS and Linux has apparently reduced the number of developers who code IoT devices without a formal OS or who use bare metal implementations. The “No OS/Bare Metal” category was in second place in 2017, but has dropped sharply to share third place with FreeRTOS at 20%.

Other mostly open source RTOSes that had seen increases in 2017, such as Mbed, Contiki, TinyOS, and Riot OS, dropped in 2018, with Contiki taking the biggest dive. All of these platforms still led the open source Zephyr, however, as well as proprietary RTOSes like Micrium µC/OS. The Intel-backed Zephyr may have declined in part because Intel killed its Zephyr-friendly Curie module.

Eclipse IoT results for OS usage for constrained devices (top)
and gateways (bottom)

(Source: Eclipse Foundation)

When the Eclipse Foundation asked what OS was used for constrained devices, Linux still led the way, but had only 38.7%, followed by No OS/Bare Metal at 19.6%, FreeRTOS at 19.3%, and Windows at 14.1%. The others remained in the same order, ranging from Mbed at 7.7% to Riot OS at 4.7% for the next four slots.

When developers were asked about OS usage for IoT gateways, Linux dominated at 64.1% followed by Windows at 14.9%. Not surprisingly, the RTOSes barely registered here, with FreeRTOS leading at 5% and the others running at 2.2% or lower.

Eclipse IoT survey results for most popular Linux distributions
(Source: Eclipse Foundation)

Raspbian was the most popular Linux distro at 43.3%, showing just how far the Raspberry Pi has come to dominate IoT. The Debian-based Ubuntu and the more IoT-oriented Ubuntu Core were close behind with a combined 40.2%, and homegrown Debian stacks were used by 30.9%.

Android (19.6%) and the IoT-focused Android Things (7.9%) combined for 27.5%. Surprisingly, the open source, Red Hat-based CentOS came in next at 15.6%. Although CentOS does appear on embedded devices, its server/cloud focus suggests that, as with Ubuntu, some of its Eclipse score came from developers working on IoT cloud stacks as well as embedded.

The Yocto Project, which is not a distribution but rather a set of standardized tools and recipes for DIY Linux development, came next at 14.2%. The stripped-down, networking-focused OpenWrt and its variants, including the forked LEDE OS, combined for 7.9%. The OpenWrt and LEDE projects reunited as OpenWrt in January of this year. A version 18 release, due later this year, will attempt to reintegrate elements that have diverged.

AWS and Azure rise, Google Cloud falls

The remainder of the survey dealt primarily with IoT software. Amazon’s AWS, which underpins the AWS IoT data aggregation service and the related, Linux-based AWS Greengrass gateway and edge platform, led IoT cloud platforms at 51.8%. This was a 21% increase over the 2017 survey. Microsoft Azure’s share increased by 17% to 31.2%, followed by a combined score of 19.4% for private and on-premises cloud providers.

The total that used Google Cloud dropped by 8% to 18.8%. This was followed by Kubernetes, IBM Bluemix, and OpenStack On Premises.

Other survey findings include the continuing popularity of Java and MQTT among Eclipse developers. Usage of open source software of all kinds is increasing — for example, 93% of respondents say they use open source database software, led by MySQL. Security and data collection/analytics were the leading developer concerns for IoT, while interoperability troubles seem to be decreasing.
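MQTT’s staying power in these surveys is easier to appreciate when you see how lightweight the protocol is on the wire. As an illustrative sketch (not tied to any particular respondent’s stack; the client ID and keepalive values are arbitrary examples), a minimal MQTT 3.1.1 CONNECT packet can be assembled in pure Python:

```python
import struct

def encode_remaining_length(n: int) -> bytes:
    # MQTT's variable-length encoding: 7 bits per byte, MSB = continuation flag
    out = bytearray()
    while True:
        byte = n % 128
        n //= 128
        if n > 0:
            byte |= 0x80
        out.append(byte)
        if n == 0:
            return bytes(out)

def mqtt_connect_packet(client_id: str, keepalive: int = 60) -> bytes:
    # Variable header: protocol name "MQTT", level 4, clean-session flag, keepalive
    var_header = struct.pack("!H4sBBH", 4, b"MQTT", 4, 0x02, keepalive)
    # Payload: length-prefixed client identifier
    payload = struct.pack("!H", len(client_id)) + client_id.encode()
    body = var_header + payload
    # Fixed header: packet type CONNECT (0x10) plus remaining length
    return bytes([0x10]) + encode_remaining_length(len(body)) + body

pkt = mqtt_connect_packet("sensor-42")
print(pkt.hex())
```

The entire connection handshake fits in a couple dozen bytes, which is a large part of why MQTT dominates constrained IoT links.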

There were only a few questions about hardware, which is not surprising considering that Eclipse developers are primarily software developers. Cortex-M3/M4 chips led among MCU platforms. For gateways there was an inconclusive mix of Intel and various Arm Cortex-A platforms. Perhaps most telling: 24.9% did not know what platform their IoT software would run on.

They did, however, know their favorite IDE. It starts with an E.

Further information

More information on the Eclipse IoT Developer Survey may be found in this blog announcement by Benjamin Cabé, which links to slides from the full survey.

This article originally appeared on LinuxGizmos.com on April 30.

Eclipse IoT Working Group | iot.eclipse.org/working-group

June Circuit Cellar: Sneak Preview

The June issue of Circuit Cellar magazine is coming soon. And we’ve planted a lovely crop of embedded electronics articles for you to enjoy.

Not a Circuit Cellar subscriber? Don’t be left out! Sign up today:


Here’s a sneak preview of June 2018 Circuit Cellar:

PCB DESIGN AND POWER: MAKING SMART CHOICES

PCB Design and Verification
PCB design tools and methods continue to evolve as they race to keep pace with faster, highly integrated electronics. Automated, rules-based chip placement is getting more sophisticated and leveraging AI in interesting ways. And supply chains are linking tighter with PCB design processes. Circuit Cellar Chief Editor Jeff Child looks at the latest PCB design and verification tools and technologies.

PCB Ground Planes
Tricky design decisions crop up when you’re faced with crafting a printed circuit board (PCB) for any complex system—and many of them involve the ground plane. There is dealing with noisy components and deciding between a common ground plane or separate ones—and that’s just the tip of the iceberg. Robert Lacoste shares his insights on the topic, examining the physics, simulation tools and design examples of ground plane implementations.

Product Focus: AC-DC Converters
To their peril, embedded system developers often treat their choice of power supply as an afterthought. But choosing the right AC-DC converter is critical to ensuring your system delivers power efficiently to all its parts. This Product Focus section updates readers on these trends and provides a product album of representative AC-DC converter products.

SENSORS TAKE MANY FORMS AND FUNCTIONS

Sensors and Measurement
While sensors have always played a key role in embedded systems, the exploding Internet of Things (IoT) phenomenon has pushed sensor technology to the forefront. Any IoT implementation depends on an array of sensors that relay input back to the cloud. Circuit Cellar Chief Editor Jeff Child dives into the latest technology trends and product developments in sensors and measurement.

Passive Infrared Sensors
One way to make sure that lights get turned off when you leave a room is to use passive infrared (PIR) sensors. Jeff Bachiochi examines the science and technology behind PIR sensors. He then details how to craft effective program code and control electronics to use PIR sensors in a useful way.

Gesture-Recognition in Boxing Glove
Learn how two Boston University graduate students built a gesture-detection wearable that acts as a building block for a larger fitness telemetry system. Using a Linux-based Gumstix Verdex, the wearable couples an inertial measurement unit with a pressure sensor embedded in a boxing glove to recognize the user’s hits and classify them according to predefined, user-recorded gestures.

SECURITY, RELIABILITY AND MORE

Internet of Things Security (Part 3)
In this next part of his article series on IoT security, Bob Japenga looks at the security features of a specific series of microprocessors: Microchip’s SAMA5D2. He examines these security features and discusses what protection they provide.

Aeronautical Communication Protocols
Unlike ground networks, where data throughput is the priority, avionics networks are all about reliability. As a result, the communications protocols used for aircraft networking seem pretty obscure to the average engineer. In this article, George Novacek reviews some of the most common aircraft comms protocols, including ARINC 429, ARINC 629, and MIL-STD-1553B.

DEEP DIVES ON PROCESSOR DESIGN AND DIGITAL SIGNAL PROCESSING

Murphy’s Laws in the DSP World (Part 1)
A Pandora’s box of unexpected issues gets opened the moment you move from the real world of analog signals and enter the world of digital signal processing (DSP). In Part 1 of this new article series, Mike Smith defines six “Murphy’s Laws of DSP” and provides you with methods and techniques to navigate around them.

Processor Design Techniques and Optimizations
As electronics get smaller and more complex day by day, knowing the basic building blocks of processors is more important than ever. In this article, Nishant Mittal explores processor design from various perspectives—including architecture types, pipelining and ALU varieties.

Linux-Driven Modules and SBC Tap i.MX8, i.MX8M and i.MX8X

By Eric Brown

Phytec has posted product pages for three PhyCore modules, all of which support Linux and offer a -40°C to 85°C temperature range. The three modules, which employ three different flavors of i.MX8, include a phyCORE-i.MX 8X COM, which is the first product we’ve seen that uses the dual- or quad-core Cortex-A35 i.MX8X.

phyCORE-i.MX 8X (top) and phyCORE-i.MX 8M (bottom, not to scale)

The phyCORE-i.MX 8 taps the high-end, hexa-core -A72 and -A53 i.MX8, including the i.MX8 QuadMax. The phyCORE-i.MX 8M, which uses the more widely deployed dual- or quad-core i.MX8M, is the only module that appears as part of an announced SBC: the sandwich-style phyBoard-Polaris SBC (shown). The phyCORE-i.MX 8 will also eventually appear on an unnamed, crowd-sourced Pico-ITX SBC.

phyCORE-i.MX 8 (left) and NXP i.MX8 block diagram (bottom)

Development-only carrier boards will be available for the phyCORE-i.MX 8X and phyCORE-i.MX 8. Evaluation kits based on the carrier boards and the phyBoard-Polaris will include BSPs with a Yocto Project based Linux distribution “with pre-installed and configured packages such as QT-Libs, OpenGL and Python.” Android is also available, and QNX, FreeRTOS and other OSes are available on request. BSP documentation will include a hardware manual, quickstart instructions, application guides, and software and application examples.


i.MX8M, i.MX8X, and i.MX8 compared

The three modules are here presented in order of ascending processing power.

phyCore-i.MX 8X

The i.MX8X SoC found on the petite phyCORE-i.MX 8X module was announced along with other i.MX8 processors in Oct. 2016 and was more fully revealed in Mar. 2017. The industrial IoT-focused i.MX8X includes up to 4x of Arm’s rarely used Cortex-A35 cores, the successor to the Cortex-A7 design.

phyCore-i.MX 8X (top) and block diagram (bottom)

The 28 nm-fabricated, ARMv8 Cortex-A35 cores are claimed to draw about 33 percent less power per core and occupy 25 percent less silicon area than the Cortex-A53. Phytec’s comparison chart shows the i.MX8X with 5,040 to 10,800 DMIPS of performance, which is surprisingly similar to the 3,450 to 13,800 range provided by the Cortex-A53-based i.MX8M (see above).

The i.MX8X SoC is further equipped with a single Cortex-M4 microcontroller, a Tensilica HiFi 4 DSP, and a multi-format VPU that supports up to 4K playback and HD encode. It uses the same Vivante GC7000Lite GPU found on the i.MX8M, with up to 28 GFLOPS.

i.MX8X block diagram

The i.MX8X features ECC memory support, reduced soft-error-rate (SER) technology, hardware virtualization, and other industrial and automotive safety related features. Crypto features listed for the phyCore-i.MX 8X COM include AES, 3DES, RSA, ECC Ciphers, SHA1/256, and TRNG.

Phytec’s 52 mm x 42 mm phyCore-i.MX 8X is only slightly larger than the i.MX7-based PhyCore-i.MX7, but the layout is different. The module supports all three i.MX8X models: the quad-core i.MX8 QuadXPlus and the dual-core i.MX8 DualXPlus and i.MX8 DualX, all of which can clock up to 1.2 GHz. The DualX model differs in that it has a 2-shader instead of 4-shader Vivante GPU.

The phyCore-i.MX 8X offers a smorgasbord of memories. In addition to the “128 kB multimedia” and “64 kB secure” memory found on the i.MX8X itself, the module can be ordered with 512 MB to 4 GB of LPDDR4 RAM and 64 MB to 256 MB of Micron Octal SPI/DualSPI flash. (Phytec notes that it is an official member of Micron’s Xccela consortium.) You can choose between 128 MB to 1 GB of NAND flash or 4 GB to 128 GB of eMMC.

There’s no onboard wireless, but you get dual GbE controllers (1x onboard, 1x RGMII). You can choose between 2x LVDS and 2x MIPI-DSI. There are MIPI-CSI and parallel camera interfaces, as well as ESAI-based audio.

Other I/O available through the 280 pins found on its two banks of dual 70-pin connectors include USB 3.0, USB OTG, PCI/PCIe, and up to 10x I2C. You also get 2x UART, 3x CAN, 6x A/D, and single PWM, keypad, or MMC/SD/SDIO (but only if you choose the eMMC over NAND). For SPI you get a choice of a single Octal connection or 2x “Quad SPI + 3 SPI” interfaces.


phyCore-i.MX 8X carrier board

The 3.3 V module supports an RTC, and offers watchdog and tamper features. Like all the new Phytec modules, you get -40°C to 85°C support. No details were available on the carrier shown in the image above.

phyCORE-i.MX 8M

The 55 mm x 40 mm phyCORE-i.MX 8M joins a growing number of Linux-driven i.MX8M modules including Compulab’s CL-SOM-iMX8, Emcraft’s i.MX 8M SOM, Innocom’s WB10, Seco’s SM-C12, SolidRun’s i.MX8 SOM, and the smallest of the lot to date: Variscite’s 55 mm x 30 mm DART-MX8M. There are also plenty of SBCs to compete with the phyCORE-i.MX 8M-equipped phyBoard-Polaris SBC (see farther below), but like most of the COMs, most of the SBCs have yet to ship.

phyCORE-i.MX 8M (top) and block diagram (bottom)

The phyCORE-i.MX 8M supports the NXP i.MX8M Quad and QuadLite, both with 4x Cortex-A53 cores, as well as the dual-core Dual. All are clocked to 1.5 GHz. They all have 266 MHz Cortex-M4F cores and Vivante GC7000Lite GPUs, but only the Quad and Dual models support 4Kp60, H.265, and VP9 video capabilities. (NXP also has a Solo model that we have yet to see, which offers a single -A53 core, a Cortex-M4F, and a GC7000nanoUltra GPU.)

In addition to the i.MX8M SoC, which offers “128 KB + 32 KB” RAM and the same crypto features found on the i.MX8X, the module ships with the same memory features as the phyCore-i.MX 8X except that it lacks the SPI flash. Once again, you get 512 MB to 4 GB of LPDDR4 RAM and either 128 MB to 1 GB NAND flash or 4 GB to 128 GB eMMC. There is also SPI-driven “Nand/QSPI” flash.

There’s a single GbE controller, and although not listed in the spec list, the product page says that precertified WiFi and Bluetooth LE 4.2 are onboard, accompanied by antennas.

Multimedia support includes MIPI-DSI, HDMI 2.0, 2x MIPI-CSI, and up to 5x SAI audio. The block diagram also lists eDP, possibly as a replacement for HDMI.

Other interfaces expressed via the dual 200-pin connectors include 2x USB 3.0, 4x UART, 4x I2C, 4x PWM, and single SDIO and PCI/PCIe connections. SPI support includes 2x SPI and the aforementioned Nand/QSPI. The 3.3V module supports an RTC, watchdog, and tamper protections.

phyBoard-Polaris SBC

The phyCORE-i.MX 8M is also available soldered onto a carrier board that will be sold as a monolithic phyBoard-Polaris SBC. The 100 mm x 100 mm phyBoard-Polaris SBC features the Quad version of the phyCORE-i.MX 8M clocked to 1.3 GHz, loaded with 1 GB LPDDR4 and 8 GB eMMC. The SBC also adds a microSD slot.

phyBoard-Polaris SBC

The phyBoard-Polaris SBC is further equipped with single GbE, USB 3.0, and USB OTG ports. There’s also an RS-232 port, and MIPI-DSI and SAI audio interfaces are made available via A/V connectors. Dual MIPI-CSI interfaces are also onboard.

A mini-PCIe slot and a GPIO slot are available for expansion. The latter includes SPI, UART, JTAG, NAND, USB, SPDIF, and DIO.

Other features include a reset button, an RTC with coin cell, and JTAG via a debug adapter (PEB-EVAL). There’s a 12 V to 24 V input and adapter, and the board offers the same industrial temperature support as all the new Phytec modules.

phyCORE-i.MX 8

The phyCORE-i.MX 8, which is said to be “ideal for image and speech recognition,” is the third module we’ve seen to support NXP’s top-of-the-line, 64-bit i.MX8 series. The module supports all three flavors of i.MX8 while the other two COMs we’ve seen have been limited to the high-end QuadMax: Toradex’s Apalis iMX8 and iWave’s iW-RainboW-G27M.

phyCORE-i.MX 8 (top) and block diagram (bottom)

Like Rockchip’s RK3399, NXP’s hexa-core i.MX8 QuadMax features dual high-end Cortex-A72 cores clocked at up to 1.6 GHz plus four Cortex-A53 cores. The i.MX8 QuadPlus design is the same, but with only one Cortex-A72 core, and the Quad has no -A72 cores.

All three i.MX8 models provide two Cortex-M4F cores for real-time processing, a Tensilica HiFi 4 DSP, and two Vivante GC7000LiteXS/VX GPUs. The SoC’s “full-chip hardware-based virtualization, resource partitioning and split GPU and display architecture enable safe and isolated execution of multiple systems on one processor,” says Phytec.

The 73 mm x 45 mm phyCORE-i.MX 8 supports up to 8 GB LPDDR4 RAM, according to the product page highlights list, while the spec list itself says 1 GB to 64 GB. Like the phyCORE-i.MX 8X, the module provides 64 MB to 256 MB of Micron Octal SPI/DualSPI flash. There’s no NAND option, but you get 4 GB to 128 GB eMMC.

The phyCORE-i.MX 8 lacks WiFi, but you get dual GbE controllers. Other features expressed via the 480 connection pins include single USB 3.0, USB OTG, and PCIe 2.0-based SATA interfaces. Dual PCIe interfaces are also available.

The module provides a 4K-ready HDMI output, 2x LVDS, and 2x MIPI-DSI for up to 4x simultaneous HD screens. For image capture you get 2x MIPI-CSI and an HDMI input. Audio features are listed as “2x ESAI up to 4 SAI.”

The phyCORE-i.MX 8 is further equipped with I/O including 2x UART, 2x CAN, 2x MMC/SD/SDIO, 8x A/D, up to 19x I2C, and a PWM interface. For SPI, you get “up to 4x + 1x QSPI.” The module supports an RTC and offers industrial temperature support.

phyCORE-i.MX 8 carrier board

In addition to the unnamed carrier board for the phyCORE-i.MX 8 module shown above, Phytec plans to produce a “Machine Vision and Camera kit” to exploit i.MX8 multimedia features including the VPU, the Vivante GPU’s Vulkan and OpenGL support, and interfaces including MIPI-DSI, MIPI-CSI, HDMI, and LVDS. In addition, the company will offer rapid prototyping services for customizing customer-specific hardware I/O platforms.

Finally, Phytec is planning to develop a smaller, Pico-ITX form factor SBC based on the i.MX8 SoC, and it’s taking a novel approach to do so. The company has launched a Cre-8 community that intends to crowdsource the SBC design, and it is seeking developers to join this alpha-stage project and contribute ideas. We saw no promises of open source hardware support, however.

Further information

[As of March 29] No availability information was provided for the phyCORE-i.MX 8X, phyCORE-i.MX 8M, or phyCORE-i.MX 8 modules, but the phyCORE-i.MX 8M-based phyBoard-Polaris is due in the third quarter. More information may be found in Phytec’s phyCORE-i.MX 8X, phyCORE-i.MX 8M, and phyCORE-i.MX 8 product pages as well as the phyBoard-Polaris SBC product page. More on development kits for all these boards may be found here.

This article originally appeared on LinuxGizmos.com on March 29.

Phytec issued a press release announcing these products on April 19.
UPDATE: “Early access program sampling for the phyCORE-i.MX8 and phyCORE-i.MX8M is planned for Q3 2018, with general availability expected in Q4 2018.”

Phytec | www.phytec.eu

Tiny, Rugged IoT Gateways Offer 10-Year Linux Support

By Eric Brown

Moxa has announced the UC-2100 Series of industrial IoT gateways along with its new UC-3100 and UC-5100 Series, but it offered details only on the UC-2100. All three series will offer ruggedization features, compact footprints, and on some models, 4G LTE support. They all run Moxa Industrial Linux and optional ThingsPro Gateway data acquisition software on Arm-based SoCs.


Moxa UC-2111 or UC-2112 (left) and UC-2101

Based on Debian 9 and a Linux 4.4 kernel, the new Moxa Industrial Linux (MIL) is a “high-performance, industrial-grade Linux distribution” that features a container-based, virtual-machine-like middleware abstraction layer between the OS and applications, says Moxa. Multiple isolated systems can run on a single control host “so that system integrators and engineers can easily change the behavior of an application without worrying about software compatibility,” says the company.

MIL provides 10-year long-term Linux support and is aimed principally at industries that require long-term software maintenance, such as power, water, oil & gas, transportation, and building automation. In December, Moxa joined the Linux Foundation’s Civil Infrastructure Platform (CIP) project, which is developing a 10-year super-long-term support (SLTS) Linux kernel for infrastructure industries. MIL appears to be in alignment with CIP standards.

Diagrams of ThingsPro Gateway (top) and the larger ThingsPro ecosystem (bottom)

Moxa’s ThingsPro Gateway software enables “fast integration of edge data into cloud services for large-scale IIoT deployments,” says Moxa. The software supports Modbus data acquisition, LTE connectivity, MQTT communication, and cloud client interfaces such as Amazon Web Services (AWS) and Microsoft Azure. C and Python APIs are also available.
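To give a sense of what Modbus data acquisition involves at the wire level, here is a simplified, generic sketch — not Moxa’s ThingsPro API, which is documented separately — of building the Modbus TCP request a gateway might issue to read holding registers from a meter or PLC. The transaction ID, unit ID, and register address below are arbitrary examples:

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    # PDU: function code 0x03 (read holding registers), start address, register count
    pdu = struct.pack("!BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length of unit id + PDU
    mbap = struct.pack("!HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

req = modbus_read_holding_registers(transaction_id=1, unit_id=17,
                                    start_addr=0x006B, count=3)
print(req.hex())  # 12-byte request frame
```

A gateway product like the ones described here would send such frames over serial or TCP on the field side, then republish the returned register values northbound via MQTT or a cloud client.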


Moxa’s UC-3100 (source: Hanser Konstruktion), and at right, the similarly Linux-driven, ThingsPro-ready UC-8112

Although we saw no product pages on the UC-3100 and UC-5100, Hanser Konstruktion posted a short news item on the UC-3100 with a photo (above) and a few details. This larger, rugged system supports WiFi and LTE with two antenna pairs, and offers a USB port in addition to dual LAN and dual serial ports.

The new systems follow several other UC-branded IoT gateways that run Linux on Arm. The only other one to support ThingsPro is the UC-8112, a member of the UC-8100 family, which is similarly ruggedized and runs Linux on a Cortex-A8 SoC.

UC-2100

The UC-2100 Series gateways run MIL on an unnamed Cortex-A8 SoC clocked at 600 MHz, except for the UC-2112, which jumps to 1 GHz. There are five models, all with 9-48 VDC 3-pin terminal blocks and a maximum consumption of 4 W when not running cellular modules.

The five UC-2100 models have the following dimensions, weights, and maximum input currents:

  • UC-2101 — 50 x 80 x 28mm; 190 g; 200 mA
  • UC-2102 — 50 x 80 x 28mm; 190 g; 330 mA
  • UC-2104 — 57 x 80 x 30.8mm; 220 g; 800 mA
  • UC-2111 — 77 x 111 x 25.5mm; 290 g; 350 mA
  • UC-2112 — 77 x 111 x 25.5mm; 290 g; 450 mA

All five UC-2100 variants default to a -10 to 60°C operating range except for the UC-2104, which moves up to -10 to 70°C. In addition, they are all available in optional -40 to 75°C versions.

Other ruggedization features are the same, including anti-vibration protection per IEC 60068-2-64 and anti-shock protection per IEC 60068-2-27. A variety of safety, EMC, EMI, EMS, and hazardous environment standards are also listed.

The first three models ship with 256 MB of DDR3 RAM, while the UC-2111 and UC-2112 offer 512 MB. These two are also the only ones to offer microSD slots. All five systems ship with 8 GB eMMC loaded with the MIL distribution.

The UC-2100 systems vary in the number and type of their auto-sensing, 1.5 kV isolated Ethernet ports. The UC-2101 and UC-2104 each have a single 10/100 Mbps port, while the UC-2102 and UC-2111 have two. The UC-2112 has one 10/100 port and one 10/100/1000 port. The UC-2104 is the only model with a mini-PCIe socket for 4G or WiFi.

The UC-2111 and UC-2112 offer 2x RS-232/422/485 ports, while the UC-2101 has one. It would appear that the UC-2102 and UC-2104 lack serial ports altogether, except for the RS-232 console port available on all five systems.

The UC-2100 provides push buttons, DIP switches, an RTC, a watchdog, and LEDs, the number of which depends on the model. A wall-mount kit is standard, and DIN-rail mounting is optional, as is TPM 2.0. A 5-year hardware warranty is standard.

Further information

The UC-2100 Series gateways appear to be available for order, with pricing undisclosed. More information may be found on Moxa’s UC-2100 product page. More information about the UC-2100, as well as the related, upcoming UC-3100 and UC-5100 Series, will be on tap at Hannover Messe 2018, April 23-27, at the Arm Booth at Hall 6, Booth A46.

Moxa | www.moxa.com

This article originally appeared on LinuxGizmos.com on April 16.

Microsoft Unveils Secure MCU Platform with a Linux-Based OS

By Eric Brown

Microsoft has announced an “Azure Sphere” blueprint for hybrid Cortex-A/Cortex-M SoCs that run a Linux-based Azure Sphere OS and include end-to-end Microsoft security technologies and a cloud service. Products based on MediaTek’s MT3620 Azure Sphere chip are due by year’s end.

Just when Google has begun to experiment with leaving Linux behind with its Fuchsia OS (new Fuchsia details emerged late last week), long-time Linux foe Microsoft has unveiled an IoT platform that embraces Linux. At RSA 2018, Microsoft Research announced a project called Azure Sphere that it bills as a new class of microcontrollers that run “a custom Linux kernel” combined with Microsoft security technologies. Initial products, due by the end of the year, are aimed at industries including white goods, agriculture, energy, and infrastructure.

Based on the flagship, Azure Sphere-based MediaTek MT3620 SoC, which will ship in volume later this year, this is not really a new class of MCUs, but rather a fairly standard Cortex-A7-based SoC with a pair of Cortex-M4 MCUs, backed up by end-to-end security. It’s unclear if future Azure Sphere-compliant SoCs will feature different combinations of Cortex-A and Cortex-M cores, but this is clearly an Arm IP-based design. Arm “worked closely with us to incorporate their Cortex-A application processors into Azure Sphere MCUs,” says Microsoft.

Azure Sphere OS architecture

Major chipmakers have signed up to build Azure Sphere system-on-chips including Nordic, NXP, Qualcomm, ST Micro, Silicon Labs, Toshiba, and more (see image below). The software giant has sweetened the pot by “licensing our silicon security technologies to them royalty-free.”

Azure Sphere SoCs “combine both real-time and application processors with built-in Microsoft security technology and connectivity,” says Microsoft. “Each chip includes custom silicon security technology from Microsoft, inspired by 15 years of experience and learnings from Xbox.”

The design “combines the versatility and power of a Cortex-A processor with the low overhead and real-time guarantees of a Cortex-M class processor,” says Microsoft. The MCU includes a Microsoft Pluton Security Subsystem that “creates a hardware root of trust, stores private keys, and executes complex cryptographic operations.”
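Microsoft has not published Pluton’s internals, but the root-of-trust idea itself can be sketched in a few lines: each boot stage verifies a hash of the next stage before handing over control. The stage names and image contents below are purely illustrative, not Azure Sphere’s actual boot flow:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Hypothetical firmware stages; on real hardware the first hash lives in ROM/fuses
bootloader = b"stage1-bootloader-image"
os_image = b"azure-sphere-os-image"

# Recorded at manufacture time by the trusted stage that precedes each image
trusted_bootloader_hash = sha256(bootloader)  # immutable, burned into silicon
trusted_os_hash = sha256(os_image)            # carried in signed bootloader data

def secure_boot(bootloader_blob: bytes, os_blob: bytes) -> bool:
    # ROM verifies the bootloader, the bootloader verifies the OS: a chain of trust
    if sha256(bootloader_blob) != trusted_bootloader_hash:
        return False
    if sha256(os_blob) != trusted_os_hash:
        return False
    return True

print(secure_boot(bootloader, os_image))         # unmodified images boot
print(secure_boot(bootloader, os_image + b"!"))  # tampered OS image is rejected
```

Production systems verify signatures over the hashes rather than raw digests, but the chained-verification structure is the same.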

The IoT oriented Azure Sphere OS provides additional Microsoft security and a security monitor in addition to the Linux kernel. The platform will ship with Visual Studio development tools, and a dev kit will ship in mid-2018.

Azure Sphere security features

The third component is an Azure Sphere Security Service, a turnkey, cloud-based platform. The service brokers trust for device-to-device and device-to-cloud communication through certificate-based authentication. The service also detects “emerging security threats across the entire Azure Sphere ecosystem through online failure reporting, and renewing security through software updates,” says Microsoft.

Azure Sphere eco-system conceptual diagram (top) and list of silicon partners (bottom)

In many ways, Azure Sphere is similar to Samsung’s Artik line of IoT modules, which incorporate super-secure SoCs supported by end-to-end security controlled by the Artik Cloud. One difference is that the Artik modules use either Cortex-A application processors or Cortex-M or -R MCUs, designed to be deployed in heterogeneous product designs, rather than a hybrid SoC like the MediaTek MT3620.

Hybrid, Linux-driven Cortex-A/Cortex-M SoCs have become common in recent years, led by NXP’s Cortex-A7-based i.MX7 and -A53-based i.MX8, as well as many others including the -A7-based Renesas RZ/N1D and Marvell IAP220.

MediaTek MT3620

The MediaTek MT3620 “was designed in close cooperation with Microsoft for its Azure Sphere Secure IoT Platform,” says MediaTek in its announcement. Its 500 MHz Cortex-A7 core is accompanied by large L1 and L2 caches and integrated SRAM. Dual Cortex-M4F cores support peripherals including 5x UART/I2C/SPI, 2x I2S, 8x ADC, up to 12 PWM counters, and up to 72x GPIO.

The Cortex-M4F cores are primarily devoted to real-time I/O processing, “but can also be used for general purpose computation and control,” says MediaTek. They “may run any end-user-provided operating system or run a ‘bare metal app’ with no operating system.”

In addition, the MT3620 features an isolated security subsystem with its own Arm Cortex-M4F core that handles secure boot and secure system operation. A separate Andes N9 32-bit RISC core supports 1×1 dual-band 802.11a/b/g/n WiFi.

The security features and WiFi networking are “isolated from, and run independently of, end user applications,” says MediaTek. “Only hardware features supported by the Azure Sphere Secure IoT Platform are available to MT3620 end-users. As such, security features and Wi-Fi are only accessible via defined APIs and are robust to programming errors in end-user applications regardless of whether these applications run on the Cortex-A7 or the user-accessible Cortex-M4F cores.” MediaTek adds that a gcc-based development environment is available that includes a Visual Studio extension, “allowing this application to be developed in C.”

Microsoft learns to love Linux

In recent years, Microsoft has increasingly softened its long-time anti-Linux stance by adding Linux support to its Azure service and targeting Windows 10 IoT at the Raspberry Pi, among other experiments. Microsoft is an active contributor to Linux, and has even open-sourced some technologies.

It wasn’t always so. For years, Microsoft CEO Steve Ballmer took turns deriding Linux and open source while warning about the threat they posed to the tech industry. In 2007, Microsoft fought back against the growth of embedded Linux at the expense of Windows CE and Windows Mobile by suing companies that used embedded Linux, claiming that some of the open source components were based on proprietary Microsoft technologies. By 2009, a Microsoft exec openly acknowledged the threat of embedded Linux and open source software.

That same year, Microsoft was accused of using its marketing muscle to convince PC partners to stop providing Linux as an optional install on netbooks. In 2011, Windows 8 arrived with a new UEFI secure boot scheme that made it harder for users to replace Windows with Linux on major PC platforms.


Azure Sphere promo video

Further information

Azure Sphere is available as a developer preview to selected partners. The MediaTek MT3620 will be the first Azure Sphere MCU, and products based on it should arrive by the end of the year. More information may be found in Microsoft’s Azure Sphere announcement and product page.

Microsoft | www.microsoft.com

This article originally appeared on LinuxGizmos.com on April 16.

And check out this follow-up story, also from LinuxGizmos.com:
Why Microsoft chose Linux for Azure Sphere

 

SMARC Module Features Hexa-Core i.MX8 QuadMax

By Eric Brown

iWave has unveiled a rugged, wireless-enabled SMARC module with 4 GB LPDDR4 and dual GbE controllers that runs Linux or Android on NXP’s i.MX8 QuadMax SoC with 2x Cortex-A72, 4x -A53, 2x -M4F and 2x GPU cores.

iW-RainboW-G27M (front)

iWave has posted specs for an 82 mm x 50 mm, industrial temperature “iW-RainboW-G27M” SMARC 2.0 module that builds on NXP’s i.MX8 QuadMax system-on-chip. The i.MX8 QuadMax was announced in Oct. 2016 as the higher-end model of an automotive-focused i.MX8 Quad family.

Although the lower-end, quad-core Cortex-A53 i.MX8M SoC was not fully announced until after the hexa-core Quad, we’ve seen far more embedded boards based on the i.MX8M, including a recent Seco SM-C12 SMARC module.

iW-RainboW-G27M (back)

The only other i.MX8 Quad based product we’ve seen is Toradex’s QuadMax-driven Apalis iMX8 module. The Apalis iMX8 was announced a year ago, but is still listed as “coming soon.”

i.MX8 Quad block diagram (dashed lines indicate model-specific features) (click image to enlarge)

Like Rockchip’s RK3399, NXP’s i.MX8 QuadMax features dual high-end Cortex-A72 cores and four Cortex-A53 cores. NXP also offers a similar i.MX8 QuadPlus design with only one Cortex-A72 core.

The QuadMax clock rates are lower than on the RK3399, which clocks to 1.8 GHz (A72) and 1.2 GHz (A53). Toradex says the Apalis iMX8’s -A72 and -A53 cores will clock to 1.6 GHz and 1.2 GHz, respectively.

Close-up of i.MX8 QuadMax on iW-RainboW-G27M

Whereas the i.MX8M has one 266 MHz Cortex-M4F microcontroller, the Quad SoCs have two. A HIFI4 DSP is also onboard, along with a dual-core Vivante GC7000LiteXS/VX GPU, which is alternately referred to as being two GPUs in one or having a split GPU design.

iWave doesn’t specifically name these coprocessors except to list features including a “4K H.265 decode and 1080p H.264 enc/dec capable VPU, 16-Shader 3D (Vec4), and Enhanced Vision Capabilities (via GPU).” The SoC is also said to offer a “dual failover-ready display controller.” The CPUs, meanwhile, are touted for their “full chip hardware virtualization capabilities.”

Inside the iW-RainboW-G27M

Like iWave’s SMARC 2.0 form factor Snapdragon 820 SOM, the iW-RainboW-G27M supports Linux and Android, in this case running Android Nougat (7.0) or higher. (Toradex’s Apalis iMX8 supports Linux, and also supports FreeRTOS running on the Cortex-M4F MCUs.)

Like Toradex, iWave is not promoting the automotive angle that was originally pushed by NXP. iWave’s module is designed to “offer maximum performance with higher efficiency for complex embedded application of consumer, medical and industrial embedded computing applications,” says iWave.

Like the QuadMax based Apalis iMX8, as well as most of the i.MX8M products we’ve seen, the iW-RainboW-G27M supports up to 4 GB LPDDR4 RAM and up to 16 GB eMMC. iWave notes that the RAM and eMMC are “expandable,” but does not say to what capacities. There’s also a microSD slot and 256 MB of optional QSPI flash.

Whereas Apalis iMX8 has a single GbE controller, iWave’s COM has two. It similarly offers onboard 802.11ac Wi-Fi and Bluetooth (4.1). The Microchip ATWILC3000-MR110CA module, which juts out a bit on one side, is listed by Digi-Key as 802.11b/g/n, but iWave has it as 802.11ac.

Interfaces expressed via the SMARC edge connector include 2x GbE, 2x USB 3.0 host (4-port hub), 4x USB 2.0 host, and USB 2.0 OTG. Additional SMARC I/O includes 3x UART (2x with CTS & RTS), 2x CAN, 2x I2C, 12x GPIO, and single PCIe, SATA, debug UART, SD, SPI and QSPI interfaces.

Media features include an HDMI/DP transmitter, dual-channel LVDS or MIPI-DSI, and an SSI/I2S audio interface. iWave also lists HDMI, 2x LVDS, SPDIF, and ESAI separately under “expansion connector interfaces.” Other expansion I/O is said to include MLB, CAN and GPIO.

The 5 V module supports -40°C to 80°C operating temperatures. There is no mention of a carrier board.

Further information

No pricing or availability was listed for the iW-RainboW-G27M, but a form is available for requesting a quote. More information may be found on iWave’s iW-RainboW-G27M product page.

iWave | www.iwavesystems.com

This article originally appeared on LinuxGizmos.com on March 13.

Movidius AI Acceleration Technology Comes to a Mini-PCIe Card

By Eric Brown

UP AI Core (front)

As promised by Intel when it announced an Intel AI: In Production program for its USB stick form factor Movidius Neural Compute Stick, Aaeon has launched a mini-PCIe version of the device called the UP AI Core. It similarly integrates Intel’s AI-infused Myriad 2 Vision Processing Unit (VPU). The mini-PCIe connection should provide faster response times for neural networking and machine vision compared to connecting to a cloud-based service.

UP AI Core (back)

The module, which is available for pre-order at $69 for delivery in April, is designed to “enhance industrial IoT edge devices with hardware accelerated deep learning and enhanced machine vision functionality,” says Aaeon. It can also enable “object recognition in products such as drones, high-end virtual reality headsets, robotics, smart home devices, smart cameras and video surveillance solutions.”


UP Squared

The UP AI Core is optimized for Aaeon’s Ubuntu-supported UP Squared hacker board, which runs on Intel’s Apollo Lake SoCs. However, it should work with any 64-bit x86 computer or SBC equipped with a mini-PCIe slot that runs Ubuntu 16.04. Host systems also require 1GB RAM and 4GB free storage. That presents plenty of options for PCs and embedded computers, although the UP Squared is currently the only x86-based, community-backed SBC equipped with a mini-PCIe slot.

Myriad 2 architecture

Aaeon had few technical details about the module, except to say it ships with 512MB of DDR RAM and offers ultra-low power consumption. The UP AI Core’s mini-PCIe interface likely provides a faster response time than the USB link used by Intel’s $79 Movidius Neural Compute Stick. Aaeon makes no claims to that effect, however, perhaps to avoid disparaging Intel’s Neural Compute Stick or other USB-based products that might emerge from the Intel AI: In Production program.

Intel’s Movidius Neural Compute Stick

It’s also possible the performance difference between the two products is negligible, especially compared with the difference between either local processing solution and an Internet connection. Cloud-based connections for accessing neural networking services suffer from higher latency and reduced network bandwidth, reliability, and security, says Aaeon. The company recommends using the Linux-based SDK to “create and train your neural network in the cloud and then run it locally on AI Core.”
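The “train in the cloud, run locally” workflow boils down to exporting trained weights and executing only the forward pass on the device. As a toy, pure-Python sketch of that local inference step (the network and its weights are made up for illustration; a real deployment would use weights compiled for the Myriad 2 VPU, not this code):

```python
# Toy illustration of local inference: the training step happened
# elsewhere, so the device only needs a forward pass over fixed weights.
# The weights below are hypothetical.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    # weights: one row of input weights per output neuron
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:   # hidden layers use ReLU
            x = relu(x)
    return x

# Hypothetical 2-input -> 3-hidden -> 1-output network
layers = [
    ([[0.5, -0.2], [0.8, 0.1], [-0.3, 0.9]], [0.0, 0.1, -0.1]),
    ([[1.0, -1.0, 0.5]], [0.2]),
]

print(forward([1.0, 2.0], layers))
```

Running inference this way needs no network connection at all, which is exactly the latency and reliability argument Aaeon makes for local acceleration.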

Performance issues aside, because a mini-PCIe module is usually embedded inside a computer, it provides more security than a USB stick. On the other hand, that same trait limits portability. Unlike the UP AI Core, the Neural Compute Stick can run on an ARM-based Raspberry Pi, but only with the help of the Raspbian Stretch desktop or an Ubuntu 16.04 VirtualBox instance.

In 2016, before it was acquired by Intel, Movidius launched its first local-processing version of the Myriad 2 VPU technology, called the Fathom. This Ubuntu-driven USB stick, which miniaturized the technology in the earlier Myriad 2 reference board, is essentially the same technology that re-emerged as Intel’s Movidius Neural Compute Stick.

UP AI Core, front and back

Neural network processors can significantly outperform traditional computing approaches in tasks like language comprehension, image recognition, and pattern detection. The vast majority of such processors — which are often repurposed GPUs — are designed to run on cloud servers.

AIY Vision Kit

The Myriad 2 technology can translate deep learning frameworks like Caffe and TensorFlow into its own format for rapid prototyping. This is one reason why Google adopted the Myriad 2 technology for its recent AIY Vision Kit for the Raspberry Pi Zero W. The kit’s VisionBonnet pHAT board uses the same Movidius MA2450 chip that powers the UP AI Core. On the VisionBonnet, the processor runs Google’s open source TensorFlow machine intelligence library for neural networking, enabling visual perception processing at up to 30 frames per second.

Intel and Google aren’t alone in their desire to bring AI acceleration to the edge. Huawei released a Kirin 970 SoC for its Mate 10 Pro phone that provides a neural processing coprocessor, and Qualcomm followed up with a Snapdragon 845 SoC with its own neural accelerator. The Snapdragon 845 will soon appear on the Samsung Galaxy S9, among other phones, and will also be heading for some high-end embedded devices.

Last month, Arm unveiled two new Project Trillium AI chip designs intended for use as mobile and embedded coprocessors. Available now is Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. Due this summer is a Machine Learning (ML) Processor, which will accelerate AI applications including machine translation and face recognition.

Further information

The UP AI Core is available for pre-order at $69 for delivery in late April. More information may be found at Aaeon’s UP AI Core announcement and its UP Community UP AI Edge page for the UP AI Core.

Aaeon | www.aaeon.com

This article originally appeared on LinuxGizmos.com on March 6.

Linux and Coming Full Circle

Input Voltage

–Jeff Child, Editor-in-Chief


In terms of technology, the line between embedded computing and IT/desktop computing has always been a moving target. Certainly, small embedded devices today pack vastly more compute muscle than even a server of 15 years ago. While there are many ways to look at that phenomenon, it’s interesting to view it through the lens of Linux. The quick rise in the popularity of Linux in the 90s happened on the server/IT side pretty much simultaneously with its embrace in the embedded market.

I’ve talked before in this column about the embedded Linux start-up bubble of the late 90s. That’s when a number of start-ups emerged as “embedded Linux” companies. It was a new business model for our industry, because Linux is a free, open-source OS. As a result, these companies didn’t sell Linux, but rather provided services to help customers create and support implementations of open-source Linux. This market disruption spurred the established embedded RTOS vendors to push back. Like most embedded technology journalists back then, I loved having a conflict to cover. There were spirited debates on the “Linux vs. RTOS” topic on conference panels and in articles of the time—and I enjoyed participating in both.

It’s amusing to remember that Wind River was the most vocal anti-Linux voice of the day. Fast forward to today and there’s a double irony. Most of those embedded Linux startups are long gone. And yet, most major OS vendors offer full-blown embedded Linux support alongside their RTOS offerings. In fact, in a research report released in January by VDC Research, Wind River was named as the market leader in the global embedded software market for both its RTOS and commercial Linux segments.

According to the VDC report, global unit shipments of IoT and embedded OSs, including free/non-commercial OSs, will grow to reach 11.1 billion units by 2021, driven primarily by ECU-targeted RTOS shipments in the automotive market, and free Linux installs on higher-resource systems. After accounting for systems with no OS, a bare-metal OS, or an in-house developed OS, the total yearly units shipped will grow beyond 17 billion units in 2021, according to the report. VDC research findings also predict that unit growth will be driven primarily by free and low-cost operating systems such as Amazon FreeRTOS, Express Logic ThreadX and Mentor Graphics Nucleus on constrained devices, along with free, open source Linux distributions for resource-rich embedded systems.

Shifting gears, let me indulge myself by talking about some recent Circuit Cellar news—though still on the Linux theme. Circuit Cellar has formed a strategic partnership with LinuxGizmos.com. LinuxGizmos is a well-established, trusted website that provides up-to-the-minute, detailed and insightful coverage of the latest developer- and maker-friendly, embedded-oriented chips, modules, boards, small systems and IoT devices—and the software technologies that make them tick. As its name implies, LinuxGizmos features coverage of open source, high-level operating systems including Linux and its derivatives (such as Android), as well as lower-level software platforms such as OpenWRT and FreeRTOS.

LinuxGizmos.com was founded by Rick Lehrbaum—but that’s only the latest of his accolades. I know Rick from way back when I first started writing about embedded computing in 1990. Most people in the embedded computing industry remember him as the “Father of PC/104.” Rick co-founded Ampro Computers in 1983 (now part of ADLINK), authored the PC/104 standard and founded the PC/104 Consortium in 1991, created LinuxDevices.com in 1999 and guided the formation of the Embedded Linux Consortium in 2000. In 2003, he launched LinuxGizmos.com to fill the void created when LinuxDevices was retired by Quinstreet Media.

Bringing things full circle, Rick says he’s long been a fan of Circuit Cellar, and even wrote a series of articles about PC/104 technology for it in the late 90s. I’m thrilled to be teaming up with LinuxGizmos.com and am looking forward to combining our strengths to better serve you.

This appears in the April (333) issue of Circuit Cellar magazine

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

NXP IoT Platform Links ARM/Linux Layerscape SoCs to Cloud

By Eric Brown

NXP’s “EdgeScale” suite of secure edge computing device management tools helps deploy and manage Linux devices running on LSx QorIQ Layerscape SoCs, and connects them to cloud services.

NXP has added an EdgeScale suite of secure edge computing tools and services to its Linux-based Layerscape SDK for six of its networking oriented LSx QorIQ Layerscape SoCs. These include the quad-core, 1.6 GHz Cortex-A53 QorIQ LS1043A, which last year received Ubuntu Core support, as well as the octa-core, Cortex-A72 LS2088a (see farther below).



Simplified EdgeScale architecture
(click image to enlarge)
The cloud-based IoT suite is designed to remotely deploy, manage, and update edge computing devices built on Layerscape SoCs. EdgeScale bridges edge nodes, sensors, and other IoT devices to cloud frameworks, automating the provisioning of software and updates to remote embedded equipment. EdgeScale can be used to deploy container applications and firmware updates, as well as build containers and generate firmware.

The technology leverages the NXP Trust Architecture already built into Layerscape SoCs, which offers Hardware Root of Trust features. These include secure boot, secure key storage, manufacturing protection, hardware resource isolation, and runtime tamper detection.

The EdgeScale suite provides three levels of management: a “point-and-click” dashboard, a Command-Line Interface (CLI), and a RESTful API, which enables “integration with any cloud computing framework,” as well as greater UI customization. The platform supports Ubuntu, Yocto, OpenWrt, or “any custom Linux distribution.”
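NXP doesn’t document the EdgeScale REST endpoints here, so the sketch below only illustrates the general shape of driving a RESTful device-management API from Python; the base URL, endpoint path, token, and payload fields are all hypothetical, not the real EdgeScale API.

```python
import json
import urllib.request

# Hypothetical EdgeScale-style REST call. The endpoint, token, and
# payload are illustrative placeholders -- consult NXP's EdgeScale
# documentation for the actual API.
BASE_URL = "https://edgescale.example.com/api/v1"

def build_deploy_request(device_id, container_image, token):
    """Build (but do not send) a request deploying a container to a device."""
    payload = json.dumps({
        "device": device_id,
        "image": container_image,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/devices/{device_id}/deployments",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_deploy_request("ls1043a-gw-01",
                           "registry.example.com/sensor-app:1.2", "TOKEN")
print(req.method, req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) is left out since the endpoint is fictitious; the point is that the same operations exposed in the dashboard and CLI reduce to authenticated HTTP calls like this one.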


Detailed EdgeScale architecture (above) and feature list (below)
(click images to enlarge)
EdgeScale supports cloud frameworks including Amazon’s AWS Greengrass, Alibaba’s Aliyun, Google Cloud, and Microsoft’s Azure IoT Edge. The latter was the subject of a separate announcement, released in conjunction with the EdgeScale news, which said that all Layerscape SoCs were being enabled with “secure execution for Azure IoT Edge computing running networking, data analytics, and compute-intensive machine learning applications.”

A year ago, NXP announced a Modular IoT Framework, which was described as a set of pre-integrated NXP hardware and software for IoT, letting customers mix and match technologies with greater assurance of interoperability. When asked how this was related to EdgeScale, Sam Fuller, head of system solutions for NXP’s digital networking group, replied: “EdgeScale is designed to manage higher level software that could have a role of processing the data and managing the communication to/from devices built from the Modular IoT Framework.”


LS1012A block diagram
(click image to enlarge)
The EdgeScale suite supports the following QorIQ Layerscape processors:

  • LS1012A — 800 MHz single-core, Cortex-A53 with 1 W power consumption, found on F&S’ efus A53LS module
  • LS1028A — dual-core ARMv8 with Time-Sensitive Networking (TSN)
  • LS1043A — 1.6 GHz quad-core, Cortex-A53 with 10 GbE support, found on the QorIQ LS1043A 10G Residential Gateway Reference Design and the X-ES XPedite6401 XMC/PrPMC mezzanine module
  • LS1046A — quad-core, Cortex-A72 with dual 10 GbE support (also available in dual-core LS1026A model)
  • LS1088A — 1.5 GHz octa-core, Cortex-A53 with dual 10 GbE support, which is also supported on the XPedite6401
  • LS2088A — 2.0 GHz octa-core, Cortex-A72 with a 128-bit NEON-based SIMD engine for each core, plus a 10GbE XAUI Fat Pipe interface or 4x 10GBASE-KR — found on the X-ES XPedite6370 SBC

Further information

NXP’s EdgeScale will be available by the end of the month. More information may be found on its EdgeScale product page.

NXP Semiconductors | www.nxp.com

This article originally appeared on LinuxGizmos.com on March 16.

April Circuit Cellar: Sneak Preview

The April issue of Circuit Cellar magazine is coming soon. And we’ve got a healthy serving of embedded electronics articles for you. Here’s a sneak peek.

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

 

Here’s a sneak preview of April 2018 Circuit Cellar:

NAVIGATING THE INTERNET-OF-THINGS

IoT: From Gateway to Cloud
In this follow-on to our March “IoT: Device to Gateway” feature, we look at technologies and solutions for the gateway-to-cloud side of IoT. Circuit Cellar Chief Editor Jeff Child examines the tools and services available to get a cloud-connected IoT implementation up and running.

Texting and IoT Embedded Devices (Part 2)
In Part 1, Jeff Bachiochi laid the groundwork for describing a project involving texting. He puts that into action this time, showing how to create messages on his Espressif Systems ESP8266EX-based device that are sent to an email account and end up as texts on a cell phone.

Internet of Things Security (Part 2)
In this next part of his article series on IoT security, Bob Japenga takes a look at side-channel attacks. What are they? How much of a threat are they? And how can we prevent them?

Product Focus: 32-Bit Microcontrollers
As the workhorse of today’s embedded systems, 32-bit microcontrollers serve a wide variety of embedded applications—including the IoT. This Product Focus section updates readers on these trends and provides a product album of representative 32-bit MCU products.

GRAPHICS, VISION AND DISPLAYS

Graphics, Video and Displays
Thanks to advances in displays and innovations in graphics ICs, embedded systems can now routinely feature sophisticated graphical user interfaces. Circuit Cellar Chief Editor Jeff Child dives into the latest technology trends and product developments in graphics, video and displays.

Color Recognition and Segmentation in Real-time
Vision systems used to require big, multi-board systems—but not anymore. Learn how two Cornell undergraduates designed a hardware/software system that accelerates vision-based object recognition and tracking using an FPGA SoC. They made a mini manufacturing line to demonstrate how their system can accurately track and categorize manufactured candies carried along a conveyor belt.

SPECIFICATIONS, QUALIFICATIONS AND MORE

Component tolerance
We sometimes take for granted that the tolerances of our electronic components fit the needs of our designs. In this article, Robert Lacoste takes a deep look at the subject of tolerances, using the simple resistor as an example. He goes through the math to help you better understand accuracy and drift, along with other factors.

Understanding the Temperature Coefficient of Resistance
Temperature coefficient of resistance (TCR) is the calculation of a relative change of resistance per degree of temperature change. Even though it’s an important spec, different resistor manufacturers use different methods for defining TCR. In this article, Molly Bakewell Chamberlin examines TCR and its “best practice” interpretations using Vishay Precision Group’s vast experience in high-precision resistors.
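The relative-change definition above works out to TCR (in ppm/°C) = (R_T − R_ref) / (R_ref × (T − T_ref)) × 10⁶. A minimal sketch of that calculation (the resistor values are illustrative):

```python
def tcr_ppm_per_c(r_ref, t_ref, r_t, t):
    """Temperature coefficient of resistance in ppm/°C.

    r_ref: resistance (ohms) at reference temperature t_ref (°C)
    r_t:   resistance (ohms) measured at temperature t (°C)
    """
    return (r_t - r_ref) / (r_ref * (t - t_ref)) * 1e6

# Example: a 1 kΩ resistor that drifts to 1000.5 Ω over a 25°C -> 125°C rise
print(tcr_ppm_per_c(1000.0, 25.0, 1000.5, 125.0))  # ≈ 5 ppm/°C
```

As the article notes, manufacturers differ on the reference temperature and measurement points used, so the same resistor can carry different TCR figures on different datasheets.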

Designing of Complex Systems
While some commercial software gets away without much qualification during development, the situation is very different when safety is involved. For aircraft, vehicles or any complex system where failure is unacceptable, this means adhering to established standards throughout the development life cycle. In this article, George Novacek tackles these issues and examines some of these standards, namely ARP4754.

AND MORE IN-DEPTH PROJECT ARTICLES

Build a Marginal Oscillator Proximity Switch
A damped or marginal oscillator will switch off when energy is siphoned from its resonant LC tank circuit. In his article, Dev Gualtieri presents a simple marginal oscillator that detects proximity to a small steel screw or steel plate. It lights an LED, and the LED can be part of an optically-isolated solid-state relay.

Obsolescence-Proof Your UI (Part 1)
After years of frustration dealing with graphical interface technologies that go obsolete, Steve Hendrix decided there must be a better way. Knowing that web browser technology is likely to be with us for a long while, he chose to build a web server that could perform common operations that he needed on the IEEE-488 bus. He then built it as a product available for sale to others—and it is basically obsolescence-proof.


Circuit Cellar and LinuxGizmos.com Form Strategic Partnership

Partnership offers an expanded technical resource for embedded and IoT device developers and enthusiasts

Today Circuit Cellar is announcing a strategic partnership with LinuxGizmos.com to offer an expanded resource of information and know-how on embedded electronics technology for developers, makers, students and educators, early adopters, product strategists, and technical decision makers with a keen interest in emerging embedded and IoT technologies.

The new partnership combines Circuit Cellar’s uniquely in-depth, “down-to-the-bits” technical articles with LinuxGizmos.com’s up-to-the-minute, detailed, and insightful coverage of the latest developer- and maker-friendly, embedded-oriented chips, modules, boards, small systems, and IoT devices, and the software technologies that make them tick. Additionally, as its name implies, LinuxGizmos.com’s coverage frequently highlights open source, high-level operating systems including Linux and its derivatives (e.g. Android), as well as lower-level software platforms such as OpenWRT and FreeRTOS.

Circuit Cellar is one of the electronics industry’s most highly technical information resources for professional engineers, academics, and other specialists involved in the design and development of embedded processor- and microcontroller-based systems across a broad range of applications. It gets right down to the bits and bytes and lines of code, at a level its readers revel in. Circuit Cellar is a trusted brand engaging readers every day on its website, each week with its newsletter, and each month through Circuit Cellar magazine’s print and digital formats.

LinuxGizmos.com is a free-to-use website that publishes daily news and analysis on the hardware, software, protocols, and standards used in new and innovative embedded, mobile, and Internet of Things (IoT) devices.  The site is lauded for its detailed and insightful, timely coverage of newly introduced single board computers (SBCs), computer-on-modules (COMs), system-on-chips (SoCs), and small form factor (SFF) systems, along with their software platforms.

“The synergies between LinuxGizmos and Circuit Cellar are great and I’m excited to see the benefits of this partnership passed on to our combined audience,” said Jeff Child, Editor-in-Chief, Circuit Cellar. “LinuxGizmos.com has the kind of rich, detail-oriented structure that I’m a fan of. Over the many years I’ve been following the site, I’ve relied on it as an important information resource, and its integrity has always impressed me.”

“I’ve been a fan of Circuit Cellar magazine since it was first launched, and wrote a series of articles for it in the late 90s about PC/104 embedded modules,” added Rick Lehrbaum, founder and Editor-in-Chief of LinuxGizmos.com. “I’m thrilled to see LinuxGizmos become associated with one of the computing industry’s pioneering publications.”

“I see this partnership as a perfect way to enhance both the Circuit Cellar and LinuxGizmos brands as key information platforms,” stated KC Prescott, President, KCK Media Corp. “In this era where there’s so much compelling technology innovation happening in the industry, our combined strengths will help inform and inspire embedded systems developers.”

Read Announcement on LinuxGizmos.com here:

Circuit Cellar and LinuxGizmos.com join forces

MPU-Based SOM Meets Industrial IoT Linux Needs

Microchip Technology has unveiled a new System on Module (SOM) featuring the SAMA5D2 microprocessor (MPU). The ATSAMA5D27-SOM1 contains the recently released ATSAMA5D27C-D1G-CU System in Package (SiP). The SOM simplifies IoT design by integrating the power management, non-volatile boot memory, Ethernet PHY and high-speed DDR2 memory onto a small, single-sided printed circuit board (PCB). There is a great deal of design effort and complexity associated with creating an industrial-grade MPU-based system running a Linux operating system. Even developers with expertise in the area spend a lot of time on PCB layout to guarantee signal integrity for the high-speed interfaces to DDR memory and PHY while complying with EMC standards.

The SAMA5D2 family of products provides an extremely flexible design experience no matter the level of expertise. For example, the SOM—which integrates multiple external components and eliminates key design challenges around EMI, ESD and signal integrity—can be used to expedite development time. Customers can solder the SOM to their board and take it to production, or it can be used as a reference design along with the free schematics, design and Gerber files and complete bill of materials which are available online. Customers can also transition from the SOM to the SiP or the MPU itself, depending on their design needs. All products are backed by Microchip’s customer-driven obsolescence policy which ensures availability to customers for as long as needed.

The Arm Cortex-A5-based SAMA5D2 SiP, mounted on the SOM PCB or available separately, integrates 1 Gbit of DDR2 memory, further simplifying the design by removing the high-speed memory interface constraints from the PCB. The impedance matching is done in the package, not manually during development, so the system will function properly at normal and low-speed operation. Three DDR2 memory sizes (128 Mb, 512 Mb and 1 Gb) are available for the SAMA5D2 SiP and optimized for bare metal, RTOS and Linux implementations.

Microchip customers developing Linux-based applications have access to the largest set of device drivers, middleware and application layers for the embedded market at no charge. All of Microchip’s Linux development code for the SiP and SOM is mainlined in the Linux communities. This results in solutions where customers can connect external devices, for which drivers are mainlined, to the SOM and SiP with minimal software development.

The SAMA5D2 family features the highest levels of security in the industry, including PCI compliance, providing an excellent platform for customers to create secured designs. With integrated Arm TrustZone and capabilities for tamper detection, secure data and program storage, hardware encryption engine, secure boot and more, customers can work with Microchip’s security experts to evaluate their security needs and implement the level of protection that’s right for their design. The SAMA5D2 SOM also contains Microchip’s QSPI NOR Flash memory, a Power Management Integrated Circuit (PMIC), an Ethernet PHY and serial EEPROM memory with a Media Access Control (MAC) address to expand design options.

The SOM1-EK1 development board provides a convenient evaluation platform for both the SOM and the SiP. A free Board Support Package (BSP) includes the Linux kernel and drivers for the MPU peripherals and integrated circuits on the SOM. Schematics and Gerber files for the SOM are also available.

The ATSAMA5D2 SiP is available in four variants, starting with the ATSAMA5D225C-D1M-CU in a 196-lead BGA package for $8.62 each in 10,000 units. The ATSAMA5D27-SOM1 is available now for $39.00 each in 100 units. The ATSAMA5D27-SOM1-EK1 development board is available for $245.00.

Microchip Technology | www.microchip.com

SiFive Launches Linux-Capable RISC-V Based SoC

SiFive has launched the industry’s first Linux-capable RISC-V based processor SoC. The company demonstrated the first real-world use of the HiFive Unleashed board featuring the Freedom U540 SoC, based on its U54-MC Core IP, at the FOSDEM open source developer conference.

During the session, SiFive provided updates on the RISC-V Linux effort, surprising attendees with an announcement that the presentation had been run on the HiFive Unleashed development board. With the availability of the HiFive Unleashed board and Freedom U540 SoC, SiFive has brought to market the first multicore RISC-V chip designed for commercialization, and now offers the industry’s widest array of RISC-V based Core IP.

With the Freedom U540, the first RISC-V based, 64-bit 4+1 multicore SoC with support for full featured operating systems such as Linux, the HiFive Unleashed development board will greatly spur open-source software development. The underlying CPU, the U54-MC Core IP, is ideal for applications that need full operating system support such as artificial intelligence, machine learning, networking, gateways and smart IoT devices.

The company also announced its first hackathon, which will be held during the Embedded Linux Conference, March 12 to 14 in Portland, OR. The hackathon will enable registered SiFive developers to be among the first to test out SiFive's HiFive Unleashed board featuring the U540 SoC.

Freedom U540 processor specs include:

  • 4+1 Multi-Core Coherent Configuration, up to 1.5 GHz
  • 4x U54 RV64GC Application Cores with Sv39 Virtual Memory Support
  • 1x E51 RV64IMAC Management Core
  • Coherent 2MB L2 Cache
  • 64-bit DDR4 with ECC
  • 1x Gigabit Ethernet
  • Built in 28nm process technology
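For readers new to RISC-V naming, the RV64GC designation on the U54 application cores is shorthand: "G" expands to the I, M, A, F and D extensions (later spec revisions also fold in Zicsr and Zifencei), and "C" adds compressed instructions; the E51's RV64IMAC string is already spelled out. A minimal Python sketch of that convention (the function is illustrative only, not part of any SiFive tooling):

```python
def expand_isa(isa: str) -> str:
    """Expand the 'G' shorthand in a RISC-V ISA string into its
    constituent single-letter extensions (I, M, A, F, D)."""
    isa = isa.lower()
    if not (isa.startswith("rv32") or isa.startswith("rv64")):
        raise ValueError("expected an rv32*/rv64* ISA string")
    base, exts = isa[:4], []
    for ch in isa[4:]:
        if ch == "g":
            exts.extend("imafd")  # 'G' is defined as shorthand for IMAFD
        else:
            exts.append(ch)
    return base + "".join(exts)

# U54 application cores: RV64GC expands to RV64IMAFDC
print(expand_isa("rv64gc"))    # rv64imafdc
# E51 management core: RV64IMAC is already fully spelled out
print(expand_isa("rv64imac"))  # rv64imac
```

In practice, this is the same string a RISC-V Linux kernel reports in the `isa` field of /proc/cpuinfo.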

The HiFive Unleashed development board specs include:

  • SiFive Freedom U540 SoC
  • 8GB DDR4 with ECC for serious application development
  • Gigabit Ethernet Port
  • 32MB Quad SPI Flash
  • MicroSD Card for removable storage
  • FMC Connector for future expansion with add-in cards

The HiFive Unleashed development board is available for purchase from SiFive. A limited batch of early access boards will ship in late March 2018, with a wider release in June. For more information or to register for the hackathon, visit www.sifive.com/products/hifive-unleashed/.

SiFive | www.sifive.com

A Year in the Drone Age

Input Voltage

–Jeff Child, Editor-in-Chief


When you’re trying to keep tabs on any young, fast-growing technology, it’s tempting to say “this is the big year” for that technology. The problem is, odds are the following year will be just as significant. Such is the case with commercial drones. Drone technology fascinates me partly because it represents one of the clearest examples of an application that wouldn’t exist without today’s level of chip integration driven by Moore’s Law. That integration has enabled 4K HD video capture, image stabilization, new levels of autonomy and even highly compact supercomputing to fly aboard today’s commercial and consumer drones.

Beyond the technology side, drones make for a rich topic of discussion because of the many safety, privacy and regulatory issues surrounding them. And then there are the wide-open questions about what new applications drones will be used for.

For its part, the Federal Aviation Administration has had its hands full this year regarding drones. In the spring, for example, the FAA completed its fifth and final field evaluation of potential drone detection systems at Dallas/Fort Worth International Airport. The evaluation was the latest in a series of detection system evaluations that began in February 2016 at several airports. For the DFW test, the FAA teamed with Gryphon Sensors as its industry partner. The company’s drone detection technologies include radar, radio frequency and electro-optical systems. The FAA intends to use the information gathered during these kinds of evaluations to craft performance standards for any drone detection technology that may be deployed in or around U.S. airports.

In early summer, the FAA set up a new Aviation Rulemaking Committee tasked to help the agency create standards for remotely identifying and tracking unmanned aircraft during operations. The rulemaking committee will examine what technology is available or needs to be created to identify and track unmanned aircraft in flight.

This year also saw vivid examples of the transformative role drones are playing. A perfect example was the role drones played in August during the flooding in Texas after Hurricane Harvey. In his keynote speech at this year’s InterDrone show, FAA Administrator Michael Huerta described how drones made an incredible impact. “After the floodwaters had inundated homes, businesses, roadways and industries, a wide variety of agencies sought FAA authorization to fly drones in airspace covered by Temporary Flight Restrictions,” said Huerta. “We recognized that we needed to move fast—faster than we have ever moved before. In most cases, we were able to approve individual operations within minutes of receiving a request.”

Huerta went on to describe some of the ways drones were used. A railroad company used drones to survey damage to a rail line that cuts through Houston. Oil and energy companies flew drones to spot damage to their flooded infrastructure. Drones helped a fire department and county emergency management officials check for damage to roads, bridges, underpasses and water treatment plants that could require immediate repair. Meanwhile, cell tower companies flew them to assess damage to their towers and associated ground equipment, and insurance companies began assessing damage to neighborhoods. In many of those situations, drones were able to conduct low-level operations more efficiently—and more safely—than could have been done with manned aircraft.

“I don’t think it’s an exaggeration to say that the hurricane response will be looked back upon as a landmark in the evolution of drone usage in this country,” said Huerta. “And I believe the drone industry itself deserves a lot of credit for enabling this to happen. That’s because the pace of innovation in the drone industry is like nothing we have seen before. If people can dream up a new use for drones, they’re transforming it into reality.”

Clearly, it’s been a significant year for drone technology. And I’m excited for Circuit Cellar to go deeper with our drone embedded technology coverage in 2018. But I don’t think I’ll dare say that “this was the big year” for drones. I have a feeling it’s just one of many to come.

This appears in the December (329) issue of Circuit Cellar magazine
