Microsoft Unveils Secure MCU Platform with a Linux-Based OS

By Eric Brown

Microsoft has announced an “Azure Sphere” blueprint for hybrid Cortex-A/Cortex-M SoCs that run a Linux-based Azure Sphere OS and include end-to-end Microsoft security technologies and a cloud service. Products based on a MediaTek MT3620 Azure Sphere chip are due by year’s end.

Just as Google has begun experimenting with leaving Linux behind with its Fuchsia OS (new Fuchsia details emerged late last week), long-time Linux foe Microsoft has unveiled an IoT platform that embraces Linux. At RSA 2018, Microsoft Research announced a project called Azure Sphere that it bills as a new class of microcontrollers running “a custom Linux kernel” combined with Microsoft security technologies. Initial products are due by the end of the year, aimed at industries including white goods, agriculture, energy and infrastructure.

Judging from the flagship, Azure Sphere based MediaTek MT3620 SoC, which will ship in volume later this year, this is not so much a new class of MCUs as a fairly standard Cortex-A7 based SoC with a pair of Cortex-M4 MCUs, backed up by end-to-end security. It’s unclear whether future Azure Sphere compliant SoCs will feature different combinations of Cortex-A and Cortex-M cores, but this is clearly an Arm IP based design. Arm “worked closely with us to incorporate their Cortex-A application processors into Azure Sphere MCUs,” says Microsoft.

Azure Sphere OS architecture (click images to enlarge)

Major chipmakers have signed up to build Azure Sphere system-on-chips including Nordic, NXP, Qualcomm, ST Micro, Silicon Labs, Toshiba, and more (see image below). The software giant has sweetened the pot by “licensing our silicon security technologies to them royalty-free.”

Azure Sphere SoCs “combine both real-time and application processors with built-in Microsoft security technology and connectivity,” says Microsoft. “Each chip includes custom silicon security technology from Microsoft, inspired by 15 years of experience and learnings from Xbox.”

The design “combines the versatility and power of a Cortex-A processor with the low overhead and real-time guarantees of a Cortex-M class processor,” says Microsoft. The MCU includes a Microsoft Pluton Security Subsystem that “creates a hardware root of trust, stores private keys, and executes complex cryptographic operations.”
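To make the root-of-trust idea concrete, here is a minimal, purely conceptual sketch of challenge-response device attestation in Python, using the cryptography package. On real Azure Sphere hardware the private key would be generated and held inside the Pluton subsystem and would never leave the chip; the software key below is only a stand-in to illustrate the flow.

```python
# Conceptual sketch of challenge-response attestation with a device-held key.
# On Azure Sphere the private key lives inside the Pluton subsystem; here a
# software key stands in for it purely to illustrate the flow.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Device side: key pair created at manufacture (stand-in for the Pluton-held key)
device_private_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = device_private_key.public_key()   # registered with the cloud service

# Service side: issue a random challenge
challenge = os.urandom(32)

# Device side: sign the challenge; the private key itself is never exposed
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Service side: verify the signature against the registered public key
try:
    device_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Device attested: signature matches registered identity")
except InvalidSignature:
    print("Attestation failed")
```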

The IoT oriented Azure Sphere OS provides additional Microsoft security and a security monitor in addition to the Linux kernel. The platform will ship with Visual Studio development tools, and a dev kit will ship in mid-2018.

Azure Sphere security features (click image to enlarge)

The third component is an Azure Sphere Security Service, a turnkey, cloud-based platform. The service brokers trust for device-to-device and device-to-cloud communication through certificate-based authentication. The service also detects “emerging security threats across the entire Azure Sphere ecosystem through online failure reporting, and renewing security through software updates,” says Microsoft.
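Microsoft does not detail the protocol here, but certificate-based device authentication at the transport level generally boils down to mutual TLS: the device presents a certificate tied to its identity, and the service presents one the device can verify. The sketch below shows that handshake pattern with Python’s standard ssl module; the endpoint name and certificate file names are hypothetical.

```python
# Minimal sketch of certificate-based (mutual TLS) device authentication,
# the general mechanism behind brokered device-to-cloud trust.
# The endpoint and file names are hypothetical.
import socket
import ssl

SERVICE_HOST = "example-iot-endpoint.azure-devices.net"   # hypothetical endpoint
SERVICE_PORT = 8883

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("service_root_ca.pem")       # trust anchor for the service
context.load_cert_chain(certfile="device_cert.pem",        # device identity certificate
                        keyfile="device_key.pem")           # matching private key

with socket.create_connection((SERVICE_HOST, SERVICE_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=SERVICE_HOST) as tls_sock:
        # Both ends have now proven their identity via certificates;
        # application traffic (telemetry, update checks) would follow here.
        print("Negotiated", tls_sock.version())
        print("Service certificate subject:", tls_sock.getpeercert()["subject"])
```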

Azure Sphere eco-system conceptual diagram (top) and list of silicon partners (bottom)

In many ways, Azure Sphere is similar to Samsung’s Artik line of IoT modules, which incorporate super-secure SoCs that are supported by end-to-end security controlled by the Artik Cloud. One difference is that the Artik modules are based either on Cortex-A application processors or on Cortex-M or -R MCUs, and are designed to be deployed in heterogeneous product designs, rather than on a single hybrid SoC like the MediaTek MT3620.

Hybrid, Linux-driven Cortex-A/Cortex-M SoCs have become common in recent years, led by NXP’s Cortex-A7 based i.MX7 and -A53-based i.MX8, as well as many others including the -A7 based Renesas RZ/N1D and Marvell IAP220.

MediaTek MT3620

The MediaTek MT3620 “was designed in close cooperation with Microsoft for its Azure Sphere Secure IoT Platform,” says MediaTek in its announcement. Its 500MHz Cortex-A7 core is accompanied by large L1 and L2 caches and integrated SRAM. Dual Cortex-M4F chips support peripherals including 5x UART/I2C/SPI, 2x I2S, 8x ADC, up to 12 PWM counters, and up to 72x GPIO.

The Cortex-M4F cores are primarily devoted to real-time I/O processing, “but can also be used for general purpose computation and control,” says MediaTek. They “may run any end-user-provided operating system or run a ‘bare metal app’ with no operating system.”

In addition, the MT3620 features an isolated security subsystem with its own Arm Cortex-M4F core that handles secure boot and secure system operation. A separate Andes N9 32-bit RISC core supports 1×1 dual-band 802.11a/b/g/n WiFi.

The security features and WiFi networking are “isolated from, and run independently of, end user applications,” says MediaTek. “Only hardware features supported by the Azure Sphere Secure IoT Platform are available to MT3620 end-users. As such, security features and Wi-Fi are only accessible via defined APIs and are robust to programming errors in end-user applications regardless of whether these applications run on the Cortex-A7 or the user-accessible Cortex-M4F cores.” MediaTek adds that a development environment based on the gcc compiler is available and includes a Visual Studio extension, “allowing this application to be developed in C.”

Microsoft learns to love Linux

In recent years, Microsoft has increasingly softened its long-time anti-Linux stance by adding Linux support to its Azure service and targeting Windows 10 IoT at the Raspberry Pi, among other experiments. Microsoft is an active contributor to Linux, and has even open-sourced some technologies.

It wasn’t always so. For years, Microsoft CEO Steve Ballmer took turns deriding Linux and open source while warning about the threat they posed to the tech industry. In 2007, Microsoft fought back against the growth of embedded Linux at the expense of Windows CE and Windows Mobile by suing companies that used embedded Linux, claiming that some of the open source components were based on proprietary Microsoft technologies. By 2009, a Microsoft exec openly acknowledged the threat of embedded Linux and open source software.

That same year, Microsoft was accused of using its marketing muscle to convince PC partners to stop providing Linux as an optional install on netbooks. In 2011, Microsoft announced that Windows 8 certified PCs would ship with UEFI Secure Boot enabled, a requirement critics warned was intended to stop users from replacing Windows with Linux on major PC platforms.


Azure Sphere promo video

Further information

Azure Sphere is available as a developer preview to selected partners. The MediaTek MT3620 will be the first Azure Sphere MCU, and products based on it should arrive by the end of the year. More information may be found in Microsoft’s Azure Sphere announcement and product page.

Microsoft | www.microsoft.com

This article originally appeared on LinuxGizmos.com on April 16.

And check out this follow-up story, also from LinuxGizmos.com:
Why Microsoft chose Linux for Azure Sphere

 

Tiny, Rugged IoT Gateways Offer 10-Year Linux Support

By Eric Brown

Moxa has announced the UC-2100 Series of industrial IoT gateways along with its new UC-3100 and UC-5100 Series, but it offered details only on the UC-2100. All three series will offer ruggedization features, compact footprints, and on some models, 4G LTE support. They all run Moxa Industrial Linux and optional ThingsPro Gateway data acquisition software on Arm-based SoCs.

 

Moxa UC-2111 or UC-2112 (left) and UC-2101 (click image to enlarge)

Based on Debian 9 and a Linux 4.4 kernel, the new Moxa Industrial Linux (MIL) is a “high-performance, industrial-grade Linux distribution” that “features a container-based, virtual-machine-like middleware abstraction layer between the OS and applications,” says Moxa. Multiple isolated systems can run on a single control host “so that system integrators and engineers can easily change the behavior of an application without worrying about software compatibility,” says the company.
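Moxa doesn’t spell out its container tooling in this announcement, but the isolation pattern it describes, with each application in its own environment on one host, looks roughly like the sketch below, which uses the Docker SDK for Python as a stand-in; the image and command names are assumptions.

```python
# Illustrative sketch of the container-isolation pattern Moxa describes:
# each application runs in its own isolated environment on one host, so it
# can be replaced or updated without disturbing the others. Moxa does not
# document its exact tooling here; this example uses the Docker SDK for
# Python, and the image and commands are assumptions.
import docker

client = docker.from_env()

# Run two "applications" side by side, each in its own container.
for name, command in [("modbus-poller", "echo polling field devices"),
                      ("cloud-uplink", "echo forwarding data upstream")]:
    output = client.containers.run(
        image="alpine:3.18",          # assumed base image
        command=["sh", "-c", command],
        name=name,
        remove=True,                  # clean up when the container exits
    )
    print(name, "->", output.decode().strip())
```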

MIL provides 10-year long-term Linux support, and is aimed principally at industries that require long-term software support, such as the power, water, oil & gas, transportation and building automation industries. In December, Moxa joined the Linux Foundation’s Civil Infrastructure Platform (CIP) project, which is developing a 10-year SLTS Linux kernel for infrastructure industries. MIL appears to be in alignment with CIP standards.

Diagrams of ThingsPro Gateway (top) and the larger ThingsPro eco-system (bottom) (click images to enlarge)

Moxa’s ThingsPro Gateway software enables “fast integration of edge data into cloud services for large-scale IIoT deployments,” says Moxa. The software supports Modbus data acquisition, LTE connectivity, MQTT communication, and cloud client interfaces such as Amazon Web Services (AWS) and Microsoft Azure. C and Python APIs are also available.
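As a rough illustration of the acquisition-to-cloud flow ThingsPro automates, the sketch below polls Modbus registers and republishes them over MQTT using the open pymodbus and paho-mqtt packages rather than Moxa’s own C/Python APIs; the device address, register map, broker, and topic are all assumptions.

```python
# Generic sketch of the data-acquisition flow a gateway like this automates:
# poll Modbus holding registers from a field device and publish them over MQTT.
# Uses open pymodbus (3.x import path) and paho-mqtt (1.x constructor) packages,
# not Moxa's own APIs; addresses, register map, broker, and topic are assumptions.
import json
import time

import paho.mqtt.client as mqtt
from pymodbus.client import ModbusTcpClient

modbus = ModbusTcpClient("192.168.127.254")     # assumed PLC/meter address
modbus.connect()

broker = mqtt.Client()
broker.connect("broker.example.com", 1883)      # assumed MQTT broker
broker.loop_start()                             # background thread for network I/O

try:
    while True:
        result = modbus.read_holding_registers(address=0, count=2)  # assumed map
        if not result.isError():
            payload = json.dumps({"temperature": result.registers[0],
                                  "humidity": result.registers[1],
                                  "timestamp": time.time()})
            broker.publish("site1/gateway1/telemetry", payload)
        time.sleep(5)
finally:
    modbus.close()
    broker.loop_stop()
    broker.disconnect()
```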

 

Moxa’s UC-3100 (source: Hanser Konstruktion), and at right, the similarly Linux-driven, ThingsPro ready UC-8112 (click images to enlarge)

Although we saw no product pages on the UC-3100 and UC-5100, Hanser Konstruktion posted a short news item on the UC-3100 with a photo (above) and a few details. This larger, rugged system supports WiFi and LTE with two antenna pairs, and offers a USB port in addition to dual LAN and dual serial ports.

The new systems follow several other UC-branded IoT gateways that run Linux on Arm. The only other one to support ThingsPro is the UC-8112, a member of the UC-8100 family. This UC-8100 is similarly ruggedized, and runs Linux on a Cortex-A8 SoC.

UC-2100

The UC-2100 Series gateways run MIL on an unnamed Cortex-A8 SoC clocked at 600MHz, except for the UC-2112, which jumps to 1GHz. There are five different models, all with 9-48 VDC 3-pin terminal blocks and a maximum consumption of 4 Watts when not running cellular modules.

The five UC-2100 models have the following dimensions, weights, and maximum input currents:

  • UC-2101 — 50 x 80 x 28mm; 190 g; 200 mA
  • UC-2102 — 50 x 80 x 28mm; 190 g; 330 mA
  • UC-2104 — 57 x 80 x 30.8mm; 220 g; 800 mA
  • UC-2111 — 77 x 111 x 25.5mm; 290 g; 350 mA
  • UC-2112 — 77 x 111 x 25.5mm; 290 g; 450 mA

All five UC-2100 variants default to a -10 to 60°C operating range except for the UC-2104, which moves up to -10 to 70°C. In addition, they are all available in optional -40 to 75°C versions.

Other ruggedization features are the same, including anti-vibration protection per IEC 60068-2-64 and anti-shock per IEC 60068-2-2. A variety of safety, EMC, EMI, EMS, and hazardous environment standards are also listed.

The first three models ship with 256MB DDR3, while the UC-2111 and UC-2112 offer 512MB. These two are also the only ones to offer micro-SD slots. All five systems ship with 8GB eMMC loaded with the MIL distribution.

The UC-2100 systems vary in the number and type of their auto-sensing, 1.5 kV isolated Ethernet ports. The UC-2101 and UC-2104 each have a single 10/100Mbps port, while the UC-2102 and UC-2111 have two. The UC-2112 has one 10/100 and one 10/100/1000 port. The UC-2104 is the only model with a mini-PCIe socket for 4G or WiFi.

The UC-2111 and UC-2112 offer 2x RS-232/422/485 ports while the UC-2101 has one. It would appear that the UC-2102 and UC-2104 lack serial ports altogether except for the RS-232 console port available on all five systems.

The UC-2100 provides push buttons and DIP switches, an RTC, a watchdog, and LEDs, the number of which depends on the model. A wall kit is standard, and DIN-rail mounting is optional. TPM 2.0 is also optional. A 5-year hardware warranty is standard.

Further information

The UC-2100 Series gateways appear to be available for order, with pricing undisclosed. More information may be found on Moxa’s UC-2100 product page. More information about the UC-2100, as well as the related, upcoming UC-3100 and UC-5100 Series, will be on tap at Hannover Messe 2018, April 23-27, at the Arm Booth at Hall 6, Booth A46.

Moxa | www.moxa.com

This article originally appeared on LinuxGizmos.com on April 16.

SMARC Module Features Hexa-Core i.MX8 QuadMax

By Eric Brown

iWave has unveiled a rugged, wireless-enabled SMARC module with 4 GB LPDDR4 and dual GbE controllers that runs Linux or Android on NXP’s i.MX8 QuadMax SoC with 2x Cortex-A72, 4x -A53, 2x -M4F and 2x GPU cores.

iW-RainboW-G27M (front)

iWave has posted specs for an 82 mm x 50 mm, industrial temperature “iW-RainboW-G27M” SMARC 2.0 module that builds on NXP’s i.MX8 QuadMax system-on-chip. The i.MX8 QuadMax was announced in Oct. 2016 as the higher end model of an automotive focused i.MX8 Quad family.

Although the lower-end, quad-core, Cortex-A53 i.MX8M SoC was not fully announced until after the hexa-core Quad, we’ve seen far more embedded boards based on the i.MX8M, including a recent Seco SM-C12 SMARC module. The only other i.MX8 Quad based product we’ve seen is Toradex’s QuadMax driven Apalis iMX8 module. The Apalis iMX8 was announced a year ago, but is still listed as “coming soon.”

iW-RainboW-G27M (back)

 

 

i.MX8 Quad block diagram (dashed lines indicate model-specific features) (click image to enlarge)

 

Like Rockchip’s RK3399, NXP’s i.MX8 QuadMax features dual high-end Cortex-A72 cores and four Cortex-A53 cores. NXP also offers a similar i.MX8 QuadPlus design with only one Cortex-A72 core.

The QuadMax clock rates are lower than on the RK3399, which clocks to 1.8 GHz (A72) and 1.2 GHz (A53). Toradex says the Apalis iMX8’s -A72 and -A53 cores will clock to 1.6 GHz and 1.2 GHz, respectively.

Close-up of i.MX8 QuadMax on iW-RainboW-G27M

Whereas the i.MX8M has one 266 MHz Cortex-M4F microcontroller, the Quad SoCs have two. A HIFI4 DSP is also onboard, along with a dual-core Vivante GC7000LiteXS/VX GPU, which is alternately referred to as being two GPUs in one or having a split GPU design.

iWave doesn’t specifically name these coprocessors except to list features including a “4K H.265 decode and 1080p H.264 enc/dec capable VPU, 16-Shader 3D (Vec4), and Enhanced Vision Capabilities (via GPU).” The SoC is also said to offer a “dual failover-ready display controller.” The CPUs, meanwhile, are touted for their “full chip hardware virtualization capabilities.”

Inside the iW-RainboW-G27M

Like iWave’s SMARC 2.0 form factor Snapdragon 820 SOM, the iW-RainboW-G27M supports Linux and Android, in this case running Android Nougat (7.0) or higher. (Toradex’s Apalis iMX8 supports Linux, and also supports FreeRTOS running on the Cortex-M4F MCUs.)

Like Toradex, iWave is not promoting the automotive angle that was originally pushed by NXP. iWave’s module is designed to “offer maximum performance with higher efficiency for complex embedded application of consumer, medical and industrial embedded computing applications,” says iWave.

Like the QuadMax based Apalis iMX8, as well as most of the i.MX8M products we’ve seen, the iW-RainboW-G27M supports up to 4 GB LPDDR4 RAM and up to 16 GB eMMC. iWave notes that the RAM and eMMC are “expandable,” but does not say to what capacities. There’s also a microSD slot and 256 MB of optional QSPI flash.

Whereas Apalis iMX8 has a single GbE controller, iWave’s COM has two. It similarly offers onboard 802.11ac Wi-Fi and Bluetooth (4.1). The Microchip ATWILC3000-MR110CA module, which juts out a bit on one side, is listed by Digi-Key as 802.11b/g/n, but iWave has it as 802.11ac.

Interfaces expressed via the SMARC edge connector include 2x GbE, 2x USB 3.0 host (4-port hub), 4x USB 2.0 host, and USB 2.0 OTG. Additional SMARC I/O includes 3x UART (2x with CTS & RTS), 2x CAN, 2x I2C, 12x GPIO, and single PCIe, SATA, debug UART, SD, SPI and QSPI.

Media features include an HDMI/DP transmitter, dual-channel LVDS or MIPI-DSI, and an SSI/I2S audio interface. iWave also lists HDMI, 2x LVDS, SPDIF, and ESAI separately under “expansion connector interfaces.” Other expansion I/O is said to include MLB, CAN and GPIO.

The 5 V module supports -40 to 80°C temperatures. There is no mention of a carrier board.

Further information

No pricing or availability was listed for the iW-RainboW-G27M, but a form is available for requesting a quote. More information may be found on iWave’s iW-RainboW-G27M product page.

iWave | www.iwavesystems.com

This article originally appeared on LinuxGizmos.com on March 13.

Movidius AI Acceleration Technology Comes to a Mini-PCIe Card

By Eric Brown

UP AI Core (front)

As promised by Intel when it announced an Intel AI: In Production program for its USB stick form factor Movidius Neural Compute Stick, Aaeon has launched a mini-PCIe version of the device called the UP AI Core. It similarly integrates Intel’s AI-infused Myriad 2 Vision Processing Unit (VPU). The mini-PCIe connection should provide faster response times for neural networking and machine vision compared to connecting to a cloud-based service.

UP AI Core (back)

The module, which is available for pre-order at $69 for delivery in April, is designed to “enhance industrial IoT edge devices with hardware accelerated deep learning and enhanced machine vision functionality,” says Aaeon. It can also enable “object recognition in products such as drones, high-end virtual reality headsets, robotics, smart home devices, smart cameras and video surveillance solutions.”

 

 

UP Squared

The UP AI Core is optimized for Aaeon’s Ubuntu-supported UP Squared hacker board, which runs on Intel’s Apollo Lake SoCs. However, it should work with any 64-bit x86 computer or SBC equipped with a mini-PCIe slot that runs Ubuntu 16.04. Host systems also require 1GB RAM and 4GB free storage. That presents plenty of options for PCs and embedded computers, although the UP Squared is currently the only x86-based, community-backed SBC equipped with a mini-PCIe slot.

Myriad 2 architecture

Aaeon had few technical details about the module, except to say it ships with 512MB of DDR RAM and offers ultra-low power consumption. The UP AI Core’s mini-PCIe interface likely provides a faster response time than the USB link used by Intel’s $79 Movidius Neural Compute Stick. Aaeon makes no claims to that effect, however, perhaps to avoid disparaging Intel’s Neural Compute Stick or other USB-based products that might emerge from the Intel AI: In Production program.

Intel’s Movidius Neural Compute Stick

It’s also possible the performance difference between the two products is negligible, especially compared with the difference between either local processing solution and a cloud connection. Cloud-based connections for accessing neural networking services suffer from issues with latency, network bandwidth, reliability, and security, says Aaeon. The company recommends using the Linux-based SDK to “create and train your neural network in the cloud and then run it locally on AI Core.”
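That local-inference workflow looked roughly like the sketch below under the first-generation Neural Compute SDK’s Python API (the mvnc module); the graph file name, the 224 x 224 input shape, and the use of a pre-compiled MobileNet graph are assumptions, and the same compiled graph would run on either the UP AI Core or the USB stick.

```python
# Sketch of the "train in the cloud, run locally" flow using the first-generation
# Movidius Neural Compute SDK Python API (the mvnc module). The graph file name,
# input shape, and network are assumptions; the network would have been trained
# and compiled (via the SDK's compiler) beforehand.
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
if not devices:
    raise SystemExit("No Myriad 2 device found (UP AI Core / Compute Stick)")

device = mvnc.Device(devices[0])
device.OpenDevice()

with open("mobilenet.graph", "rb") as f:          # assumed pre-compiled network
    graph = device.AllocateGraph(f.read())

frame = np.random.rand(224, 224, 3).astype(np.float16)   # stand-in for a camera frame
graph.LoadTensor(frame, "frame-0")                # queue the inference
output, _ = graph.GetResult()                     # blocking read of the result
print("Top class:", int(np.argmax(output)))

graph.DeallocateGraph()
device.CloseDevice()
```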

Performance issues aside, because a mini-PCIe module is usually embedded within computers, it provides more security than a USB stick. On the other hand, that same trait hinders ease of mobility. Unlike the UP AI Core, the Neural Compute Stick can run on an ARM-based Raspberry Pi, but only with the help of the Stretch desktop or an Ubuntu 16.04 VirtualBox instance.

In 2016, before it was acquired by Intel, Movidius launched its first local-processing version of the Myriad 2 VPU technology, called the Fathom. This Ubuntu-driven USB stick, which miniaturized the technology in the earlier Myriad 2 reference board, is essentially the same technology that re-emerged as Intel’s Movidius Neural Compute Stick.

UP AI Core, front and back

Neural network processors can significantly outperform traditional computing approaches in tasks like language comprehension, image recognition, and pattern detection. The vast majority of such processors — which are often repurposed GPUs — are designed to run on cloud servers.

AIY Vision Kit

The Myriad 2 technology can translate deep learning frameworks like Caffe and TensorFlow into its own format for rapid prototyping. This is one reason why Google adopted the Myriad 2 technology for its recent AIY Vision Kit for the Raspberry Pi Zero W. The kit’s VisionBonnet pHAT board uses the same Movidius MA2450 chip that powers the UP AI Core. On the VisionBonnet, the processor runs Google’s open source TensorFlow machine intelligence library for neural networking, enabling visual perception processing at up to 30 frames per second.

Intel and Google aren’t alone in their desire to bring AI acceleration to the edge. Huawei released a Kirin 970 SoC for its Mate 10 Pro phone that provides a neural processing coprocessor, and Qualcomm followed up with a Snapdragon 845 SoC with its own neural accelerator. The Snapdragon 845 will soon appear on the Samsung Galaxy S9, among other phones, and will also be heading for some high-end embedded devices.

Last month, Arm unveiled two new Project Trillium AI chip designs intended for use as mobile and embedded coprocessors. Available now is Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. Due this summer is a Machine Learning (ML) Processor, which will accelerate AI applications including machine translation and face recognition.

Further information

The UP AI Core is available for pre-order at $69 for delivery in late April. More information may be found at Aaeon’s UP AI Core announcement and its UP Community UP AI Edge page for the UP AI Core.

Aaeon | www.aaeon.com

This article originally appeared on LinuxGizmos.com on March 6.

Linux and Coming Full Circle

Input Voltage

–Jeff Child, Editor-in-Chief

JeffHeadShot

In terms of technology, the line between embedded computing and IT/desktop computing has always been a moving target. Certainly, small embedded devices today have vastly more compute muscle than even a server of 15 years ago. While there are many ways to look at that phenomenon, it’s interesting to look at it through the lens of Linux. The quick rise in the popularity of Linux in the 90s happened on the server/IT side pretty much simultaneously with the embrace of Linux in the embedded market.

I’ve talked before in this column about the embedded Linux start-up bubble of the late 90s. That’s when a number of start-ups emerged as “embedded Linux” companies. It was a new business model for our industry, because Linux is a free, open-source OS. As a result, these companies didn’t sell Linux, but rather provided services to help customers create and support implementations of open-source Linux. This market disruption spurred the established embedded RTOS vendors to push back. Like most embedded technology journalists back then, I loved having a conflict to cover. There were spirited debates on the “Linux vs. RTOS” topic on conference panels and in articles of the time—and I enjoyed participating in both.

It’s amusing to me to remember that Wind River at the time was the most vocal anti-Linux voice of the day. Fast forward to today and there’s a double irony. Most of those embedded Linux startups are long gone. And yet, most major OS vendors offer full-blown embedded Linux support alongside their RTOS offerings. In fact, in a research report released in January by VDC Research, Wind River was named as the market leader in the global embedded software market for both its RTOS and commercial Linux segments.

According to the VDC report, global unit shipments of IoT and embedded OSs, including free/non-commercial OSs, will grow to reach 11.1 billion units by 2021, driven primarily by ECU-targeted RTOS shipments in the automotive market, and free Linux installs on higher-resource systems. After accounting for systems with no OS, a bare-metal OS, or an in-house developed OS, the total yearly units shipped will grow beyond 17 billion units in 2021, according to the report. VDC’s research findings also predict that unit growth will be driven primarily by free and low-cost operating systems such as Amazon FreeRTOS, Express Logic ThreadX and Mentor Graphics Nucleus on constrained devices, along with free, open source Linux distributions for resource-rich embedded systems.

Shifting gears, let me indulge myself by talking about some recent Circuit Cellar news—though still on the Linux theme. Circuit Cellar has formed a strategic partnership with LinuxGizmos.com. LinuxGizmos is a well-established, trusted website that provides up-to-the-minute, detailed and insightful coverage of the latest developer- and maker-friendly, embedded oriented chips, modules, boards, small systems and IoT devices—and the software technologies that make them tick. As its name implies, LinuxGizmos features coverage of open source, high-level operating systems including Linux and its derivatives (such as Android), as well as lower-level software platforms such as OpenWRT and FreeRTOS.

LinuxGizmos.com was founded by Rick Lehrbaum—but that’s only the latest of his accolades. I know Rick from way back when I first started writing about embedded computing in 1990. Most people in the embedded computing industry remember him as the “Father of PC/104.” Rick co-founded Ampro Computers in 1983 (now part of ADLINK), authored the PC/104 standard and founded the PC/104 Consortium in 1991, created LinuxDevices.com in 1999 and guided the formation of the Embedded Linux Consortium in 2000. In 2013, he launched LinuxGizmos.com to fill the void created when LinuxDevices was retired by Quinstreet Media.

Bringing things full circle, Rick says he’s long been a fan of Circuit Cellar, and even wrote a series of articles about PC/104 technology for it in the late 90s. I’m thrilled to be teaming up with LinuxGizmos.com and am looking forward to combining our strengths to better serve you.

This appears in the April (333) issue of Circuit Cellar magazine

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

NXP IoT Platform Links ARM/Linux Layerscape SoCs to Cloud

By Eric Brown

NXP’s “EdgeScale” suite of secure edge computing device management tools helps deploy and manage Linux devices running on LSx QorIQ Layerscape SoCs, and connects them to cloud services.

NXP has added an EdgeScale suite of secure edge computing tools and services to its Linux-based Layerscape SDK for six of its networking oriented LSx QorIQ Layerscape SoCs. These include the quad-core, 1.6 GHz Cortex-A53 QorIQ LS1043A, which last year received Ubuntu Core support, as well as the octa-core, Cortex-A72 LS2088a (see farther below).



Simplified EdgeScale architecture
(click image to enlarge)
The cloud-based IoT suite is designed to remotely deploy, manage, and update edge computing devices built on Layerscape SoCs. EdgeScale bridges edge nodes, sensors, and other IoT devices to cloud frameworks, automating the provisioning of software and updates to remote embedded equipment. EdgeScale can be used to deploy container applications and firmware updates, as well as build containers and generate firmware.

The technology leverages the NXP Trust Architecture already built into Layerscape SoCs, which offers Hardware Root of Trust features. These include secure boot, secure key storage, manufacturing protection, hardware resource isolation, and runtime tamper detection.

The EdgeScale suite provides three levels of management: a “point-and-click” dashboard, a Command-Line-Interface (CLI), and the RESTful API, which enables “integration with any cloud computing framework,” as well as greater UI customization. The platform supports Ubuntu, Yocto, OpenWrt, or “any custom Linux distribution.”
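NXP doesn’t publish the REST endpoints in this announcement, so the sketch below is hypothetical from end to end; it simply illustrates the kind of scripted integration a RESTful management API makes possible, using Python’s requests package.

```python
# Hypothetical sketch of scripting a device-management REST API such as
# EdgeScale's. The actual endpoint paths, payloads, and authentication
# scheme are not given in the article, so everything here other than the
# general pattern is an assumption.
import requests

BASE_URL = "https://edgescale.example.nxp.com/api/v1"    # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-token>"}         # hypothetical auth token

# List managed Layerscape devices
devices = requests.get(f"{BASE_URL}/devices", headers=HEADERS, timeout=10).json()

# Deploy a container application to every device reported as online
for dev in devices:
    if dev.get("status") == "online":
        resp = requests.post(
            f"{BASE_URL}/devices/{dev['id']}/applications",
            json={"image": "registry.example.com/sensor-agent:1.2"},  # hypothetical app
            headers=HEADERS,
            timeout=10,
        )
        print(dev["id"], resp.status_code)
```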


Detailed EdgeScale architecture (above) and feature list (below)
(click images to enlarge)
EdgeScale supports cloud frameworks including Amazon’s AWS Greengrass, Alibaba’s Aliyun, Google Cloud, and Microsoft’s Azure IoT Edge. The latter was part of a separate announcement released in conjunction with the EdgeScale release that said that all Layerscape SoCs were being enabled with “secure execution for Azure IoT Edge computing running networking, data analytics, and compute-intensive machine learning applications.”

A year ago, NXP announced a Modular IoT Framework, which was described as a set of pre-integrated NXP hardware and software for IoT, letting customers mix and match technologies with greater assurance of interoperability. When asked how this was related to EdgeScale, Sam Fuller, head of system solutions for NXP’s digital networking group, replied: “EdgeScale is designed to manage higher level software that could have a role of processing the data and managing the communication to/from devices built from the Modular IoT Framework.”


LS1012A block diagram
(click image to enlarge)
The EdgeScale suite supports the following QorIQ Layerscape processors:

  • LS1012A — 800 MHz single-core Cortex-A53 with 1 W power consumption, found on F&S’ efus A53LS module
  • LS1028A — dual-core ARMv8 with Time-Sensitive Networking (TSN)
  • LS1043A — 1.6 GHz quad-core Cortex-A53 with 10 GbE support, found on the QorIQ LS1043A 10G Residential Gateway Reference Design and the X-ES XPedite6401 XMC/PrPMC mezzanine module
  • LS1046A — quad-core Cortex-A72 with dual 10 GbE support (also available in dual-core LS1026A model)
  • LS1088A — 1.5 GHz octa-core Cortex-A53 with dual 10 GbE support, which is also supported on the XPedite6401
  • LS2088A — 2.0 GHz octa-core Cortex-A72 with a 128-bit NEON-based SIMD engine for each core, plus a 10 GbE XAUI Fat Pipe interface or 4x 10GBASE-KR — found on the X-ES XPedite6370 SBC.

Further information

NXP’s EdgeScale will be available by the end of the month. More information may be found on its EdgeScale product page.

NXP Semiconductors | www.nxp.com

This article originally appeared on LinuxGizmos.com on March 16.

April Circuit Cellar: Sneak Preview

The April issue of Circuit Cellar magazine is coming soon. And we’ve got a healthy serving of embedded electronics articles for you. Here’s a sneak peek.

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

 

Here’s a sneak preview of April 2018 Circuit Cellar:

NAVIGATING THE INTERNET-OF-THINGS

IoT: From Gateway to Cloud
In this follow-on to our March “IoT: Device to Gateway” feature, we look at technologies and solutions for the gateway-to-cloud side of IoT. Circuit Cellar Chief Editor Jeff Child examines the tools and services available to get a cloud-connected IoT implementation up and running.

Texting and IoT Embedded Devices (Part 2)
In Part 1, Jeff Bachiochi laid the groundwork for describing a project involving texting. He puts that into action this time, showing how to create messages on his Espressif Systems ESP8266EX-based device that are sent to an email account and end up as texts on a cell phone.

Internet of Things Security (Part 2)
In this next part of his article series on IoT security, Bob Japenga takes a look at side-channel attacks. What are they? How much of a threat are they? And how can we prevent them?

Product Focus: 32-Bit Microcontrollers
As the workhorse of today’s embedded systems, 32-bit microcontrollers serve a wide variety of embedded applications—including the IoT. This Product Focus section updates readers on these trends and provides a product album of representative 32-bit MCU products.

GRAPHICS, VISION AND DISPLAYS

Graphics, Video and Displays
Thanks to advances in displays and innovations in graphics ICs, embedded systems can now routinely feature sophisticated graphical user interfaces. Circuit Cellar Chief Editor Jeff Child dives into the latest technology trends and product developments in graphics, video and displays.

Color Recognition and Segmentation in Real-time
Vision systems used to require big, multi-board systems—but not anymore. Learn how two Cornell undergraduates designed a hardware/software system that accelerates vision-based object recognition and tracking using an FPGA SoC. They made a mini manufacturing line to demonstrate how their system can accurately track and categorize manufactured candies carried along a conveyor belt.

SPECIFICATIONS, QUALIFICATIONS AND MORE

Component Tolerance
We perhaps take for granted sometimes that the tolerances of our electronic components fit the needs of our designs. In this article, Robert Lacoste takes a deep look into the subject of tolerances, using the simple resistor as an example. He goes through the math to help you better understand accuracy and drift along with other factors.

Understanding the Temperature Coefficient of Resistance
Temperature coefficient of resistance (TCR) is the calculation of a relative change of resistance per degree of temperature change. Even though it’s an important spec, different resistor manufacturers use different methods for defining TCR. In this article, Molly Bakewell Chamberlin examines TCR and its “best practice” interpretations using Vishay Precision Group’s vast experience in high-precision resistors.
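As a quick worked example of that calculation (the resistance and temperature values below are illustrative, not from the article), TCR is the fractional change in resistance divided by the temperature change, scaled to ppm/°C:

```python
# Worked example of the TCR calculation described above: the relative change
# in resistance per degree of temperature change, usually quoted in ppm/°C.
# The resistance and temperature values are illustrative.
def tcr_ppm_per_degc(r_ref, t_ref, r_meas, t_meas):
    """Return TCR in ppm/°C between a reference point and a measured point."""
    return (r_meas - r_ref) / (r_ref * (t_meas - t_ref)) * 1e6

# A nominal 10 kΩ resistor measured at 25 °C and again at 125 °C
r_25, r_125 = 10_000.0, 10_025.0
print(tcr_ppm_per_degc(r_25, 25.0, r_125, 125.0), "ppm/°C")   # -> 25.0 ppm/°C
```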

Designing of Complex Systems
While some commercial software gets away without much qualification during development, the situation is very different when safety is involved. For aircraft, vehicles or any complex system where failure is unacceptable, this means adhering to established standards throughout the development life cycle. In this article, George Novacek tackles these issues and examines some of these standards, namely ARP4754.

AND MORE IN-DEPTH PROJECT ARTICLES

Build a Marginal Oscillator Proximity Switch
A damped or marginal oscillator will switch off when energy is siphoned from its resonant LC tank circuit. In his article, Dev Gualtieri presents a simple marginal oscillator that detects proximity to a small steel screw or steel plate. It lights an LED, and the LED can be part of an optically-isolated solid-state relay.

Obsolescence-Proof Your UI (Part 1)
After years of frustration dealing with graphical interface technologies that go obsolete, Steve Hendrix decided there must be a better way. Knowing that web browser technology is likely to be with us for a long while, he chose to build a web server that could perform common operations that he needed on the IEEE-488 bus. He then built it as a product available for sale to others—and it is basically obsolescence-proof.

 

 

Circuit Cellar and LinuxGizmos.com Form Strategic Partnership

Partnership offers an expanded technical resource for embedded and IoT device developers and enthusiasts

Today Circuit Cellar is announcing a strategic partnership with LinuxGizmos.com to offer an expanded resource of information and know-how on embedded electronics technology for developers, makers, students and educators, early adopters, product strategists, and technical decision makers with a keen interest in emerging embedded and IoT technologies.

The new partnership combines Circuit Cellar’s uniquely in-depth, “down-to-the-bits” technical articles with LinuxGizmos.com’s up-to-the-minute, detailed, and insightful coverage of the latest developer- and maker-friendly, embedded oriented chips, modules, boards, small systems, and IoT devices, and the software technologies that make them tick. Additionally, as its name implies, LinuxGizmos.com’s coverage frequently highlights open source, high-level operating systems including Linux and its derivatives (e.g. Android), as well as lower-level software platforms such as OpenWRT and FreeRTOS.

Circuit Cellar is one of the electronics industry’s most highly technical information resources for professional engineers, academics, and other specialists involved in the design and development of embedded processor- and microcontroller-based systems across a broad range of applications. It gets right down to the bits and bytes and lines of code, at a level its readers revel in. Circuit Cellar is a trusted brand engaging readers every day on its website, each week with its newsletter, and each month through Circuit Cellar magazine’s print and digital formats.

LinuxGizmos.com is a free-to-use website that publishes daily news and analysis on the hardware, software, protocols, and standards used in new and innovative embedded, mobile, and Internet of Things (IoT) devices.  The site is lauded for its detailed and insightful, timely coverage of newly introduced single board computers (SBCs), computer-on-modules (COMs), system-on-chips (SoCs), and small form factor (SFF) systems, along with their software platforms.

“The synergies between LinuxGizmos and Circuit Cellar are great and I’m excited to see the benefits of this partnership passed on to our combined audience,” said Jeff Child, Editor-in-Chief, Circuit Cellar. “LinuxGizmos.com has the kind of rich, detail-oriented structure that I’m a fan of. Over the many years I’ve been following the site, I’ve relied on it as an important information resource, and its integrity has always impressed me.”

“I’ve been a fan of Circuit Cellar magazine since it was first launched, and wrote a series of articles for it in the late 90s about PC/104 embedded modules,” added Rick Lehrbaum, founder and Editor-in-Chief of LinuxGizmos.com. “I’m thrilled to see LinuxGizmos become associated with one of the computing industry’s pioneering publications.”

“I see this partnership as a perfect way to enhance both the Circuit Cellar and LinuxGizmos brands as key information platforms,” stated KC Prescott, President, KCK Media Corp. “In this era where there’s so much compelling technology innovation happening in the industry, our combined strengths will help inform and inspire embedded systems developers.”

Read Announcement on LinuxGizmos.com here:

Circuit Cellar and LinuxGizmos.com join forces

MPU-Based SOM Meets Industrial IoT Linux Needs

Microchip Technology has unveiled a new System on Module (SOM) featuring the SAMA5D2 microprocessor (MPU). The ATSAMA5D27-SOM1 contains the recently released ATSAMA5D27C-D1G-CU System in Package (SiP). The SOM simplifies IoT design by integrating the power management, non-volatile boot memory, Ethernet PHY and high-speed DDR2 memory onto a small, single-sided printed circuit board (PCB). There is a great deal of design effort and complexity associated with creating an industrial-grade MPU-based system running a Linux operating system. Even developers with expertise in the area spend a lot of time on PCB layout to guarantee signal integrity for the high-speed interfaces to DDR memory and PHY while complying with EMC standards.

The SAMA5D2 family of products provides an extremely flexible design experience no matter the level of expertise. For example, the SOM—which integrates multiple external components and eliminates key design challenges around EMI, ESD and signal integrity—can be used to expedite development time. Customers can solder the SOM to their board and take it to production, or it can be used as a reference design along with the free schematics, design and Gerber files and complete bill of materials which are available online. Customers can also transition from the SOM to the SiP or the MPU itself, depending on their design needs. All products are backed by Microchip’s customer-driven obsolescence policy which ensures availability to customers for as long as needed.

The Arm Cortex-A5-based SAMA5D2 SiP, mounted on the SOM PCB or available separately, integrates 1 Gbit of DDR2 memory, further simplifying the design by removing the high-speed memory interface constraints from the PCB. The impedance matching is done in the package, not manually during development, so the system will function properly at normal and low-speed operation. Three DDR2 memory sizes (128 Mb, 512 Mb and 1 Gb) are available for the SAMA5D2 SiP and optimized for bare metal, RTOS and Linux implementations.

Microchip customers developing Linux-based applications have access to the largest set of device drivers, middleware and application layers for the embedded market at no charge. All of Microchip’s Linux development code for the SiP and SOM is mainlined in the Linux communities. This results in solutions where customers can connect external devices, for which drivers are mainlined, to the SOM and SiP with minimal software development.
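One practical payoff of mainlined drivers is that an attached sensor typically shows up through standard kernel interfaces such as the Industrial I/O (IIO) sysfs tree, so reading it needs no vendor code. The sketch below assumes a temperature sensor enumerated as the first IIO device; the device index and channel file names depend on what is actually wired to the SOM.

```python
# With mainlined drivers, an attached I2C/SPI sensor usually appears under the
# standard Linux Industrial I/O (IIO) sysfs tree, so reading it takes no
# vendor-specific code. The device index and channel file names below are
# assumptions that depend on the sensor actually wired to the SOM.
from pathlib import Path

IIO_DEVICE = Path("/sys/bus/iio/devices/iio:device0")    # assumed first IIO device

name = (IIO_DEVICE / "name").read_text().strip()
raw = int((IIO_DEVICE / "in_temp_raw").read_text())      # assumed temperature channel
scale = float((IIO_DEVICE / "in_temp_scale").read_text())

# For typical IIO temperature channels, raw * scale is in millidegrees Celsius
print(f"{name}: {raw * scale / 1000:.2f} °C")
```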

The SAMA5D2 family features the highest levels of security in the industry, including PCI compliance, providing an excellent platform for customers to create secured designs. With integrated Arm TrustZone and capabilities for tamper detection, secure data and program storage, hardware encryption engine, secure boot and more, customers can work with Microchip’s security experts to evaluate their security needs and implement the level of protection that’s right for their design. The SAMA5D2 SOM also contains Microchip’s QSPI NOR Flash memory, a Power Management Integrated Circuit (PMIC), an Ethernet PHY and serial EEPROM memory with a Media Access Control (MAC) address to expand design options.

The SOM1-EK1 development board provides a convenient evaluation platform for both the SOM and the SiP. A free Board Support Package (BSP) includes the Linux kernel and drivers for the MPU peripherals and integrated circuits on the SOM. Schematics and Gerber files for the SOM are also available.

The ATSAMA5D2 SiP is available in four variants starting with the ATSAMA5D225C-D1M-CU in a 196-lead BGA package for $8.62 each in 10,000 units. The ATSAMA5D27-SOM1 is available now for $39.00 each in 100 units. The ATSAMA5D27-SOM1-EK1 development board is available for $245.00.

Microchip Technology | www.microchip.com

SiFive Launches Linux-Capable RISC-V Based SoC

SiFive has launched the industry’s first Linux-capable RISC-V based processor SoC. The company demonstrated the first real-world use of the HiFive Unleashed board featuring the Freedom U540 SoC, based on its U54-MC Core IP, at the FOSDEM open source developer conference.

During the session, SiFive provided updates on the RISC-V Linux effort, surprising attendees with an announcement that the presentation had been run on the HiFive Unleashed development board. With the availability of the HiFive Unleashed board and Freedom U540 SoC, SiFive has brought to market the first multicore RISC-V chip designed for commercialization, and now offers the industry’s widest array of RISC-V based Core IP.

With the Freedom U540, the first RISC-V based, 64-bit 4+1 multicore SoC with support for full featured operating systems such as Linux, the HiFive Unleashed development board will greatly spur open-source software development. The underlying CPU, the U54-MC Core IP, is ideal for applications that need full operating system support such as artificial intelligence, machine learning, networking, gateways and smart IoT devices.

The company also announced its first hackathon, which will be held during the Embedded Linux Conference, March 12 to 14 in Portland, OR. The hackathon will enable registered SiFive Developers to be among the first to test out SiFive’s HiFive Unleashed board featuring the U540 SoC.

Freedom U540 processor specs include:

  • 4+1 Multi-Core Coherent Configuration, up to 1.5 GHz
  • 4x U54 RV64GC Application Cores with Sv39 Virtual Memory Support
  • 1x E51 RV64IMAC Management Core
  • Coherent 2MB L2 Cache
  • 64-bit DDR4 with ECC
  • 1x Gigabit Ethernet
  • Built in 28nm process technology

The HiFive Unleashed development board specs include:

  • SiFive Freedom U540 SoC
  • 8GB DDR4 with ECC for serious application development
  • Gigabit Ethernet Port
  • 32MB Quad SPI Flash
  • MicroSD Card for removable storage
  • FMC Connector for future expansion with add-in cards

Developers can purchase the HiFive Unleashed development board here. A limited batch of early access boards will ship in late March 2018, with a wider release in June. For more information or to register for the hackathon, visit www.sifive.com/products/hifive-unleashed/.

SiFive | www.sifive.com

A Year in the Drone Age

Input Voltage

–Jeff Child, Editor-in-Chief

JeffHeadShot

When you’re trying to keep tabs on any young, fast-growing technology, it’s tempting to say “this is the big year” for that technology. Problem is that odds are the following year could be just as significant. Such is the case with commercial drones. Drone technology fascinates me partly because it represents one of the clearest examples of an application that wouldn’t exist without today’s level of chip integration driven by Moore’s law. That integration has enabled 4k HD video capture, image stabilization, new levels of autonomy and even highly compact supercomputing to fly aboard today’s commercial and consumer drones.

Beyond the technology side, drones make for a rich topic of discussion because of the many safety, privacy and regulatory issues surrounding them. And then there are the wide-open questions of what new applications drones will be used for.

For its part, the Federal Aviation Administration has had its hands full this year regarding drones. In the spring, for example, the FAA completed its fifth and final field evaluation of potential drone detection systems at Dallas/Fort Worth International Airport. The evaluation was the latest in a series of detection system evaluations that began in February 2016 at several airports. For the DFW test, the FAA teamed with Gryphon Sensors as its industry partner. The company’s drone detection technologies include radar, radio frequency and electro-optical systems. The FAA intends to use the information gathered during these kinds of evaluations to craft performance standards for any drone detection technology that may be deployed in or around U.S. airports.

In early summer, the FAA set up a new Aviation Rulemaking Committee tasked to help the agency create standards for remotely identifying and tracking unmanned aircraft during operations. The rulemaking committee will examine what technology is available or needs to be created to identify and track unmanned aircraft in flight.

This year also saw vivid examples of the transformative role drones are playing. A perfect example was the role drones played in August during the flooding in Texas after Hurricane Harvey. In his keynote speech at this year’s InterDrone show, FAA Administrator Michael Huerta described how drones made an incredible impact. “After the floodwaters had inundated homes, businesses, roadways and industries, a wide variety of agencies sought FAA authorization to fly drones in airspace covered by Temporary Flight Restrictions,” said Huerta. “We recognized that we needed to move fast—faster than we have ever moved before. In most cases, we were able to approve individual operations within minutes of receiving a request.”

Huerta went on to describe some of the ways drones were used. A railroad company used drones to survey damage to a rail line that cuts through Houston. Oil and energy companies flew drones to spot damage to their flooded infrastructure. Drones helped a fire department and county emergency management officials check for damage to roads, bridges, underpasses and water treatment plants that could require immediate repair. Meanwhile, cell tower companies flew them to assess damage to their towers and associated ground equipment, and insurance companies began assessing damage to neighborhoods. In many of those situations, drones were able to conduct low-level operations more efficiently—and more safely—than could have been done with manned aircraft.

“I don’t think it’s an exaggeration to say that the hurricane response will be looked back upon as a landmark in the evolution of drone usage in this country,” said Huerta. “And I believe the drone industry itself deserves a lot of credit for enabling this to happen. That’s because the pace of innovation in the drone industry is like nothing we have seen before. If people can dream up a new use for drones, they’re transforming it into reality.”

Clearly, it’s been a significant year for drone technology. And I’m excited for Circuit Cellar to go deeper with our drone embedded technology coverage in 2018. But I don’t think I’ll dare say that “this was the big year” for drones. I have a feeling it’s just one of many to come.

This appears in the December (329) issue of Circuit Cellar magazine

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

Hop on the Moving Train

Input Voltage

–Jeff Child, Editor-in-Chief

JeffHeadShot

We work pretty far in advance to get Circuit Cellar produced and in your hands on-time and at the level of quality you expect and deserve. Given that timing, as we go to press on this issue we’re getting into the early days of fall. In my 27 years in the technology magazine business, this part of the year has always included time set aside to finalize next year’s editorial calendar. The process for me over the years has run the gamut from elaborate multi-day summer meetings to small one-on-one conversations with a handful of staff. But in every case, the purpose has never been only about choosing the monthly section topics. It’s also a deeper and broader discussion about “directions.” By that I mean the direction embedded systems technologies are going in—and how it’s impacting you, our readers. Because these technologies change so rapidly, getting a handle on it is a bit like jumping onto a moving train.

A well thought out editorial calendar helps us plan out and select which article topics are most important—for both staff-written and contributed articles. And because we want to include all of the most insightful, in-depth stories we can, we will continue to include a mix of feature articles beyond the monthly calendar topics. Beyond its role for article planning, a magazine’s editorial calendar also makes a statement on what the magazine’s priorities are in terms of technology, application segments and product areas. In our case, it speaks to the kind of magazine that Circuit Cellar is—and what it isn’t.

An awareness of what types of product areas are critical to today’s developers is important. But because Circuit Cellar is not just a generic product magazine, we’re always looking at how various chips, boards and software solutions fit together in a systems context. This applies to our technology trend features as well as our detailed project-based articles that explore a microcontroller-based design in all its interesting detail. On the other hand, Circuit Cellar isn’t an academic style technical journal that’s divorced from any discussion of commercial products. In contrast, we embrace the commercial world enthusiastically. The deluge of new chip, board and software products often help inspire engineers to take a new direction in their system designs. New products serve as key milestones illustrating where technology is trending and at what rate of change.

Part of the discussion—for 2018 especially—is looking at how the definition of a “system” is changing. Driven by Moore’s Law, chip integration has shifted the level of system functionality at the IC, board and box level. We see an FPGA, SoC or microcontroller of today doing what used to require a whole embedded board. In turn, embedded boards can do what once required a box full of slot-card boards. Meanwhile, the high-speed interconnects between those new “system” blocks constantly have to keep those processing elements fed. The new levels of compute density, functionality and networking available today are opening up new options for embedded applications. Highly integrated FPGAs, comprehensive software development tools, high-speed fabric interconnects and turnkey box-level systems are just a few of the players in this story of embedded system evolution.

Finally, one of the most important new realities in embedded design is the emergence of intelligent systems. Using this term in a fairly broad sense, it’s basically now easier than ever to apply high levels of embedded intelligence to any device or system. In some cases, this means adding a 32-bit MCU to an application that never used such technology. At the other extreme are full supercomputing-level AI technologies installed in a small drone or a vehicle. Such systems can meet immense throughput and processing requirements in space-constrained applications handling huge amounts of real-time incoming data. And at both those extremes, there’s connectivity to cloud-based computing analytics that exemplifies the cutting edge of the IoT. In fact, the IoT phenomenon is so important and opportunity rich that we plan to hit it from a variety of angles in 2018.

Those are the kinds of technology discussions that informed our creation of Circuit Cellar’s 2018 Ed Cal. Available now on www.circuitcellar.com, the structure of the calendar has been expanded for 2018 to ensure we cover all the critical embedded technology topics important to today’s engineering professional. Technology changes rapidly, so we invite you to hop on this moving train and ride along with us.

This appears in the November (328) issue of Circuit Cellar magazine

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

Declaration of Embedded Independence

Input Voltage

–Jeff Child, Editor-in-Chief

JeffHeadShot

There’s no doubt that we’re living in an exciting era for embedded systems developers. Readers like you that design and develop embedded systems no longer have to compromise. Most of you probably remember when the processor or microcontroller you chose dictated both the development tools and embedded operating system (OS) you had to use. Today more than ever, there are all kinds of resources available to help you develop prototypes—everything from tools to chips to information resources online. There are inexpensive computing modules available aimed at makers and DIY experts that are also useful for professional engineers working on high-volume end products.

The embedded operating systems market is one particular area where customers no longer have to compromise. That wasn’t always the case. Most people identify the late 90s with the dot-com bubble … and that bubble bursting. But closer to our industry was the embedded Linux start-up bubble. The embedded operating systems market began to see numerous start-ups appearing as “embedded Linux” companies. Since Linux is a free, open-source OS, these companies didn’t sell Linux, but rather provided services to help customers create and support implementations of open-source Linux. But, as often happens with disruptive technology, the establishment then pushed back. The establishment in that case consisted of the commercial “non-open” embedded OS vendors. I recall a lot of great spirited debates at the time—both in print and live during panel discussions at industry trade shows—arguing for and against the very idea of embedded Linux. For my part, I remember them well, having both written some of those articles and sat on those panels myself.

Coinciding with the dot-com bubble bursting, the embedded Linux bubble burst as well. That’s not to say that embedded Linux lost any luster. It continued its upward rise, and remains an incredibly important technology today. Case in point: The Android OS is based on the Linux kernel. What burst was the bubble of embedded Linux start-up companies, from which only a handful of firms survived. What’s interesting is that all the major embedded OS companies shifted to a “let’s not beat them, let’s join them” approach to Linux. In other words, they now provide support for users to develop systems that use Linux alongside their commercial embedded operating systems.

The freedom not to have to compromise in your choices of tools, OSes and systems architectures—all that is a positive evolution for embedded system developers like you. But in my opinion, it’s possible to misinterpret the user-centric model and perhaps declare victory too soon. When you’re developing an embedded system aimed at a professional, commercial application, not everything can be done in DIY mode. There’s value in having the support of sophisticated technology vendors to help you develop and integrate your system. Today’s embedded systems routinely use millions of lines of code, and in most systems these days software running on a processor is what provides most of the functionality. If you develop that software in-house, you need high quality tools to make sure it’s running error-free. And if you outsource some of that embedded software, you have to be sure the vendor is providing a product you can rely on.

The situation is similar on the embedded board-level computing side. Yes, there’s a huge crop of low-cost embedded computer modules available to purchase these days. But not all embedded computing modules are created equal. If you’re developing a system with a long shelf life, what happens when the DRAMs, processors or I/O chips go end-of-life? Is it your problem? Or does the board vendor take on that burden? Have the boards been tested for vibration or temperature so that they can be used in the environment your application requires? You have to weigh the costs versus the kinds of support a vendor provides.

All in all, the trend toward a “no compromises” situation for embedded systems developers is a huge win. But when you get beyond the DIY project level of development, it’s important to keep in mind that the vendor-customer relationship is still a critical part of the system design process. With all that in mind, it’s cool that we can today make a declaration of independence for embedded systems technology. But I’d rather think of it as a declaration of interdependence.

This appears in the October (327) issue of Circuit Cellar magazine

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

SBC is Drop-In Replacement for Raspberry Pi 3 Model B

A Kickstarter project by the Libre Computer Project, code named Le Potato, is designed as a drop-in hardware replacement for the Raspberry Pi 3 Model B and offers faster performance, more memory, lower power, higher I/O throughput, 4K capabilities, open market components, improved media acceleration, removal of vendor locked-in interfaces and Android 7.1 support. This platform uses the latest technologies and is built upon proven, long-term available chips. It is supported by upstream Linux and has a downstream development package based on Linux 4.9 LTS that offers ready-to-go 4K media decoding, 3D acceleration and more.

It can be used to tinker with electronics, teach programming, build media centers, create digital signage solutions, play retro games, establish bi-directional video, and unlock imaginations. It is available in 1 GB and 2 GB configurations.
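For the electronics-tinkering use case, the board’s 40-pin header (listed below) exposes GPIO lines that, on a mainline kernel, are usually driven through the character-device GPIO interface. The sketch below uses the libgpiod v1 Python bindings; the gpiochip name and line offset are assumptions that vary by board, so check them with gpioinfo first.

```python
# Minimal LED-blink sketch over the 40-pin header's GPIO lines using the
# libgpiod v1 Python bindings (the character-device interface favored by
# mainline kernels). The chip name and line offset are assumptions and
# differ between boards, so verify them with `gpioinfo` first.
import time

import gpiod

chip = gpiod.Chip("gpiochip0")          # assumed GPIO controller
line = chip.get_line(17)                # assumed header pin -> line offset
line.request(consumer="blink", type=gpiod.LINE_REQ_DIR_OUT)

try:
    for _ in range(10):
        line.set_value(1)
        time.sleep(0.5)
        line.set_value(0)
        time.sleep(0.5)
finally:
    line.release()
    chip.close()
```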

For connectivity I/O the board provides:

  • HDMI 2.0
  • 4 USB 2.0 Type A
  • RJ45 100Mb Fast Ethernet
  • CVBS
  • Infrared Receiver
  • S/PDIF Header
  • UART Header
  • I2S + ADC Header
  • 40 Pin Header for PWM, I2C, I2S, SPI, GPIO
  • eMMC Daughter Board Connector
  • MicroSD Card Slot with UHS Support

The board features these improvements over Raspberry Pi 3 Model B:

  • 50% Faster CPU and GPU
  • Double RAM Available
  • Lower Power Consumption
  • Better Android 7.1 and Kodi Support
  • Much Better Hardware Accelerated Codec Support
  • 4K UHD with HDR over HDMI 2.0
  • MicroSD Card UHS Support
  • eMMC Daughter Board Support
  • IR Receiver
  • ADC + I2S Headers
  • Non-Shared Bandwidth for LAN and USB

Libre Computer Project | https://libre.computer/

Cloud Platform Supports BeagleBone Black Dev Kit

Anaren IoT Group has announced the release of version 2.1 of its innovative Anaren Atmosphere online development platform. Atmosphere affords embedded, mobile and cloud developers an exceptionally fast way to create IoT applications with an easy-to-use IoT development environment. The new version, Atmosphere 2.1, now offers support for the BeagleBone Black Embedded Linux Development Kit, as well as a new cloud-only project type that allows users to build libraries for C#/.Net, C/C++, and Python to enable connections to their own embedded solutions in Atmosphere Cloud.


As with version 2.0, users of Atmosphere 2.1 are able to simultaneously create and deploy corresponding hosted web applications. All design functions, including cloud visualization, use a drag-and-drop approach that does not require command line coding – although code can be customized if desired. Atmosphere 2.1 also provides access to a large and growing library of sensors and other IoT elements for easy application creation. Atmosphere’s unique approach accelerates design cycles, lowers risk and removes cost from the development process, as no specialized knowledge of embedded hardware coding, mobile application creation or web development is needed.

Atmosphere 2.1 can also host device and sensor data in its cloud-based environment and offers a highly customizable web-based user interface. The Atmosphere Cloud™ hosting option allows each user to host up to five devices at once – free of charge. The Atmosphere toolset is ideal for a variety of developers – from those who are simply looking to record single sensor data to those developing rich, complex device monitoring and control applications.

Anaren IoT | www.anaren.com/iot