Hop on the Moving Train

Input Voltage

–Jeff Child, Editor-in-Chief

We work pretty far in advance to get Circuit Cellar produced and in your hands on time and at the level of quality you expect and deserve. Given that timing, as we go to press on this issue we’re getting into the early days of fall. In my 27 years in the technology magazine business, this part of the year has always included time set aside to finalize next year’s editorial calendar. The process for me over the years has run the gamut from elaborate multi-day summer meetings to small one-on-one conversations with a handful of staff. But in every case, the purpose has never been only about choosing the monthly section topics. It’s also a deeper and broader discussion about “directions.” By that I mean the direction embedded systems technologies are going in—and how that’s impacting you, our readers. Because these technologies change so rapidly, getting a handle on them is a bit like jumping onto a moving train.

A well-thought-out editorial calendar helps us plan and select which article topics are most important—for both staff-written and contributed articles. And because we want to include all of the most insightful, in-depth stories we can, we will continue to run a mix of feature articles beyond the monthly calendar topics. Beyond its role in article planning, a magazine’s editorial calendar also makes a statement about the magazine’s priorities in terms of technology, application segments and product areas. In our case, it speaks to the kind of magazine that Circuit Cellar is—and what it isn’t.

An awareness of what types of product areas are critical to today’s developers is important. But because Circuit Cellar is not just a generic product magazine, we’re always looking at how various chips, boards and software solutions fit together in a systems context. This applies to our technology trend features as well as our detailed project-based articles that explore a microcontroller-based design in all its interesting detail. On the other hand, Circuit Cellar isn’t an academic-style technical journal divorced from any discussion of commercial products. On the contrary, we embrace the commercial world enthusiastically. The deluge of new chip, board and software products often helps inspire engineers to take a new direction in their system designs. New products serve as key milestones illustrating where technology is trending and at what rate of change.

Part of the discussion—for 2018 especially—is looking at how the definition of a “system” is changing. Driven by Moore’s Law, chip integration has shifted system functionality across the IC, board and box levels. We see an FPGA, SoC or microcontroller of today doing what used to require a whole embedded board. In turn, embedded boards can do what once required a box full of slot-card boards. Meanwhile, the high-speed interconnects between those new “system” blocks constantly have to keep those processing elements fed. The new levels of compute density, functionality and networking available today are opening up new options for embedded applications. Highly integrated FPGAs, comprehensive software development tools, high-speed fabric interconnects and turnkey box-level systems are just a few of the players in this story of embedded system evolution.

Finally, one of the most important new realities in embedded design is the emergence of intelligent systems. Using the term in a fairly broad sense, it’s now easier than ever to build high levels of embedded intelligence into any device or system. In some cases, this means adding a 32-bit MCU to an application that never used such technology. At the other extreme are full supercomputing-level AI technologies installed in a small drone or a vehicle. Such systems can meet immense throughput and processing requirements in space-constrained applications handling huge amounts of real-time incoming data. And at both of those extremes, there’s connectivity to cloud-based computing analytics that exemplifies the cutting edge of the IoT. In fact, the IoT phenomenon is so important and opportunity-rich that we plan to hit it from a variety of angles in 2018.

Those are the kinds of technology discussions that informed our creation of Circuit Cellar’s 2018 Ed Cal. Available now on www.circuitcellar.com, the calendar has been expanded for 2018 to ensure we cover all the critical embedded technology topics important to today’s engineering professional. Technology changes rapidly, so we invite you to hop on this moving train and ride along with us.

This appears in the November (328) issue of Circuit Cellar magazine

Declaration of Embedded Independence

Input Voltage

–Jeff Child, Editor-in-Chief

There’s no doubt that we’re living in an exciting era for embedded systems developers. Readers like you who design and develop embedded systems no longer have to compromise. Most of you probably remember when the processor or microcontroller you chose dictated both the development tools and embedded operating system (OS) you had to use. Today more than ever, there are all kinds of resources available to help you develop prototypes—everything from tools to chips to information resources online. There are inexpensive computing modules aimed at makers and DIY experts that are also useful for professional engineers working on high-volume end products.

The embedded operating systems market is one particular area where customers no longer have to compromise. That wasn’t always the case. Most people identify the late 90s with the dot-com bubble … and that bubble bursting. But closer to our industry was the embedded Linux start-up bubble. The embedded operating systems market began to see numerous start-ups appearing as “embedded Linux” companies. Since Linux is a free, open-source OS, these companies didn’t sell Linux, but rather provided services to help customers create and support implementations of open-source Linux. But, as often happens with disruptive technology, the establishment then pushed back. The establishment in that case was the commercial “non-open” embedded OS vendors. I recall a lot of spirited debates at the time—both in print and live during panel discussions at industry trade shows—arguing for and against the very idea of embedded Linux. I remember them well, having both written some of those articles and sat on some of those panels myself.

Coinciding with the dot-com bubble bursting, the embedded Linux bubble burst as well. That’s not to say that embedded Linux lost any luster. It continued its upward rise, and remains an incredibly important technology today. Case in point: The Android OS is based on the Linux kernel. What burst was the bubble of embedded Linux start-up companies, from which only a handful of firms survived. What’s interesting is that all the major embedded OS companies shifted to a “let’s not beat them, let’s join them” approach to Linux. In other words, they now provide support for users to develop systems that use Linux alongside their commercial embedded operating systems.

The freedom not to have to compromise in your choices of tools, OSes and system architectures—all that is a positive evolution for embedded system developers like you. But in my opinion, it’s possible to misinterpret the user-centric model and perhaps declare victory too soon. When you’re developing an embedded system aimed at a professional, commercial application, not everything can be done in DIY mode. There’s value in having the support of sophisticated technology vendors to help you develop and integrate your system. Today’s embedded systems routinely use millions of lines of code, and in most systems these days software running on a processor provides most of the functionality. If you develop that software in-house, you need high-quality tools to make sure it’s running error-free. And if you outsource some of that embedded software, you have to be sure its vendor is providing a product you can rely on.

The situation is similar on the embedded board-level computing side. Yes, there’s a huge crop of low-cost embedded computer modules available to purchase these days. But not all embedded computing modules are created equal. If you’re developing a system with a long shelf life, what happens when the DRAMs, processors or I/O chips go end-of-life? Is it your problem? Or does the board vendor take on that burden? Have the boards been tested for vibration or temperature so that they can be used in the environment your application requires? You have to weigh the costs versus the kinds of support a vendor provides.

All in all, the trend toward a “no compromises” situation for embedded systems developers is a huge win. But when you get beyond the DIY project level of development, it’s important to keep in mind that the vendor-customer relationship is still a critical part of the system design process. With all that in mind, it’s cool that we can today make a declaration of independence for embedded systems technology. But I’d rather think of it as a declaration of interdependence.

This appears in the October (327) issue of Circuit Cellar magazine

SBC is Drop-In Replacement for Raspberry Pi 3 Model B

A Kickstarter project by the Libre Computer Project, code named Le Potato, is designed as a drop-in hardware replacement for the Raspberry Pi 3 Model B. It offers faster performance, more memory, lower power, higher I/O throughput, 4K capabilities, open-market components, improved media acceleration, removal of vendor-locked interfaces and Android 7.1 support. The platform uses the latest technologies and is built on proven, long-term-available chips. It is supported by upstream Linux and has a downstream development package based on Linux 4.9 LTS that offers ready-to-go 4K media decoding, 3D acceleration and more.

It can be used to tinker with electronics, teach programming, build media centers, create digital signage solutions, play retro games, establish bi-directional video, and unlock imaginations. It is available in 1 GB and 2 GB configurations.

For connectivity I/O the board provides:

  • HDMI 2.0
  • 4 USB 2.0 Type A
  • RJ45 100Mb Fast Ethernet
  • CVBS
  • Infrared Receiver
  • S/PDIF Header
  • UART Header
  • I2S + ADC Header
  • 40 Pin Header for PWM, I2C, I2S, SPI, GPIO
  • eMMC Daughter Board Connector
  • MicroSD Card Slot with UHS Support

The board features these improvements over Raspberry Pi 3 Model B:

  • 50% Faster CPU and GPU
  • Double RAM Available
  • Lower Power Consumption
  • Better Android 7.1 and Kodi Support
  • Much Better Hardware Accelerated Codec Support
  • 4K UHD with HDR over HDMI 2.0
  • MicroSD Card UHS Support
  • eMMC Daughter Board Support
  • IR Receiver
  • ADC + I2S Headers
  • Non-Shared Bandwidth for LAN and USB

Libre Computer Project | https://libre.computer/

Cloud Platform Supports BeagleBone Black Dev Kit

Anaren IoT Group has announced the release of version 2.1 of its Anaren Atmosphere online development platform. Atmosphere gives embedded, mobile and cloud developers an exceptionally fast way to create IoT applications in an easy-to-use IoT development environment. Atmosphere 2.1 now offers support for the BeagleBone Black Embedded Linux Development Kit, as well as a new cloud-only project type that allows users to build libraries for C#/.NET, C/C++ and Python to enable connections to their own embedded solutions in Atmosphere Cloud.

As with version 2.0, users of Atmosphere 2.1 can simultaneously create and deploy corresponding hosted web applications. All design functions, including cloud visualization, use a drag-and-drop approach that does not require command-line coding, although code can be customized if desired. Atmosphere 2.1 also provides access to a large and growing library of sensors and other IoT elements for easy application creation. Atmosphere’s approach accelerates design cycles, lowers risk and removes cost from the development process, since no specialized knowledge of embedded hardware coding, mobile application creation or web development is needed.

Atmosphere 2.1 can also host device and sensor data in its cloud-based environment and offers a highly customizable web-based user interface. The Atmosphere Cloud™ hosting option allows each user to host up to five devices at once – free of charge. The Atmosphere toolset is ideal for a variety of developers – from those who are simply looking to record single sensor data to those developing rich, complex device monitoring and control applications.

Anaren IoT | www.anaren.com/iot

Software Targets Data Acq for Desktop Python under Linux

Microstar Laboratories has released DAPtools for Python software, an API that enables high-performance data acquisition applications using the Python programming language on desktop GNU/Linux systems. This is not a reduced or specialized language variant—it supports the complete, full-featured Python environment and complements the Accel64 for Linux software that provides access to DAP board features and functions. Typical applications are one-time diagnostic tests, academic research, and automatically-configurable scripting for test automation.

The DAPIO programming interface behind DAPtools for Python provides the same stable DAPL system services that all other high-level programming environments have used over the last 20 years. Access to that interface is through a Linux dynamic library, which Python applications can load and access using the ctypes library. DAPtools for Python presents the low-level interface as a simple “interface object” and some utility functions to make the DAP board interactions work like familiar Python objects and functions. The programming is a lot like connecting to a networked resource: open a connection, specify the data acquisition actions required, run the configuration, take the requested data, and close the connection when finished.

Microstar Laboratories | www.mstarlabs.com

The Most Technical

Input Voltage

–Jeff Child, Editor-in-Chief

It is truly a thrill and an honor for me to be joining the Circuit Cellar team as the magazine’s new Editor-in-Chief. And in this—my first editorial in my new role—I want to seize the opportunity to talk about Circuit Cellar. A lot of factors attracted me to this publication, but in a nutshell, its position in the marketplace is compelling. It intersects with two converging trends happening in technology today.

First, there’s the phenomenon of the rich set of tools, chips and information resources available today. They put more power into the hands of makers and electronics DIY experts than ever before. You’ve got hardware such as Arduino and Raspberry Pi. Open-source software ranging from Linux to Eclipse makes integrating and developing software easier than ever. And porting back and forth between open-source software and commercial embedded software is no longer prohibitive now that commercial software vendors are in a “join them, not beat them” phase of their thinking. Easy access has even reached processors, thanks to the emergence of RISC-V, for example. Meanwhile, powerful FPGA chips enable developers to use one chip where an entire board or box was previously required.

The second big trend is how system-level chip technologies—like SoC-style processors and the FPGAs I just mentioned—are enabling some of the most game-changing applications driving today’s markets, including commercial drones, driverless cars, the Internet of Things (IoT), robotics, mobile devices and more. This means that these exciting new markets are attracting not just big corporations looking for a high-volume play, but also small start-up vendors looking to find their own niche within those market areas. And there are a lot of compelling opportunities in those spaces. Ideas that start as small embedded systems projects can—and do—blossom into lucrative new enterprises.

What’s so exciting is that Circuit Cellar readers are at the center of both of those trends. There’s a particular character this magazine has that separates it from other technology magazines. There are a variety of long-established publications that cover electronics and whose stated missions are to serve engineers. I’ve worked for some of them, and they all have their strengths. But you can tell just by looking at the features and columns of Circuit Cellar that we don’t hold back or curtail our stories when it comes to technical depth. We get right down to the bits and bytes and lines of code. Our readers are engineers and academics who want to know not only the rich details of a microcontroller’s on-board peripherals, but also how other like-minded geeks applied that technology to their DIY or commercial projects. They want to know whether the DC-DC converter they are considering has a wide enough input voltage range to serve their needs.

Another cool thing for me about Circuit Cellar is the magazine’s origin story. Back when I was in high school and in my early days studying Computer Science in college, Steve Ciarcia had a popular column called Circuit Cellar in BYTE magazine. I was a huge fan of BYTE. I would take my issue and bring it to a coffee shop and read it intently. (Mind you this was pre-Internet. Coffee shops didn’t have Wi-Fi.) What I appreciated most about BYTE was that it had far more technical depth than the likes of PC World and PC Computing. I felt like it was aimed at a person with a technical bent like myself. When Steve later went on to found this magazine—nearly 30 years ago—he gave it the Circuit Cellar name but he also maintained that unique level of technical depth that entices engineers.

With all that in mind, I plan to uphold the stature and legacy in the electronics industry that I and all of you have long admired about Circuit Cellar. We will work to continue being the Most Technical information resource for professional engineers, academics, and other electronics specialists world-wide. Meanwhile, you can look forward to expanded coverage of those exciting market-spaces I discussed earlier. Those new applications really exemplify how embedded computing technology is changing the world. Let’s have some fun.

Renesas Expands HMI Support with RZ/G MPUs

Renesas has expanded its RZ microprocessor (MPU) Family to support the growing range of human-machine interface (HMI)- and vision-based systems, with performance scalability from entry-level to highly complex embedded applications. The new RZ/G1C MPUs from the Renesas RZ/G Series enable rapid development of high-performance HMI applications and support 3D graphics with full high-definition (FHD) video. The RZ/G1C is especially optimized for Linux-based application development.

The Renesas RZ/G MPU Series lets system manufacturers right-size their processor selection to support current and next-generation connected devices, ranging from home appliances with touch-based displays to industrial equipment with integrated embedded vision-equipped HMI that enables image recognition and artificial intelligence.

About the RZ/G1C MPUs

Based on the power-efficient ARM® Cortex-A7 CPU, the RZ/G1C offers a balance of performance and power for connected HMI-based systems. Support for multiple interfaces, including USB and Gigabit Ethernet (GbE), and full pin compatibility between parts provides customers the flexibility to scale up or down the RZ Family to address current and future embedded development needs. The new MPU features a PowerVR SGX531 3D graphics engine and an FHD H.264 video codec to support video encoding and decoding. Additionally, the RZ/G1C offers one analog and two digital camera inputs to facilitate embedded vision and other video applications.

The RZ/G1C MPU is offered in single- and dual-core varieties, and brings a number of advantages over competing technology. For instance, the RZ/G1C can be designed onto a four-layer board to minimize cost and PCB design complexity; competing parts typically require a minimum of six to eight layers. Moreover, no special power sequencing or power management IC (PMIC) is needed on the board, which reduces bill of materials (BOM) costs, streamlines manufacturing, and simplifies board bring-up.

Samples of the RZ/G1C MPUs are available now. The RZ/G1C is available in dual core or single core depending on customer requirements. Mass production will begin in December 2017.

Renesas Electronics America | www.renesas.com

Accel32 for Linux Software Supports 4.xx Kernel

Microstar Laboratories recently released version 3.00 of the Accel32 for Linux software. The software compiles a Loadable Kernel Module (LKM) for the GNU/Linux system, extending capabilities for control of the Data Acquisition Processor (DAP) boards to systems using GNU/Linux operating systems with kernel versions in the 4.xx series.

Real-time acquisition on generic platforms: Accel32 for Linux v.3.0 supports GNU/Linux 4.xx kernels. Penguin: Julien Tromeur/Shutterstock.com

Accel32 for Linux is offered under the BSD license for free download. DAP boards provide an Intel x86-family embedded processor to support operation of the embedded DAPL 2000 system and data acquisition hardware devices. The DAPL 2000 system is part of the DAPtools software, which Microstar Laboratories provides for free for operating the DAP boards. The DAPL 2000 system provides the configuration scripting and the multitasking real-time control of data acquisition hardware devices. A host system must provide PCI or PCI-X (extended) I/O bus slots to host the DAP boards. This software runs under 32-bit versions of the GNU/Linux system, which you can install on 32- or 64-bit hardware platforms.

Source: Microstar Laboratories

STMicroelectronics Offers Free Dev Tools to Linux Users

STMicroelectronics now offers free high-productivity tools to Linux users interested in working with STM32 microcontrollers. The STM32CubeMX configurator and initialization tool and the System Workbench for STM32—which is an IDE created by Ac6 Tools and supported by the openSTM32.org community—are now both available to run on Linux OS. Thus, Linux users can work on embedded projects with STM32 devices without leaving their favorite desktop environment.

System Workbench for STM32 supports the ST-LINK/V2 debugging tool under Linux through an adapted version of the OpenOCD community project. You can use the tools with ST hardware such as STM32 Nucleo boards, Discovery kits and Evaluation boards, as well as with microcontroller firmware from the STM32Cube embedded-software packages or the Standard Peripheral Library.

Source: STMicroelectronics

Microchip Joins Linux Foundation & Automotive Grade Linux

Microchip Technology recently announced that it has joined The Linux Foundation and Automotive Grade Linux (AGL), an open-source project developing a common, Linux-based software stack for the connected car. Additionally, Microchip has begun enabling designers to use the Linux operating system with its portfolio of MOST network interface controllers.

AGL was built on top of a stable Linux stack that is already being used in embedded and mobile devices. The combination of MOST technology and Linux provides a solution for the increasing complexity of in-vehicle-infotainment (IVI) and advanced-driver-assistance systems (ADAS).

The MOST network technology is a time-division-multiplexing (TDM) network that transports different data types on separate channels with low latency and high quality of service. Microchip’s MOST network interface controllers offer separate hardware interfaces for different data types. In addition to the straight streaming of audio or video data via dedicated hardware interfaces, Microchip’s new Linux driver enables easy and harmonized access to all data types. Besides IP-based communication over the standard Linux networking stack, all MOST network data types are accessible via the regular device nodes of the Linux Virtual File System (VFS). Additionally, high-quality, multi-channel synchronous audio data can be delivered seamlessly through the Advanced Linux Sound Architecture (ALSA) subsystem.

Support is currently available for beta customers. The full version is expected for broad release in October.

Source: Microchip Technology

Embedded SOM with Linux-Based RTOS

National Instruments has introduced an embedded system-on-module (SOM) development board with an integrated Linux-based real-time operating system (RTOS).

Processing power in the 2” x 3” SOM comes from a Xilinx Zynq-7020 All Programmable SoC running a dual-core ARM Cortex-A9 at 667 MHz. A built-in, low-power Artix-7 FPGA fabric offers 160 single-ended I/Os, and the SoC’s dedicated processor I/O includes Gigabit Ethernet, USB 2.0 host, USB 2.0 host/device, SDHC, RS-232, and Tx/Rx. The SOM’s power requirements are typically 3 W to 5 W.

The SOM integrates a validated board support package (BSP) and device drivers together with the National Instruments Linux real-time OS. The SOM board is supplied with a full suite of middleware for developing an embedded OS, custom software drivers, and other common software components.

The LabVIEW FPGA graphical development platform eliminates the need for hardware description language expertise in the design flow.

[Via Elektor]

 

Linux System Configuration (Part 1)

In Circuit Cellar’s June issue, Bob Japenga, in his Embedded in Thin Slices column, launches a series of articles on Linux system configuration. Part 1 of the series focuses on configuring the Linux kernel. “Linux kernels have hundreds of parameters you can configure for your specific application,” he says.

Part 1 is meant to help designers of embedded systems plan ahead. “Many of the options I discuss cost little in terms of memory and real-time usage,” Japenga says in Part 1. “This article will examine the kinds of features that can be configured to help you think about these things during your system design. At a minimum, it is important for you to know what features you have configured if you are using an off-the-shelf Linux kernel or a Linux kernel from a reference design. Of course, as always, I’ll examine this only in thin slices.”

In the following excerpt from Part 1, Japenga explains why it’s important to be able to configure the kernel. (You can read the full article in the June issue, available online for single-issue purchase or membership download.)

Why Configure the Kernel?
Certainly if you are designing a board from scratch you will need to know how to configure and build the Linux kernel. However, most of us don’t build a system from scratch. If we are building our own board, we still use some sort of reference design provided by the microprocessor manufacturer. My company thinks these are awesome. The reference designs usually come with a prebuilt kernel and file system.

Even if you use a reference design, you almost always change something. You use different memory chips, physical layers (PHY), or real-time clocks (RTCs). In those cases, you need to configure the kernel to add support for these hardware devices. If you are fortunate enough to use the same hardware, the reference design’s kernel may have unnecessary features and you are trying to reduce the memory footprint (which is needed not just because of your on-board memory but also because of the over-the-air costs of updating, as I mentioned in the introduction). Or, the reference design’s kernel may not have all of the software features you want.

For example, imagine you are using an off-the-shelf Linux board (e.g., a Raspberry Pi or BeagleBoard.org’s BeagleBone). It comes with everything you need, right? Not necessarily. As with the reference design, it may use too many resources and you want to trim it, or it may not have some features you want. So, whether you are using a reference design or an off-the-shelf single-board computer (SBC), you need to be able to configure the kernel.

Linux Kernel Configuration
Many things about the Linux kernel can be tweaked in real time. (This is the subject of a future article series.) However, some options (e.g., handling Sleep mode and support for new hardware) require a separate compilation and kernel build. The Linux kernel is written in the C programming language, which supports code that can be conditionally compiled into the software through what is called a preprocessor #define.

A #define is associated with each configurable feature. Configuring the kernel involves selecting the features you want with the associated #define, recompiling, and rebuilding the kernel.

Okay, I said I wasn’t going to tell you how to configure the Linux kernel, but here is a thin slice: One file contains all the #defines. Certainly, one could edit that file. But the classic way is to invoke menuconfig. Generally you would use the make ARCH=arm menuconfig command to identify the specific architecture.
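Here is a minimal sketch of that configure-and-rebuild cycle for an ARM target. The cross-compiler prefix, the build targets and the specific CONFIG_ option (an RTC driver, picking up the RTC example mentioned earlier) are illustrative and will vary with your toolchain and board:

    make ARCH=arm menuconfig
    grep CONFIG_RTC_DRV_DS1307 .config          # confirm the feature you selected landed in .config
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j4 zImage modules dtbs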

There are other ways to configure the kernel—such as xconfig (Qt based), gconfig (GTK+ based), and nconfig (ncurses based)—that are graphical and purport to be a little more user-friendly. We have not found anything unfriendly about the classic method. In fact, since it is terminal-based, it works well when we remotely log in to the device.

Photo 1—This opening screen includes well-grouped options for easy menu navigation.

Photo 1 shows the opening screen for one of our configurations. The options are reasonably well grouped to enable you to navigate the menus. Most importantly, the mutual dependencies of the #defines are built into the tool. Thus if you choose a feature that requires another to be enabled, that feature will also automatically be selected.

In addition to the out-of-the-box version, you can easily tailor all the configuration tools if you are adding your own drivers or drivers you obtain from a chip supplier. This means you can create your own unique menus and help system. It is so simple that I will leave it to you to find out how to do this. The structure is defined as Kconfig, for kernel configuration.

Flexible I/O Expansion for Rugged Applications

The SBC35-CC405 series of multi-core embedded PCs includes on-board USB, gigabit Ethernet, and serial ports. These industrial computers are designed for rugged embedded applications requiring extended temperature operation and long-term availability.

The SBC35-CC405 series features the latest generation Intel Atom E3800 family of processors in an industry-standard 3.5” single-board computer (SBC) format COM Express carrier. A Type 6 COM Express module supporting a quad-, dual-, or single-core processor is used to integrate the computer. For networking and communications, the SBC35-CC405 includes two Intel I210 gigabit Ethernet controllers with IEEE 1588 timestamping and 10-/100-/1,000-Mbps multispeed operation. Four Type-A connectors support three USB 2.0 channels and one high-speed USB 3.0 channel. Two serial ports support RS-232/-422/-485 interface levels with clock options up to 20 Mbps in the RS-422/-485 mode and up to 1 Mbps in the RS-232 mode.

The SBC35-CC405 series also includes two MiniPCIe connectors and one IO60 connector to enable additional I/O expansion. Both MiniPCIe connectors support half-length and full-length cards with screw-down mounting for improved shock and vibration durability. One MiniPCIe connector also supports bootable mSATA solid-state disks while the other connector includes USB. The IO60 connector provides access to the I2C, SPI, PWM, and UART signals enabling a simple interface to sensors, data acquisition, and other low-speed I/O devices.

The SBC35-CC405 runs over a 10-to-50-VDC input power range and operates at temperatures from –40°C to 85°C. Enclosures, power supplies, and configuration services are also available.

Linux, Windows, and other x86 OSes can be booted from the CFast, mSATA, SATA, or USB interfaces, providing flexible data storage options. WinSystems provides drivers for Linux and Windows 7/8 as well as preconfigured embedded OSes.
The single-core SBC35-CC405 costs $499.

WinSystems, Inc.
www.winsystems.com

Low-Power Micromodule

The ECM-DX2 is a highly integrated, low-power-consumption micromodule. Its fanless operation and extended-temperature capability are enabled by the DMP Vortex86DX2 system-on-a-chip (SoC) CPU. The micromodule is targeted at industrial automation, transportation/vehicle construction, and aviation applications.
The ECM-DX2 withstands industrial operating environments with temperatures from –40°C to 75°C and supports 12-to-26-V input. Multiple OSes, including Windows 2000/XP and Linux, can be used in a variety of embedded designs.

The micromodule includes on-board DDR2 memory (32-bit, up to 1 GB) and supports single-channel 24-bit low-voltage differential signaling (LVDS) as well as VGA + LVDS or VGA + TTL multi-display configurations. The I/O deployment includes one SATA II interface, four COM ports, two USB 2.0 ports, 8-bit general-purpose input/output (GPIO), two Ethernet ports, and one PS/2 connector for a keyboard and a mouse. The ECM-DX2 also provides a PC/104 expansion slot and one MiniPCIe card slot.

Contact Avalue Technology for pricing.

Avalue Technology, Inc.
www.avalue.com.tw

Specialized Linux File Systems

Since Linux was released in 1991, it has become the operating system for “98% of the world’s supercomputers, most of the servers powering the Internet, the majority of financial trades worldwide, and tens of millions of Android mobile phones and consumer devices,” according to the Linux Foundation. “In short, Linux is everywhere.”

Linux offers a variety of file systems that are relatively easy to implement. Circuit Cellar columnist Bob Japenga, co-founder of MicroTools, writes about these specialized Linux file systems as part of his ongoing series examining embedded file systems. His latest article, which also discusses the helpful Samba networking protocol, appears in the magazine’s April issue.

The following article excerpts introduce the file systems and when they should be used. For more details, including instructions on how to use these file systems and the Samba protocol, refer to Japenga’s full article in the April issue.

CRAMFS
What It Is—Our systems demand more and more memory (or file space) and a compressed read-only file system (CRAMFS) can be a useful solution in some instances.

CRAMFS is an open-source file system available for Linux. I am not sure where CRAMFS gets its name. Perhaps it gets its name because CRAMFS is one way to cram your file system into a smaller footprint. The files are compressed one page at a time using the built-in zlib compression to enable random access of all of the files. This makes CRAMFS relatively fast. The file metadata (e.g., information about when the file was created, read and write privileges, etc.) is not compressed but uses a more compact notation than is present in most file systems.

When to Use It—The primary reason my company has used CRAMFS is to cut down on the flash memory used by the file system. The first embedded Linux system we worked on had 16 MB of RAM and 32 MB of flash. There was a systems-level requirement to provide a means for the system to recover should the primary partition become corrupt or fail to boot in any way. (Refer to Part 3 of this article series “Designing Robust Flash Memory Systems,” Circuit Cellar 283, 2014, for more detail.) We met this requirement by creating a backup partition that used CRAMFS.

The backup partition’s only function was to enable the box to recover from a corrupted primary partition… We were able to have the two file systems identical in file content, which made it easy to maintain. Using CRAMFS enabled us to cut our backup file system space requirements in half.

A second feature of CRAMFS is its read-only nature. Given that it is read-only, it does not require wear leveling. This keeps the overhead for using CRAMFS files very low. Due to the typical data retention of flash memory, this also means that for products that will be deployed for more than 10 years, you will need to rewrite the CRAMFS partition every three to five years…
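As a rough sketch of how such a read-only backup image can be produced and checked on a development host (the directory, image and mount-point names here are illustrative), the standard tools look like this:

    mkfs.cramfs backup-rootfs/ backup.cramfs             # build a compressed, read-only image from a staged directory tree
    mount -o loop -t cramfs backup.cramfs /mnt/backup    # loop-mount the image to verify its contents

On the target, the resulting image is then written into its own flash partition using whatever flash-programming flow the board already has.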

RAM FILE SYSTEMS
What It Is—Linux provides two types of RAM file systems: ramfs and tmpfs. Both are full-featured file systems that reside in RAM and are thus very fast and volatile (i.e., the data is not retained across power outages and system restarts).

When the file systems are created with the mount command, you specify the ramfs size. However, it can grow in size to exceed that amount of RAM. Thus ramfs will enable you to use your entire RAM and not provide you with any warning that it is doing it. tmpfs does not enable you to write more than the space allocated at mount time. An error is returned when you try to use more than you have allocated. Another difference is that tmpfs uses swap space and can swap seldom used files out to a flash drive. ramfs does not use swapping. This difference is of little value to us since we disable swapping in our embedded systems.

When to Use It—Speed is one of the primary reasons to use a RAM file system. Disk writes are lightning fast when you have a RAM disk. We have used a RAM file system when we are recording a burst of high-speed data. In the background, we write the data out to flash.

A second reason to use a RAM file system is that it reduces the wear and tear on the flash file system, which has a limited write life. We make it a rule that all temporary files should be kept on the RAM disk. We also use it for temporary variables that are needed across threads/processes.
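To make the difference concrete, here is a minimal sketch of mounting both types (the mount points and size cap are illustrative, and the directories must already exist):

    mount -t tmpfs -o size=16m tmpfs /var/capture    # capped at 16 MB; writes beyond that return an error
    mount -t ramfs ramfs /mnt/scratch                # no enforced limit; it grows until RAM is exhausted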

Figure 1: An example of a network file system is shown.



NETWORK FILE SYSTEM (NFS)
What It Is—In the early 1990s I started working with a company that developed embedded controllers for machine control. These controllers had a user interface that consisted of a PC located on the factory floor. The company called this the production line console (PLC). The factory floor was hot, very dirty, and had a lot of vibration. The company had designed a control room console (CRC) that networked together several PLCs. The CRC was located in a clean and cool environment. The PLC and the CRC were running QNX and the PLC was diskless. The PLC booted from and stored all of its data on the CRC (see Figure 1).

This was my first exposure to a Network File System (NFS). It was simple and easy to configure and worked flawlessly. The PLCs could only access their “file system.” The CRC could access any PLC’s files.

QNX was able to do this using the NFS protocol. NFS is a protocol developed initially by Sun Microsystems (which is now owned by Oracle). Early in its lifetime, Sun turned the specification into an open standard, which was quickly implemented in Unix and its derivatives (e.g., Linux and QNX).

When to Use It—One obvious usage of NFS is for environments where a hard drive cannot easily survive, as shown in my earlier example. However, my example was before flash file systems became inexpensive and reliable so that is not a typical use for today.
Another use for NFS would be to simplify software updates. All of the software could be placed in one central location. Each individual controller would obtain the new software once the central location was updated.

The major area in which we use NFS today is during software development. Even though flash file systems are fast and new versions of your code can be seamlessly written to flash, it can be time consuming. For example, you can use a flash memory stick over USB to update the flash file system on several of our designs. This is simple but can take anywhere from several seconds to minutes.

With NFS, all of your development tools can be on a PC and you never have to transfer the target code to the target system. You use all of your PC tools to change the file on your PC, and when the embedded device boots up or the application is restarted, those changed files will be used on the device.
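A minimal sketch of that development setup (the host address, export path and mount point are illustrative) is an export on the development PC plus a mount on the target:

    # On the development PC, add a line like this to /etc/exports:
    #   /srv/nfs/target  192.168.1.0/24(rw,sync,no_root_squash)
    exportfs -ra                                         # re-export after editing /etc/exports
    # On the embedded target:
    mount -t nfs 192.168.1.10:/srv/nfs/target /mnt/nfs   # files changed on the PC are immediately visible here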

SAMBA
What It Is—Although we don’t like to admit it, many of us still have Windows machines on our desks and on our laptops. And many of us are attached to some good development tools on our Windows machines.

Samba is not exactly a file system but rather a file system/networking protocol that enables you to write to your embedded system’s file system from your Windows machine as if it were a Windows file system. Samba can also be used to access your embedded system’s files from other OSes that support the SMB/CIFS networking protocol.

When to Use It—Although I primarily see Samba, like NFS, as a development tool, you could certainly use it in an environment where you need to talk to your embedded device from a Windows machine. We have never had a need for this, but I can imagine it being useful in certain scenarios. The Linux community documents many uses of Samba on embedded Linux systems.
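As a minimal sketch (the share name, path and addresses are illustrative), the embedded target exports a directory through smb.conf, and any SMB/CIFS client, Windows or otherwise, can then browse it:

    # On the target, add a share definition to /etc/samba/smb.conf:
    #   [projects]
    #       path = /home/projects
    #       read only = no
    testparm                                     # sanity-check the smb.conf syntax
    systemctl restart smbd                       # or the equivalent init script on your target
    smbclient -L //192.168.1.20 -U developer     # list the target's shares from another machine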