VectorCAST Test Automation Platform Integration with Lauterbach TRACE32

Vector Software has announced an advanced integration with the Lauterbach TRACE32 product. VectorCAST now enables development, test, and certification teams to set and continuously collect practically unlimited volumes of test data from RAM-constrained embedded systems.

This latest integration benefits customers in aerospace, defense, automotive, medical devices, industrial control, and commercial environments where software quality and industry compliance are critical. Vector Software’s development team worked closely with Lauterbach and its TRACE32 implementation of IEEE Std 1149.1-1990 (JTAG) and IEEE-ISTO 5001-2003 (Nexus).

VectorCAST’s Unit Test Automation and Structural Code Coverage is data driven, making it the only Test Automation Platform capable of minimizing the on-target memory requirement for all aspects of test case data where cycles of 100,000 to 1 million or more are required. This combined VectorCAST and TRACE32 capability extends across the supported hardware, from microprocessors (from companies such as ARM, Imagination Technologies MIPS, Freescale/NXP, Fujitsu, Intel, and AMD) to microcontrollers (from companies such as TI, including Hercules, STMicro, and more).

Source: Lauterbach

Specialized Linux File Systems

Since Linux was released in 1991, it has become the operating system for “98% of the world’s super computers, most of the servers powering the Internet, the majority of financial trades worldwide, and tens of millions of Android mobile phones and consumer devices,” according to the Linux Foundation. “In short, Linux is everywhere.”

Linux offers a variety of file systems that are relatively easy to implement. Circuit Cellar columnist Bob Japenga, co-founder of MicroTools, writes about these specialized Linux file systems as part of his ongoing series examining embedded file systems. His latest article, which also discusses the helpful Samba networking protocol, appears in the magazine’s April issue.

The following article excerpts introduce the file systems and when they should be used. For more details, including instructions on how to use these file systems and the Samba protocol, refer to Japenga’s full article in the April issue.

What It Is—Our systems demand more and more memory (or file space) and a compressed read-only file system (CRAMFS) can be a useful solution in some instances.

CRAMFS is an open-source file system available for Linux. I am not sure where CRAMFS gets its name; perhaps because CRAMFS is one way to cram your file system into a smaller footprint. The files are compressed one page at a time using the built-in zlib compression to enable random access to all of the files. This makes CRAMFS relatively fast. The file metadata (e.g., information about when the file was created, read and write privileges, etc.) is not compressed but uses a more compact notation than is present in most file systems.
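As a hedged sketch of how such an image is built and deployed (the directory layout and device names below are assumptions for illustration, and the root-only steps are left as comments):

```shell
# Stage the files that will make up the read-only backup partition.
mkdir -p /tmp/backup-rootfs/etc
echo "recovery image" > /tmp/backup-rootfs/etc/issue

# Build the compressed image; mkcramfs ships with cramfs-tools on most
# distributions. Each file is zlib-compressed a page at a time, so random
# access stays cheap:
#   mkcramfs /tmp/backup-rootfs backup.cramfs
#
# Write the image to the backup flash partition (root required;
# /dev/mtdblock3 is an example device name), then mount it read-only:
#   dd if=backup.cramfs of=/dev/mtdblock3
#   mount -t cramfs -o ro /dev/mtdblock3 /mnt/backup
```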

When to Use It—The primary reason my company has used CRAMFS is to cut down on the flash memory used by the file system. The first embedded Linux system we worked on had 16 MB of RAM and 32 MB of flash. There was a systems-level requirement to provide a means for the system to recover should the primary partition become corrupt or fail to boot in any way. (Refer to Part 3 of this article series “Designing Robust Flash Memory Systems,” Circuit Cellar 283, 2014, for more detail.) We met this requirement by creating a backup partition that used CRAMFS.

The backup partition’s only function was to enable the box to recover from a corrupted primary partition… We were able to have the two file systems identical in file content, which made it easy to maintain. Using CRAMFS enabled us to cut our backup file system space requirements in half.

A second feature of CRAMFS is its read-only nature. Given that it is read-only, it does not require wear leveling. This keeps the overhead for using CRAMFS files very low. Due to the typical data retention of flash memory, this also means that for products that will be deployed for more than 10 years, you will need to rewrite the CRAMFS partition every three to five years…

What It Is—Linux provides two types of RAM file systems: ramfs and tmpfs. Both are full-featured file systems that reside in RAM and are thus very fast and volatile (i.e., the data is not retained across power outages and system restarts).

When the file systems are created with the mount command, you specify the ramfs size. However, it can grow to exceed that amount of RAM. Thus ramfs will let you consume your entire RAM without providing any warning that it is doing so. tmpfs does not enable you to write more than the space allocated at mount time; an error is returned when you try to use more than you have allocated. Another difference is that tmpfs uses swap space and can swap seldom-used files out to a flash drive. ramfs does not use swapping. This difference is of little value to us since we disable swapping in our embedded systems.
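To make the ramfs/tmpfs difference concrete, here is a minimal sketch (the mount points and sizes are examples; the mount commands themselves need root, so the runnable part uses the tmpfs that most Linux systems already mount at /dev/shm):

```shell
# tmpfs enforces its size= limit: writes past 8 MB fail with ENOSPC.
#   mount -t tmpfs -o size=8M tmpfs /mnt/scratch
# ramfs ignores size= and can silently grow to consume all available RAM:
#   mount -t ramfs ramfs /mnt/scratch

# Demonstrate a fast RAM-backed write using the existing tmpfs at /dev/shm.
f=/dev/shm/burst-$$.dat
dd if=/dev/zero of="$f" bs=1024 count=16 2>/dev/null
ls -l "$f"
rm -f "$f"
```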

When to Use It—Speed is one of the primary reasons to use a RAM file system. Disk writes are lightning fast when you have a RAM disk. We have used a RAM file system when we are recording a burst of high-speed data. In the background, we write the data out to flash.

A second reason to use a RAM file system is that it reduces the wear and tear on the flash file system, which has a limited write life. We make it a rule that all temporary files should be kept on the RAM disk. We also use it for temporary variables that are needed across threads/processes.

Figure 1: An example of a network file system is shown.

What It Is—In the early 1990s I started working with a company that developed embedded controllers for machine control. These controllers had a user interface that consisted of a PC located on the factory floor. The company called this the production line console (PLC). The factory floor was hot, very dirty, and had a lot of vibration. The company had designed a control room console (CRC) that networked together several PLCs. The CRC was located in a clean and cool environment. The PLC and the CRC were running QNX and the PLC was diskless. The PLC booted from and stored all of its data on the CRC (see Figure 1).

This was my first exposure to a Network File System (NFS). It was simple and easy to configure and worked flawlessly. The PLCs could only access their “file system.” The CRC could access any PLC’s files.

QNX was able to do this using the NFS protocol. NFS is a protocol developed initially by Sun Microsystems (which is now owned by Oracle). Early in its lifetime, Sun turned the specification into an open standard, which was quickly implemented in Unix and its derivatives (e.g., Linux and QNX).

When to Use It—One obvious usage of NFS is for environments where a hard drive cannot easily survive, as shown in my earlier example. However, my example predates flash file systems becoming inexpensive and reliable, so that is not a typical use today.
Another use for NFS would be to simplify software updates. All of the software could be placed in one central location. Each individual controller would obtain the new software once the central location was updated.

The major area in which we use NFS today is during software development. Even though flash file systems are fast and new versions of your code can be written seamlessly to flash, the process can be time-consuming. For example, on several of our designs you can update the flash file system from a USB memory stick. This is simple but can take anywhere from several seconds to minutes.

With NFS, all of your development tools can be on a PC and you never have to transfer the target code to the target system. You use all of your PC tools to change the file on your PC, and when the embedded device boots up or the application is restarted, those changed files will be used on the device.
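A minimal sketch of that workflow, assuming a Linux development PC acting as the NFS server (the paths, subnet, and server address below are examples, not from the article):

```shell
# Line to add to /etc/exports on the development PC (the NFS server):
exports_line='/home/dev/target-rootfs  192.168.1.0/24(rw,sync,no_subtree_check)'
echo "$exports_line"

# Apply the export on the server (root required):
#   exportfs -ra
# On the embedded target, mount the export over the application directory:
#   mount -t nfs 192.168.1.10:/home/dev/target-rootfs /opt/app
# Rebuild on the PC and restart the target application; the new files are
# picked up without ever copying them to the target's flash.
```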

What It Is—Although we don’t like to admit it, many of us still have Windows machines on our desks and on our laptops. And many of us are attached to some good development tools on our Windows machines.

Samba is not exactly a file system but rather a file system/networking protocol that enables you to write to your embedded system’s file system from your Windows machine as if it were a Windows file system. Samba can also be used to access your embedded system’s files from other OSes that support the SMB/CIFS networking protocol.
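A minimal share definition gives the flavor (the share name, path, and guest access below are assumptions for illustration; on the target this fragment would go in /etc/samba/smb.conf):

```shell
# Print the smb.conf fragment that exports /opt/app/data as a writable share.
cat <<'EOF'
[appdata]
   path = /opt/app/data
   read only = no
   guest ok = yes
EOF

# Restart the daemons on the target so the share appears (root required):
#   systemctl restart smbd nmbd
# From Windows the share is then visible as \\<target-ip>\appdata; from
# another Linux box, list it with:  smbclient -L <target-ip> -N
```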

When to Use It—Although I primarily see Samba, like NFS, as a development tool, you could certainly use it in an environment where you needed to talk to your embedded device from a Windows machine. We have never had a need for this, but I can imagine it could be useful in certain scenarios. The Linux community documents many uses of Samba on embedded Linux systems.

ARM Cortex A8 System on Module

The Artila M-5360A is an application-ready solution for multimedia and machine-to-machine (M2M) applications. The credit card-size System on Module (SOM) is powered by a Freescale 800-MHz i.MX537 ARM Cortex A8 processor with 1-GB DDR3 RAM and 4-GB eMMC flash.

The M-5360A features two independent low-voltage differential signaling (LVDS) channels for dual LCDs and one VGA port for external monitor connection. The i.MX537’s multimedia and graphics engine supports OpenGL ES 2.0 and OpenVG 1.1 graphics acceleration, as well as 1080p video decoding.

The M-5360A also provides powerful communication functionality (e.g., Ethernet, RS-232, RS-485, CAN 2.0, 1-Wire, and USB). This makes the SOM suitable for multimedia applications as well as embedded networking devices.

The M-5360A uses 128-pin 2-mm pin headers, which simplifies application board design. The SOM includes a preinstalled Ubuntu OS. Android and Windows CE are available by request. In addition to the hardware building blocks, software utilities and device drivers are available for user applications.

Contact Artila for pricing.

Artila Electronics Co., Ltd.

Do Small-RAM Devices Have a Future? (CC 25th Anniversary Preview)

What does the future hold for small-RAM microcontrollers? Will there be any reason to put up with the constraints of parts that have little RAM, no floating point, and 8-bit registers? The answer matters to engineers who have spent years programming small-RAM MCUs. It also matters to designers who are hoping to keep their skills relevant as their careers progress in the 21st century.

In the upcoming Circuit Cellar 25th Anniversary Issue—which is slated for publication in early 2013—University of Utah professor John Regehr shares his thoughts on the future of small-RAM devices. He writes:

For the last several decades, the role of small-RAM microcontrollers has been clear: they are used to perform fixed (though sometimes very sophisticated) functionality in environments where cost, power consumption, and size need to be minimized. They exploit the low marginal cost of additional transistors to integrate volatile RAM, nonvolatile RAM, and numerous peripherals into the same package as the processor core, providing a huge amount of functionality in a small, cheap package. Something that is less clear is the future of small-RAM microcontrollers. The same fabrication economics that make it possible to put numerous peripherals on a single die also permit RAM to be added at little cost. This was brought home to me recently when I started using Raspberry Pi boards in my embedded software class at the University of Utah. These cost $25 to $35 and run a full-sized Linux distribution including GCC, X Windows, Python, and everything else—all on a system-on-chip with 256 MB of RAM that probably costs a few dollars in quantity.

We might ask: Given that it is already the case that a Raspberry Pi costs about the same as an Arduino board, in the future will there be any reason to put up with the constraints of an architecture like Atmel’s AVR, where we have little RAM, no floating point, and 8-bit registers? The answer matters to those of us who enjoy programming small-RAM MCUs and who have spent years fine-tuning our skills to do so. It also matters to those of us who hope to keep our skills relevant through the middle of the 21st century. Can we keep writing C code, or do we need to start learning Java, Python, and Haskell? Can we keep writing stand-alone “while (true)” loops, or will every little MCU support a pile of virtual machines, each with its own OS?

Long & Short Term

In the short term, it is clear that inertia will keep the small-RAM parts around, though increasingly they will be of the more compiler-friendly varieties, such as AVR and MSP430, as opposed to earlier instruction sets like Z80, HC11, and their descendants. But will small-RAM microcontrollers exist in the longer term (e.g., 25 or 50 years)? I’ll attempt to tackle this question by separately discussing the two things that make small-RAM parts attractive today: their low cost and their simplicity.

If we assume a cost model where packaging and soldering costs are fixed but the marginal cost of a transistor (not only in terms of fabrication, but also in terms of power consumption) continues to drop, then small-RAM parts will eventually disappear. In this case, several decades from now even the lowliest eight-pin package, costing a few pennies, will contain a massive amount of RAM and will be capable of running a code base containing billions of lines…

Circuit Cellar’s 25th Anniversary Issue will be available in early 2013. Stay tuned for more updates on the issue’s content.