Touch-Sensor Development Kit for ESP32

The ESP32-Sense Kit is a new touch-sensor development kit produced by Espressif Systems. It can be used for evaluating and developing the touch-sensing functionality of ESP32. The ESP32-Sense Kit consists of one motherboard and several daughterboards. The motherboard is made up of a display unit, a main control unit and a debug unit. The daughterboards can be used in different application scenarios, since the ESP32-Sense Kit supports a linear slider, a duplex slider, a wheel slider, matrix buttons and spring buttons. Users can even design and add their own daughterboards for special use cases. The photo provides an overview of the ESP32-Sense Kit: the wheel slider, linear slider, duplex slider, motherboard, spring buttons and matrix buttons are shown in clockwise order.

The ESP32 SoC offers up to 10 capacitive touch I/Os that detect changes in capacitance caused by finger contact or proximity. The chip's internal capacitance-detection circuit features low noise and high sensitivity, allowing smaller touch pads to implement reliable touch detection. Users can also array multiple touch pads to cover a larger area or more touch points.
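A minimal polling sketch, assuming ESP-IDF's legacy touch-pad driver, shows how little code basic touch detection takes. Touch channel 0 (GPIO4 on ESP32) and the threshold value are placeholder assumptions that must be calibrated per board:

```c
/* Minimal ESP-IDF sketch: poll one ESP32 touch channel and detect a press.
 * Uses the legacy touch-pad driver (driver/touch_pad.h); the threshold
 * below is a placeholder that must be tuned against the untouched reading. */
#include <stdio.h>
#include "driver/touch_pad.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

#define TOUCH_THRESHOLD 400   /* placeholder: calibrate per board */

void app_main(void)
{
    touch_pad_init();
    /* Configure touch channel 0 (GPIO4 on ESP32); 0 = no interrupt threshold */
    touch_pad_config(TOUCH_PAD_NUM0, 0);

    while (1) {
        uint16_t raw;
        touch_pad_read(TOUCH_PAD_NUM0, &raw);
        /* Capacitance rises on contact, so the raw count drops when touched */
        if (raw < TOUCH_THRESHOLD) {
            printf("Touch detected (raw = %u)\n", raw);
        }
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}
```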

The following related resources are available to support the ESP32-Sense Kit:

  • ESP32 Touch-Sensor Design: The reference design manual for the ESP32 touch-sensing system.
  • ESP32-Sense Project: Contains programs for the ESP32-Sense Kit, which can be downloaded to the development board to enable the touch-sensing function.
  • ESP-IDF: The SDK for ESP32. Provides information on how to set up the ESP32 software environment.
  • ESP-Prog: The ESP32 debugger.

Espressif Systems | www.espressif.com

 

Wireless MCUs are Bluetooth Mesh Certified

Cypress Semiconductor has announced that its single-chip solutions for the Internet of Things (IoT) with Bluetooth mesh connectivity have been certified by the Bluetooth Special Interest Group (SIG) and designed into a consumer product. LEDVANCE announced the market’s first Bluetooth mesh qualified LED lighting products, which leverage Cypress’ Bluetooth mesh technology. Three Cypress wireless combo chips and the latest version of its Wireless Internet Connectivity for Embedded Devices (WICED) software development kit (SDK) support Bluetooth connectivity with mesh networking capability. Cypress’ solutions enable a low-cost, low-power mesh network of devices that can communicate with each other–and with smartphones, tablets and voice-controlled home assistants–via simple, secure and ubiquitous Bluetooth connectivity.

Previously, users needed to be in the immediate vicinity of a Bluetooth device to control it without an added hub. With Bluetooth mesh networking technology, the devices within the network can communicate with each other to easily provide coverage throughout even the largest homes, allowing users to conveniently control all of the devices via apps on their smartphones and tablets.

Market research firm ABI Research forecasts there will be more than 57 million Bluetooth smart lightbulbs by 2021. Cypress’ CYW20719, CYW20706, and CYW20735 Bluetooth and Bluetooth Low Energy (BLE) combo solutions and CYW43569 and CYW43570 Wi-Fi and Bluetooth combo solutions offer fully compliant Bluetooth mesh. Cypress also offers Bluetooth mesh certified modules and an evaluation kit. The solutions share a common, widely-deployed Bluetooth stack and are supported in version 6.1 of Cypress’ all-inclusive WICED SDK, which streamlines the integration of wireless technologies for developers of smart home lighting and appliances, as well as healthcare applications.

Cypress Semiconductor | www.cypress.com

Quantum Leaps

Input Voltage

–Jeff Child, Editor-in-Chief


Throughout my career, I’ve always been impressed by Intel’s involvement in a wide spectrum of computing and electronics technologies. These range from the mundane and practical on one hand, to forward-looking and disruptive advances on the other. A lot of these weren’t technologies Intel ever intended to exploit directly over the long term. I think a lot about how Intel facilitated the creation of and early advances in USB. Intel even sold USB chips in the first couple of years of USB’s emergence, but stepped aside from that with the knowledge that its main focus was selling processors.

USB made computers and a myriad of consumer electronic devices better and easier to use, and that, Intel knew, advanced the whole industry in which their microprocessors thrived. Today, look around your home, your office and even your car and count the number of USB connectors there are. It’s pretty obvious that USB’s impact has been truly universal.

Aside from mainstream, practical solutions like USB, Intel also continues to participate in the most forward-looking compute technologies. Exemplifying that, in January at the Consumer Electronics Show (CES) in Las Vegas, Intel announced two major milestones in its efforts to develop future computing technologies. In his keynote address, Intel CEO Brian Krzanich announced the successful design, fabrication and delivery of a 49-qubit superconducting quantum test chip. The keynote also focused on the promise of neuromorphic computing.

In his speech, Krzanich explained that, just two months after delivery of a 17-qubit superconducting test chip, Intel that day unveiled “Tangle Lake,” a 49-qubit superconducting quantum test chip. The chip is named after a chain of lakes in Alaska, a nod to the extreme cold temperatures and the entangled state that quantum bits (or “qubits”) require to function.

According to Intel, achieving a 49-qubit test chip is an important milestone because it will allow researchers to assess and improve error correction techniques and simulate computational problems.

Krzanich predicts that quantum computing will solve problems that today might take our best supercomputers months or years to resolve, such as drug development, financial modeling and climate forecasting. While quantum computing has the potential to solve problems conventional computers can’t handle, the field is still nascent.

Mike Mayberry, VP and managing director of Intel Labs, weighed in on the progress of the efforts. “We expect it will be 5 to 7 years before the industry gets to tackling engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance,” said Mayberry.

Krzanich said the need to scale to greater numbers of working qubits is why Intel, in addition to investing in superconducting qubits, is also researching another type called spin qubits in silicon. Spin qubits could have a scaling advantage because they are much smaller than superconducting qubits. Spin qubits resemble a single electron transistor, which is similar in many ways to conventional transistors and potentially able to be manufactured with comparable processes. In fact, Intel has already invented a spin qubit fabrication flow on its 300-mm process technology.

At CES, Krzanich also showcased Intel’s research into neuromorphic computing—a new computing paradigm inspired by how the brain works that could unlock exponential gains in performance and power efficiency for the future of artificial intelligence. Intel Labs has developed a neuromorphic research chip, code-named “Loihi,” which includes circuits that mimic the brain’s basic operation.

While the concepts seem futuristic and abstract, Intel is thinking of the technology in terms of real-world uses. Intel says neuromorphic chips could ultimately be used anywhere real-world data needs to be processed in evolving real-time environments. For example, these chips could enable smarter security cameras and smart-city infrastructure designed for real-time communication with autonomous vehicles. In the first half of this year, Intel plans to share the Loihi test chip with leading university and research institutions while applying it to more complex data sets and problems.

For me to compare quantum and neuromorphic computing to USB is about as apples-and-oranges as you can get. But, who knows? When the day comes that quantum or neuromorphic chips are in our everyday devices, maybe my comparison won’t seem far-fetched at all.

This appears in the February (331) issue of Circuit Cellar magazine


Rad-Hard MCU Family Meets Space Needs

A new microcontroller that combines specified radiation performance with low-cost development associated with Commercial Off-The-Shelf (COTS) devices is now available from Microchip Technology. Developing radiation-hardened systems for space applications has a history of long lead times and high costs to achieve the highest level of reliability for multi-year missions in a harsh environment. Today, space and other critical aerospace applications require faster development and reduced costs.

The ATmegaS64M1 is the second 8-bit megaAVR MCU from Microchip that uses a development approach called COTS-to-radiation-tolerant. This approach takes a proven automotive-qualified device, the ATmega64M1 in this case, and creates pinout-compatible versions in both high-reliability plastic and space-grade ceramic packages. The devices are designed to meet radiation tolerances with the following targeted performance:

  • Full immunity to Single-Event Latchup (SEL) up to 62 MeV·cm²/mg
  • No Single-Event Functional Interrupts (SEFI), securing memory integrity
  • Accumulated Total Ionizing Dose (TID) tolerance between 20 and 50 krad(Si)
  • Single Event Upset (SEU) characterization for all functional blocks

The new device joins the ATmegaS128, a radiation-tolerant MCU that has already been designed into several critical space missions, including a Mars exploration mission and a mega-constellation of several hundred Low Earth Orbit (LEO) satellites.

The ATmega64M1 COTS device, along with its full development toolchain including development kits and code configurator, can be used to begin development of hardware, firmware and software. When the final system is ready for the prototype phase or production, the COTS device can be replaced with a pinout-compatible, radiation-tolerant version in a 32-lead ceramic package (QFP32) with the same functionality as the original device. This leads to significant cost savings while also reducing development time and risk.

The ATmegaS64M1 supports an extended operating temperature range of -55°C to +125°C. It is the first COTS-to-radiation-tolerant MCU to combine a Controller Area Network (CAN) bus, Digital-to-Analog Converter (DAC) and motor control capabilities. These features make it ideal for a variety of subsystems like remote terminal controllers and data handling functions for satellites, constellations, launchers or critical avionic applications.

To ease the design process and accelerate time to market, Microchip offers the STK600 development board for the ATmegaS64M1, giving designers a quick start to develop code with advanced features for prototyping and testing new designs. The device is supported by the Atmel Studio Integrated Development Environment (IDE) for developing and debugging, along with software libraries.

Microchip Technology | www.microchip.com

The Quest for Extreme Low Power

Input Voltage

–Jeff Child, Editor-in-Chief


Over the next couple of years, power will clearly rank as a major design challenge for the myriad of edge devices deployed in Internet of Things (IoT) implementations. Such IoT devices are wireless units that need to be always on and connected. At the same time, they need low power consumption while still providing the processing power needed to enable machine intelligence. The need for extreme low power in these devices goes beyond the need for long battery life. Instead, the hope is for perpetually powered solutions providing uninterrupted operation—and, if possible, without any need for battery power. For their part, microcontroller vendors have been doing a lot in recent years within their own labs to craft extreme low power versions of their MCUs. But the appetite for low power at the IoT edge is practically endless.

Offering a fresh take on the topic, I recently spoke with Paul Washkewicz, vice president and co-founder of Eta Compute, about the startup’s extreme low power technology for microcontrollers. The company claims to offer the lowest power MCU intellectual property (IP) available today, with voltages as low as 0.3 V. Eta Compute has developed and implemented a unique low power design methodology that delivers up to a 10x improvement in power efficiency. Its IP and custom designs operate over severe variations in conditions such as temperature, process, voltage and power supply variation. Eta Compute’s approach is a self-timed technology supporting dynamic voltage scaling (DVS) that is insensitive to process variations, inaccurate device models and path delay variations.

The technology has been implemented in a variety of chip functions. Among these are ARM Cortex-M0+ and Cortex-M3 cores scaling from 0.3 V to 1.2 V operation, with additional low-voltage logic support functions such as a real-time clock (RTC), Advanced Encryption Standard (AES) and digital signal processing. The technology has also been implemented in an analog-to-digital converter sensor interface that consumes less than 5 µW. The company has also crafted an efficient power management device that supports dynamic voltage scaling down to 0.25 V with greater than 80% efficiency.
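To make the DVS idea concrete, here is a minimal sketch of the kind of policy loop a DVS-capable system might run, trading supply voltage against workload. The hooks set_core_voltage_mv() and pending_work_items() are hypothetical placeholders, not Eta Compute APIs; only the 0.3 V to 1.2 V operating range comes from the article:

```c
/* Conceptual dynamic voltage scaling (DVS) policy loop.
 * The two extern hooks are hypothetical platform calls, not a real API. */
#include <stdint.h>

#define VDD_MIN_MV  300   /* lowest operating point quoted for the IP */
#define VDD_MAX_MV 1200   /* nominal full-speed operating point */
#define VDD_STEP_MV  50

/* Hypothetical platform hooks -- replace with real PMIC/scheduler calls. */
extern void set_core_voltage_mv(uint16_t mv);
extern uint32_t pending_work_items(void);

void dvs_policy_step(void)
{
    static uint16_t vdd_mv = VDD_MAX_MV;
    uint32_t backlog = pending_work_items();

    if (backlog > 8 && vdd_mv < VDD_MAX_MV) {
        vdd_mv += VDD_STEP_MV;        /* falling behind: speed up */
    } else if (backlog == 0 && vdd_mv > VDD_MIN_MV) {
        vdd_mv -= VDD_STEP_MV;        /* idle headroom: save power */
    }
    set_core_voltage_mv(vdd_mv);
}
```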

According to the company, Eta Compute’s technology can be implemented in any standard foundry process with no modifications to the process. This allows ease of adoption of any IP and is immune to delays and changes in process operations. Manufacturing is straightforward, with the company’s IP able to port to technology nodes at any foundry. Last fall at ARM TechCon, David Baker, Ph.D. and Fellow at Eta Compute, gave a presentation that included a demonstration of a small wireless sensor board that can operate perpetually on a 1-square-inch solar cell.

Attacking the problem from a different direction, another startup, Nikola Labs, is applying its special expertise in antenna design and advanced circuitry to build power harvesting into products ranging from wearables to sensors to battery-extending phone cases. Wi-Fi routers, mobile phones and other connected devices continually emit RF waves for communication. According to the company, radio wave power is strongest near the source—but devices transmit in all directions, saturating the surrounding area with stray waves. Nikola Labs’ high-performance, compact antennas capture this stray RF energy. Efficient electronics then convert it into DC electricity that can be used to charge batteries or energize ultra-low power devices.

Nikola’s technology can derive usable energy from a wide band of frequencies, ranging from LTE (910 MHz) to Wi-Fi (2.4 GHz) and beyond (up to 6 GHz). Microwatts of power can be harvested in an active ambient RF area and this can rise to milliwatts for harvesters placed directly on transmitting sources. Nikola Labs has demonstrated energy harvesting from a common source of RF communication waves: an iPhone. Nikola engineers designed a case for iPhone 6 that captures waste RF transmissions, producing up to 30 mW of power to extend battery life by as much as 16% without impacting the phone’s ability to send and receive data.
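The claimed power levels line up with what the standard Friis free-space equation predicts. As a rough sanity check (these inputs are illustrative assumptions, not Nikola Labs' figures), the sketch below assumes a 20 dBm (100 mW) 2.4 GHz transmitter, unity antenna gains, a 1 m distance and 50% RF-to-DC conversion efficiency:

```c
/* Back-of-envelope check on the "microwatts from ambient RF" claim,
 * using the Friis free-space equation: Pr = Pt*Gt*Gr*(lambda/(4*pi*d))^2. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979;
    const double c = 299792458.0;     /* speed of light, m/s */
    const double f_hz = 2.4e9;        /* Wi-Fi band */
    const double pt_w = 0.1;          /* 20 dBm transmitter (assumption) */
    const double gt = 1.0, gr = 1.0;  /* unity antenna gains (assumption) */
    const double d_m = 1.0;           /* 1 m from the source (assumption) */
    const double eta = 0.5;           /* RF-to-DC efficiency (assumption) */

    double lambda = c / f_hz;         /* wavelength: ~0.125 m at 2.4 GHz */
    double pr_w = pt_w * gt * gr * pow(lambda / (4.0 * pi * d_m), 2.0);

    printf("Incident RF power:  %.1f uW\n", pr_w * 1e6);
    printf("Harvested DC power: %.1f uW\n", pr_w * eta * 1e6);
    return 0;
}
```

Compiled with `gcc friis.c -lm`, this prints roughly 9.9 µW incident and 4.9 µW harvested, squarely in the "microwatts in an active ambient RF area" range the company describes.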

Whether you address the challenge of extreme low power from the inside out or the outside in—or by advancing battery capabilities—there’s no doubt that the demand for such technologies will only grow within the coming years. With all that in mind, I look forward to covering developments on this topic in Circuit Cellar throughout 2018.

This appears in the January (330) issue of Circuit Cellar magazine


Infineon MCUs Serve Audi’s Autonomous Car Functionality

Infineon Technologies has announced that it supplies key components for the Audi A8, the first series production car featuring level 3 automated driving. The ability of cars to self-drive is split into a number of different levels: with level 3, drivers can temporarily take their hands off the steering wheel under certain conditions. The Audi A8 allows this when parking and exiting, in slow-moving traffic or in traffic congestion. Using microelectronics from Infineon Technologies, the car can take over in these kinds of driving situations.

Various types of Infineon chips serve safe automated driving in the Audi A8: sensors, microcontrollers and power semiconductors. Radar sensor chips from the RASIC family are installed in the front and corner radars. They send and receive high-frequency 77-GHz signals and forward these to the central driver assistance controller (zFAS).

A microcontroller from the AURIX family is a key component of the zFAS for reliable automated driving. AURIX secures the connection to the vehicle data bus. It assesses and prioritizes data packets and initiates their processing in the fastest possible time. For example, it initiates emergency braking based on data from radar and other sensor systems. The AURIX family of microcontrollers is especially well suited to this purpose thanks to its high processing power and extensive safety features.

AURIX microcontrollers are used in several controllers in the Audi A8: On the one hand, they control the functions for the engine. On the other, they operate in the Audi AI active chassis and in the electronic chassis platform, which controls the shock absorption. The microcontrollers also support activation of the airbag.

In addition to the electronics for drive, driver assistance and chassis, other semiconductor solutions from Infineon are installed in the comfort and body electronics, such as LED drivers from the LITIX Basic family in the tail lights and bridge drivers from the Embedded Power family in the windscreen wipers.

Infineon Technologies | www.infineon.com

MCUs Feature Core Independent Peripherals

Microchip has expanded its PIC18 product line to include a new line of 8-bit microcontrollers that combine a Controller Area Network (CAN) bus with an extensive array of Core Independent Peripherals (CIPs). The CIPs increase system capabilities while making it easier for designers to create CAN-based applications without the complexity of added software. A key advantage of using a K83 MCU in CAN-based systems is that the CIPs provide deterministic response to real time events, shorten design time and can be easily configured through the MPLAB Code Configurator (MCC) tool. The new family is well suited for applications using CAN in the medical, industrial and automotive markets, such as motorized surgical tables, asset tracking, ultrasound machines, automated conveyors and automotive accessories.

The PIC18 K83 devices contain 15 time-saving CIPs, including:

  • Cyclic Redundancy Check (CRC) with memory scan, for ensuring the integrity of non-volatile memory (see the sketch after this list)
  • Direct Memory Access (DMA), enabling data transfers between memory and peripherals without CPU involvement
  • Windowed Watchdog Timer (WWDT), for triggering system resets
  • 12-bit Analog-to-Digital Converter with Computation (ADC2), for automating analog signal analysis for real-time system response
  • Complementary Waveform Generator (CWG), enabling high-efficiency synchronous switching for motor control
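As a plain-C illustration of what the CRC-with-memory-scan peripheral automates in hardware, the sketch below walks a flash region and accumulates a checksum to verify it hasn't been corrupted. CRC-16-CCITT is used here as a common example; the hardware module's polynomial and seed are configurable, so treat these parameters as assumptions:

```c
/* Software equivalent of a CRC memory scan: checksum a flash region and
 * compare against a reference value stored at build time. */
#include <stdint.h>
#include <stddef.h>

/* CRC-16-CCITT: polynomial 0x1021, initial value 0xFFFF (example choice). */
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Returns nonzero if the stored image still matches its reference CRC. */
int flash_image_is_intact(const uint8_t *flash_base, size_t image_len,
                          uint16_t expected_crc)
{
    return crc16_ccitt(flash_base, image_len) == expected_crc;
}
```

The point of the hardware peripheral is that this scan runs continuously in the background with no CPU cycles spent on the inner loop above.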

The new products are supported by MPLAB Code Configurator (MCC), a free software plug-in that provides a graphical interface to configure peripherals and functions specific to your application. MCC is incorporated into Microchip’s downloadable MPLAB X Integrated Development Environment (IDE) and the cloud-based MPLAB Xpress IDE. The family is also supported by the Curiosity High Pin Count (HPC) Development Board.

The PIC18F25K83 with 32 KB of Flash memory is available today for sampling and in volume production starting at $1.35 each in 10,000-unit quantities. The PIC18F26K83 with 64 KB of Flash memory is available today for sampling and in volume production starting at $1.44 each in 10,000-unit quantities. Each of these parts is available in 28-pin SPDIP, SOIC, SSOP, UQFN and QFN packages.

Microchip Technology | www.microchip.com

A Year in the Drone Age

Input Voltage

–Jeff Child, Editor-in-Chief


When you’re trying to keep tabs on any young, fast-growing technology, it’s tempting to say “this is the big year” for that technology. The problem is that odds are the following year could be just as significant. Such is the case with commercial drones. Drone technology fascinates me partly because it represents one of the clearest examples of an application that wouldn’t exist without today’s level of chip integration driven by Moore’s law. That integration has enabled 4K HD video capture, image stabilization, new levels of autonomy and even highly compact supercomputing to fly aboard today’s commercial and consumer drones.

Beyond the technology side, drones make for a rich topic of discussion because of the many safety, privacy and regulatory issues surrounding them. And then there are the wide-open questions about what new applications drones will be used for.

For its part, the Federal Aviation Administration has had its hands full this year regarding drones. In the spring, for example, the FAA completed its fifth and final field evaluation of potential drone detection systems at Dallas/Fort Worth International Airport. The evaluation was the latest in a series of detection system evaluations that began in February 2016 at several airports. For the DFW test, the FAA teamed with Gryphon Sensors as its industry partner. The company’s drone detection technologies include radar, radio frequency and electro-optical systems. The FAA intends to use the information gathered during these kinds of evaluations to craft performance standards for any drone detection technology that may be deployed in or around U.S. airports.

In early summer, the FAA set up a new Aviation Rulemaking Committee tasked to help the agency create standards for remotely identifying and tracking unmanned aircraft during operations. The rulemaking committee will examine what technology is available or needs to be created to identify and track unmanned aircraft in flight.

This year also saw vivid examples of the transformative role drones are playing. A perfect example was the role drones played in August during the flooding in Texas after Hurricane Harvey. In his keynote speech at this year’s InterDrone show, FAA Administrator Michael Huerta described how drones made an incredible impact. “After the floodwaters had inundated homes, businesses, roadways and industries, a wide variety of agencies sought FAA authorization to fly drones in airspace covered by Temporary Flight Restrictions,” said Huerta. “We recognized that we needed to move fast—faster than we have ever moved before. In most cases, we were able to approve individual operations within minutes of receiving a request.”

Huerta went on to describe some of the ways drones were used. A railroad company used drones to survey damage to a rail line that cuts through Houston. Oil and energy companies flew drones to spot damage to their flooded infrastructure. Drones helped a fire department and county emergency management officials check for damage to roads, bridges, underpasses and water treatment plants that could require immediate repair. Meanwhile, cell tower companies flew them to assess damage to their towers and associated ground equipment, and insurance companies began assessing damage to neighborhoods. In many of those situations, drones were able to conduct low-level operations more efficiently—and more safely—than could have been done with manned aircraft.

“I don’t think it’s an exaggeration to say that the hurricane response will be looked back upon as a landmark in the evolution of drone usage in this country,” said Huerta. “And I believe the drone industry itself deserves a lot of credit for enabling this to happen. That’s because the pace of innovation in the drone industry is like nothing we have seen before. If people can dream up a new use for drones, they’re transforming it into reality.”

Clearly, it’s been a significant year for drone technology. And I’m excited for Circuit Cellar to go deeper with our drone embedded technology coverage in 2018. But I don’t think I’ll dare say that “this was the big year” for drones. I have a feeling it’s just one of many to come.

This appears in the December (329) issue of Circuit Cellar magazine


Hop on the Moving Train

Input Voltage

–Jeff Child, Editor-in-Chief


We work pretty far in advance to get Circuit Cellar produced and in your hands on time and at the level of quality you expect and deserve. Given that timing, as we go to press on this issue we’re getting into the early days of fall. In my 27 years in the technology magazine business, this part of the year has always included time set aside to finalize next year’s editorial calendar. The process for me over the years has run the gamut from elaborate multi-day summer meetings to small one-on-one conversations with a handful of staff. But in every case, the purpose has never been only about choosing the monthly section topics. It’s also a deeper and broader discussion about “directions.” By that I mean the direction embedded systems technologies are going in—and how it’s impacting you, our readers. Because these technologies change so rapidly, getting a handle on it is a bit like jumping onto a moving train.

A well thought out editorial calendar helps us plan out and select which article topics are most important—for both staff-written and contributed articles. And because we want to include all of the most insightful, in-depth stories we can, we will continue to include a mix of feature articles beyond the monthly calendar topics. Beyond its role for article planning, a magazine’s editorial calendar also makes a statement on what the magazine’s priorities are in terms of technology, application segments and product areas. In our case, it speaks to the kind of magazine that Circuit Cellar is—and what it isn’t.

An awareness of what types of product areas are critical to today’s developers is important. But because Circuit Cellar is not just a generic product magazine, we’re always looking at how various chips, boards and software solutions fit together in a systems context. This applies to our technology trend features as well as our detailed project-based articles that explore a microcontroller-based design in all its interesting detail. On the other hand, Circuit Cellar isn’t an academic-style technical journal that’s divorced from any discussion of commercial products. In contrast, we embrace the commercial world enthusiastically. The deluge of new chip, board and software products often helps inspire engineers to take a new direction in their system designs. New products serve as key milestones illustrating where technology is trending and at what rate of change.

Part of the discussion—for 2018 especially—is looking at how the definition of a “system” is changing. Driven by Moore’s Law, chip integration has shifted the level of system functionality at the IC, board and box level. We see an FPGA, SoC or microcontroller of today doing what used to require a whole embedded board. In turn, embedded boards can do what once required a box full of slot-card boards. Meanwhile, the high-speed interconnects between those new “system” blocks constantly have to keep those processing elements fed. The new levels of compute density, functionality and networking available today are opening up new options for embedded applications. Highly integrated FPGAs, comprehensive software development tools, high-speed fabric interconnects and turnkey box-level systems are just a few of the players in this story of embedded system evolution.

Finally, one of the most important new realities in embedded design is the emergence of intelligent systems. Using this term in a fairly broad sense, it’s basically now easier than ever to apply high levels of embedded intelligence to any device or system. In some cases, this means adding a 32-bit MCU to an application that never used such technology. At the other extreme are full supercomputing-level AI technologies installed in a small drone or a vehicle. Such systems can meet immense throughput and processing requirements in space-constrained applications handling huge amounts of real-time incoming data. And at both those extremes, there’s connectivity to cloud-based computing analytics that exemplifies the cutting edge of the IoT. In fact, the IoT phenomenon is so important and opportunity-rich that we plan to hit it from a variety of angles in 2018.

Those are the kinds of technology discussions that informed our creation of Circuit Cellar’s 2018 editorial calendar. Available now on www.circuitcellar.com, the calendar has an expanded structure for 2018 to ensure we cover all the critical embedded technology topics important to today’s engineering professional. Technology changes rapidly, so we invite you to hop on this moving train and ride along with us.

This appears in the November (328) issue of Circuit Cellar magazine


Microcontrollers Target Smart Water Meters

Texas Instruments has unveiled a new family of MSP430 microcontrollers with an integrated ultrasonic sensing analog front end that enables smart water meters to deliver higher accuracy and lower power consumption. In addition, TI introduced two new reference designs that make it easier to design modules for adding automated meter reading (AMR) capabilities to existing mechanical water meters. The new MCUs and reference designs support the growing demand for more accurate water meters and remote meter reading to enable efficient water resource management, accurate measurement and timely billing.

New ultrasonic MCUs and new reference designs make both electronic and mechanical water meters smarter.

As part of the ultra-low-power MSP430 MCU portfolio for sensing and measurement, the new MSP430FR6047 MCU family lets developers add more intelligence to flow meters by taking advantage of a complete waveform capture feature and analog-to-digital converter (ADC)-based signal processing. This technique enables more accurate measurement than competitive devices, with precision of 25 ps or better, even at flow rates less than 1 liter per hour. In addition, the integrated MSP430FR6047 devices reduce water meter system component count by 50 percent and power consumption by 25 percent, enabling a meter to operate for 10 or more years without battery replacement. The new MCUs also integrate a low-energy accelerator module for advanced signal processing, 256 KB of ferroelectric random access memory (FRAM), an LCD driver and a metering test interface.
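To see why picosecond-level timing matters, consider the transit-time math behind ultrasonic flow metering: sound travels faster with the flow than against it, and the flow velocity comes out of the tiny difference between the two times of flight. The sketch below uses made-up but plausible geometry and timing values, not TI's figures:

```c
/* Transit-time ultrasonic flow calculation:
 * v = L * (t_up - t_down) / (2 * cos(theta) * t_up * t_down) */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Example geometry and times of flight -- illustrative values only. */
    const double L = 0.05;            /* acoustic path length, m */
    const double theta = 0.0;         /* angle between path and flow, rad */
    const double t_down = 34.4800e-6; /* downstream time of flight, s */
    const double t_up   = 34.4804e-6; /* upstream: 400 ps slower */

    double v = L * (t_up - t_down) / (2.0 * cos(theta) * t_up * t_down);

    printf("Delta-t: %.0f ps\n", (t_up - t_down) * 1e12);
    printf("Flow velocity: %.4f m/s\n", v);  /* ~0.0084 m/s here */
    return 0;
}
```

With geometry like this, flow rates below 1 liter per hour shrink the upstream/downstream difference to the order of 100 ps, which is why a 25 ps measurement precision is a meaningful specification.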

The MSP430 Ultrasonic Sensing Design Center offers a comprehensive development ecosystem that allows developers to get to market in months. The design center provides tools for quick development and flexibility for customization, including software libraries, a GUI, and evaluation modules with metrology and DSP libraries.

TI’s new Low-Power Water Flow Measurement with Inductive Sensing Reference Design is a compact solution for the electronic measurement of mechanical flow meters with low power consumption for longer battery life. Enabled by the single-chip SimpleLink dual-band CC1350 wireless MCU, this reference design also gives designers the ability to add dual-band wireless communications for AMR networks. Designers can take advantage of the reference design’s small footprint to easily retrofit existing mechanical flow meters, enabling water utilities to add AMR capability while avoiding expensive replacement of deployed meters. The CC1350 wireless MCU consumes only 4 µA while measuring water flow rates, enabling longer product life.

A second new reference design is an ultra-low power solution based on the SimpleLink Sub-1 GHz CC1310 wireless MCU. The Low-Power Wireless M-Bus Communications Module Reference Design uses TI’s wireless M-Bus software stack and supports all wireless M-Bus operating modes in the 868-MHz band. This reference design provides best-in-class power consumption and flexibility to support wireless M-Bus deployments across multiple regions.

Texas Instruments | www.ti.com

Embedded Analytics Firm Makes ‘Self-Aware Chip’ Push

UltraSoC has announced a significant global expansion to address the increasing demand for more sophisticated, ‘self-aware’ silicon chips in a range of electronic products, from lightweight sensors to the server farms that power the Internet. The company’s growth plans center on shifts in applications such as server optimization, the IoT, and automotive safety and security, all of which demand significant improvements in the intelligence embedded inside chips.

UltraSoC’s semiconductor intellectual property (SIP) simplifies development and provides valuable embedded analytic features for designers of SoCs (systems on chip). UltraSoC has developed its technology—originally designed as a chip development tool to help developers make better products—to now fulfill much wider, pressing needs in an array of applications: safety and security in the automotive industry, where the move towards autonomous vehicles is creating unprecedented change and risk; optimization in big data applications, from Internet search to data centers; and security for the Internet of Things.

These developments will be accelerated by the addition of a new facility in Bristol, UK, which will be home to an engineering and innovation team headed by Marcin Hlond, newly appointed as Director of System Engineering. Hlond will oversee UltraSoC’s embedded analytics and visualization products, and lead product development and innovation. He has over two decades of experience as a system architect and developer, most recently at Blu Wireless, NVidia and Icera. He will focus on fulfilling customers’ needs for more capable analytics and rich information to enable more efficient development of SoCs, and to enhance the reliability and security of a broad range of electronic products. At the same time, the company will continue to expand engineering headcount at its headquarters in Cambridge, UK.

UltraSoC | www.ultrasoc.com

Infineon Invests in Voice-Interface Tech for IoT

Infineon has made a strategic minority investment in XMOS Limited, a Bristol-based fabless semiconductor company that provides voice processors for IoT devices. Infineon led the recent $15 million Series E funding round. According to Infineon, cars, homes, industrial plants and consumer devices are rapidly becoming connected to the Internet: three years from now, 30 billion devices will belong to the IoT. While today the interaction between humans and machines is mostly done by touch, the next evolutionary step of the IoT will lead to the omnipresence of high-performance voice control. Infineon Technologies wants to further develop its capabilities to shape this market segment.

Today, voice controllers used in voice recognition systems struggle to differentiate between speech from a person in the room and sound from a synthesized source such as a radio or TV; they often identify the voice of interest simply as the loudest sound. Earlier in 2017, Infineon and XMOS demonstrated an enhanced solution to overcome these issues, using intelligent human-sensing microphones and gesture recognition. The solution featured a combination of Infineon’s radar and silicon microphone sensors to detect the position and the distance of the speaker from the microphones, with XMOS far-field voice processing technology used to capture speech.

Infineon Technologies | www.infineon.com

XMOS | www.xmos.com

Emulating Legacy Interfaces

Do It with Microcontrollers

There are a number of important legacy interface technologies—like ISA and PCI—that are no longer supported by the mainstream computing industry. In this article, Wolfgang examines ways to use inexpensive microcontrollers to emulate the bus signals of legacy interconnect schemes.

By Wolfgang Matthes

Many of today’s PC users have never heard of interfaces like the ISA bus or the PCI bus. But in the realm of industrial and embedded computers, they are still very much alive. Large numbers of add-on cards and peripherals are out there. Many of them are even still being manufactured today—especially PCI cards and PC/104 modules for industrial control and measurement applications. In many cases, bandwidth requirements for those applications are low. As a result, it is possible to emulate the interfaces with inexpensive microcontrollers. That essentially means using a microcontroller instead of an industrial or embedded PC host.

Photo 1 – The PC/104 specifications relate to small modules, which can be stacked one above the other.

To develop and bring up such a device is a good exercise in engineering education. But it has its practical uses too. Industrial-grade modules and cards are designed and manufactured for reliability and longevity. That makes them far superior to the kits, boards, shields and so on that are intended primarily for educational purposes and tinkering. Moreover, a microcontroller platform can be programmed independently—without operating systems and device drivers. It also consumes considerably less power and is free from the noise typical of the interior of a personal computer. The projects depicted here are open source developments. Descriptions, schematics, PCB files and program sources are available for downloading.

Fields of Use

The basic idea is to make good use of peripheral modules and add-in cards. Photo 1 shows examples. Typical applications are based on industrial or embedded personal computers. The center of the system is the host—the PC. Peripheral modules or cards are attached to a standardized expansion interface that is, in principle, an extended processor bus. That means the processor of the PC can directly address the registers within the devices. The programming interface is the processor’s instruction set. As a result, latencies are low and the peripheral modules can be programmed somewhat like microcontroller ports—without regard to complicated communication protocols. For example, if the peripheral were attached to communication interfaces like USB or Ethernet, that would complicate matters. Common expansion interfaces are the legacy ISA bus, the PCI bus and the PCI Express (PCIe) interface. …
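To give a flavor of what emulating such a bus with a microcontroller involves, here is a sketch of a simplified 8-bit ISA-style I/O write cycle driven from GPIOs. The gpio_* and delay_ns() primitives and the timing values are hypothetical placeholders, not code from Wolfgang's project; a real implementation must respect full ISA timing and the IOCHRDY wait-state signal:

```c
/* Bit-banged, simplified 8-bit ISA-style I/O write cycle. */
#include <stdint.h>

/* Hypothetical port-access primitives (platform-specific). */
extern void gpio_write_addr_bus(uint16_t addr);   /* drive SA0..SA15  */
extern void gpio_write_data_bus(uint8_t data);    /* drive SD0..SD7   */
extern void gpio_set_iow(int level);              /* -IOW strobe, active low */
extern void delay_ns(unsigned ns);

void isa_io_write(uint16_t port, uint8_t value)
{
    gpio_write_addr_bus(port);   /* address valid before the strobe */
    gpio_write_data_bus(value);
    delay_ns(100);               /* address/data setup (illustrative) */

    gpio_set_iow(0);             /* assert -IOW: the card latches the data */
    delay_ns(500);               /* minimum strobe width (illustrative) */
    gpio_set_iow(1);             /* deassert; cycle complete */

    delay_ns(100);               /* hold time before changing the buses */
}
```

Because the MCU generates every edge in software, the bus can simply be run as slowly as needed, which is what makes this approach workable for the low-bandwidth applications the article describes.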

We’ve made the October 2017 issue of Circuit Cellar available as a sample issue. In it, you’ll find a rich variety of the kinds of articles and information that exemplify a typical issue of the current magazine.
Don’t miss out on upcoming issues of Circuit Cellar. Subscribe today!

 

Declaration of Embedded Independence

Input Voltage

–Jeff Child, Editor-in-Chief


There’s no doubt that we’re living in an exciting era for embedded systems developers. Readers like you who design and develop embedded systems no longer have to compromise. Most of you probably remember when the processor or microcontroller you chose dictated both the development tools and embedded operating system (OS) you had to use. Today more than ever, there are all kinds of resources available to help you develop prototypes—everything from tools to chips to information resources online. There are inexpensive computing modules aimed at makers and DIY experts that are also useful for professional engineers working on high-volume end products.

The embedded operating systems market is one particular area where customers no longer have to compromise. That wasn’t always the case. Most people identify the late ’90s with the dot-com bubble … and that bubble bursting. But closer to our industry was the embedded Linux start-up bubble. The embedded operating systems market began to see numerous start-ups appearing as “embedded Linux” companies. Since Linux is a free, open-source OS, these companies didn’t sell Linux, but rather provided services to help customers create and support implementations of open-source Linux. But, as often happens with disruptive technology, the establishment then pushed back. The establishment in that case was the commercial “non-open” embedded OS vendors. I recall a lot of great spirited debates at the time—both in print and live during panel discussions at industry trade shows—arguing for and against the very idea of embedded Linux. For my part, I remember them well, having both written some of those articles and sat on some of those panels myself.

Coinciding with the dot-com bubble bursting, the embedded Linux bubble burst as well. That’s not to say that embedded Linux lost any luster. It continued its upward rise, and remains an incredibly important technology today. Case in point: The Android OS is based on the Linux kernel. What burst was the bubble of embedded Linux start-up companies, from which only a handful of firms survived. What’s interesting is that all the major embedded OS companies shifted to a “let’s not beat them, let’s join them” approach to Linux. In other words, they now provide support for users to develop systems that use Linux alongside their commercial embedded operating systems.

The freedom not to have to compromise in your choices of tools, OSes and systems architectures—all that is a positive evolution for embedded system developers like you. But in my opinion, it’s possible to misinterpret the user-centric model and perhaps declare victory too soon. When you’re developing an embedded system aimed at a professional, commercial application, not everything can be done in DIY mode. There’s value in having the support of sophisticated technology vendors to help you develop and integrate your system. Today’s embedded systems routinely use millions of lines of code, and in most systems these days software running on a processor is what provides most of the functionality. If you develop that software in-house, you need high-quality tools to make sure it’s running error-free. And if you outsource some of that embedded software, you have to be sure its vendor is providing a product you can rely on.

The situation is similar on the embedded board-level computing side. Yes, there’s a huge crop of low-cost embedded computer modules available to purchase these days. But not all embedded computing modules are created equal. If you’re developing a system with a long shelf life, what happens when the DRAMs, processors or I/O chips go end-of-life? Is it your problem? Or does the board vendor take on that burden? Have the boards been tested for vibration or temperature so that they can be used in the environment your application requires? You have to weigh the costs versus the kinds of support a vendor provides.

All in all, the trend toward a “no compromises” situation for embedded systems developers is a huge win. But when you get beyond the DIY project level of development, it’s important to keep in mind that the vendor-customer relationship is still a critical part of the system design process. With all that in mind, it’s cool that we can today make a declaration of independence for embedded systems technology. But I’d rather think of it as a declaration of interdependence.

This appears in the October (327) issue of Circuit Cellar magazine


CENTRI Demos Chip-to-Cloud IoT Security on ST MCUs

CENTRI has announced compatibility of its IoTAS platform with the STMicroelectronics STM32 microcontroller family based on ARM Cortex-M processor cores. CENTRI successfully completed and demonstrated two proofs of concept on the STM32 platform to protect all application data in motion from chipset to public cloud using CENTRI IoTAS. CENTRI Internet of Things Advanced Security (IoTAS) for secure communications was used in an application on an STM32L476RC device with connected server applications running on both the Microsoft Azure and Amazon Elastic Compute Cloud (Amazon EC2) clouds. The proofs of concept used wireless connections to showcase the real-world applicability of IoT device communications in the field and to highlight the value of IoTAS compression and encryption.

IoTAS uses a hardware-based ID to establish secure device authentication on the initial connection. The solution features patented single-pass data encryption and optimization to ensure maximum security while providing optimal efficiency and speed of data transmission. The small footprint of IoTAS, combined with the flexibility and compute power of the STM32 platform and seamless interoperability with the world’s most popular cloud services, provides device makers a complete, secure chip-to-cloud IoT platform. CENTRI demonstrated IoTAS capabilities at the ST Developers Conference, September 6, 2017, at the Santa Clara Convention Center.
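CENTRI's single-pass encryption is proprietary, but the general shape of the idea (producing ciphertext and an integrity tag in one pass over the data) can be illustrated with standard AES-GCM as available in mbedTLS on STM32-class parts. This is a generic sketch under those assumptions, not CENTRI's API; key provisioning and nonce management are simplified here and must be handled properly in a real device:

```c
/* Generic single-pass authenticated encryption of a sensor payload
 * using mbedTLS AES-GCM (ciphertext and tag produced together). */
#include <stddef.h>
#include "mbedtls/gcm.h"

int encrypt_payload(const unsigned char key[16],
                    const unsigned char iv[12],   /* must be unique per message */
                    const unsigned char *plain, size_t len,
                    unsigned char *cipher, unsigned char tag[16])
{
    mbedtls_gcm_context ctx;
    mbedtls_gcm_init(&ctx);

    int ret = mbedtls_gcm_setkey(&ctx, MBEDTLS_CIPHER_ID_AES, key, 128);
    if (ret == 0) {
        /* One pass over the data: encrypts and computes the auth tag. */
        ret = mbedtls_gcm_crypt_and_tag(&ctx, MBEDTLS_GCM_ENCRYPT, len,
                                        iv, 12, NULL, 0,
                                        plain, cipher, 16, tag);
    }
    mbedtls_gcm_free(&ctx);
    return ret;   /* 0 on success */
}
```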

STMicroelectronics | www.st.com