About Circuit Cellar Staff

Circuit Cellar's editorial team comprises professional engineers, technical editors, and digital media specialists. You can reach the Editorial Department at editorial@circuitcellar.com, @circuitcellar, and facebook.com/circuitcellar

NanoPi Neo4 SBC Breaks RK3399 Records for Size and Price

By Eric Brown

In August, FriendlyElec introduced the NanoPi M4, which was then the smallest, most affordable Rockchip RK3399 based SBC yet. The company has now eclipsed the Raspberry Pi style, 85 mm x 56 mm NanoPi M4 on both counts, with a 60 mm x 45 mm size and $45 promotional price ($50 standard). The similarly open-spec, Linux and Android-ready NanoPi Neo4, however, is not likely to beat the M4 on performance, as it ships with only 1 GB of DDR3-1866 instead of 2 GB or 4 GB of LPDDR3.

 
NanoPi Neo4 and detail view
(click images to enlarge)

This is the first SBC built around the hexa-core RK3399 that doesn’t offer at least 2GB of RAM. That includes the still unpriced Khadas Edge, which will soon launch on Indiegogo, and Vamrs’ $99 and up, 96Boards form factor Rock960, in addition to the many other RK3399 based entries listed in our June catalog of 116 hacker boards.

NanoPi M4

Considering that folks are complaining that the quad-A53, 1.4 GHz Raspberry Pi 3+ is limited to only 1GB, it’s hard to imagine the RK3399 performing up to par with only 1GB. The SoC has a pair of Cortex-A72 cores clocked at up to 2GHz and four Cortex-A53 cores clocked at up to 1.5GHz, plus a high-end Mali-T864 GPU.

Perhaps size was a determining factor in limiting the board to 1 GB along with price. Indeed, the 60 mm x 45 mm footprint ushers the RK3399 into new space-constrained environments. Still, this is larger than the earlier 40 mm x 40 mm Neo boards or the newer, 52 mm x 40mm NanoPi Neo Plus2, which is based on an Allwinner H5.

We’re not sure why FriendlyElec decided against calling the new SBC the NanoPi Neo 3, but there have been several Neo boards that have shipped since the Neo2, including the NanoPi Neo2-LTS and somewhat Neo-like, 50 x 25.4mm NanoPi Duo.

The NanoPi Neo4 differs from other Neo boards in that it has a coastline video port, in this case an HDMI 2.0a port with support for up to 4K@60Hz video with HDCP 1.4/2.2 and audio out. Another Neo novelty is the 4-lane MIPI-CSI interface for up to a 13-megapixel camera input.


 
NanoPi Neo4 with and without optional heatsink
(click images to enlarge)
You can boot a variety of Linux and Android distributions from the microSD slot or eMMC socket (add $12 for 16GB eMMC). Thanks to the RK3399, you get native Gigabit Ethernet. There’s also a wireless module with 802.11n (now called Wi-Fi 4) limited to 2.4 GHz Wi-Fi and Bluetooth 4.0.

The NanoPi Neo4 is equipped with coastline USB 3.0 and USB 2.0 host ports plus a Type-C power and OTG port and an onboard USB 2.0 header. The latter is found on one of the two smaller GPIO connectors that augment the usual 40-pin header, which, like those of other RK3399 boards, comes with no claims of Raspberry Pi compatibility. Other highlights include an RTC and a -20°C to 70°C operating range.

Specifications listed for the NanoPi Neo4 include:

  • Processor — Rockchip RK3399 (2x Cortex-A72 at up to 2.0 GHz, 4x Cortex-A53 at up to 1.5 GHz); Mali-T864 GPU
  • Memory:
    • 1GB DDR3-1866 RAM
    • eMMC socket with optional ($12) 16GB eMMC
    • MicroSD slot for up to 128GB
  • Wireless — 802.11n (2.4GHz) with Bluetooth 4.0; ext. antenna
  • Networking — Gigabit Ethernet port
  • Media:
    • HDMI 2.0a port (with audio and HDCP 1.4/2.2) for up to 4K at 60 Hz
    • 1x 4-lane MIPI-CSI (up to 13MP)
  • Other I/O:
    • USB 3.0 host port
    • USB 2.0 Type-C port (USB 2.0 OTG or power input)
    • USB 2.0 host port
  • Expansion:
    • GPIO 1: 40-pin header — 3x 3V/1.8V I2C, 3V UART, SPDIF_TX, up to 8x 3V GPIOs, PCIe x2, PWM, PowerKey
    • GPIO 2: 1.8V 8-ch. I2S
    • GPIO 3: Debug UART, USB 2.0
  • Other features — RTC; 2x LEDs; optional $6 heatsink, LCD, and cameras
  • Power — DC 5V/3A input or USB Type-C; optional $9 adapter
  • Operating temperature — -20°C to 70°C
  • Dimensions — 60 x 45mm; 8-layer PCB
  • Weight – 30.25 g
  • Operating system — Linux 4.4 LTS with U-boot 2014.10; Android 7.1.2 or 8.1 (requires eMMC module); Lubuntu 16.04 (32-bit); FriendlyCore 18.04 (64-bit); FriendlyDesktop 18.04 (64-bit); Armbian via third party

Further information

The NanoPi Neo4 is available for a promotional price of $45 (regularly $50) plus shipping, which ranges from $16 to $20. More information may be found on FriendlyElec’s NanoPi Neo4 product page and wiki, which includes schematics, CAD files, and OS download links.

This article originally appeared on LinuxGizmos.com on October 9.

FriendlyElec | www.friendlyarm.com

Tiny, 4K Signage Player Runs on Cortex-A17 SoC

By Eric Brown

Advantech announced the fanless USM-110 digital signage player with support for Android 6.0 and its WISE-PaaS/SignageCMS digital signage management software. The compact (156 mm x 110 mm x 27 mm) device follows earlier Advantech signage computers such as the slim-height, Intel Skylake based DS-081.

 
USM-110 (left) and mounting options
(click images to enlarge)
Advantech did not reveal the name of the quad-core, Cortex-A17 SoC, which is clocked to 1.6 GHz and accompanied by a Mali-T764. It sounds very close to the Rockchip RK3288, which is found on SBCs such as the Asus Tinker Board, although that SoC instead has a Mali-T760 GPU. Other quad-A17 SoCs include the Zhaoxin ZX-2000 found on VIA Technologies’ ALTA DS 4K signage player.

The USM-110, which is also available in a less feature-rich USM-110 Delight model, ships with 2GB of DDR3L-1333, as well as a microSD slot. You get 16GB of eMMC on the standard version and 8 GB on the Delight. There’s also a GbE port and an M.2 slot with support for an optional Wi-Fi module with antenna kit.

The USM-110 has two HDMI ports, both with locking connectors: an HDMI 2.0 port with H.265-encoded, native 4K@60 (3840 x 2160) support and an HDMI 1.4 port limited to 1080p. The system enables dual simultaneous HD displays.


USM-110 and USM-110 Delight detail views
(click image to enlarge)
The Delight version lacks the 4K-ready HDMI port, as well as the standard model’s mini-PCIe slot, which is available with an optional 4G module with antenna kit. The Delight is also missing the standard version’s RS232/485/422 port, and it has only one USB 2.0 host port instead of four.

Otherwise, the two models are the same, with a micro-USB OTG port, audio jack, reset, dual LEDs, and a 12V/3A DC input. The 0.43 kg system has a 0 to 40°C range, and offers VESA, wall, desktop, pole, magnet, and DIN-rail mounting.

Advantech’s WISE-PaaS/SignageCMS digital signage management software, also referred to as UShop+ SignageCMS, supports remote, real-time management. It allows users to lay out, schedule, and dispatch signage content to the player over the Internet, enabling remote delivery of media and media content switching via interactive APIs. A WISE Agent framework for data acquisition supports RESTful API web services for accessing and controlling applications.

Further information

The USM-110 appears to be available now at an undisclosed price. More information may be found in Advantech’s USM-110 announcement and product page.

This article originally appeared on LinuxGizmos.com on September 6.

Advantech | www.advantech.com

Wireless Charging

Electric Field of Dreams

The concept of wireless charging can be traced all the way back to Nikola Tesla. Here, Jeff examines the background and principles involved in charging devices today without wires, and takes a hands-on dive into the technology.

By Jeff Bachiochi

________________________________________________________

Nikola Tesla is the recognized inventor of the brushless AC induction motor, radio, fluorescent lighting, the capacitor discharge ignition system for automobile engines and more. His AC power (with Westinghouse) beat out Thomas Edison’s DC power in the bid for the electrification of America. DC transmission is limited to a few miles due to its relatively low voltage and its transmission-line losses. Thanks to the advent of the transformer, AC can be stepped up to higher voltages for higher-efficiency power transmission. Today’s research in superconducting cable may be challenging these concepts—but that’s a story for another time.

Tesla wanted to provide a method of broadcasting electrical energy without wires. The Wardenclyffe Tower Facility on Long Island Sound was to be used for broadcasting both wireless communications and the transmission of wireless power. Tesla even viewed his research on power transmission as more important than its use as a method for communications. Unfortunately, Nikola was never able to make his vision a reality.
We think of Guglielmo Marconi as being the father of radio for his development of Marconi’s law and a radio telegraph system. He was able to obtain a patent for the radio using some of Tesla’s own ideas. It’s interesting to note that shortly after Tesla’s death in 1943, the U.S. Supreme Court invalidated the Marconi patent because the fundamental radio circuit had been anticipated by Tesla. Again, not the direction of this article.
It is likely that Nikola’s work in far-field power transmission was not fruitful due to propagation losses (inverse square law). Even today’s work on beam-formed, far-field transmission is only marginally successful. Transformers are successful because they operate in the near field. The close proximity between the primary and secondary coils and a well-designed magnetic energy path result in low energy losses in transformers.

Modern Wireless Charging
Today’s wireless charging systems for our portable devices are based on transformer operation. However, the primary and secondary coils are not in physical contact yet still transfer energy (Figure 1). Efforts to maximize the magnetic field’s coupling exist, but this less-than-ideal coupling reduces the efficiency of the transfer to 50% to 70%. There are basically two methodologies today: inductive (tight) coupling (near field) and resonant inductive (loose) coupling (mid field). The resonant circuit allows an equivalent power to be transferred at a slightly greater distance.

Figure 1
The device is considered near-field (closely coupled) when the distance between the coils is less than the coil’s diameter. The mid-field device’s distance exceeds the coil’s diameter, and it relies on resonance to improve its power transfer.

Wireless efforts are in total flux with at least three organizations jockeying for position: the Wireless Power Consortium (WPC, induction), the Alliance for Wireless Power (A4WP, resonant) and the Power Matters Alliance (PMA, induction). Interestingly, after the WPC announced its plans to widen its specs to include resonant technologies, A4WP and PMA merged to become the AirFuel Alliance and now cover both technologies as well.

Beyond induction type, the biggest difference between the technologies is in control communications. Control of the charging process requires communications between transmitters and receivers. Induction technology uses in-band modulation of the RF signal to send and receive communications. Resonant induction technology uses Bluetooth for out-of-band communications. This makes the transmitter/receiver pair simpler but adds the complexity of Bluetooth. Since many receiving devices already have Bluetooth, this may be moot.

The Qi Standard
The WPC has coined the term Qi for its standard. If you search the web for wireless charging, this term pops up all over the place. This is not to say AirFuel’s standard isn’t available—it seems to be a difference in promotional strategies. AirFuel has invested in getting its receivers into devices and its transmitters installed in public places. And while Qi receivers are also going into devices, Qi transmitters seem to be aimed at the individual. That means easy access to both Qi transmitters and receivers.
You can get the V1.2.2 specifications for the Qi standard from the WPC website. The current version (1.2.3) is available only to members now but should be public shortly. The two documents I received were “Reference Designs” and “Interface Definitions” for Power Class 0 specifications.

Power Class 0 aims to deliver up to 5 W of energy wirelessly via magnetic induction. This is accomplished by applying a fixed RF signal—generally in the 140 kHz range—into an inductive load (transformer primary). This is much like providing motor control using a half or full bridge, with the (transmitting) coil as the load instead of a motor.
Referring back to Figure 1, a receiver uses a similar coil (the transformer secondary). This coil supplies rectification circuitry with the voltage/current needed to power the receiver. The receiver can vary its load, which modulates the burden on the transmitting coil. Back at the transmitter, a change in the primary’s current can indicate when the secondary’s load is in range. Initially the transmitter remains relatively inactive, except for a periodic “ping” to look for a receiver. A normal ping will occur every 500 ms and last about 70 ms (Photo 1). Once in range, the receiver gets secondary current and can self-power. During the last 50 ms of a ping, a receiver has a chance to communicate by modulating its load at a 2 kHz rate (Photo 2). There are presently 16 messages it can choose to send.

Photo 1
This oscilloscope screenshot shows the “ping” transmissions of a wireless transmitter with no receiver in range.

Photo 2
Here we see a receiver sending a packet by modulating its load during the transmitter’s RF transmission.

Each message has four parts: a preamble, header, message and checksum. The preamble consists of from 11 to 25 “1” wake-up bits. The header is a 1-byte command value. The message length is fixed for each command, presently 1 to 8 bytes. The checksum is a 1-byte sum of the header and message bytes. All bytes in the header, message and checksum have an 11-bit asynchronous format consisting of a start bit (0), data bits (for example Command, LSB first), odd parity bit (OP) and stop bit (1). Each bit is sent using bi-phase encoding. Each bit begins with a state change in sync with its 2 kHz clock. The value of a bit is “0” when its logic state does not change during a 2 kHz clock period. If the state does change within that period, then the bit is a “1”.
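As a concrete sketch of this framing, here is how a receiver packet could be assembled in Python. The function names are my own, and representing each bi-phase bit as two half-period samples is just one way to model the waveform:

```python
def checksum(header, message):
    """1-byte checksum: sum of the header and message bytes, truncated to 8 bits."""
    return (header + sum(message)) & 0xFF

def frame_byte(b):
    """11-bit asynchronous format: start bit (0), 8 data bits LSB-first,
    odd parity bit, stop bit (1)."""
    data = [(b >> i) & 1 for i in range(8)]
    parity = 1 - (sum(data) % 2)          # makes the total count of 1s odd
    return [0] + data + [parity] + [1]

def biphase(bits, level=1):
    """Bi-phase encode: every bit begins with a transition; a '1' adds a
    mid-bit transition, a '0' holds its level for the whole 2 kHz period.
    Returns two half-period samples per bit."""
    out = []
    for bit in bits:
        level ^= 1                        # transition at the start of each bit
        first = level
        if bit:
            level ^= 1                    # extra mid-bit transition encodes a '1'
        out.append((first, level))
    return out

def build_packet(header, message, preamble_len=11):
    """Preamble of 11-25 '1' wake-up bits, then header, message and checksum."""
    bits = [1] * preamble_len
    for byte in [header] + list(message) + [checksum(header, message)]:
        bits += frame_byte(byte)
    return bits
```

For the Control Error Packet seen in Photo 2 (header 0x03, one data byte of 0x00), the checksum works out to 0x03 and the whole packet is 44 bits before bi-phase encoding.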

The receiver has control over the transmitter. It initiates communication to send information and request power transfer. Back in Photo 2 you can see a Control Error Packet with a Header=0x03 and data=0x00. The signed value of the data indicates any difference between the requested and received current level.

While the receiver is in charge (ha!), the transmitter can acknowledge requests with one of three responses: ACK (accept), NAK (deny) or ND (invalid). Responses have no packet per se, but are merely a frequency-shift keying (FSK) modulated pattern of 0s, 1s or alternating 0s and 1s. The receiver can request the depth of the FSK modulation from a list of choices between +/- 30 to 282 ns. The depth is defined as the difference in ns between 1/Fop (operating frequency) and 1/Fmod (modulation frequency). The format is again bi-phase encoding in sync with the RF frequency. All bits begin with a change in modulation frequency. A “1” bit is indicated by a change in frequency after 256 cycles, while a “0” bit has no change until the beginning of the next bit time. Responses are therefore easy for a receiver to demodulate.
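The depth definition translates directly into a small calculation. This helper (my own, not part of any specification code) converts an operating frequency and a requested depth into the implied modulation frequency:

```python
def fsk_mod_freq(f_op_hz, depth_ns):
    """Modulation frequency implied by an FSK depth, where depth is the
    difference in ns between 1/Fop and 1/Fmod."""
    t_op_ns = 1e9 / f_op_hz
    return 1e9 / (t_op_ns - depth_ns)   # a positive depth shortens the period
```

At a 140 kHz operating frequency, for example, the maximum +282 ns depth corresponds to a modulation frequency of roughly 145.8 kHz.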

So, communication is AM back-scatter from the receiver and FM on the base RF from the transmitter. The present specification defines three packets that can be sent by a transmitter in addition to the ACK, NAK and ND. These are informational and are formatted like the receiver packets, less the preamble.

System Control
From the transmitter’s point of view, it has four basic states: ping, ID, power transfer and selection. The transmitter is idle while in the ping state. Without some communication from a receiver, the transmitter will never do anything but ping. Once communication begins, the receiver attempts to identify itself and become configured, at which point the transmitter can start power transfer. The transmitter will continue monitoring its feedback and change states when necessary. For instance, if communication is lost, it must cancel the power transfer state and begin to ping. The ability to detect foreign objects (FOD) is required for any system that can exceed 5 W of power transfer. FOD support adds three states to the basic four: negotiation, calibration and renegotiation. When using FOD, the negotiation state is required to complete identification, configuration and calibration. Calibration allows the transmitter to fine-tune its foreign-object detection. During the power transfer state, the receiver may wish to adjust its configuration. As long as no requests violate operational parameters, the power transfer state can continue. Otherwise the selection state will redirect further action. You can see how this works in the state diagram in Figure 2.

Figure 2
This is a general state diagram of the Qi standard for wireless chargers. Note the two potential paths based on whether or not foreign object detection is supported (required for greater than 5 W).
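The transitions described above can be sketched as a simple table-driven state machine. The state and event names below are my own labels for the triggers the text describes, not identifiers from the Qi specification:

```python
def next_state(state, event, fod=False):
    """Simplified Qi transmitter transitions. Unknown events leave the
    state unchanged; lost communication always falls back to pinging."""
    if event in ("comm_lost", "end_power"):
        return "ping"                      # cancel power transfer, resume pinging
    table = {
        ("ping", "signal_strength"): "id_config",
        ("id_config", "configured"): "negotiation" if fod else "power_transfer",
        ("negotiation", "contract_agreed"): "calibration",
        ("calibration", "done"): "power_transfer",
        ("power_transfer", "request_change"): "renegotiation" if fod else "selection",
        ("selection", "ok"): "power_transfer",
    }
    return table.get((state, event), state)
```

A transmitter that never hears a signal strength packet simply stays in "ping" forever, matching the behavior described above.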

From the receiver’s point of view, it could be in an unpowered (dead) state prior to entering the transmitter’s field. Once within range, the short ping from a transmitter is sufficient to charge up its capacitive supply and begin its application programming. Its first order of business is to look for a legal ping, so it can properly time its first request 40 ms after the beginning of a ping. The first packet is a signal strength measurement, some indication of transmitted energy. This is sufficient for the transmitter to enter the identification and configuration state, extending its RF timing and looking for additional packets from the receiver. The receiver must now identify itself—version, manufacturer and whether or not it accepts the FOD extensions. A configuration packet will transfer its requirements as well as the optional packets the transmitter should expect. The transmitter digests all this data and will determine, based on the receiver’s ability to accept FOD extensions, whether it will proceed directly to the power transfer state or enter the negotiation state.

Packets must have a minimum of 7 ms silent period between each. The values sent in the configuration packet denote an official power contract between the transmitter and receiver. When the receiver doesn’t accept FOD extensions, it is this contract that the transmitter will abide by once it enters the power transfer state. If FOD extensions are enabled, it enters the negotiation state in an attempt to change the contract and provide higher power. The transmitter’s response lets the receiver know when a request to change a parameter is acceptable. This way both receiver and transmitter agree on the power contract it will use when negotiations are closed.

Once negotiations have ended, the calibration state is entered. The calibration consists of multiple packets containing received power values measured by the receiver while it enables and disables its load (maximum and minimum power requirements). This provides the transmitter with some real use values so it can better determine FOD.
During the power transfer (PT) state, the receiver must send a control error packet every 250 ms, which the transmitter uses to determine its operating (PID) parameters. Meanwhile, received power packets are sent every 1,500 ms. Without this feedback, the transmitter will drop out of the PT state. Other packets can affect the PT state as well—most notably an end power packet. This may be due to a full charge or another safety issue, and the transmitter drops out of the PT state. At this point a receiver can cease communication, and while the transmitter will begin pinging, the receiver can rest indefinitely.
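These feedback timing rules amount to a watchdog check. A minimal sketch, using the 250 ms and 1,500 ms intervals from the text (a real design would add some margin):

```python
def pt_state_ok(now_ms, last_control_error_ms, last_received_power_ms,
                ce_interval_ms=250, rp_interval_ms=1500):
    """True while both feedback packets are arriving on schedule; the
    transmitter drops out of the power transfer state otherwise."""
    return (now_ms - last_control_error_ms <= ce_interval_ms
            and now_ms - last_received_power_ms <= rp_interval_ms)
```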

Sense, Configure, Charge
I’ve found that the Qi receivers with micro USB connectors make it easy to add wireless charging to your phone or tablet. One of these fits inside my Motorola phone with only the smallest bump of the connector on the outside. My Amazon Fire was not so lucky: its receiver had to stay on the outside (Photo 3).

Photo 3
Shown on the left, the Amazon Fire required me to add the Qi receiver to the outside. It’s covered with a very large band-aid. On the right, my Motorola phone had room inside. The only clue is the receiver’s minimally obtrusive micro USB connector.

Adafruit has a module available that has no connector and is not enclosed in a skin (Photo 4). You can see in that photo that a receiver requires very few external components. This one uses a Texas Instruments bq51013B, which costs less than $4. One advantage of this device is that a non-BGA version is available, which appeals to DIYers who want to hand-solder it onto a PCB.

Photo 4
You can see how few components are required on the Adafruit Qi wireless receiver shown here on a wireless transmitter. The voltmeter shows a voltage output of 4.98 V.

I suggest that you use high-strand-count, flexible wire when making connections to this module, because a stiff wire can cause undue stress on the flexible circuitry. I want to use this wireless charge receiver to keep some of my robots charged. To do this, a robot has to ride over the top of a transmitter. The receiver would then recharge the on-board battery or batteries. I’ve chosen to use Li-ion batteries because they have a high energy-to-weight ratio. They also have a relatively flat discharge curve. Unfortunately, a single-cell 3.7-V Li-ion battery is not sufficient to power most motors. Therefore, multiple cells must be used.

When multiple cells are in series the charging becomes an issue because the cells should be charged using a balanced charger to prevent charge imbalance. Charging cells in series as a group cannot prevent the over/under charging of individual cells. This means one of two approaches: Use a single cell and use a boost power converter to obtain your necessary voltage, or use a more complicated multi-cell, balanced charger with a boost converter between the wireless receiver and the charger’s input.
Upon contemplating the pros and cons of each method, I’ve decided to use a modular approach by treating each battery as a separate entity. The simplest charging IC I could find was STMicroelectronics’ STC4054. This is a TSOT23-5L (5-pin) device that requires only one external component to set the charging rate. This is important because some chargers will allow very high currents, and I will be sharing the current for all chargers via one wireless receiver. While these chargers can handle 1 A, if I want to, say, charge four Li-ion batteries in series, I need to limit each charging circuit to 250 mA (250 mA x 4 = 1,000 mA), or I run the risk of overloading the wireless receiver and shutting everything down.

The STC4054 has a charging voltage of 4.2 V with a maximum current set by the resistor you choose from the PROG pin to ground, using the following formula (VPROG is nominally 1 V):

IBAT = 1,000 x (VPROG / RPROG)

Rearranging, we get:

RPROG = 1,000 x (VPROG / IBAT)
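Assuming the common PROG-resistor relationship IBAT = 1,000 x VPROG / RPROG with VPROG nominally 1 V (verify against the STC4054 datasheet), picking the resistor for a 250 mA share is simple arithmetic:

```python
VPROG = 1.0  # nominal PROG-pin servo voltage in volts (assumed; check the datasheet)

def rprog_for(i_bat_amps):
    """Resistor (ohms) from PROG to ground for a desired charge current."""
    return 1000 * VPROG / i_bat_amps

def ibat_for(rprog_ohms):
    """Charge current (amps) for a given PROG resistor."""
    return 1000 * VPROG / rprog_ohms
```

A 4 kΩ resistor gives the 250 mA per-charger share, so four chargers together draw the wireless receiver’s 1 A budget.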

A minimum VCC of 4.25 V is sufficient to sustain a complete charge cycle. Here is a breakdown of the whole charge cycle: If the battery voltage is below 2.9 V, it will be trickle charged at 1/10 of IBAT. Once it reaches 2.9 V, the device enters constant current mode, charging at IBAT. Once the battery reaches 4.2 V, the device switches to constant voltage mode to prevent overcharging. The cycle ends when the current drops to less than 1/10 of IBAT. Should the battery voltage later fall below 4.05 V, a new charging cycle will begin to maintain the battery capacity at a value higher than 80%.
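This charge-cycle breakdown maps cleanly to a mode function. A sketch using the thresholds above (the 4.05 V recharge restart is left out for brevity):

```python
def charge_mode(v_bat, i_now, i_set):
    """Map battery voltage and present current to the charge phase
    described in the text (voltages in volts, currents in amps)."""
    if v_bat < 2.9:
        return "trickle"            # charge at 1/10 of the programmed current
    if v_bat < 4.2:
        return "constant_current"   # charge at the full programmed current
    if i_now > i_set / 10:
        return "constant_voltage"   # hold 4.2 V while the current tapers
    return "done"                   # cycle ends below 1/10 of the programmed current
```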

The STC4054 is thermally protected: it reduces the charging current should the die temperature approach 120°C. The package leads are the main heat conductors from the die, so sufficient copper area on the PCB will help with heat dissipation. The device will max out at 800 mA, but is spec’d to handle 500 mA at 50°C. You can expect stability without additional compensation unless you have long leads to the battery. A 1 µF to 4.7 µF capacitor can be added to the BAT connection if necessary.

The CHRG pin is an open-collector output which can be monitored to indicate when the IC is in the charging state. It can pull down an LED if you want a visual status indicator. This IC will cost you about $1.50. With no voltage applied to the IC, it will go into a power-down mode with a drain of only 17 µA on the battery.

Now this circuit takes care of charging a single battery, and we might have up to four in series. It’s the series part that is the problem, because only the first can have a reference to ground. Since the wireless receiver is designed to produce a 5 V output, this is easily connected to the first charging circuit. We could try to get fancy with a boost converter to get a 20 V output to feed the four chargers with their inputs in series, but this has all kinds of bad karma associated with it. Fortunately, there is a rather inexpensive solution: Use isolated DC-DC converter modules. All converter inputs are in parallel on the wireless charger’s output. Each of the converters’ outputs can be tied to a separate charging circuit. Since each of the converters’ outputs is isolated from its inputs, there is no reference to ground (the minus output of the wireless receiver). That means they can then be used to charge batteries which are connected in series.

The circuit given in Figure 3 shows four charging circuits—each (potentially) using its own isolated 5 VDC to 5 VDC converter. These are available in 1 W to 3 W SIP-style packages and cost from $3 to $10 each. Modules with higher current (greater than 3 W) are available, but the package style changes to DIP. Their inputs are in parallel with a connector meant to go to the wireless charging receiver. There are a lot of jumpers used to select how the outputs of each charging circuit are connected to the output connectors. Each charger can charge one Li-ion battery using a standard two-pin 1S1P (one series cell, one parallel cell) connector. Or you can jumper them in series, which uses the standard connectors for 2S1P, 3S1P and 4S1P (series cells).

Figure 3
This schematic shows four Li-ion cell charging circuits using the STC4054. The input to each IC can come from an optional isolated source when using a DC-DC converter from RECOM. If each of the charging circuits are isolated, they can be applied to separate Li-ion cells in series.

You’ll note that multi-cell Li-ion battery packs usually come with two connectors—one for use and one for charging. The charging connector contains a wire to each battery junction to allow cell-balanced charging (Photo 5). Battery packs that feature balanced charging usually contain the JST XH connector for charging. The power contacts are another story. They may be JST HSNG style, Deans connectors or other specialty types.

Photo 5
There seems to be some standardization on balanced Li-ion cell chargers. They require a common connection plus one wire for each cell, so each cell can be monitored and charged independently. This means any battery pack with more than one cell requires separate connectors for charging and discharging.

Small Bots ‘n Bats
You’ll find plenty of small robot bases using AA batteries with a UNO or some other micro platform as the controller. There is nothing wrong with these inexpensive platforms for educational purposes. With a fresh set of batteries, you will usually have predictable behavior. In rather short order, however, things will begin to go loony. The motor load will begin to affect the controller as the battery voltage dips. Even at 6 V with a low-dropout regulator, the controller operation and any sensors will quickly become unpredictable. This can be a truly frustrating time for the newbie, who searches the code for a logic error that might produce the inappropriate action observed, when actually there may be nothing wrong!

You can save a lot of heartache if you just add an extra battery (or two) to raise the voltage to 7.5 V or, even better, 9 V. I’ve seen kids quickly lose interest or give up entirely simply because they don’t understand what’s happening. I’ve found a better solution is to replace the AAs with a couple of Li-ion 18650-type 3.7 V cells. The 18650 looks like an over-sized AA battery and has similar battery holders (Photo 6). You can expect about 2,000 mA-hours from AA cells. The 18650 Li-ion cells pack about three times the energy, and they can be popped out and recharged in a few hours. Li-ion flat packs can also be used here, but they are not as “universal” as the 18650 single cells.

Photo 6
Replacing four AA cells with two 18650 Li-ion cells can save a lot of head scratching when unexpected behavior is due to battery droop. The AA cells (4 x 1.5 V) do not leave much headroom when a 5 V regulator is used. Not only is the discharge curve of the Li-ion cells (2 x 3.7 V) relatively flat, preventing drops in regulation, but the 18650 packs three times the energy.

It is a good idea to remove batteries from any equipment that will not be used for extended periods of time. Many devices today—like the ones with auto off functions—have parasitic circuits that continue to draw minuscule currents even when “off”. These will continue to draw down your batteries until they are unusable. Even though Li-ion cells have a protective circuit that prevents them from being discharged below a safe level—approximately 2.75 V/cell—this internal circuitry is parasitic and acts as a tiny load. While self-discharge is only a few percent per month, once the cell voltage drops below a critical voltage this circuitry may not allow it to be recharged. So always store a rechargeable in a “charged” state.

Wireless Charging Today
Wireless charging is only in its infancy. Today’s phone chargers are typically less than 5 W, but work is being done on higher-rated equipment. It is proper that these low-power devices have safeguards built in to prevent unwanted catastrophes. We know from the not-too-distant past that, along with higher power density materials, comes the potential for calamity unless the proper safeguards are in place. Public education can limit the misuse and/or abuse of lithium technology, just as it has for the safe handling and use of gasoline.

In order for the electric vehicle to become truly useful, we will need to replenish its range-defining battery charge in fairly short order. This requires extreme infrastructure changes. You can tell by the size of the connectors and cable required for this process that this is high power. The holy grail is for this to happen wirelessly and automatically, whether from a simple pad embedded in the ground where you park your vehicle or from a highway infrastructure that transfers power to your vehicle while you drive. Wireless power transfer is here to stay. Nikola Tesla must be at peace knowing that his work is beginning to bear fruit.

Additional materials from the author are available at: www.circuitcellar.com/article-materials

RESOURCES

Adafruit | www.adafruit.com
RECOM | www.recom-power.com
STMicroelectronics | www.st.com
Texas Instruments | www.ti.com

See the article in the May issue (#334) of Circuit Cellar

Don’t miss out on upcoming issues of Circuit Cellar. Subscribe today!

Note: We’ve made the October 2017 issue of Circuit Cellar available as a free sample issue. In it, you’ll find a rich variety of the kinds of articles and information that exemplify a typical issue of the current magazine.

December Circuit Cellar: Sneak Preview

The December issue of Circuit Cellar magazine is coming next week. Don’t miss this last issue of Circuit Cellar in 2018. Pages and pages of great, in-depth embedded electronics articles prepared for you to enjoy.

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

 

Here’s a sneak preview of December 2018 Circuit Cellar:

AI, FPGAs and EMBEDDED SUPERCOMPUTING

Embedded Supercomputing
Gone are the days when supercomputing levels of processing required huge, rack-based systems in an air-conditioned room. Today, embedded processors, FPGAs and GPUs are able to perform AI and machine learning kinds of operations, enabling new types of local decision making in embedded systems. In this article, Circuit Cellar’s Editor-in-Chief, Jeff Child, looks at the technologies and trends driving embedded supercomputing.

Convolutional Neural Networks in FPGAs
Deep learning using convolutional neural networks (CNNs) can offer a robust solution across a wide range of applications and market segments. In this article written for Microsemi, Ted Marena illustrates that, while GPUs can be used to implement CNNs, a better approach, especially in edge applications, is to use FPGAs that are aligned with the application’s specific accuracy and performance requirements as well as the available size, cost and power budget.

NOT-TO-BE-OVERLOOKED ENGINEERING ISSUES AND CHOICES

DC-DC Converters
DC-DC conversion products must juggle a lot of masters to push the limits in power density, voltage range and advanced filtering. Issues like the need to accommodate multi-voltage electronics, operate at wide temperature ranges and serve distributed system requirements all add up to some daunting design challenges. This Product Focus section updates readers on these technology trends and provides a product gallery of representative DC-DC converters.

Real Schematics (Part 1)
Our magazine readers know that each issue of Circuit Cellar has several circuit schematics replete with lots of resistors, capacitors, inductors and wiring. But those passive components don’t behave as expected under all circumstances. In this article, George Novacek takes a deep look at the way these components behave with respect to their operating frequency.

Do you speak JTAG?
While most engineers have heard of JTAG or have even used it, there’s some interesting background and there are capabilities that are not so well known. Robert Lacoste examines the history of JTAG and looks at clever ways to use it, for example using a cheap JTAG probe to toggle pins on your design, or to read the status of a given I/O without writing a single line of code.

PUTTING THE INTERNET-OF-THINGS TO WORK

Industrial IoT Systems
The Industrial Internet-of-Things (IIoT) is a segment of IoT technology where more severe conditions change the game. Rugged gateways and IIoT edge modules comprise these systems, where the extreme temperatures and high vibrations of the factory floor make for a demanding environment. Here, Circuit Cellar’s Editor-in-Chief, Jeff Child, looks at key technology and product drivers in the IIoT space.

Internet of Things Security (Part 6)
Continuing on with his article series on IoT security, this time Bob Japenga returns to his efforts to craft a checklist to help us create more secure IoT devices. This time he looks at developing a checklist to evaluate the threats to an IoT device.

Applying WebRTC to the IoT
Web Real-time Communications (WebRTC) is an open-source project created by Google that facilitates peer-to-peer communication directly in the web browser and through mobile applications using application programming interfaces. In her article, Callstats.io’s Allie Mellen shows how IoT device communication can be made easy by using WebRTC. With WebRTC, developers can easily enable devices to communicate securely and reliably through video, audio or data transfer.

WI-FI AND BLUETOOTH IN ACTION

IoT Door Security System Uses Wi-Fi
Learn how three Cornell students, Norman Chen, Ram Vellanki and Giacomo Di Liberto, built an Internet connected door security system that grants the user wireless monitoring and control over the system through a web and mobile application. The article discusses the interfacing of a Microchip PIC32 MCU with the Internet and the application of IoT to a door security system.

Self-Navigating Robots Use BLE
Navigating indoors is a difficult but interesting problem. Learn how two Cornell students, Jane Du and Jacob Glueck, used the Received Signal Strength Indicator (RSSI) of Bluetooth Low Energy (BLE) 4.0 chips to enable wheeled, mobile robots to navigate towards a stationary base station. The robot detects its proximity to the station based on the strength of the signal and moves towards what it believes to be the signal source.
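
The full implementation details are in the students' article; as an illustrative sketch (the calibration constants here are hypothetical, not from the project), the standard log-distance path-loss model shows how an RSSI reading can be mapped to an approximate distance:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate distance in meters from an RSSI reading using the
    log-distance path-loss model. tx_power_dbm is the expected RSSI
    at 1 m; the exponent is ~2 in free space, higher indoors."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# At the 1 m reference power the model returns 1.0 m; a reading
# 20 dB weaker implies roughly 10 m in free space.
print(rssi_to_distance(-59.0))
print(rssi_to_distance(-79.0))
```

In practice, indoor multipath makes single readings noisy, which is why approaches like this typically average many samples and steer by the trend in signal strength rather than by absolute distance.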

IN-DEPTH PROJECT ARTICLES WITH ALL THE DETAILS

Sun Tracking Project
Most solar panel arrays are either fixed-position or have a limited field of movement. In this project article, Jeff Bachiochi set out to tackle the challenge of a sun-tracking system that can move your solar array to follow the sun wherever it is. Jeff’s project is a closed-loop system using servos, opto-encoders and the Microchip PIC18 microcontroller.

Designing a Display System for Embedded Use
In this project article, Aubrey Kagan takes us through the process of developing an embedded system user interface subsystem—including everything from display selection to GUI development to MCU control. For the project he chose a 7” Noritake GT800 LCD color display and a Cypress Semiconductor PSoC5LP MCU.

Quad Core i3-Based Type 6 COM Express Board

ADLINK has announced the addition of the quad-core Intel Core i3-8100H processor to its recently released Express-CF COM Express Basic size Type 6 module based on the 8th Generation Intel Core i5/i7 and Xeon processors (formerly Coffee Lake). The Express-CF/CFE is the first COM Express COM.0 R3.0 Basic Size Type 6 module supporting the Hexa-core (6 cores) 64-bit 8th Generation Intel Core and Xeon processor (codename “Coffeelake-H”) with Mobile Intel QM370, HM370, CM246 chipset.

Whereas previous generations of Intel Core i3 processors supported only dual cores with 3 MB cache, the Intel Core i3-8100H is the first in its class to support four CPU cores with 6 MB of cache. This major upgrade results in a more than 80% performance boost in MIPS (million instructions per second) and an almost doubling of memory/caching bandwidth, all at no significant cost increase compared to earlier generations. Intel Core i3 processors are widely recognized as best-value processors and are therefore preferred in high-volume, cost-sensitive applications. They are popular choices in gaming, medical and industrial control.

These Hexa-core processors support up to 12 threads (Intel Hyper-Threading Technology) as well as an impressive turbo boost of up to 4.4 GHz. These combined features make the Express- CF/CFE well suited to customers who need uncompromising system performance and responsiveness in a long product life solution. The Express-CF/CFE has up to three SODIMM sockets supporting up to 48 GB of DDR4 memory (two on top by default, one on bottom by build option) while still fully complying with PICMG COM.0 mechanical specifications. Modules equipped with the Xeon processor and CM246 Chipset support both ECC and non-ECC SODIMMs.

Integrated Intel Generation 9 Graphics includes features such as OpenGL 4.5, DirectX 12/11, OpenCL 2.1/2.0/1.2, Intel Clear Video HD Technology, Advanced Scheduler 2.0, 1.0, XPDM support, and DirectX Video Acceleration (DXVA) support for full H.265/HEVC 10-bit, MPEG2 hardware codec. In addition, High Dynamic Range is supported for enhanced picture color and quality and digital content protection has been upgraded to HDCP 2.2.

Graphics outputs include LVDS and three DDI ports supporting HDMI/DVI/DisplayPort and eDP/VGA as a build option. The Express-CF/CFE is specifically designed for customers with high-performance processing graphics requirements who want to outsource the custom core logic of their systems for reduced development time. In addition to the onboard integrated graphics, a multiplexed PCIe x16 graphics bus is available for discrete graphics expansion.

Input/output features include eight PCIe Gen3 lanes that can be used for NVMe SSDs and Intel Optane memory, giving applications access to the highest-speed storage solutions, plus a single onboard Gbit Ethernet port, USB 3.0 and USB 2.0 ports, and SATA 6 Gb/s ports. Support is provided for SMBus and I2C. The module is equipped with an SPI AMI EFI BIOS with CMOS backup, supporting embedded features such as remote console, hardware monitor and watchdog timer.

ADLINK Technology | www.adlinktech.com

What are the 5 Biggest Myths About Developing Embedded Vision Solutions?

Are embedded vision solutions complex? Expensive? Strictly about software? Get answers to your top questions about developing embedded vision solutions, right from Avnet & Xilinx.


We’re at the moment of truth with embedded vision systems, as scores of new applications mean designs must come together faster than ever, with new technologies dropping every day.

But isn’t embedded vision complex? Lacking scalability? Rigid in its design capability?

Truth be told, most of those ideas are myths. From the development of the first commercially viable FPGA in the 1980s to now, the amount of progress that’s been made has revolutionized the space.

So while it can be complex to decide how you’ll enter an ever-changing embedded vision market, it’s simpler than it used to be. It’s true: Real-time object detection used to be a strictly research enterprise and image processing a solely software play. Today, though, All Programmable devices enable system architects to create embedded vision solutions in record time.

As far as flexibility goes, you’ll find something quite similar. In the past, programming happened on the software side because hardware was preformatted. But FPGAs are more customizable. They contain logic blocks (programmable components and reconfigurable interconnects) that allow the chip itself to be programmed, which allows for more efficiency in power, temperature and design, all without the need for an additional OS.

Ready to bust some more myths around embedded vision? Watch our video breaking down the five biggest myths around embedded vision development.

WATCH NOW >

Low-Profile Mini-ITX System Targets Signage

AAEON has released the ACS-1U01 Series, a range of turnkey solutions that capitalize on the strength of three of its bestselling SBCs. By enclosing the boards inside a tough 1U chassis, the unit provides a ready-to-go system for use in a variety of applications including digital signage as well as industrial automation, POS, medical equipment and transportation.

The three models—the ACS-1U01-BT4, ACS-1U01-H110B, and ACS-1U01-H81B—feature a tough, 44.45 mm-high chassis with a wallmount kit and 2.5” HDD tray. The low-profile, low-power-consumption systems have full Windows and Linux support, they can be expanded via full- and half-size Mini-Card slots, and heatsinks give them operating temperature ranges of 0°C to 50°C.

The ACS-1U01-BT4 houses AAEON’s EMB-BT4 motherboard, which can be fitted with either an Intel Atom J1900 or N2807 processor. The J1900 can be used with a pair of DDR3L SODIMM sockets for up to 8 GB dual-channel memory, while the N2807 can be used with a single DDR3L SODIMM socket. The board’s extensive I/O interface provides the system with a GbE LAN port, dual independent HDMI and VGA displays, a USB3.0 port, up to seven USB2.0, and up to six COM ports.

The ACS-1U01-H110B contains AAEON’s EMB-H110B, which is built to accommodate up to 65W 6th/7th Generation Intel Core i Series socket-type processors and supports up to 32GB dual-channel memory via a pair of DDR4 SODIMM sockets. Dual independent display is possible through two HDMI ports, or the option of DP connections. The system also features a GbE LAN port, four USB3.0 ports, four USB2.0 ports, and a COM port.

The ACS-1U01-H81B is built around AAEON’s EMB-H81B, which is designed for 4th Generation Intel Core i Series socket-type processors with TDPs of up to 65W. Two SODIMM sockets allow for up to 16GB dual-channel DDR3 memory, and HDMI, DP, and optional VGA ports enable dual independent display. The system has two GbE LAN ports, two USB3.0 ports and six USB2.0 ports.

AAEON | www.aaeon.com

Benchmarks for the IoT

Input Voltage

–Jeff Child, Editor-in-Chief

JeffHeadShot

I remember quite vividly back in 1997 when Marcus Levy founded the Embedded Microprocessor Benchmark Consortium, better known as EEMBC. It was a big deal at the time because, while benchmarks were common in the consumer computing world of desktop/laptop processors, no one had ever crafted any serious benchmarks for embedded processors. I was an editor covering embedded systems technology at the time, and Marcus, as an editor with EDN Magazine back then, traveled in the same circles as I did. On both the editorial side and on the processor vendor side, he had enormous respect in the industry—making him an ideal person to spin up an effort like EEMBC.

Creating benchmarks for embedded processors was more complicated than for general-purpose processors, but EEMBC was up to the challenge. Fast forward to today, and EEMBC now boasts a rich list of performance benchmarks for the hardware and software used in a variety of applications, including autonomous driving, mobile imaging, mobile devices and many others. In recent years, the group has taken on the complex challenge of developing benchmarks for the Internet-of-Things (IoT).

I recently had the chance to talk with EEMBC’s current president, Peter Torelli, about the consortium’s latest effort: its IoTMark-BLE benchmark. It’s part of the EEMBC’s IoTMark benchmarking suite for measuring the combined energy consumption of an edge node’s sensor interface, processor and radio interface. IoTMark-BLE focuses on Bluetooth Low Energy (BLE) devices. In late September, EEMBC announced that the IoTMark-BLE benchmark is available for licensing.

The IoTMark-BLE benchmark profile models a real IoT edge node consisting of an I²C sensor and a BLE radio through sleep, advertise and connected-mode operation. The benchmark measures the energy required to power the edge node platform and to run the tests fed by the benchmark. At the center of the benchmark is the IoTConnect framework, a low-cost benchmarking harness used by multiple EEMBC benchmarks. The framework provides an external sensor emulator (the I/O Manager), a BLE gateway (the radio manager) and an Energy Monitor.

Benchmark users interact with the DUT via an interface with which they can set a number of tightly defined parameters, such as connection interval, I²C speed, BLE transmission power and more. Default values are provided to enable direct comparisons between DUTs, or users can change them to analyze a design’s sensitivity to each parameter. IoTMark-BLE’s IoTConnect framework supports microcontrollers (MCUs) and radio modules from any vendor, and it is compatible with any embedded OS, software stack or OEM hardware.
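
EEMBC defines the benchmark's exact methodology; conceptually, though, what an energy monitor reports reduces to integrating measured current over the device's duty cycle. A minimal sketch, with hypothetical current and timing values:

```python
def energy_joules(samples, v_supply=3.0):
    """Integrate (current_amps, duration_s) samples into energy:
    E = sum(V * I * dt)."""
    return sum(v_supply * i_amps * dt for i_amps, dt in samples)

# Hypothetical duty cycle: 1 ms connected at 5 mA, then 99 ms
# asleep at 2 uA, repeated every 100 ms.
samples = [(5e-3, 1e-3), (2e-6, 99e-3)]
print(energy_joules(samples))  # roughly 1.56e-5 J per cycle
```

Note how the brief active burst dominates the total even though it occupies only 1% of the cycle; that is exactly the kind of stack-dependent behavior Torelli describes below.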

It makes sense that IoT benchmarks focus on power and energy use. IoT edge devices need to work in remote locations near the sensors they’re linked with. With that in mind, Peter Torelli says that the benchmark measures everything inside an IoT system-on-chip (SoC), including the peripheral I/O reading from the I²C sensor and the transmit and receive amplifiers in the BLE radio; everything, that is, except the sensor itself. Torelli says it was important not to use intelligent sensors for the benchmark, the idea being that the MCU’s role in performing communication should be part of the measurement. Interestingly, in developing the benchmark, it was found that even the software stacks on IoT SoCs have a big impact on performance. “Some are very efficient when they’re in advertise mode or in active mode, and then go to sleep,” says Torelli. “And there are others that remain active for much longer times and burn a lot of power.”

Shifting gears, I want to take a moment to praise longtime columnist and member of the Circuit Cellar family, Ed Nisley. Over 30 years ago, Steve Ciarcia asked Ed to write a regular column for the brand-new Circuit Cellar INK magazine. After an even 200 articles, Ed decided to make his September column his last. Thank you, Ed, for your many years of insightful, quality work in the pages of this magazine. You’ll be missed. Readers can follow Ed’s continuing series of shop notes, projects and curiosities on his blog at softsolder.com.

Let me welcome Brian Millier as our newest Circuit Cellar columnist; his column Picking Up Mixed Signals begins this issue. Brian is no stranger to the magazine, having penned over 50 guest features since the mid-’90s on a variety of topics, including guitar amplifier electronics, IoT system design, LCDs and many others. I’m thrilled to have Brian joining our team. With his help, we promise to continue fulfilling Circuit Cellar’s role as the leading media platform aimed at inspiring the evolution of embedded system design.

This appears in the November 340 issue of Circuit Cellar magazine

Not a Circuit Cellar subscriber?  Don’t be left out! Sign up today:

Discover Unknown PCB Design Issues with DRC

FREE White Paper –
This paper addresses several of the pervasive myths within the PCB verification market, such as the need for post-layout PCB verification on high-speed designs only. Additionally, it discusses how a designer can seamlessly integrate with the PCB design process to find issues that are often missed by current PCB verification methods.

Get your copy – here

New IDE Version Shrinks Arm MCU Executable Program Sizes

After a successful beta period, Segger Microcontroller has added the new Linker and Link-Time Optimization (LTO) to the latest release build of their powerful cross-platform integrated development environments, Embedded Studio for ARM and Embedded Studio for Cortex-M.

The new product versions deliver on the promise of program size reduction, achieving a significant 5% to 12% reduction over the previous versions on typical applications, and even higher gains compared to conventional GCC toolchains. These savings are the result of the new LTO, combined with Segger’s Linker and run-time library emLib-C. Through LTO, it is possible to optimize the entire application, opening the door to optimization opportunities that are simply not available to the compiler alone.

The Linker adds features such as compression of initialized data and deduplication, as well as the flexibility of dealing with fragmented memory maps that embedded developers have to cope with. Like all Segger software, it is written from scratch for use in deeply embedded computing systems. Additionally, the size required by the included runtime library is significantly lower than that of runtime libraries used by most GCC tool chains.

Segger Microcontroller | www.segger.com

MCU Family Serves Up Ultra-Low Power Functionality

STMicroelectronics has released its STM32L0x0 Value Line microcontrollers, which provide an additional, low-cost entry point to the STM32L0 series. The MCUs embed the Arm Cortex-M0+ core. With up to 128 KB flash memory, 20 KB SRAM and 512 bytes of true embedded EEPROM on-chip, the MCUs save external components to cut down on board space and BOM cost. In addition to price-sensitive and space-constrained consumer devices such as fitness trackers, computer or gaming accessories and remotes, the new STM32L0x0 Value Line MCUs are well suited for personal medical devices, industrial sensors, and IoT devices such as building controls, weather stations, smart locks, smoke detectors and fire alarms.

The devices leverage ST’s power-saving low-leakage process technology and device features such as a low-power UART, a low-power timer, a 41 µA 10 ksample/s ADC and wake-up from power saving in as little as 5 µs. Designers can use these devices to achieve goals such as extending battery runtime without sacrificing product features, increasing wireless mobility, or endowing devices like smart meters or IoT sensors with up to 10-year battery life, leveraging the ultra-frugal 670 nA power-down current with RTC and RAM retention.
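
As a back-of-the-envelope check on the 10-year figure (the battery capacity and duty cycle below are assumptions for illustration, and battery self-discharge is ignored):

```python
def battery_life_years(capacity_mah, i_sleep_ua, i_active_ua, active_fraction):
    """Average the sleep/active duty cycle into one current draw,
    then convert battery capacity into years of operation."""
    i_avg_ua = i_active_ua * active_fraction + i_sleep_ua * (1 - active_fraction)
    hours = (capacity_mah * 1000.0) / i_avg_ua
    return hours / (24 * 365)

# Hypothetical: 220 mAh coin cell, 0.67 uA power-down current,
# 41 uA while sampling the ADC, active 0.1% of the time.
# The result lands well beyond the quoted 10-year target.
print(battery_life_years(220, 0.67, 41.0, 0.001))
```

The sleep current dominates at such low duty cycles, which is why the 670 nA power-down figure, rather than the active current, is the headline specification.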

The Keil MDK-ARM professional IDE supports STM32L0x0 devices free of charge, and the STM32CubeMX configuration-code generator provides easy-to-use design analysis including a power-consumption calculator. A compatible Nucleo-64 development board (NUCLEO-L010RB) with Hardware Abstraction Layer (HAL) library is already available, to facilitate fast project startup.

The STM32L0x0 Value Line comprises six new parts, giving a choice of 16 KB, 64 KB or 128 KB of flash memory; 128-byte, 256-byte or 512-byte EEPROM; and various package options. In addition, pin compatibility with the full STM32 family of more than 800 part numbers, offering a wide variety of core performance and integrated features, allows design flexibility and future scalability, with the freedom to leverage existing investment in code, documentation and tools.

STM32L0x0 Value Line microcontrollers are in production now, priced from $0.44 with 16 KB of flash memory and 128-byte EEPROM for orders of 10,000 pieces. Unit pricing starts at $0.32 for high-volume orders.

STMicroelectronics | www.st.com

New CPU Core Boosts Performance for Renesas MCUs

Renesas Electronics has announced the development of its third-generation 32-bit RX CPU core, the RXv3. The RXv3 CPU core will be employed in Renesas’ new RX microcontroller families that begin rolling out at the end of 2018. The new MCUs are designed to address the real-time performance and enhanced stability required by motor control and industrial applications in next-generation smart factory, smart home and smart infrastructure equipment.

The RXv3 core boosts CPU core architecture performance with up to 5.8 CoreMark/MHz, as measured by EEMBC benchmarks, to deliver industry-leading performance, power efficiency and responsiveness. The RXv3 core is backwards compatible with the RXv2 and RXv1 CPU cores in Renesas’ current 32-bit RX MCU families. Binary compatibility using the same CPU core instruction sets ensures that applications written for the previous-generation RXv2 and RXv1 cores carry forward to the RXv3-based MCUs. Designers working with RXv3-based MCUs can also take advantage of the robust Renesas RX development ecosystem to develop their embedded systems.

The RX CPU core combines a design optimized for power efficiency with a fabrication process that delivers excellent performance. The new RXv3 CPU core is primarily a CISC (Complex Instruction Set Computer) architecture, which offers significant advantages over RISC (Reduced Instruction Set Computer) architectures in terms of code density, while its pipeline delivers instructions-per-cycle (IPC) performance comparable to RISC. The new RXv3 core builds on the proven RXv2 architecture with an enhanced pipeline, options for register-bank save functions and double-precision floating-point unit (FPU) capabilities to achieve high computing performance along with power and code efficiency.

The enhanced RX core five-stage superscalar architecture enables the pipeline to execute more instructions simultaneously while maintaining excellent power efficiency. The RXv3 core will enable the first new RX600 MCUs to achieve 44.8 CoreMark/mA with an energy-saving cache design that reduces both access time and power consumption during on-chip flash memory reads, such as instruction fetch.
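
Renesas does not state a clock frequency in this announcement, but the two efficiency figures can be combined under an assumed clock to see what they imply:

```python
def coremark_score(coremark_per_mhz, clock_mhz):
    """Total CoreMark score from per-MHz efficiency and clock speed."""
    return coremark_per_mhz * clock_mhz

def run_current_ma(score, coremark_per_ma):
    """Run current implied by a CoreMark/mA efficiency figure."""
    return score / coremark_per_ma

# Assumed 120 MHz clock (hypothetical; not given in the announcement):
# 5.8 CoreMark/MHz -> 696 CoreMark, and at 44.8 CoreMark/mA that
# implies roughly 15.5 mA of run current.
score = coremark_score(5.8, 120)
print(score, run_current_ma(score, 44.8))
```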

The RXv3 core achieves significantly faster interrupt response times with a new option for single-cycle register saves. Using a dedicated instruction and a save register bank with up to 256 banks, designers can minimize the interrupt handling overhead required for embedded systems operating in real-time applications such as motor control. RTOS context switch time is up to 20 percent faster with the register bank save function.

As the model-based development (MBD) approach penetrates various application areas, the double-precision FPU helps reduce the effort of porting high-precision control models to the MCU. Similar to the RXv2 core, the RXv3 core performs DSP/FPU operations and memory accesses simultaneously to substantially boost signal processing capabilities.

Renesas plans to start sampling shipments of RXv3-based MCUs before the end of Q4 2018.

Renesas Electronics | www.renesas.com

Low-Cost Flash MCU Eyes IoT Edge Applications

NXP Semiconductors has launched the LPC5500, which the company claims is the industry’s first microcontroller platform with single- and dual-core Arm Cortex-M33 and Arm TrustZone technology. Built on a low-power 40 nm embedded flash process, the LPC5500 MCU achieves 32 µA/MHz efficiency at up to 100 MHz core clock frequency. It also provides dual-core Cortex-M33 capability with additional tightly coupled accelerators for signal processing and cryptography, and up to 640 KB flash and 320 KB on-chip SRAM for advanced edge applications.
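
That efficiency figure translates directly into run current; a quick sketch of the arithmetic:

```python
def active_current_ma(ua_per_mhz, clock_mhz):
    """Core run current implied by an efficiency figure in uA/MHz."""
    return ua_per_mhz * clock_mhz / 1000.0

# 32 uA/MHz at the full 100 MHz clock works out to 3.2 mA,
# before adding radio or peripheral current.
print(active_current_ma(32, 100))
```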

The LPC55S69 integrates a 16-bit successive-approximation ADC (SAR ADC) with differential pair mode and a rich set of peripherals for system expansion, including a 50 MHz high-speed SPI, a high-speed USB port with integrated physical transceiver, eight flexible communication interfaces and dual SDIO interfaces for concurrent Wi-Fi connection and external data logging. NXP’s autonomous programmable logic unit, for offloading and execution of user-defined tasks, provides enhanced real-time parallelism.

One of the key features of the Cortex-M33 is its dedicated co-processor interface, which extends the processing capability of the CPU by allowing efficient integration of tightly coupled co-processors while maintaining full ecosystem and toolchain compatibility. NXP has utilized this capability to implement a co-processor for accelerating key ML and DSP functions, such as convolution, correlation, matrix operations, transfer functions and filtering, enhancing performance by as much as 10x compared to executing on the Cortex-M33 alone. The co-processor further leverages the popular CMSIS-DSP library calls (API) to simplify customer code portability.

Integrated benchmark security features include secure boot with an immutable hardware ‘root of trust,’ SRAM PUF-based unique key storage, certificate-based secure debug authentication, AES-256 and SHA2-256 acceleration, and a DICE security standard implementation for secure cloud-to-edge communication. Public key infrastructure (PKI), or asymmetric cryptography, is further accelerated by a dedicated asymmetric accelerator for ECC and RSA algorithms.

The LPC5500 MCU series features pin-, software- and peripheral compatibility across seven distinct families, with varying levels of functionality. The lead device family is enabled with LPC55S69-EVK, an evaluation board supported by NXP’s MCUXpresso Integrated Development Environment (IDE) and comprehensive software development kit which includes peripheral drivers, security and connectivity middleware, Amazon FreeRTOS based demos, and Arm TrustZone based security examples. Partner tools from Arm Keil MDK, IAR Embedded Workbench, Segger and others have been enabled to support LPC55S69-EVK.

NXP is sampling LPC55S69 development boards and 100-pin LQFP packages, with the associated MCUXpresso-based software development kit, through NXP field sales representatives. Direct-to-customer sampling on the NXP eCommerce platform is expected by the end of 2018. Volume production commences in Q1 2019. Devices within the LPC55S6x family start at a per-unit price of $1.99 for 256 KB flash and $2.49 for 640 KB flash, in 10,000-unit quantities.

NXP Semiconductors | www.nxp.com

Cypress Semi Teams with Arm for Secure IoT MCU Solution

Cypress Semiconductor has expanded its collaboration with Arm to provide management of IoT edge nodes. The solution integrates the Arm Pelion IoT Platform with Cypress’ low-power, dual-core PSoC 6 microcontrollers (MCUs) and CYW4343W Wi-Fi and Bluetooth combo radios. PSoC 6 provides Armv7-M hardware-based security that adheres to the highest level of device protection defined by the Arm Platform Security Architecture (PSA).

Cypress and Arm demonstrated hardware-secured onboarding and communication through the integration of the dual-core PSoC 6 MCU and the Pelion IoT Platform in the Arm booth at Arm TechCon last month. In the demo, the PSoC 6 was running Arm’s PSA-defined Secure Partition Manager, which will be supported in version 5.11 of the Arm Mbed OS open-source embedded operating system, available this December. Embedded systems developers can leverage the private key storage and hardware-accelerated cryptography in the PSoC 6 MCU for cryptographically secured lifecycle management functions, such as over-the-air firmware updates, mutual authentication, and device attestation and revocation. According to the company, Cypress is making a strategic push to integrate security into its compute, connect and store portfolio for the IoT.

The PSoC 6 architecture is built on ultra-low-power 40-nm process technology, and the MCUs feature low-power design techniques to extend battery life up to a full week for wearables. The dual-core Arm Cortex-M4 and Cortex-M0+ architecture lets designers optimize for power and performance simultaneously. Using its dual cores combined with configurable memory and peripheral protection units, the PSoC 6 MCU delivers the highest level of protection defined by the Platform Security Architecture (PSA) from Arm.

Designers can use the MCU’s software-defined peripherals to create custom analog front-ends (AFEs) or digital interfaces for innovative system components such as electronic-ink displays. The PSoC 6 MCU features the latest generation of Cypress’ industry-leading CapSense capacitive-sensing technology, enabling modern touch and gesture-based interfaces that are robust and reliable.

Cypress Semiconductor | www.cypress.com