The Future of Embedded FPGAs

The embedded FPGA is not new, but only recently has it started to become a mainstream solution for designing chips, SoCs, and MCUs. A key driver is the high mask cost of today's advanced ICs. For a chip company designing at advanced nodes, a change in RTL can cost millions of dollars and set the design schedule back by months. Another driver is constantly changing standards. The embedded FPGA is compelling because it gives designers the flexibility to update RTL at any time after fabrication, even in-system. Chip designers, management, and even the CFO like it.

Given these benefits, the embedded FPGA is here to stay. However, like any technology, it will evolve to become better and more widespread. Looking back to the 1990s when ARM and others offered embedded processor IP, the technology evolved to where embedded processors appear widely on most logic chips today. This same trend will happen with embedded FPGAs. In the last few years, the number of embedded FPGA suppliers has increased dramatically: Achronix, Adicsys, Efinix, Flex Logix, Menta, NanoXplore, and QuickLogic. The first sign of market adoption was DARPA’s agreement with Flex Logix to provide TSMC 16FFC embedded FPGA for a wide range of US government applications. This first customer was critical as it validated the technology and paved the way for others to adopt.

There are a number of things driving the adoption of the embedded FPGA:

  • Mask costs are increasing rapidly: approximately $1 million for 40 nm, $2 million for 28 nm, and $4 million for 16 nm.
  • The size of the design teams required for advanced-node designs is increasing. Fewer chips are being designed, but each is expected to deliver the same range of functions as before.
  • Standards are constantly changing.
  • Data centers require programmable protocols.
  • AI and machine learning algorithms are evolving rapidly.

Surprisingly, embedded FPGAs don’t compete with FPGA chips. FPGA chips are used for rapid prototyping and lower-volume products that can’t justify the increasing cost of ASIC development. When systems with FPGAs hit high volume, FPGAs are generally converted to ASICs for cost reduction.

In contrast, an embedded FPGA eliminates the need for an external FPGA, and it can do things external FPGAs can't, such as:

  • They are lower power because no SERDES are needed. Standard CMOS interfaces can run at 1 GHz+ in 16 nm, with hundreds or thousands of interconnects available.
  • Embedded FPGA is lower cost per LUT. There is no expensive packaging, and the SERDES, PLLs, DDR PHYs, and similar blocks that consume roughly one-third of an FPGA chip's die area are no longer needed.
  • 1-GHz operation in the control path
  • Embedded FPGAs can be optimized: lots of MACs (Multiplier-Accumulators) for DSP or none; exactly the kind of RAM needed or none.
  • Array sizes range from tiny embedded FPGAs of just 100 LUTs to very large arrays of more than 100K LUTs.
  • Embedded FPGAs can be optimized for very low power operation or very high performance.

The following markets are likely to see widespread utilization of embedded FPGAs: the Internet of Things (IoT); MCUs and customizable programmable blocks on the processor bus; defense electronics; networking chips; reconfigurable wireless base stations; flexible, reconfigurable ASICs and SoCs; and AI and deep learning accelerators.

To integrate embedded FPGAs, chip designers need them to have the following characteristics:

  • Silicon-proven IP
  • Density in LUTs per square millimeter similar to FPGA chips
  • A wide range of array sizes, from hundreds of LUTs to hundreds of thousands of LUTs
  • Options for extensive DSP support and for exactly the kind of RAM a customer needs
  • IP proven in the process node a company wants, with support for its chosen VT options and metal stack
  • An IP implementation optimized for power or performance
  • Proven software tools

Over time, embedded FPGA IP will be available on every significant foundry from 180 to 7 nm supporting a wide range of applications. This means embedded FPGA suppliers must be capable of cost-effectively “porting” their architecture to new process nodes in a short time (around six months). This is especially true because process nodes keep getting updated over time and each major step requires an IP redesign.

Early adopters of embedded FPGA will have chips with wider market potential, longer life, and higher ROI, giving designers a competitive edge over late adopters. Similar benefits will accrue to systems designers. Clearly, this technology is changing the way chips are designed, and companies will soon learn that they can’t afford to “not” adopt embedded FPGA.

This article appears in Circuit Cellar 323.

Geoff Tate is CEO/Cofounder of Flex Logix Technologies. He earned a BSc in Computer Science from the University of Alberta and an MBA from Harvard University. Prior to cofounding Rambus in 1990, Geoff served as Senior Vice President of Microprocessors and Logic at AMD.

Vesper VM1010 MEMS Microphones

Vesper recently launched the VM1010, the first wake-on-sound MEMS microphone that consumes nearly zero power. It allows consumers to voice-activate battery-powered smart speakers, smart earbuds, and TV remotes without draining the battery. The inaugural member of Vesper's ZeroPower Listening MEMS microphone product line, the VM1010 is a tiny, ultra-rugged piezoelectric MEMS microphone that enables you to offer touchless user interfaces to consumers without any power-consumption penalty.

Consuming a mere 6 µA while in listening mode, Vesper's VM1010 extends battery life to months or years by enabling the rest of the system to power down completely while waiting for a keyword. That is a major advantage for designers, who can create an entirely new class of rugged, battery-operated, voice-interface devices that work anywhere.

The VM1010 is a low-noise, high-dynamic-range, single-ended analog output piezoelectric MEMS microphone. It features a configurable voice zone, allowing a voice within a radius of 5 to 20 feet to trigger the system and bring it into normal operation mode. When the environment is quiet, the device can enter ZeroPower Listening mode and the entire system can power down.

Like other Vesper piezoelectric MEMS microphones, the VM1010 sets the standard for reliability and stability, even in harsh environments. It is dustproof to IP5X and waterproof to IPX7. Because it is stable in all environments, the VM1010 microphones are also ideally suited to microphone arrays, which are critical to far-field audio applications. Drawing a mere 6 µA of power in always-listening mode, the VM1010 extends battery life 10× or more.

Vesper’s VM1010 is currently sampling and is available online through Digi-Key. Test boards and reference design boards using VM1010, DSP Group’s DSPG DBMD6 and Sensory’s Truly Handsfree wake word algorithms are available from Vesper via an exclusive invitation-only program.

Vesper’s MEMS microphones represent a radical shift from the capacitive MEMS microphones that are shipping by the hundreds of millions in smartphones, hearables, smart speakers, Internet of Things devices and connected cars. Vesper’s piezoelectric design is waterproof, dustproof, particle-resistant and shockproof. Piezoelectric MEMS microphones make voice-interface devices practical in any environment, and they are also ideally suited for far-field applications such as microphone arrays.

Source: Vesper

New Intel Core X-Series Processors and Thunderbolt 3

During the annual Computex 2017 event, Intel unveiled its new Intel Core X-series processor family with 4 to 18 cores, which now includes the new Intel Core i9 Extreme Edition processor, the first consumer desktop CPU with 18 cores and 36 threads. Intel announced plans to integrate Thunderbolt 3 into all future Intel CPUs and to release the Thunderbolt protocol specification to the industry.

With Intel focusing its attention on competing with ARM and stating that it wants to look beyond the PC, the world of computing has been stalling, and no significant gains in processor performance have been announced. The result was disastrous for Windows PC makers, which, among other things, also failed to adopt newer connectivity standards such as Thunderbolt 3 and USB-C. Apple was affected as well, going almost three years without a single upgrade to its popular Mac mini, iMac desktop, and Mac Pro computers. The news from Intel that a new generation of processors is finally coming will bring some hope to the industry, including the many audio professionals who use computers and workstations and need all the memory, storage, and power they can get.

Intel introduced the new Intel Core X-series processor family, which it says is the most scalable, accessible, and powerful desktop platform it has ever created. Good! The new Intel Core X-series processor family spans from 4 to 18 cores, with price points to match, and includes Intel's first teraflop desktop CPUs. The family also introduces the new Intel Core i9 processors, representing the highest performance for extreme workloads and extreme mega-tasking. Good! The new Intel Core i9 Extreme Edition processor is the first consumer desktop CPU with 18 cores and 36 threads. An industry first, its performance capabilities will finally enable data-intensive tasks like VR content creation and heavy data visualization.

Another announcement was the Intel Compute Card, a modular computing platform with all the elements of a full computer in a size just larger than a credit card. According to Intel, the Compute Card will start shipping in August 2017 and will allow devices outside of PCs to be connected, integrating compute into everything from smart screens to interactive appliances to VR headsets. Intel Partners who have products showing at Computex include Contec, ECS, Foxconn, LG Display, MoBits Electronics, NexDock, Sharp, Seneca, SMART Technologies, Suzhou Lehui Display and TabletKiosk. Other partners currently working on solutions include Dell, HP and Lenovo.

The Intel Compute Card will initially be available in four versions, with 7th Gen Intel Core i5 vPro or i3 processors, as well as Pentium N4200 and Celeron N3450 processors. All will feature 4 GB of DDR3 memory, a 128-GB SSD or 64 GB of eMMC storage, and support for Wi-Fi 802.11ac and Bluetooth 4.2. In addition, HTC announced a Compute Card-based VR device also using Intel WiGig technology.

Thunderbolt 3

On what is possibly the most interesting front for computing, outside of pure processing power, Intel announced plans to integrate Thunderbolt 3 into all future Intel CPUs and to release the Thunderbolt protocol specification to the industry.

Intel has a long history of leading the industry in I/O innovation. In the mid-1990s, Intel helped develop USB, which made it easier and faster to connect external devices to computers, consolidating a multitude of existing connectors. Intel continued this effort with Thunderbolt 3, one of the most significant cable I/O updates since the advent of USB.

Intel’s vision for Thunderbolt was not just to make a faster computer port, but a simpler and more versatile port available to everyone, allowing for single-cable docks with 4K video support, unlimited and faster-than-ever storage, and external graphics accelerator engines. A world where one USB-C connector does it all – today, and for many years to come.

With this vision in mind, Intel now announced that it plans to drive large-scale mainstream adoption of Thunderbolt by integrating Thunderbolt 3 into future Intel CPUs and by releasing the Thunderbolt protocol specification to the industry next year, under a nonexclusive, royalty-free license. Releasing the Thunderbolt protocol specification in this manner is expected to greatly increase Thunderbolt adoption by encouraging third-party chip makers to build Thunderbolt-compatible chips.

Microsoft has also enhanced Thunderbolt 3 device plug-and-play support in the now available Windows 10 Creators Update. Intel and Microsoft plan to continue to work together to enhance the experience in future versions of the Windows operating system.

In addition to support from Apple and Microsoft, Thunderbolt 3 has already gained significant adoption with more than 120 PC designs on systems with 7th Generation Intel Core processors, the latest MacBook Pros and dozens of peripherals – expected to ramp to nearly 150 by the end of 2017.

Source: Intel


Transducer Class Multi-Grid Strain Sensors for Multi-Axis Force, Axial, and Torsional Load Measurements

Vishay Precision Group's Micro-Measurements brand recently introduced the S5060 Series of transducer class multi-grid advanced strain sensors. Designed for accurate, cost-effective multi-axis force, torque/axial and torsional load measurements, the Series is well suited for a wide variety of applications, including robotics, factory automation, machinery, materials testing, and more.

The S5060 Series’s features, specs, and benefits:

  • Incorporates proprietary Advanced Sensors Technology
  • When the sensors are installed in pairs, the circuitry pattern allows both a full-torsion bridge and a full-Poisson bridge to be constructed with just two strain sensors.
  • The alignment of a single pair automatically aligns all other grids installed on the common backing
  • The number of circuit refinements required for initial zero balance, as well as for zero-balance temperature compensation, is further reduced by improvements in resistance tolerance (±0.2%) and grid-to-grid thermal performance matching specifications.

Sample and production quantities are now available. Prototype sensors can be produced and delivered within six weeks, with standard volumes available in 10 weeks.

Source: Micro-Measurements

Vintage Programming Languages

For the last 30 years, C has been my programming language of choice. As you probably know, C was invented in the early 1970s by Dennis M. Ritchie for the first UNIX kernel and ran on a DEC PDP-11 computer. I am probably a bit old-fashioned. Yes, C is outdated, but I’m simply addicted to it, like plenty of other embedded system programmers. For me, C is a low level but portable language that’s adequate for all my professional and personal projects ranging from optimized code on microcontrollers to signal processing or even PC software. I know that there are many powerful alternatives like Java and C++, but, well, I’m used to C.

C is not the only vintage programming language, and playing with some of the others is definitely fun. This month, I'll present several vintage languages and show you that each has its pros and cons. Maybe you'll find one of them helpful for a future project? I'm sure you won't use COBOL in your next device, but what about FORTH or LISP? As you'll see, thanks to web-based compilers and simulators, playing with programming languages is simple. And after you've finished this review of 1970s-era computing technology, give one or two a try!


Like many teenagers in the 1970s, I learned to program with Beginner's All-purpose Symbolic Instruction Code (BASIC). In 1980, after some early tests with programming calculators, a friend let me try a Rockwell AIM-65 computer. An expanded version of the KIM-1, it had an impressive 1 KB of RAM and a BASIC interpreter in ROM. It was my first contact with a high-level programming language. I was really astonished. This computer seemed to understand me! “Print 1+1.” “Ok, that’s 2.” One year later, I bought my first computer, an Apple II. It came with a much more powerful BASIC interpreter in ROM, Applesoft BASIC. (This interpreter was developed for Apple by a small company named Microsoft, but that’s another story.)

PHOTO 1: An online emulator for my old Apple II


Now let’s launch an Apple II emulator and write some software for it. Look at Photo 1. Nice, isn’t it? This pretty emulator, developed in JavaScript by Will Scullin, is available online. Just launch it, enter this 10-line program, and then type “RUN”. It will calculate for you the factorial of eight: 8! = 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8, which is 40,320.

Since its invention in 1964 at Dartmouth College, BASIC has been more of a concept than a well-specified language. Plenty of variants exist, up to Microsoft's Visual Basic. It has plenty of disadvantages, especially in its early versions: a lack of structured data and controls, mandatory line numbering, a lack of type checking, low speed, and so on. Nevertheless, it is ultra-simple to learn and understand. Even if you have never used BASIC, you'll understand the code shown in Photo 1 without any problem. The main program starts by initializing a variable N with the value 8. It then calls a subprogram that starts at line 100, displays the result F, and stops. The subprogram initializes F to 1 and multiplies it by each integer up to N. Straightforward.


Let's compare this BASIC program with a C version of the same algorithm. For this article, I looked for online compilers and simulators, and I found a great option, developed by Sphere Research Labs, that supports more than 60 programming languages. You can edit a program in any of them, compile it, and test it without having to install anything on your PC. This is great for experimenting.

PHOTO 2: This online tool lets you enter, compile, and simulate numerous programming languages. Here you see C language.


The C variant of the factorial algorithm is depicted in Photo 2. I could have used plenty of different approaches, but I tried to stay as close as possible to the “spirit” of C. So, how does it compare with BASIC? The code is significantly more structured, but a little harder to read. C aficionados love short forms like f*=i++ (which multiplies f by i and then increments i) even when they can be avoided. While this makes the code shorter and helps the compiler with optimization, it is probably cryptic to someone new to the language.

Of course, C also has great strengths. In particular, it offers precise control of data types and memory representation, which helps for low-level programming. That's probably why it has been used so widely for nearly 50 years.


Let’s stay in the 1970s. BASIC or assembly language was for hobbyists and experimenters. C was used by early UNIX programmers. The rest of the programming world was divided into two camps. Scientists used FORTRAN. Business programmers used COBOL.

FORTRAN (from FORmula TRANslation) was actually the first high-level programming language. Developed by an IBM team led by John Backus, the first version of FORTRAN was released in 1957 for the IBM 704 computer. It was followed by several incremental improvements: Fortran 66 (1966), Fortran 77, and Fortran 90, all the way up to Fortran 2008. Refer to Listing 1 for the factorial program using FORTRAN 77.

LISTING 1: This is the factorial program using FORTRAN 77.


It seems close to BASIC, right? That's not a surprise, as BASIC was in fact based on concepts from FORTRAN and from another, now-disappeared language, ALGOL. I'm sure you can read and understand the FORTRAN in Listing 1, but its equivalent in COBOL is a bit stranger (see Listing 2). I must admit that it took me some time to make it work, even after reading some COBOL tutorials on the web. COBOL is an acronym for Common Business-Oriented Language, so it is not exactly targeting applications like factorial calculation. It was developed in 1959 by a consortium named CODASYL, based on the work of Grace Hopper. Even though its popularity is fading, COBOL is still alive. I have even read that an object-oriented version was released in 2002 (COBOL 2002) and updated again in 2014.

LISTING 2: The COBOL version looks a little stranger, right?



I never actually used FORTRAN or COBOL, but I developed software on my Apple II using PASCAL. Released in 1970 by Niklaus Wirth (ETH Zurich, Switzerland), PASCAL was probably one of the earliest efforts to encourage structured and typed programming. Based on ALGOL-W (also invented by Wirth), it was followed by MODULA-2 and OBERON, which were less well known but still influential.

Do you want to calculate a factorial in PASCAL? Here it is in Listing 3. It may look similar to FORTRAN or BASIC, but its advantages are in the details. PASCAL is a so-called strongly typed language. (You can't add a tomato and a donut, unlike in C.) It also forbids unstructured programming, and it is very easy to read. PASCAL was a limited, but real, success. It was used in particular by Apple for the development of the Lisa computer as well as the first versions of the Macintosh. It is still in use today through one of its object-oriented descendants, DELPHI.

LISTING 3: This is the PASCAL version. Easy to read.



In the 1970s, the United States Department of Defense (DoD) conducted a survey and found that it was using no less than 450 different programming languages. So, it decided to define and develop yet another one—that is, a new language to replace all of them. After long specification and selection phases, a proposal from Jean Ichbiah (CII Honeywell Bull, France) was selected. The result was ADA. The name ADA, and its military standard reference (MIL-STD-1815), honor Augusta Ada, Countess of Lovelace (1815–1852), who created some of the first algorithms intended for a machine.

While ADA is, well, strongly typed and very powerful, it’s complex and quite boring to use (see Listing 4). The key advantage of ADA is that it is well standardized and supports constructs like concurrency. Thanks to its very formal syntax and type checking, it is nearly bug-proof. Based on my minimal experience, it is so strict that the first version of the code usually works, at least after you correct hundreds of compilation errors. That’s probably why it is still largely used for critical applications ranging from airplanes to military systems, even if it failed as a generic language.

LISTING 4: ADA is more verbose.



ADA is a difficult language. In my opinion, LISP (List Processing) is far more interesting. It is an old story too. Designed in 1960 by John McCarthy (then at MIT, later Stanford University), its concepts are still interesting to learn. McCarthy's goal was to develop a simple language with full capabilities, quite the opposite of ADA. The result was LISP. The syntax can be frightening, but you must try it. Listing 5 is a version of the factorial calculation in LISP.

LISTING 5: LISP is definitely fun!


In LISP, everything is a list, and a list is enclosed between parentheses. To execute a function, you have to create a list with a pointer to the function as a first element and then the parameters. For example, (- n 1) is a list that calculates n – 1. (if A B C) is a structure which evaluates A, and then evaluates either B or C based on the value of A. If you read this program, you will see that it is not based on a loop like all other versions I’ve presented, but on a concept called recursion. A factorial of a number is calculated as 1 if the number is 0, and as N times the factorial of (N – 1) otherwise. LISP was in fact the first language to support recursion—meaning, the possibility for a function to call itself again and again. It is also the first language to manage storage automatically, using garbage collection. Even more interesting, in LISP everything is a list, even a program. So in LISP, it is possible to develop a program that generates a program and executes it!

Another of my favorites is FORTH. Designed by Charles Moore in 1968, FORTH also supports self-modifying programs like LISP, and it is probably even more minimalist. FORTH is based on the concept of a stack, and operators push and pop data from this stack. It uses a postfix syntax, also named Reverse Polish Notation, like vintage Hewlett-Packard calculators. For example, 1 2 + . means “push 1 on the stack,” “push 2 on the stack,” “get two figures from the stack, add them and put the result back on the stack,” and “get a figure from the stack and display it.”

Here is our factorial program in FORTH:

: fact dup 1 do I * loop ; 8 fact .

The first line defines a new function named fact, and the second line executes it after pushing the value 8 on the stack. The syntax is of course a bit strange because of the postfix notation, but it becomes clear after a while. Let's start with 8 on the stack. The word dup duplicates the top of the stack. The do…loop structure pops the loop limit and the first index from the stack, then executes I * with I varying from 1 to 7; each iteration multiplies the top of the stack by the index I. That's it. You can try it using another web-based programming and simulation host. Look at the result in Photo 3.

PHOTO 3: This is an example of FORTH in the online compiler and simulator.



LISP and FORTH are fun, but PROLOG is stranger. Developed by Alain Colmerauer and his team in 1972, PROLOG is the first of the so-called declarative languages. Rather than specifying an algorithm, such a declarative language defines facts and rules. It then lets the system determine if another fact can be deduced from them. An example is welcome.

LISTING 6: The PROLOG version based on a completely different paradigm.


Listing 6 is our factorial in PROLOG. The first fact states that the factorial of any number lower than 2 is 1. The second states that the factorial of any number X is F only if F is the product of X and another number, named here FM1, and if FM1 is the factorial of X – 1. This looks like recursion, and it is recursion, but expressed differently. The last line then states that X is the factorial of 8 and asks PROLOG to display X, and you have the result. This is a confusing approach at first, but it is close to the needs of artificial intelligence algorithms.

Lastly, I can't resist the pleasure of showing you another exotic vintage programming language, A Programming Language (APL). Refer to the factorial example in APL in Photo 4. I can't even write it in the text of this article, because APL uses nonstandard characters.

PHOTO 4: APL looks great, right? Its unique keyboard alone is fun!


In fact, APL-enabled computers had APL-specific keyboards! Published in 1962 by Kenneth Iverson (Harvard University, then IBM), APL was first a mathematical notation and only later a programming language. Based largely on data arrays, APL targets numerical calculations, so it isn't a surprise that our factorial example is so compact in this language. Let's understand it by reading the first line from right to left. The Greek letter omega is the parameter of the function (that is, 8 in this case). The small symbol just before the omega, called "iota," generates a vector from 0 to N – 1, so here it generates 0 1 2 3 4 5 6 7. The 1+ adds one to each element of the array, giving 1 2 3 4 5 6 7 8. Lastly, the ×/ multiplies all the values of the vector together, which is the factorial!


After finishing this article, I searched the web for other interesting languages and found, well, a more than impressive website. Its contributors have listed 837 programming tasks and let the community implement each of them in all programming languages; no fewer than 648 different languages are referenced! Of course, I searched for a factorial calculation algorithm and found it. Versions of the factorial code are provided in 220 different languages. You can find versions similar to the ones I provided in this article, versions in more recent languages (Java, Python, Perl, etc.), and plenty of obscure languages too.

My goal with this article was to show you that languages other than C and JAVA can be fun and even helpful for specific projects. Vintage languages are not dead. For example, it seems that FORTH was used for ESA's Rosetta mission. Moreover, innovation in computing languages goes on, and new and exciting alternatives are proposed every month!

Don’t hesitate to play with and test programming languages. The web is an invaluable tool for discovering new tools, so have fun!

This article appears in Circuit Cellar 323.

Robert Lacoste lives in France, between Paris and Versailles. He has 30 years of experience in RF systems, analog designs, and high-speed electronics. Robert has won prizes in more than 15 international design contests. In 2003 he started a consulting company, ALCIOM, to share his passion for innovative mixed-signal designs. Robert's bimonthly Darker Side column has been published in Circuit Cellar since 2007.

IEEE 802.3bt PD Controller Offers 99% Efficiency

Analog Devices, which recently acquired Linear Technology, has announced the LT4294 IEEE 802.3bt Powered Device (PD) interface controller. The LT4294 is intended for applications requiring up to 71 W of delivered power; the new Power over Ethernet (PoE) standard, IEEE 802.3bt, both increases the power budget and supports 10-Gb Ethernet (10GBASE-T). The LT4294's features, benefits, and specs:

  • Available in 10-lead MSOP and 3 mm x 3 mm DFN Packages.
  • It maintains backward compatibility with older IEEE 802.3af and 802.3at PoE equipment.
  • It provides up to 99% of power available from the RJ-45 connector to the hot swap output.
  • Supports new features: additional PD classes (5, 6, 7, and 8), PD types (Type 3 and Type 4), and five-event classification.
  • It can be married to any high efficiency switching regulator.
  • It controls an external MOSFET to reduce overall PD heat dissipation and maximize power efficiency.
  • An external MOSFET architecture enables you to size the MOSFET to your application’s requirements.
  • It’s available in industrial and automotive grades, supporting operating temperature ranges from –40°C to 85°C and –40°C to 125°C, respectively.

The LT4294 starts at $1.95 each in 1,000-piece quantities and is available in production quantities. The LT4294 complements the LT4295 802.3bt PD interface controller with integrated switcher; both provide an upgrade path from the company's existing PoE+/LTPoE++ PD controllers, including the LT4276 and LT4275.

Source: Linear Technology

13.6-GHz, Next-Generation Wideband Synthesizer

Analog Devices recently launched the ADF5356, which is a 13.6-GHz next-generation wideband synthesizer with integrated voltage-controlled oscillator (VCO). The ADF5356 is well suited for a variety of applications, including wireless infrastructure, microwave point-to-point links, electronic test and measurement, and satellite terminals. The ADF4356 is a complementary synthesizer product that operates to 6.8 GHz and is comparable in performance.

The ADF5356/4356’s features, specs, and benefits:

  • Generate RF outputs from 53.125 MHz to 13.6 GHz without gaps in frequency coverage
  • Offer a superior PLL figure of merit (FOM), ultra-low VCO phase noise, very low integer-boundary and phase-detector spurs, and a high phase-comparison frequency
  • Feature VCO phase noise of –113 dBc/Hz (at a 100-kHz offset at 5 GHz), integrated RMS jitter of just 97 fs (1 kHz to 20 MHz), and an integer-channel noise floor of –227 dBc/Hz
  • Keep phase-detector spurious levels below –85 dBc (typical), with a phase-detector comparison frequency as high as 125 MHz
  • Are fully supported by ADIsimPLL, Analog Devices’s easy-to-use PLL synthesizer design and simulation tool, and are pin-compatible with Analog Devices’s existing ADF5355 and ADF4355 devices
  • Are specified over the –40°C to 85°C temperature range
  • Operate from nominal 3.3-V analog and digital power supplies as well as 5-V charge-pump and VCO supplies
  • Offer 1.8-V logic-level compatibility

The ADF5356 costs $39.98 in 1,000-unit quantities. The ADF4356 costs $20.36 in 1,000-piece quantities. The EV-ADF5356SD1Z pre-release boards cost $450 each.

Source: Analog Devices

Infineon Launches “Productive4.0” Research Project

Infineon Technologies recently launched “Productive4.0,” which is a European research initiative relating to the field of Industry 4.0. More than 100 partners (e.g., Bosch, Philips, Thales, NXP, SAP, ABB, Volvo, Ericsson, and Karlsruhe Institute of Technology) from 19 European countries are involved in the project, which is focused on digitizing and networking industry. Part of the European funding program for microelectronics (ECSEL), the aim of Productive4.0 is to strengthen expertise in microelectronics with a view to broad digitization.

Thirty partners from Germany and 79 others will work together for three years. With €106 million in funding, the aim is to “create a user platform across value chains and industries, that especially promotes the digital networking of manufacturing companies, production machines and products.”

The Productive4.0 project will run until 30 April 2020. An overview of the project partners is available at

Source: Infineon Technologies

High-Performing, Intelligent Wireless Transceiver Module

The RF Solutions high-performance ZETA module was recently updated to include a simple SPI and UART interface. The ZETAPLUS module doesn’t require external components, which means a fast and effective plug-and-play setup.

Available on 433-, 868-, and 915-MHz frequencies, the module is easy to set up and you’ll be sending and receiving data quickly. Furthermore, you’ll find it easy to create networks of ZETAPLUS modules or point-to-point links without the need for time-consuming register configuration.

With an impressive 2-km range, the ZETAPLUS is well-suited for sensor networks, sleepy nodes, and numerous other telemetry, control, and Internet of Things applications.

Source: RF Solutions

Bipolar Transistor Biasing

Going back to the basics is never a bad idea. Many electronics engineers are fluent with complex systems—such as microcontrollers, embedded OSes, or FPGAs—but seem to have more difficulties with single transistors. What a shame! A transistor can be a more adequate and cost-effective solution than an IC in many projects. Moreover, understanding what’s going on with simple parts can’t hurt, and transistors can even be fun! That’s why this month I will provide a refresher on how to use a one-cent bipolar junction transistor (BJT) to build an amplifier.


The BJT is an old invention. In 1947, it was invented at Bell Laboratories by Walter H. Brattain and John Bardeen, who were on William Shockley’s team. The BJT comes in two flavors, NPN and PNP. For simplicity, I will focus on the NPN version. However, by reversing the power supply rails, everything would be applicable to its PNP cousin. BJT transistors have three terminals: collector (C), emitter (E), and base (B). Due to their internal semiconductor structure, the currents circulating through each of these terminals, as well as the voltages between them, are all linked together.

Let’s focus on the basic “common emitter” circuit (see Figure 1). With this setting, the emitter is grounded. There are two basic rules.

Figure 1 This NPN bipolar junction transistor is wired in the common-emitter configuration, meaning its emitter is grounded. Two basic equations dictate its behavior.


First, the current circulating through the collector is roughly proportional to the current applied on the base. Their ratio is the transistor current gain, which is indicated in the transistor’s datasheet and often noted βF or HFE:

IC = HFE × IB

Second, the voltage between the base and the emitter is stable and close to 0.6 V for most devices, as with any bipolar diode:

VBE ≈ 0.6 V

Here’s how it works: If the voltage applied between the base and the emitter is lower than this threshold, then the transistor is blocked and no current circulates through the collector. If this voltage is increased to the threshold, then the transistor becomes active. You will not be able to increase the base voltage significantly above 0.6 V; past that point, the device becomes current-controlled. In this mode, a given current circulates through the base, and the current through the collector will always be HFE times higher.

For example, if you have a transistor with a gain of 100 and inject 1 mA in the base, then 100 mA will flow through the collector. Of course, this is an approximate explanation, as the transistor’s physics are a little more complex, but is enough for my example. (Search for “Ebers-Moll model” online if you are interested in the details. Wikipedia also provides a good BJT summary.)
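The two basic rules can be sketched in a couple of lines of Python. The gain of 100 matches the example above; it is purely illustrative, as is the 0.6-V threshold.

```python
# First-order model of the two BJT rules discussed above.
# hfe = 100 is the illustrative gain from the example, not a real device value.

def collector_current(i_base, hfe=100):
    """First rule: IC is roughly HFE times IB (active region)."""
    return hfe * i_base

V_BE_ON = 0.6  # second rule: VBE sits near 0.6 V once the transistor conducts

ic = collector_current(1e-3, hfe=100)  # inject 1 mA into the base
print(f"IC = {ic * 1e3:.0f} mA")       # -> IC = 100 mA
```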

The examples in this article are based on the old faithful Fairchild Semiconductor BC238B transistor, but you could use any common NPN transistor (e.g., the ubiquitous 2N2222, the 2N3904, or the BC847 if you prefer surface-mount packages). Figure 2 shows a reproduction of the BC238B’s key characteristics from its datasheet. Figure 2a shows the relationship between the collector-to-emitter voltage (VCE) and the collector current (IC). Each curve corresponds to a given base current (IB). Look, for example, at the curve for IB = 200 µA. As soon as VCE is above a couple of volts, the current circulating through the collector is nearly constant, around 50 mA. This means this specific transistor’s current gain is 50 mA divided by 200 µA, which is 250. Figure 2b shows the base voltage VBE. It is not strictly constant, but still close to 0.6 V, as explained.

Figure 2 These Fairchild Semiconductor BC238B NPN transistor’s key characteristics have been extracted from its datasheet.


A final but important point about BJT characteristics: Their current gain is far from precise. First, there is a huge dispersion in current gain from transistor to transistor, even within the same manufacturing batch. Second, this gain changes with the transistor’s operating conditions, and with the junction temperature in particular. Table 1 shows the specified gain for the BC238 family. It can range from 180 to 460 for the BC238B variant! The designer must take this difficulty into consideration.

Table 1 The Fairchild Semiconductor BC238 exists in three gain classes, indicated by an A, B, or C suffix. Even in a single class, the gain dispersion from part to part could be huge.



Simulating a transistor circuit is straightforward using a circuit simulator (e.g., SPICE), even if you ultimately prefer to wire up the real thing. I used Labcenter Electronics’s Proteus VSM in my example, but you can use any SPICE tool (e.g., Linear Technology’s free LTSpice) or an online version (e.g., CircuitLab, PartSim, etc.).

Figure 3 shows a basic circuit built around a BC238B. I connected the collector to a 10-VDC power source through a 1-kΩ resistor and used a 1-MΩ resistor between the transistor’s base and the 10-V power supply. The voltage applied on the base is above the 0.6-V threshold, so the transistor will conduct. As discussed, the base voltage will stay close to the 0.6-V threshold (in fact, its simulated value is 0.66 V). The current circulating through the base can then be easily calculated by applying Ohm’s law to the base resistor: I = U/R = (10 – 0.66 V)/1 MΩ = 9.34 µA. You can then calculate the current circulating through the collector by multiplying this value by the transistor’s current gain, or the simulator can calculate it for you.
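The bias-point arithmetic is easy to reproduce. Here is a short sketch using the values above; VBE = 0.66 V and HFE = 310 are the values reported by the simulation, and yours will differ from part to part.

```python
# Recomputing the Figure 3 bias point with Ohm's law.
# VBE and HFE are taken from the simulation; they are not guaranteed values.
VCC, R_B, R_C = 10.0, 1e6, 1e3
V_BE, HFE = 0.66, 310

i_b = (VCC - V_BE) / R_B    # base current: (10 - 0.66 V) / 1 MΩ
i_c = HFE * i_b             # collector current
v_out = VCC - R_C * i_c     # collector-to-ground voltage

print(f"IB = {i_b*1e6:.2f} µA, IC = {i_c*1e3:.2f} mA, Vout = {v_out:.1f} V")
# -> IB = 9.34 µA, IC = 2.90 mA, Vout = 7.1 V
```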

Figure 3 On this simulation, the current circulating through the collector is 310 × higher than the current through the base. As expected, the base voltage stays close to 0.6 V.


Take another look at Figure 3. The calculated collector current is 2.9 mA, which is 310 times higher than the base current. The BC238B model used by my SPICE variant seems to have a gain of 310. Consequently, the voltage drop across the collector resistor is U = R × I = 1 kΩ × 2.9 mA = 2.9 V. As the power supply voltage is 10 V, the voltage between the transistor’s collector and the ground should be 7.1 V (i.e., 10 – 2.9 V), as simulated.

Now imagine you want to use this BC238B transistor to build an AC signal amplifier (e.g., a small audio amplifier). Start with the schematic shown in Figure 3 and add the input AC signal on the transistor’s base. This input signal will periodically increase or decrease the current already applied on the base by the 1-MΩ resistor. These fluctuations will be amplified by the transistor’s current gain. Consequently, the collector voltage will fluctuate more than the input and you will have a working amplifier.

How can you design it? The first step is to define the so-called “transistor quiescent point” (i.e., you should first define the transistor’s behavior without an applied input signal). You will usually start by defining the collector resistor’s value based on the desired output impedance. Then you will need to calculate the resistor between the base and the power supply rail to set the transistor output to your desired DC value.

The rule is simple. For minimum distortion and clipping, you need to set the DC output voltage to half the supply voltage. In Figure 3, I used a 1-MΩ base resistor and found 7.1 V on the output (10 V/2 = 5 V would be preferable). Reducing the base resistor’s value will increase the base current, which will then reduce the mean output voltage (see Figure 4). This simulation shows that a base resistor close to 560 kΩ provides an average voltage on the output of 5 V, which is what we were looking for. The standby current through the collector is I = U/R = 5 V/1 kΩ = 5 mA.
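Rather than sweeping the base resistor, you can solve for it directly: target IC = (VCC/2)/RC, derive IB from the gain, and apply Ohm's law across the base resistor. This sketch reuses VBE = 0.66 V and HFE = 310 from the earlier simulation as assumptions.

```python
# Solving for the base resistor that centers the output at VCC/2,
# instead of sweeping values in the simulator.
VCC, R_C = 10.0, 1e3
V_BE, HFE = 0.66, 310   # assumptions carried over from the simulation

i_c = (VCC / 2) / R_C             # 5 mA standby collector current
i_b = i_c / HFE                   # required base current
r_b = (VCC - V_BE) / i_b          # base resistor by Ohm's law

print(f"RB ≈ {r_b/1e3:.0f} kΩ")   # lands close to the 560 kΩ found by simulation
```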

Figure 4 This simulation shows the collector-to-emitter voltage (green) and base current (red) when the base resistor value changes. There is an intermediate value, close to 560 kΩ, where the collector-to-emitter voltage is close to VCC/2, which is 5 V.


Now you have a correctly DC-biased transistor and you just have to inject the signal input on the base through a decoupling capacitor and extract the collector’s output signal through another decoupling capacitor (see Figure 5). The value of these capacitors are directly linked to the lowest frequency you want to amplify. You can either calculate it (remembering that a capacitor’s impedance is Z = 1/2πfC) or simulate it.
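The capacitor calculation is a one-liner. A rough rule of thumb: choose C so that its impedance at the lowest frequency of interest is small compared with the stage's input impedance.

```python
import math

# Impedance of a coupling capacitor at a given frequency: Z = 1/(2*pi*f*C).
def cap_impedance(f_hz, c_farads):
    return 1.0 / (2 * math.pi * f_hz * c_farads)

z = cap_impedance(100, 1e-6)   # the article's 1 µF at a 100-Hz cutoff
print(f"Z ≈ {z:.0f} Ω")        # about 1.6 kΩ
```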

Figure 5 A fixed-bias amplifier is simply built by injecting the input AC signal on the base through a capacitor. The time-domain simulation (top right) shows that the output voltage is close to ±1.6 V with ±10 mV on the input. The pass band extends down to 100 Hz (bottom left), while the distortion stays close to 1%, with a second harmonic 25 dB lower than the signal (bottom right).


A 1-µF capacitor provides a reasonable 100-Hz low-frequency cutoff, as shown on the frequency response simulation. I also performed a time-domain simulation with a 20-mVPP, 1.2-kHz input signal. As shown in Figure 5, the resulting simulated output voltage is 3.2 VPP, providing a gain of 160. So you do have an amplifier.
Note that its voltage gain is not identical to the transistor’s current gain (remember, we got 160 against 310). This voltage gain is always lower than the HFE current gain, basically because you are applying a voltage, not a current, on the base. The relationship between the two is not straightforward. Search for “hybrid-pi model” online if you need more explanation, or just simulate it.


You have now used just a basic transistor, two resistors, and two capacitors to design an AC amplifier with a fairly high gain. This is the so-called fixed-bias solution. But can you guess the problem with this? Remember that the transistor’s current gain is never well defined, except if you measure it yourself for each transistor and take care of the transistor’s operating condition and temperature.
Imagine you build the circuit with a transistor that has a gain twice that of the simulated one. This is quite common, given the wide dispersion in performance. Due to the higher gain, the same base bias resistor will provide a collector current twice as strong as planned. Therefore, the voltage drop across the collector’s resistor will be twice as high, meaning that the DC output voltage will no longer be 5 V but close to 0 V! The amplifier will basically no longer work, or it will generate very high distortion.

This explains why a slightly more complex schematic is often required. The basic idea is to stabilize the amplifier’s gain even if the transistor’s current gain is not well defined. The most common method is called “emitter-stabilized biasing.”

As shown in Figure 6, this method requires two additional resistors and a capacitor. First, a resistor is added between the emitter and ground, with a large capacitor in parallel. The goal is to move the emitter level to a virtual ground voltage a little higher than the 0-V reference. Then another resistor is added between the transistor’s base and the 0-V line. Its function is to fix the base’s DC voltage. I will present the calculations shortly, but first a question: What happens if the transistor’s current gain is increased for whatever reason? The current circulating through the collector and emitter will increase, and therefore the voltage drop across the resistor between the emitter and ground will increase. This means that the emitter-to-ground voltage will increase. But wait, the base voltage is fixed relative to the ground and power supply thanks to the two resistors. If the emitter voltage increases, then the base-to-emitter voltage will decrease. This will reduce the current flowing through the base, which, in turn, will reduce the collector current and will compensate for the transistor’s higher gain. Then you have a kind of automatic gain stabilization!

Figure 6 Emitter-stabilized biasing requires two more resistors and one more capacitor. The gain is a little lower, but such a circuit is far more stable than a fixed-bias circuit.


Calculating such an emitter-stabilized bias is a little more complex than the fixed-bias approach, as all parameters are linked to each other. As a starting point, it is wise to set the emitter resistor for a 1-V drop. Going back to Figure 4, I had a 1-kΩ collector resistor (defined based on the desired output impedance). This resistor provided an average 5-V drop. If I want a 1-V drop, I must use a resistor five times lower. I used the closest standardized value, which is 210 Ω. The collector’s resistor must be slightly reduced to compensate and to keep a 5-mA average collector current. As shown in Figure 7a, this resistor must now be R = U/I = (9 V/2)/5 mA = 900 Ω for optimal performance. I used the standardized 910-Ω value.
Calculating the two resistors on the base is a little more complex and must be done precisely. The starting point is to assume that the current flowing through the two resistors, which fixes the base voltage, must be around five times higher than the transistor’s base current for good performance.

Figure 7b shows the calculations’ details, which are just an application of Thevenin’s and Ohm’s laws. I found 51 and 12 kΩ, respectively.
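You can check the chosen divider with Thevenin's theorem, as the article suggests. This sketch verifies that 51 kΩ / 12 kΩ lands near the intended quiescent point; VBE = 0.66 V and HFE = 310 are assumptions carried over from the earlier simulation.

```python
# Verifying the 51 kΩ / 12 kΩ base divider via Thevenin's theorem.
VCC, R1, R2 = 10.0, 51e3, 12e3        # supply, upper and lower base resistors
R_E, V_BE, HFE = 210.0, 0.66, 310     # emitter resistor + assumed BJT values

v_th = VCC * R2 / (R1 + R2)           # Thevenin voltage of the divider
r_th = R1 * R2 / (R1 + R2)            # Thevenin resistance of the divider

# Base-emitter loop: v_th = i_b*r_th + V_BE + (HFE + 1)*i_b*R_E
i_b = (v_th - V_BE) / (r_th + (HFE + 1) * R_E)
i_c = HFE * i_b
v_e = (HFE + 1) * i_b * R_E

print(f"IC ≈ {i_c*1e3:.1f} mA, VE ≈ {v_e:.2f} V")
# close to the intended 5-mA collector current and 1-V emitter drop
```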

Last, the capacitor between the emitter and the ground must be “large enough.” You can simulate it or, as a starting point, you can assume that its value should be close to the base capacitor value multiplied by the transistor current gain.
As shown in Figure 7, I used a 100-µF capacitor, which is probably a little short. The resulting voltage gain is around 130, as expected, which is a little lower than the fixed-bias version’s gain (remember, it was 160).


Using an emitter-stabilized schematic is the most common method, but another approach could be used if the added two resistors and capacitor cause a problem. This solution, which is called “collector-stabilized biasing,” does not require a single extra component, as compared to the simplest fixed-bias configuration (see Figure 8).

Figure 8 The collector-stabilized biasing circuit is no more complex to draw than the fixed-bias one, but it is a little more complex to understand.


The idea is that rather than biasing the transistor’s base with a resistor connected to the power supply, you just connect it to the transistor’s collector. So, if the transistor’s current gain increases, then the collector current will increase and the collector-to-emitter voltage will decrease. As the base is biased from the collector’s voltage, the current through the base will then decrease, stabilizing the amplifier. Clever, isn’t it?

On the calculation side, the steps are identical to the fixed-bias configuration. The base resistor’s value can be calculated as R = U/I, with U = VCC/2 – VBE and I = IB. Here I found 270 kΩ. But this solution has two downsides. First, the achievable voltage gain is a little lower than with the previous solution. Second, the compensation is not as good. Nevertheless, it could be good enough for your designs and it costs nothing!
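The collector-feedback calculation can be sketched the same way. With the collector sitting at VCC/2, the base resistor sees VCC/2 – VBE across it and carries IB; again, VBE = 0.66 V and HFE = 310 are assumptions from the earlier simulation.

```python
# Estimating the collector-feedback base resistor.
VCC, R_C = 10.0, 1e3
V_BE, HFE = 0.66, 310            # assumptions carried over from the simulation

i_c = (VCC / 2) / R_C            # 5 mA target collector current
i_b = i_c / HFE                  # required base current
r_b = (VCC / 2 - V_BE) / i_b     # resistor from collector (at VCC/2) to base

print(f"RB ≈ {r_b/1e3:.0f} kΩ")  # lands close to the article's 270 kΩ
```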

Finally, I compared the three solutions (fixed-bias, emitter-stabilized, and collector-stabilized) in terms of stability. I simply replaced the BC238B with a 2N2222 transistor, which has a significantly lower gain, and restarted the simulator. With the simplest fixed-bias design, the collector’s DC voltage moved from 5 to 6.6 V and the voltage gain was reduced from 156 to 105 (a 32% reduction). With the emitter-stabilization solution, the gain reduction was only 5.7% (i.e., from 139 to 131). Lastly, the collector stabilization provided an intermediate performance, from 143 to 121 (15%). As expected, the most sophisticated solution is better.


This article could have been written 60 years ago. There is nothing new here. However, I’m convinced that designers often forget that a single transistor can sometimes replace an op-amp. And this could reduce the product cost by tens of cents, which should not be neglected for high-volume applications. As a matter of fact, a single transistor could also be a good solution for ultra-low-power designs.

Recently, my company worked on an alarm detection system where a piezo sensor signal needed to be amplified prior to detection. It was easy, except that the device needed to work on a coin-cell battery for a couple of years and the amplifier had to remain powered. As you can imagine, we started by trying to use ultra-low-power comparators, but a single transistor with ultra-low bias currents was the winning solution.

I hope this article was refreshing even if it did not discuss an exciting new technology. BJT transistors shouldn’t be on your darker side, just play with them!

Bade Engineering Classes, “BJT Biasing,” 2012.

CircuitLab, Inc.,

M. H. Miller, “BJT Biasing,” Introductory Electronics Notes, The University of Michigan-Dearborn, 2000.



BC238B BJT Transistor
Fairchild Semiconductor Corp. |

Proteus VSM design suite
Labcenter Electronics |

LTSpice SPICE simulator
Linear Technology Corp. |

Robert Lacoste lives in France, near Paris. He has 24 years of experience in embedded systems, analog designs, and wireless telecommunications. A prize winner in more than 15 international design contests, in 2003 he started his consulting company, ALCIOM, to share his passion for innovative mixed-signal designs. His book (“Robert Lacoste’s The Darker Side”) was published by Elsevier/Newnes in 2009. You can reach him at if you don’t forget to put “darker side” in the subject line to bypass spam filters.

This complete article appears in Circuit Cellar 279 (October 2013).

The Future of Network-on-Chip (NoC) Architectures

Adding multiple processing cores on the same chip has become the de facto design choice as we continue extracting increasing performance per watt from our chips. Chips powering smartphones and laptops comprise four to eight cores. Those powering servers comprise tens of cores. And those in supercomputers have hundreds of cores. As transistor sizes decrease, the number of cores on-chip that can fit in the same area continues to increase, providing more processing capability each generation. But to use this capability, the interconnect fabric connecting the cores is of paramount importance to enable sharing or distributing data. It must provide low latency (for high-quality user experience), high throughput (to sustain high data rates), and low power (so the chip doesn’t overheat).

Ideally, each core should have a dedicated connection to a core with which it’s intended to communicate. However, having dedicated point-to-point wires between all cores wouldn’t be feasible due to area, power, and wire layout constraints. Instead, for scalability, cores are connected by a shared network-on-chip (NoC). For small core counts (eight to 16), NoCs are simple buses, rings, or crossbars. However, these topologies aren’t too scalable: buses require a centralized arbiter and offer limited bandwidth; rings perform distributed arbitration but the maximum latency increases linearly with the number of cores; and crossbars offer tremendous bandwidth but are area and power limited. For large core counts, meshes are the most scalable. A mesh is formed by laying out a grid of wires and adding routers at the intersections, which decide which message gets to use each wire segment each cycle, thus transmitting messages hop by hop. Each router has four ports (one in each direction) and one or more ports connecting to a core. Optimized mesh NoCs today take one to two cycles at every hop.
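The hop-by-hop traversal can be sketched with dimension-ordered (XY) routing, a common deterministic scheme on meshes. This is my illustrative choice; the essay doesn't name a specific routing algorithm.

```python
# Sketch of dimension-ordered (XY) routing on a 2-D mesh: a message travels
# fully along X, then along Y. Illustrative only; real routers also handle
# arbitration, buffering, and flow control at each hop.

def xy_route(src, dst):
    """Return the list of (x, y) router coordinates a message traverses."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # first resolve the X dimension
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then resolve the Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Because every message between the same pair of routers takes the same path, XY routing is deadlock-free on a mesh, which is one reason it is popular.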

Today’s commercial many-core chips are fairly homogeneous, and thus the NoCs within them are also homogeneous and regular. But the entire computing industry is going through a massive transformation due to emerging technology, architecture, and application trends. These, in turn, will have implications for the NoC designs of the future. Let’s consider some of these trends.

An exciting and potentially disruptive technology for on-chip networks is photonics. Its advantage is extremely high bandwidth and no electrical power consumption once the signal becomes optical, enabling signals to travel anywhere from a few millimeters to a few meters at the same power. Optical fibers have already replaced electronic cables for inter-chassis interconnections within data centers, and optical backplanes are emerging as viable alternatives between racks of a chassis. Research in photonics for shorter interconnects—from off-die I/O to DRAM and for on-chip networks—is currently active. In 2015, researchers at Berkeley demonstrated a microprocessor chip with on-chip photonic devices for the modulation of an external laser light source and on-chip silicon waveguides as the transmission medium. These chips directly communicated via optical signals. In 2016, researchers at the Singapore-MIT Alliance for Research and Technology demonstrated LEDs as on-chip light sources using novel III-V materials. NoC architectures inspired by these advances in silicon photonics (light sources, modulators, detectors, and photonic switches) are actively researched. Once challenges related to reliable and low-power photonic devices and circuits are addressed, silicon photonics might partially or completely replace on-chip electrical wires and provide high-bandwidth data delivery to multiple processing cores.


The performance and energy scaling that used to accompany transistor technology scaling has diminished. While we have billions of transistors on-chip, switching them all simultaneously would exceed a chip’s power budget; this phenomenon is known as dark silicon. Thus, general-purpose processing cores are being augmented with specialized accelerators that are only turned on for specific applications. For instance, GPUs accelerate graphics and image processing, DSPs accelerate signal processing, cryptographic accelerators perform fast encryption and decryption, and so on. Such domain-specific accelerators are 100× to 1000× more efficient than general-purpose cores. Future chips will be built using tens to hundreds of cores and accelerators, with only a subset of them being active at any time depending on the application. This places an additional burden on the NoC. First, since the physical area of each accelerator isn’t uniform (unlike cores), future NoCs are expected to be irregular and heterogeneous. This creates questions about topologies, algorithms for routing, and managing contention. Second, traffic over the NoC may have dynamically varying latency/bandwidth requirements based on the currently active cores and accelerators. This will require quality-of-service guarantees from the NoC, especially for chips operating in real-time IoT environments or inside data centers with tight end-to-end latency requirements.

The aforementioned domain-specific accelerators are massively parallel engines with NoCs within them, which need to be tuned for the algorithm. For instance, there’s a great deal of interest in architectures/accelerators for deep neural networks (DNN), which have shown unprecedented accuracy in vision and speech recognition tasks. Example ASICs include IBM’s TrueNorth, Google’s Tensor Processing Unit (TPU), and MIT’s Eyeriss. At an abstract level, these ASICs comprise hundreds of interconnected multiply-accumulate (MAC) units (the basic computation inside a neuron). The traffic follows a map-reduce style: map (or scatter) inputs (e.g., image pixels or filter weights, in the case of convolutional neural networks) to the MAC units, then reduce (or gather) partial or final outputs, which are then mapped again to neurons of the same or subsequent layers. The NoC needs to perform this map-reduce operation for massive datasets in a pipelined manner such that MAC units are not idle. Building a highly scalable, low-latency, high-bandwidth NoC for such DNN accelerators will be an active research area, as novel DNN algorithms continue to emerge.
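The map-reduce traffic pattern can be illustrated with a toy sketch: scatter (input, weight) pairs to MAC units, then gather and accumulate the partial products. This is purely illustrative; real accelerators pipeline this over hundreds of units with dedicated interconnect.

```python
# Toy sketch of the map-reduce traffic pattern described above.

def neuron_output(inputs, weights):
    # "Map" (scatter): each MAC unit receives one (input, weight) pair
    # and produces a partial product.
    partials = [x * w for x, w in zip(inputs, weights)]
    # "Reduce" (gather): the NoC collects partial products into one sum.
    return sum(partials)

print(neuron_output([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))  # 3.0
```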

You now have an idea of what’s coming in terms of the hardware-software co-design of NoCs. Future computer systems will become even more heterogeneous and distributed, and the NoC will continue to remain the communication backbone tying these together and providing high performance at low energy.

This essay appears in Circuit Cellar 322.

Dr. Tushar Krishna is an Assistant Professor of ECE at Georgia Tech. He holds a PhD (MIT), an MSE (Princeton), and a BTech (IIT Delhi). Dr. Krishna spent a year as a post-doctoral researcher at Intel and a semester at the Singapore-MIT Alliance for Research and Technology.

Electrical Engineering Crossword (Issue 323)

The answers to Circuit Cellar 323‘s crossword are now available.


  1. DYNE—10⁻⁵ N
  2. GEOFENCE—Virtual barrier
  3. QUIESCENT—At rest
  4. BOND—Electrical union/connection
  5. HDL—Describes circuit structure; Verilog
  6. SOLIDSTATE—Electronics featuring semiconductors rather than mechanical circuits [2 words]
  7. TRIGGER—Pulse used to initiate a circuit action
  8. LONGRANGE—LoRa [2 words]
  9. STRIP—Remove a wire’s jacket
  11. INSULATOR—Rubber, glass, air
  12. DIP—Package for small- and medium-scale integrated circuits with up to approximately 48 pins
  13. HAMMING—Code that can detect single- and double-bit errors, and it can correct single-bit errors


  1. NANOSECOND—10⁻⁹ s
  2. BACKDOOR—Secret point of access
  3. NEARFIELD—Close proximity to an antenna [2 words]
  4. LEET—1337
  5. BASE—Fixed-location wireless transceiver
  7. ARRESTER—Limits surge

New STM32L4 Microcontrollers with On-Chip Digital Filter

STMicroelectronics’s ultra-low-power STM32L45x microcontrollers (STM32L451, STM32L452, and STM32L462 lines) are supported by a development ecosystem based on the STM32Cube platform. The new microcontroller lines offer a variety of features and benefits:

  • An integrated Digital Filter for Sigma-Delta Modulators (DFSDM) enables advanced audio capabilities (e.g., noise cancellation or sound localization).
  • Up to 512 KB of on-chip flash and 160 KB of SRAM provide generous code and data storage.
  • A True Random-Number Generator (TRNG) streamlines development of security-conscious applications.
  • Smart analog peripherals include a 12-bit, 5-Msps ADC, an internal voltage reference, and ultra-low-power comparators.
  • Multiple timers, a motor-control channel, a temperature sensor, and a capacitive-sensing interface are included.
  • The ARM Cortex-M4 core delivers high performance with exceptional ultra-low-power efficiency.
  • A 36-µA/MHz Active-mode current enables a longer runtime on small batteries.

The development ecosystem includes the STM32CubeMX initialization-code generator and STM32CubeL4 package comprising:

  • Middleware components
  • Nucleo-64 Board-Support Package (BSP)
  • Hardware Abstraction Layer (HAL)
  • Low-Layer APIs (LLAPIs)

The STM32CubeMX has a power-estimation wizard, as well as other wizards for managing clock signals and pin assignments. The affordable Nucleo-64 board, NUCLEO-L452RE, enables you to test ideas and build prototypes. It integrates the ST-LINK/V2 probe-free debugger/programmer and you can expand it via Arduino-compatible headers.

The devices are currently available in small form-factor packages from QFN-48 to LQFP-100, including a 3.36 mm × 3.66 mm WLCSP. Prices start from $2.77 in 1,000-piece quantities for the STM32L451CCU6 with 256-KB flash memory and 160-KB SRAM in QFN-48. The development boards start at $14 for the legacy-compatible Nucleo-64 board (NUCLEO-L452RE). The NUCLEO-L452RE-P board with external DC/DC converter will be available to distributors in June 2017.

Source: STMicroelectronics

Ultra-Low-Power RFID Chip for Retail Data

NXP Semiconductors recently launched a new global UCODE 8 RAIN RFID chip platform that’s intended for omnichannel retailer applications. A universal RAIN RFID chip, the UCODE 8 provides “high inventory accuracy on all retail product categories through best-in-class read sensitivity.” It includes a new auto-adjust feature that ensures a consistently high read-rate performance across different product materials and global deployments. Furthermore, it features a unique brand-identifier feature that validates product authenticity and helps identify fakes.

Source: NXP Semiconductors

Single-Chip, Multi-Protocol Switch for Intelligent Apps

Analog Devices recently introduced a real-time Ethernet multi-protocol (REM) switch chip for Ethernet connectivity in intelligent factory applications. Well suited for a variety of connected motion applications, the “TSN-ready” (time-sensitive networking) fido5000 can be used with any processor, any protocol, and any stack.

The fido5000 two-port embedded Ethernet switch’s features, specs, and benefits include:

  • Reduces board size and power consumption while improving Ethernet performance at the node under any network load condition
  • Attaches to Analog Devices’s ADSP-SC58x, ADSP-2158x, and ADSP-CM40x motion-control processors
  • Supports PROFINET RT/IRT, EtherNet/IP with beacon-based DLR, Modbus TCP, EtherCAT, SERCOS, and POWERLINK
  • Achieves cycle times below 125 µs
  • Includes drivers for simple integration with any Industrial Ethernet protocol stack

The fido5100 is scheduled for full production in September 2017 and will cost $6 each in 1,000-piece quantities. The fido5200 (EtherCAT Capable) is also scheduled for full production in September 2017 and will cost $8 each in 1,000-piece quantities.

Source: Analog Devices