As an embedded systems neophyte, I only recently heard of chiplets for the first time. After my interlocutor politely clarified that they didn’t mean the tiny gum and that I could put away my expectant, outstretched hand, they informed me that chiplets are (one of) the rage(s) in embedded systems right now.
A chiplet is—no surprises here—a tiny chip that contains a well-defined subset of functionality and that can be linked with other chiplets to build a system. Chiplets promise to be a solution to the increasing costs of building ever-smaller transistors into ever-more-complex chips: their tiny size means they're less likely to be manufactured with defects, and are thus cheaper to make. And in the face of growing compute demands for embedded devices running machine learning (ML), that's a good thing.
Further, the open Universal Chiplet Interconnect Express (UCIe) standard will enable combinations of chiplets made by different companies, providing more freedom to manufacturers and developers in fast-paced, high-tech fields. Hence, this neophyte expects to hear a lot more about chiplets over the coming months.
Meanwhile, as many of you might be aware, Apple put out a paper in early January of this year entitled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory." The authors introduce two novel techniques: (1) a "windowing" strategy that reduces data transfer by reusing the weights of recently activated neurons, and (2) "row-column bundling," which increases the size of the data chunks read from flash memory. Together, their methods allow running models up to twice the size of available DRAM, with substantial speedups over naive loading approaches on both CPU and GPU.
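To make the windowing idea concrete, here is a toy sketch—not Apple's implementation, and all names are hypothetical—of the core intuition: keep the weight rows of recently active neurons cached in fast memory, so that from one token to the next only newly activated neurons trigger a slow flash read.

```python
# Toy illustration of "windowing": an LRU-style cache of neuron weight
# rows, so overlapping activations between steps avoid slow storage reads.
from collections import OrderedDict


class NeuronWindowCache:
    """Caches weight rows for recently active neurons (hypothetical sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # neuron id -> weight row
        self.flash_reads = 0        # count of simulated slow loads

    def _load_from_flash(self, neuron_id):
        # Stand-in for a flash read; returns a dummy weight row.
        self.flash_reads += 1
        return [0.0] * 4

    def fetch(self, active_ids):
        """Return weight rows for the currently active neurons."""
        rows = {}
        for nid in active_ids:
            if nid in self.cache:
                self.cache.move_to_end(nid)  # reuse: no flash read needed
            else:
                self.cache[nid] = self._load_from_flash(nid)
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)  # evict oldest entry
            rows[nid] = self.cache[nid]
        return rows


cache = NeuronWindowCache(capacity=8)
cache.fetch({1, 2, 3})   # 3 flash reads (cold cache)
cache.fetch({2, 3, 4})   # only 1 new flash read; 2 and 3 are reused
```

Because successive tokens tend to activate overlapping sets of neurons, most fetches hit the cache, and only the difference must come from flash—the same saving the paper's windowing strategy exploits at scale.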
Hence, the overall trend in 2024 appears to be more and more intelligence at the edge. Who could have seen that coming? Sarcasm aside, it’s one thing to see the macroscopic picture (technology tends to get faster and more powerful over time), and another to follow the multiple, racing, concurrent developments—and the engineers and researchers behind them—that together make up that macroscopic picture.
This month we feature an article from three recently graduated Cornell students on a machine learning project they built for a course last spring. It's exciting to see undergraduates tackling engineering problems I hadn't even heard of when I was in college. (Granted, I was not an engineering student.) In my role here at Circuit Cellar, I get to witness in real time the rising tide lifting all boats—to witness the next generation fast on the heels of the generations before them.
But wait—there's more! We also bring you: Bob Japenga's second installment in his series on eliciting embedded system software requirements; a piece on dealing with electrical noise by Stuart Ball; an overview of printed circuit board manufacturing from Jeff Bachiochi; and a piece on real-time operating systems (RTOS) in this month's Tech Feature by Mike Lynes. Speaking of RTOSes, Bill Lamie and Yuxin Zhou break down why an RTOS is useful in the first place. Plus, Alex Pozhitkov simplifies the complex math of analog electronics, and Miguel Sanchez covers programmable I/O programming for the Pico. Happy reading!
The issue's Table of Contents can be found here; articles will be linked as they are made available online.
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • FEBRUARY #403 – Get a PDF of the issue
Sam Wallace became Circuit Cellar's Editor-in-Chief in August 2022.
His experience in writing, editing, and teaching brings a sharp perspective to the selection, presentation, and clarity of editorial content. The Circuit Cellar audience benefits from his strong academic background, which encompasses a Master of Fine Arts in Writing and a Bachelor of Science in Mathematics with honors. His passion for learning and teaching is a great fit for Circuit Cellar's continuing mission of Inspiring the Evolution of Embedded Design.