One question that comes up again and again in on-line forums is some variation on: “What exactly is the difference between a microprocessor and a microcontroller, and which should I use?” The fact that the question is asked so often suggests that the answer is not straightforward. At the risk of adding to the confusion, I am going to contribute my two cents’ worth.
I would argue that the question is not relevant—think of them both as variations on a theme and choose the one that works best in your application. To describe why I don’t think the distinction matters, let’s take a trip back in history.
It all starts at the very core of both microprocessors and microcontrollers: the central processing unit (CPU). This is the part responsible for fetching, decoding and executing the instructions that make up the user program. Early processors such as the Intel 8008, released in 1972, were nothing more than this basic functionality on a single chip, as you can see in Figure 1. The 8008 was even marketed as an “8-bit Parallel Central Processing Unit.” It needed to be supported by an external clock generator, memory and I/O.
As the years passed, microcontrollers and microprocessors began to diverge based on their intended applications. Microprocessors were developed for general-purpose computers, with a focus on computing throughput and features to support operating systems. Microcontrollers, intended for embedded industrial and consumer applications, focused on integrating memory and peripherals on-chip to lower the cost of the systems they supported.
One of the earliest examples of a microcontroller was the Texas Instruments (TI) TMS1000, released in 1974, which incorporated on-chip RAM, ROM, a 400kHz clock and I/O (Figure 2). A classic general-purpose microprocessor, the Intel 8086, arrived in 1978. It had a 16-bit data bus, a 20-bit address bus and a 6-stage instruction pipeline (Figure 3). The 8088 version of this device was famously used in the first IBM PC, released in 1981, and its successors can be found in PCs to this day.
Over subsequent decades, successive generations of microcontrollers and microprocessors continued to diverge. Microcontrollers acquired ADCs, DACs, complex timers and a plethora of communications interfaces, driven by their application in the industrial, automotive and appliance sectors. At the same time, microprocessors acquired faster clock speeds, floating-point coprocessors, sophisticated memory management units and various levels of cache in the quest for the higher throughput demanded by the explosion in personal computing. At this point, the distinction between the two was clear-cut.
All that said, today I would argue that the advent of smart devices and IoT means that the distinction between a general-purpose computer and an embedded device is no longer clear-cut. This has led to high-end microcontrollers acquiring features traditionally associated with microprocessors—such as memory management units, multi-level caches and the like. Similarly, microprocessors intended for smart devices are acquiring peripherals traditionally associated with microcontrollers, such as I2C buses and on-chip USB. It’s getting harder to make the distinction between the two.
So, does this mean there is no meaningful distinction anymore? I would not go that far, but I would suggest you don’t get hung up on the issue. Select the device with the right attributes for your application and get on with it.
8008 8-Bit Parallel Central Processor Unit. Rev 2. Intel Corporation, 1972.
TMS 1000 Series Data Manual. Texas Instruments, 1976.
8088 8-Bit HMOS Microprocessor. Intel Corporation, September 1990.