Vintage Programming Languages

For the last 30 years, C has been my programming language of choice. As you probably know, C was invented in the early 1970s by Dennis M. Ritchie for the first UNIX kernel and ran on a DEC PDP-11 computer. I am probably a bit old-fashioned. Yes, C is outdated, but I’m simply addicted to it, like plenty of other embedded system programmers. For me, C is a low-level but portable language that’s adequate for all my professional and personal projects, ranging from optimized code on microcontrollers to signal processing or even PC software. I know that there are many powerful alternatives like Java and C++, but, well, I’m used to C.

C is not the only vintage programming language, and playing with some of the others is definitely fun. This month, I’ll present several vintage languages and show you that each language has its pros and cons. Maybe you’ll find one of them helpful for a future project? I’m sure you won’t use COBOL in your next device, but what about FORTH or LISP? As you’ll see, thanks to web-based compilers and simulators, playing with programming languages is simple. So after you’re finished with this review of 1970s-era computing technology, give one or two of them a try!

BASIC

Like many teenagers in the 1970s, I learned to program with Beginner’s All-purpose Symbolic Instruction Code (BASIC). In 1980, after some early tests with programmable calculators, a friend let me try a Rockwell AIM-65 computer. An expanded version of the KIM-1, it had an impressive 1 KB of RAM and a BASIC interpreter in ROM. It was my first contact with a high-level programming language. I was really astonished. This computer seemed to understand me! “Print 1+1.” “Ok, that’s 2.” One year later, I bought my first computer, an Apple II. It came with a much more powerful BASIC interpreter in ROM, Applesoft BASIC. (This interpreter was developed for Apple by a small company named Microsoft, but that’s another story.)

PHOTO 1: An online emulator for my old Apple II

Now let’s launch an Apple II emulator and write some software for it. Look at Photo 1. Nice, isn’t it? This pretty emulator, developed in JavaScript by Will Scullin, is available online. Just launch it, enter the 10-line program shown, and then type “RUN”. It will calculate the factorial of eight for you: 8! = 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8, which is 40,320.

Since its invention in 1964 at Dartmouth College, BASIC has been more of a concept than a well-specified language. Plenty of variants exist, all the way up to Microsoft’s Visual Basic. It has plenty of disadvantages, especially in its early versions: a lack of structured data and control statements, mandatory line numbering, a lack of type checking, low speed, and so on. Nevertheless, it is ultra-simple to learn and to understand. Even if you have never used BASIC, you’ll understand the code shown in Photo 1 without any problem. The main program starts by initializing a variable N with the value 8. It then calls a subprogram that starts at line 100, displays the result F, and stops. The subprogram initializes F to 1 and multiplies the result by each integer up to N. Straightforward.
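
Since the photo isn’t reproduced as text here, the following is a minimal sketch of such a program in Applesoft BASIC (the exact line numbers and statements in Photo 1 may differ):

10 N = 8
20 GOSUB 100
30 PRINT F
40 END
100 F = 1
110 FOR I = 1 TO N
120 F = F * I
130 NEXT I
140 RETURN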

C LANGUAGE

Let’s compare this BASIC program with a C version of the same algorithm. For this article, I looked for online compilers and simulators, and I found a great option at www.ideone.com. Developed by Sphere Research Labs, it supports more than 60 programming languages. You can edit a program in any of them, compile it, and test it without having to install anything on your PC. This is great for experimenting.

PHOTO 2: At Ideone.com, you can enter, compile, and simulate numerous programming languages. Here you see C language.

The C variant of the factorial algorithm is depicted in Photo 2. I could have used plenty of different approaches, but I tried to stay as close as possible to the “spirit” of C. So, how does it compare with BASIC? The code is significantly more structured, but a little harder to read. C aficionados love short forms like f *= i++ (which multiplies f by i and then increments i) even when they can be avoided. While this makes the code shorter and helps the compiler with optimization, it is probably cryptic to someone new to the language.
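
The code in Photo 2 isn’t reproduced as text here either; a minimal C sketch in the same spirit, including that short form, might look like this (the code in the photo may differ):

#include <stdio.h>

int main(void)
{
    int n = 8;
    int i = 1;
    long f = 1;

    while (i <= n)
        f *= i++;    /* multiply f by i, then increment i */
    printf("%d! = %ld\n", n, f);
    return 0;
}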

Of course, C also has great strengths. In particular, it offers you precise control of the data types and memory representation, which helps for low-level programming. That’s probably why it has been used so widely for nearly 50 years.

FORTRAN & COBOL

Let’s stay in the 1970s. BASIC or assembly language was for hobbyists and experimenters. C was used by early UNIX programmers. The rest of the programming world was divided into two camps. Scientists used FORTRAN. The business world used COBOL.

FORTRAN (from FORmula TRANslation) was actually the first high-level programming language. Developed by an IBM team led by John Backus, the first version of FORTRAN was released in 1957 for the IBM 704 computer. It was followed by several incremental improvements: Fortran 66 (1966), Fortran 77, and Fortran 90, all the way up to Fortran 2008. Refer to Listing 1 for the factorial program using FORTRAN 77.

LISTING 1: This is the factorial program using FORTRAN 77.

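Since the listing is reproduced only as an image, here is a minimal FORTRAN 77 sketch of the same algorithm (the published listing may differ in its details):

      PROGRAM FACT
      INTEGER N, I, F
      N = 8
      F = 1
      DO 10 I = 1, N
         F = F * I
   10 CONTINUE
      WRITE (*,*) 'FACTORIAL =', F
      END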

It seems close to BASIC, right? That’s no surprise, as BASIC was in fact based on concepts from FORTRAN and from another language that has since disappeared, ALGOL. I’m sure that you are able to read and understand the FORTRAN in Listing 1, but its equivalent in COBOL is a bit stranger (see Listing 2). I must admit that it took me some time to get it working, even after reading some COBOL tutorials on the web. COBOL is an acronym for Common Business-Oriented Language, so it is not exactly targeting applications like factorial calculations. It was developed in 1959 by a consortium named CODASYL, based on work by Grace Hopper. Even though its popularity is fading, COBOL is still alive. I even read that an object-oriented version was released in 2002 (COBOL 2002) and upgraded again in 2014.

LISTING 2: The COBOL version looks a little stranger, right?

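As Listing 2 is also reproduced only as an image, here is a minimal COBOL sketch of the same computation (the published listing may differ):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. FACTORIAL.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 N PIC 9(2) VALUE 8.
       01 I PIC 9(2) VALUE 1.
       01 F PIC 9(9) VALUE 1.
       PROCEDURE DIVISION.
           PERFORM UNTIL I > N
               MULTIPLY I BY F
               ADD 1 TO I
           END-PERFORM
           DISPLAY F
           STOP RUN.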

PASCAL & FRIENDS

I never actually used FORTRAN or COBOL, but I developed software on my Apple II using PASCAL. Released in 1970 by Niklaus Wirth (ETH Zurich, Switzerland), PASCAL was probably one of the earliest efforts to encourage structured and typed programming. Based on ALGOL-W (also designed by Wirth), it was followed by MODULA-2 and OBERON, which were less well known but still influential.

Do you want to calculate a factorial in PASCAL? Here it is in Listing 3. It may look similar to FORTRAN or BASIC, but its advantages are in the details. PASCAL is a so-called strongly typed language. (You can’t add a tomato and a donut, contrary to C.) It also forbids unstructured programming, and it is very easy to read. PASCAL was a limited, but true, success. It was used in particular by Apple for the development of the Lisa computer as well as the first versions of the Macintosh. It is still in use today through one of its object-oriented descendants, DELPHI.

LISTING 3: This is the PASCAL version. Easy to read.

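For readers without the printed listing, a minimal PASCAL sketch of the factorial (the published Listing 3 may differ) is:

program Factorial;
var
  n, i: integer;
  f: longint;
begin
  n := 8;
  f := 1;
  for i := 1 to n do
    f := f * i;
  writeln('8! = ', f)
end.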

THE ADA STORY

In the 1970s, the United States Department of Defense (DoD) conducted a survey and found that it was using no less than 450 different programming languages. So, it decided to define and develop yet another one—that is, a new language to replace all of them. After long specification and selection phases, a proposal from Jean Ichbiah (CII Honeywell Bull, France) was selected. The result was ADA. The name ADA, and its military standard reference (MIL-STD-1815), honor the memory of Augusta Ada, Countess of Lovelace (1815–1852), who created one of the first actual algorithms intended for a machine.

While ADA is, well, strongly typed and very powerful, it’s complex and quite boring to use (see Listing 4). The key advantage of ADA is that it is well standardized and supports constructs like concurrency. Thanks to its very formal syntax and type checking, it is nearly bug-proof. Based on my minimal experience, it is so strict that the first version of the code usually works, at least after you correct hundreds of compilation errors. That’s probably why it is still widely used for critical applications ranging from airplanes to military systems, even if it failed as a general-purpose language.

LISTING 4: ADA is more verbose.

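As with the other listings, here is a minimal ADA sketch of the factorial (the published Listing 4 may differ):

with Ada.Text_IO; use Ada.Text_IO;

procedure Factorial is
   N : constant Integer := 8;
   F : Integer := 1;
begin
   for I in 1 .. N loop
      F := F * I;
   end loop;
   Put_Line ("8! =" & Integer'Image (F));
end Factorial;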

LISP & FORTH

ADA is a difficult language. In my opinion, LISP (List Processing) is far more interesting. It is an old story too. Designed in 1960 by John McCarthy (then at MIT), its concepts are still worth learning today. McCarthy’s goal was to develop a simple language with full capabilities. That’s quite the opposite of ADA. The result was LISP. The syntax can be frightening, but you must try it. Listing 5 is a version of the factorial calculation in LISP.

LISTING 5: LISP is definitively fun!

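Here is a minimal LISP sketch matching the recursive approach described below (the published Listing 5 may differ):

(defun fact (n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

(print (fact 8))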

In LISP, everything is a list, and a list is enclosed between parentheses. To execute a function, you create a list with a pointer to the function as its first element, followed by the parameters. For example, (- n 1) is a list that calculates n – 1. (if A B C) is a structure that evaluates A, and then evaluates either B or C based on the value of A. If you read this program, you will see that it is not based on a loop like all the other versions I’ve presented, but on a concept called recursion. The factorial of a number is calculated as 1 if the number is 0, and as N times the factorial of (N – 1) otherwise. LISP was in fact the first language to support recursion—meaning, the possibility for a function to call itself again and again. It was also the first language to manage storage automatically, using garbage collection. Even more interesting, in LISP everything is a list, even a program itself. So in LISP, it is possible to develop a program that generates another program and executes it!

Another of my favorites is FORTH. Designed by Charles Moore in 1968, FORTH also supports self-modifying programs like LISP, and it is probably even more minimalist. FORTH is based on the concept of a stack, and operators push and pop data from this stack. It uses a postfix syntax, also named Reverse Polish Notation, like vintage Hewlett-Packard calculators. For example, 1 2 + . means “push 1 on the stack,” “push 2 on the stack,” “pop two values from the stack, add them, and push the result back on the stack,” and “pop a value from the stack and display it.”

Here is our factorial program in FORTH:

: fact dup 1 do I * loop ;
8 fact .

The first line defines a new function named fact, and the second line executes it after pushing the value 8 on the stack. The syntax is of course a bit strange due to the postfix notation, but it becomes clear after a while. Let’s start with 8 on the stack. The command dup duplicates the top of the stack. The do…loop structure takes the limit and the starting index from the stack, so it executes I * with I varying from 1 to 7, and each iteration multiplies the top of the stack by the index I. That’s it. You can try it using another web-based programming and simulation host: https://repl.it. Look at the result in Photo 3.

PHOTO 3: This is an example of FORTH in the Repl.it online compiler and simulator.

FUN WITH PROLOG & APL

LISP and FORTH are fun, but PROLOG is stranger still. Developed by Alain Colmerauer and his team in 1972, PROLOG is the first of the so-called declarative languages. Rather than specifying an algorithm, a declarative language defines facts and rules. It then lets the system determine whether another fact can be deduced from them. An example will make this clearer.

LISTING 6: The PROLOG version based on a completely different paradigm.

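A minimal PROLOG sketch matching the description that follows (the published Listing 6 may differ):

fact(X, 1) :- X < 2.
fact(X, F) :- X >= 2, Y is X - 1, fact(Y, FM1), F is X * FM1.

:- fact(8, X), write(X), nl.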

Listing 6 is our factorial in PROLOG. The first rule states that the factorial of any number lower than 2 is 1. The second states that the factorial of any number X is F only if F is the product of X and another number, named here FM1, and if FM1 is the factorial of X – 1. This looks like recursion, and it is recursion, just expressed differently. The last line then states that X is the factorial of 8 and asks PROLOG to display X, and you have the result. This approach is confusing at first, but it is close to the needs of artificial intelligence algorithms.

Lastly, I can’t resist the pleasure of showing you another exotic vintage programming language, A Programming Language (APL). Refer to the factorial example in APL in Photo 4. I can’t even write it in the text of this article, because APL uses nonstandard characters.

PHOTO 4: APL looks great, right? Its unique keyboard alone is fun!

In fact, APL-enabled computers had APL-specific keyboards! Published in 1962 by Kenneth Iverson (Harvard University and then IBM), it was first a mathematical notation and only later a programming language. Based largely on data arrays, APL targets numerical calculations, so it isn’t a surprise that our factorial example is so compact in this language. Let’s understand it by reading the first line from right to left. The Greek letter omega is the parameter of the function (that is, 8 in this case). The small symbol just before the omega, called “iota,” generates a vector from 0 to N – 1, so here it generates 0 1 2 3 4 5 6 7. The 1+ adds one to each element of the array, which gives 1 2 3 4 5 6 7 8. Lastly, the ×/ multiplies all the values of the vector together, which is the factorial!
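
(With Unicode, the expression can at least be approximated in plain text today. A sketch, assuming a dfn-style definition and index origin 0 so that iota counts from zero, would be ×/1+⍳⍵, applied as {×/1+⍳⍵} 8. The exact form in Photo 4 may differ.)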

GET STARTED

After finishing this article, I searched the web for other interesting languages and found, well, a more than impressive website. Launch your browser right now and enter http://rosettacode.org. These crazy guys have simply listed 837 programming tasks and let the community program each of them in every programming language. Yes, all of them: no less than 648 different languages are referenced! Of course, I searched for a factorial calculation algorithm and found it. Versions of the factorial code are provided for 220 different languages! So, you can find versions similar to the ones I provided in this article, as well as versions for more recent languages (Java, Python, Perl, etc.). You will also find some truly obscure languages.

My goal with this article was to show you that languages other than C and Java can be fun and even helpful for specific projects. Vintage languages are not dead. For example, it seems that FORTH was used for NASA’s Rosetta mission. Moreover, innovation in computing languages goes on, and new and exciting alternatives are proposed every month!

Don’t hesitate to play with and test programming languages. The web is an invaluable tool for discovering new tools, so have fun!

This article appears in Circuit Cellar 323.

Robert Lacoste lives in France, between Paris and Versailles. He has 30 years of experience in RF systems, analog designs, and high-speed electronics. Robert has won prizes in more than 15 international design contests. In 2003 he started a consulting company, ALCIOM, to share his passion for innovative mixed-signal designs. Robert’s bimonthly Darker Side column has been published in Circuit Cellar since 2007.

Embedded Software: Tips & Insights (Sponsor: PRQA)

When it comes to embedded software, security matters. Read the following whitepapers to learn about: securing your embedded systems, MISRA coding standard, and using static analysis to overcome the challenges of reusing code.

  • Developing Secure Embedded Software
  • Guide to MISRA Coding
  • Using Static Analysis to Overcome the Challenges of Reusing Code for Embedded Software

VISIT THE DOWNLOAD PAGE

Programming Research Ltd (PRQA) helps its customers to develop high-quality embedded source code—software which is impervious to attack and executes as intended.

Reflections on Software Development

Present-day equipment relies on increasingly complex software, creating ever-greater demand for software quality and security. The two attributes, while similar in their effects, are different. Quality software is not necessarily secure, and secure software is not necessarily of good quality. Safe software is both high quality and secure: it does what it is supposed to do, it prevents hackers and other external causes from modifying it, and, should it fail, it does so in a safe, predictable way. Software verification and validation (V&V) reduces issues attributable to defects, that is, to poor quality, but does not currently address misbehavior caused by external effects.

Poor software quality can result in huge material losses, even loss of life. Consider some notorious examples of the past. An F-22 Raptor flight control error caused a $150 million aircraft to be destroyed. An RAF Chinook engine controller fault caused a helicopter crash with 29 fatalities. A Therac radiotherapy machine gave patients massive radiation overdoses, causing the deaths of two people. A General Electric power grid monitoring system’s failure resulted in a 48-hour blackout across eight US states and one Canadian province. Toyota’s electronic throttle controller was said to be responsible for the deaths of 89 people.

Clearly, software quality is paramount, yet too often it takes a back seat to time to market and development cost. One essential attribute of quality software is traceability. This means that every requirement can be traced via documentation from the specification down to the particular line of code—and, vice versa, every line of code can be traced up to the specification. The documentation process (not including testing and integration) is illustrated in Figure 1.

FIGURE 1: Simplified software design process documentation. Testing, verification and validation (V&V) and control documents are not shown.

The terminology is that of the DO-178 standard, which is mandatory for aerospace and military software. (Similarly, hardware development is guided by DO-254.) Other software standards may use different terminology, but the intentions are the same. DO-178 prescribes a document-driven process, for which many tools are available to the designer. Once the hardware-software partitioning has been established, the software requirements define the software architecture and the derived requirements. Derived requirements are those that the customer doesn’t include in the specification and might not even be aware of. For instance, turning on an indicator light may take one sentence in the specification, but the decomposition of this simple task might lead to many derived requirements.

Safety-Instrumented Functions

While requirements are being developed, test cases must be defined for each and every one of them. Additionally, to increase system safety, so-called Safety-Instrumented Functions (SIFs) should be considered. SIFs are monitors that cause the system to shut down safely if its performance fails to stay within previously defined safety limits. This is typically accomplished by redundancy in hardware, software, or both. If you neglect to address such issues at an early development stage, you might end up with an unsafe system and have to redo a lot of work later.
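
As a trivial illustration (mine, not from any standard), a software SIF might be a periodic monitor along these lines in C, where read_temperature() and enter_safe_state() are hypothetical, hardware-specific routines:

/* Hypothetical SIF monitor: force the safe state when a
   previously defined safety limit is exceeded. */
#define TEMP_LIMIT_C 85

extern int  read_temperature(void);  /* hypothetical sensor access */
extern void enter_safe_state(void);  /* hypothetical safe-shutdown handler */

void sif_monitor_step(void)
{
    if (read_temperature() > TEMP_LIMIT_C) {
        enter_safe_state();          /* fail in a safe, predictable way */
    }
}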

Quality design is also a bureaucratic chore. Version control and a configuration index must be maintained. The configuration index comprises the list of modules and their versions to be compiled for specific versions of the product under development. Without it, a configuration can be lost, and a great deal of development effort with it.

Configuration control and traceability are not just best engineering practices. They should be mandated whenever software is being developed. Some developers believe that software qualification to a specific standard is required only by the aerospace and military industries. Worse, some commercial software developers still subscribe to the so-called iron triangle: “Get to market fast, with all the features planned, and with a high level of quality. But pick only two.”

Engineers in safety-critical industries (such as medical, nuclear, automotive, and manufacturing) work with methods similar to DO-178 to ensure their software performs as expected. Large original equipment manufacturers (OEMs) now demand adherence to software standards: IEC 61508 for industrial controls, IEC 62304 for medical equipment, ISO 26262 for automotive, and so forth. The reason is simple. Unqualified software can lead to costly product returns and expensive lawsuits.

Software qualification is highly labor intensive and very demanding in terms of resources, time, and money. Luckily, its cost has been coming down thanks to a plethora of automated tools now being offered. Those tools are not inexpensive, but they do pay for themselves quickly. Considering the risk of lawsuits, recalls, brand damage, and other associated costs of software failure, no company can really afford not to go through a qualification process.

Testing

As with hardware, quality must be built into the software, and this means following strict process rules. You can’t expect to test quality into the product at the end. Some companies have tried, and the results have been the infamous failures noted above.
Testing embedded controllers often presents a challenge because you need the final hardware before it is finished. Nevertheless, if you give testing due consideration as you prepare the software requirements, much can be accomplished by working in virtual or simulated environments. LDRA (www.ldra.com) is one great tool for this task.
Numerous methods exist for software testing. For example, dynamic code analysis examines the program during its execution, while static analysis looks for vulnerabilities as well as programming errors. It has been shown mathematically that 100% test coverage is impossible to achieve. But even if it were, 35% to 40% of defects result from missing logic paths and another 40% from the execution of unique combinations of logic paths. Such defects wouldn’t be caught by testing, but they can be mitigated by SIFs.

Much embedded code is still developed in-house (see Figure 2). Is it possible for companies to improve programmers’ efficiency in this most labor-intensive task? Once again, the answer lies in automation. Nowadays, many tools come as complete suites providing various analyses, code coverage, coding standards compliance, requirements traceability, code visualization, and so forth. These tools are regularly seen among developers of avionics and military software, but they are not as frequently used by commercial developers because of their perceived high cost and steep learning curve.

FIGURE 2: Distribution of embedded software sources. Most is still developed in-house.

With the growth of cloud computing and the Internet of Things (IoT), software security is gaining unprecedented importance. Some security measures can be incorporated in hardware, while others are in software. Data encryption and password protection are vital parts of it. Unfortunately, some developers still do not treat security as seriously as they should. Security experts warn that numerous IoT developers have failed to learn the lessons of the past, and that a “big IoT hack” in the near future is inevitable.

Security Improvements

On a regular basis, the media report on security breaches (e.g., governmental organization hacks, bank hacks, and automobile hacks). What can be done to improve security?

There are several techniques—such as Common Weakness Enumeration (CWE)—that can help to improve our chances. However, securing software is likely a far more daunting task than achieving comprehensive V&V test coverage. One successful hack proves the security is weak. But how many unsuccessful hacks by test engineers are needed to establish that security is adequate? Eventually, a manager, probably relying on some statistics, will have to decide that enough effort has been spent and the software can be released. Different types of systems require different levels of security, but how is this to be determined? And what about the human factor? Not every test engineer has the necessary talent for code breaking.

History teaches us that no matter how good a lock, a cipher, or a password is, someone has eventually broken it. Several security developers in the past challenged the public to break their “unbreakable” code for a reward, only to see their code broken within hours. How responsible is it to keep sensitive data and systems access available in cyberspace just because it may be convenient, inexpensive, or fashionable? Have the probability and the consequences of a potential breach always been duly considered?

I have used cloud-based tools, such as the excellent mbed, but would not dream of using them for a sensitive design. I don’t store data in the cloud, nor would I consider IoT for any system whose security is vital. I don’t believe cyberspace can provide sufficient security for many systems at this time. Ultimately, the responsibility for security is ours. We must judge whether the use of IoT or the cloud for a given product would be responsible. At present, I see little evidence to convince me the industry is adequately serious about security. It will surely improve with time, but until it does, I am not about to take unnecessary risks.


George Novacek is a professional engineer with a degree in Cybernetics and Closed-Loop Control. Now retired, he was most recently president of a multinational manufacturer of embedded control systems for aerospace applications. George wrote 26 feature articles for Circuit Cellar between 1999 and 2004. Contact him at gnovacek@nexicom.net with “Circuit Cellar” in the subject line.

Aurora Software for Evaluation of ArcticPro eFPGA IP

QuickLogic Corp. recently announced the release of its new Aurora software, which enables SoC developers to evaluate the integration of embedded FPGA (eFPGA) IP into devices designed for different GlobalFoundries (GF) process nodes. The Aurora eFPGA development tool supports design implementation from RTL through place and route. It enables SoC developers to determine the amount of eFPGA resources needed to support a design (including logic cell count, clock network requirements, and routing utilization) and also provides the estimated eFPGA die area associated with those resources. The current version of the tool supports GF’s 40-nm node. Support for the 65-nm node and the 22FDX (FD-SOI) platform will be released in the future.

Source: QuickLogic Corp.

Updated LiveLink for SOLIDWORKS

COMSOL recently updated LiveLink for SOLIDWORKS. An add-on to the COMSOL Multiphysics software, LiveLink for SOLIDWORKS enables a CAD model to be synchronized between the two software packages. Furthermore, it provides easy access for running simulation apps that can be used in synchronicity with SOLIDWORKS software. You can build apps with the Application Builder to let users analyze and modify a geometry from SOLIDWORKS software right from the app’s interface. Users can browse and run apps from within the SOLIDWORKS interface, including those that use a geometry that is synchronized with SOLIDWORKS software.

The update includes a new Bike Frame Analyzer app in the Application Libraries. It leverages LiveLink for SOLIDWORKS to interactively update the geometry while computing the stress distribution in the frame as it is subjected to various loads and constraints. You can use the app to easily test different configurations of a bike frame for parameters such as dimensions, materials, and loads. The app computes the stress distribution and the deformation of the frame based on those structural dimensions, materials, and loads/constraints.

Source: COMSOL

The Future of Test-First Embedded Software

The term “test-first” software development comes from the original days of extreme programming (XP). In Kent Beck’s 1999 book, Extreme Programming Explained: Embrace Change (Addison-Wesley), his direction is to create an automated test before making any changes to the code.

Nowadays, test-first development usually means test-driven development (TDD): a well-defined, continuous feedback cycle of code, test, and refactor. You write a test, write some code to make it pass, make improvements, and then repeat. Automation is key though, so you can run the tests easily at any time.

TDD is well regarded as a useful software development technique. The proponents of TDD (including myself) like the way in which the code incrementally evolves from the interface as well as the comprehensive test suite that is created. The test suite is the safety net that allows the code to be refactored freely, without worry of breaking anything. It’s a powerful tool in the battle against code rot.

To date, TDD has had greater adoption in web and application development than in embedded software. Recent advances in unit test tools, however, are set to make TDD more accessible for embedded development.

In 2011, James Grenning published his book, Test Driven Development for Embedded C (Pragmatic Bookshelf). Six years later, this is still the authoritative reference for embedded test-first development and the entry point to TDD for many embedded software developers. It explains in detail how TDD works for an unfamiliar audience and addresses many of the traditional concerns, like how this will work with custom hardware. Today, the book is still completely relevant, but when it was published, the state-of-the-art tools were simple unit test and mocking frameworks. These frameworks require a lot of boilerplate code to run tests, and any mock objects need to be created manually.

In the rest of the software world though, unit test tools are significantly more mature. In most other languages used for web and application development, it’s easy to create and run many unit tests, as well as to create mock objects automatically.
Since 2011, the state of TDD tools has advanced considerably with the development of the open-source tool Ceedling. It automates the running of unit tests and the generation of mock objects in C applications, making it a lot easier to do TDD. Today, if you want to test-drive embedded software in C, you don’t need to roll your own test build system or mocks.
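
To give a flavor of what this looks like, here is a minimal sketch of a Unity-style test of the kind Ceedling runs (factorial.h and factorial() are hypothetical stand-ins for a module under test):

#include "unity.h"
#include "factorial.h"  /* hypothetical module under test */

void setUp(void) {}     /* runs before each test */
void tearDown(void) {}  /* runs after each test */

void test_FactorialOfEight_Returns40320(void)
{
    TEST_ASSERT_EQUAL_UINT32(40320, factorial(8));
}

Ceedling generates the test runner and any mocks, so this file plus the production code is essentially all you write.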

With better tools making unit testing easier, I suspect that in the future test-first development will be more widely adopted by embedded software developers. Previously it was relegated to the few early adopters willing to put in the effort; with tools lowering the barrier to entry, it will be easier for everyone to do TDD.
Besides the tools to make TDD easier, another driving force behind greater adoption of test-first practices will be the simple need to produce better-quality embedded software. As embedded software continues its infiltration into all kinds of devices that run our lives, we’ll need to be able to deliver software that is more reliable and more secure.

Currently, unit tests for embedded software are most popular in regulated industries—like medical or aviation—where the regulators essentially force you to have unit tests. This is one part of a strategy to prevent you from hurting or killing people with your code. The rest of the “unregulated” embedded software world should take note of this approach.

With the rise of the Internet of things (IoT), our society is increasingly dependent on embedded devices connected to the Internet. In the future, the reliability and security of the software that runs these devices is only going to become more critical. There may not be a compelling business case for it now, but customers—and perhaps new regulators—are going to increasingly demand it. Test-first software can be one strategy to help us deal with this challenge.


This article appears in Circuit Cellar 318.


Matt Chernosky wants to help you build better embedded software—test-first with TDD. With years of experience in the automotive, industrial, and medical device fields, he’s excited about improving embedded software development. Learn more from Matt about getting started with embedded TDD at electronvector.com.

The Flow Coder

Products come and go. New products are developed all the time. So, what’s the key to success? John Dobson has successfully run Halifax, UK-based Matrix TSL for 23 years. During that time, the company has gone through some changes. He recently gave Circuit Cellar a tour of Matrix’s headquarters, shared a bit about the company’s history, and talked about product diversification.

Matrix started 23 years ago as Matrix Multimedia, a CD-ROM publisher. “The Internet came along and destroyed that business, and we had to diversify, and so we diversified into electronics and into education in particular,” Dobson said.

Matrix’s flagship product is Flowcode software. “It basically uses flowcharts to allow people without a huge amount of coding experience to develop complex electronics systems,” Dobson explained. “Sometimes programming in C or other languages is a little complicated and time consuming. So Flowcode has a lot of components to it with libraries and things and lots of features that allow people to design complex electronic systems quickly.”

Today, while still focused on the education market, the latest version of Flowcode has about 3,000 industrial users. Dobson said many of the industrial users are test engineers whose specialty isn’t necessarily coding.

Note: Circuit Cellar is currently running a Flowcode 7 promotion with Matrix TSL.

eSOL RTOS & Debugger Support for Software Development

Imperas Software recently announced its support for eSOL’s eMCOS RTOS and eBinder debugger. The partnership is intended to accelerate embedded software development, debugging, and testing.

The Imperas Extendable Platform Kit (EPK) features a Renesas RH850F1H device and runs the eSOL eMCOS real-time operating system. Imperas simulators can use the debugger from the eSOL IDE, eBinder, for efficient software debugging and testing.

Source: eSOL

October Code Challenge (Sponsor: Programming Research)

Ready to put your programming skills to the test? Take the new Electrical Engineering Challenge (sponsored by Programming Research). Find the error in the code for a shot to win prizes, such as an Amazon Gift Card, a Circuit Cellar magazine digital subscription, or a discount to the Circuit Cellar webshop.

The following program will compile with no errors. It runs and completes with no errors.

Find the error and submit your answer via the online submission form below. Submission deadline: 2 PM EST, October 20.

Take the challenge now!

September Code Challenge (Sponsor: Programming Research)

Ready to put your programming skills to the test? Take the new Electrical Engineering Challenge (sponsored by Programming Research). Find the error in the code for a shot to win prizes, such as an Amazon Gift Card, a Circuit Cellar magazine digital subscription, or a discount to the Circuit Cellar webshop.

The following program will compile with no errors. It will crash when run. This is an example of working with linked lists. The output should be:

LinkedList : 4321

LinkedList in reverse order : 1234

Find the error and submit your answer via the online submission form below. Submission deadline: 2 PM EST, September 20.

Take the challenge now!

August Code Challenge (Sponsor: Programming Research)

Ready to put your programming skills to the test? Take the new Electrical Engineering Challenge (sponsored by Programming Research). Find the error in the code for a shot to win prizes, such as an Amazon Gift Card or a Circuit Cellar magazine digital subscription.


The following is a sample piece of code that has a subtle programming error that would cause the software to fail. This code is a C++ program written as a function.

Specification: The program does a bubble sort for a list (array) of numbers. That means that it takes a list of numbers like this: 5 23 7 1 9 and turns it into 1 5 7 9 23.

Find the error and submit your answer via the online submission form below. Submission deadline: 2 PM EST, July 20.

Take the challenge now!

Latest Release of COMSOL Multiphysics and COMSOL Server

COMSOL has announced the latest release of the COMSOL Multiphysics and COMSOL Server simulation software environment. With hundreds of user-driven features and enhancements, COMSOL software version 5.2a expands electrical, mechanical, fluid, and chemical design and optimization capabilities.

In COMSOL Multiphysics 5.2a, three new solvers deliver faster and more memory-efficient computations:

  • The smoothed aggregation algebraic multigrid (SA-AMG) solver is efficient for linear elastic analysis and many other types of analyses. It is very memory conservative, making it possible to run structural assemblies with millions of degrees of freedom on a standard desktop or laptop computer.
  • The domain decomposition solver has been optimized for handling large multiphysics models.
  • A new explicit solver based on the discontinuous Galerkin (DG) method for acoustics in the time-domain enables you to perform more realistic simulations for a given memory size than was previously possible.

The complete suite of computational tools provided by COMSOL Multiphysics software and its Application Builder enables you to design and optimize your products and create apps. Simulation apps enable users without any previous experience with simulation software to run analyses. With version 5.2a, designers can build even more dynamic apps where the appearance of the user interface can change during run time, centralize unit handling to better serve teams working across different countries, and include hyperlinks and videos.

Source: COMSOL

BenchVue 3.5 Software Update for Instrument Control, Test Automation, & More

Keysight Technologies recently released BenchVue 3.5, which is an intuitive platform for the PC that provides multiple-instrument measurement applications, data capture, and solution applications. Programming or separate instrument drivers are not required.

When you connect an instrument to your PC over LAN, GPIB or USB, the instrument is automatically configured for use in BenchVue. With the BenchVue Test Flow app, you can quickly create automated test sequences. BenchVue 3.5 also features new apps that support signal generators, universal counters, and Keysight’s FieldFox Series of handheld analyzers.

BenchVue’s features and specs:

  • Expandable apps that provide instrument control with plug and play functionality
  • Data logging for instruments (e.g., digital multimeters, power supplies, oscilloscopes, and FieldFox analyzers)
  • Rapid test automation development and analysis
  • Three-click exporting to common data formats (e.g., .csv, MATLAB, Word, and Excel)

BenchVue 3.5 software is available free of charge. Upgrades to extend BenchVue’s functionality are also available and are priced accordingly.

Source: Keysight

Virtual Software Development for Embedded Developers

Embeddetech will launch a Kickstarter campaign on June 20 for its Virtuoso software. Virtuoso is a powerful virtual device framework intended for custom electronics designers. With it, you can virtualize embedded systems. This means that firmware application developers can drag and drop commonly used components (e.g., LEDs, touch screens, and keypads) or develop new components from scratch and then start developing applications. With Virtuoso, a fully functional replica of the hardware accelerates firmware development while the hardware is developed in parallel.

In early 2017, Embeddetech plans to bring photo-realistic, real-time, 3-D virtualization to embedded software development using Unreal Engine, which is a powerful game engine developed by Epic Games. Embeddetech has developed a second framework which adapts Unreal Engine to Microsoft’s .NET Framework, allowing business applications to leverage the power of the modern 3-D game development workflow.

Source: Embeddetech

Thermoelectric Module Simulation Software Simplifies Design

To decrease thermal design time for design engineers, Laird recently improved the AZTEC thermoelectric module (TEM) simulation program algorithms. The AZTEC product selection tool enables you to specify input variables based on application attributes and then review the software’s analysis outputs. Now you can select the best TEM by easily comparing TEM datasheets. In addition, the software includes an analysis worksheet for simulating TEM device functionality.

The AZTEC product selection tool—which is available at Lairdtech.com—uses a variety of input variables (i.e., heat load, ambient and control temperatures, input voltage requirement, and thermal resistance of hot-side heat exchangers) to recommend appropriate TEMs to meet your application’s needs. Laird updated the software with its newest TEM product offerings.

The Analysis Worksheet Tool simulates expected thermoelectric output parameters based on a given set of thermal and electrical operating points. The included output parameters are:

  • the hot and cold side temperatures of the TEM
  • heat pumped at the cold surface of the TEM
  • coefficient of performance (COP)
  • input power requirements

The total hot side heat dissipation is also calculated.

The included Qc Estimating Worksheet calculates an estimate of the heat load for device (spot) or chamber (volume) cooling applications. Computations are made based on the inputs (e.g., temperature requirements, volumetric dimensions, insulation thickness, material properties, and active heat load) you provide.

Source: Laird