About Circuit Cellar Staff

Circuit Cellar's editorial team comprises professional engineers, technical editors, and digital media specialists. You can reach the Editorial Department at editorial@circuitcellar.com, @circuitcellar, and facebook.com/circuitcellar.

Radar Module for Makers

OmniPreSense Corp.’s recently unveiled radar module detects objects 5 to 10 m away, giving electronic systems enhanced information about the world around them. Intended for the “maker” community, the $169 OPS241-A module can turn any Android phone that supports USB On-the-Go (OTG) into a radar gun.

The 53 mm × 59 mm OPS241-A short-range radar reports the motion, speed, and direction of objects detected in its wide field of view. You can plug it into a Raspberry Pi’s USB port to enable a variety of useful applications. An API provides direct control of the OPS241-A and allows for changes to reported units (e.g., meters/second and miles/hour), transmitted power, and other settings. Compared to PIR or ultrasonic sensors, the OPS241-A provides increased range, a wider coverage area, and immunity to noise and light, while providing enhanced information about the detected object. Potential applications range from security motion detection to a radar gun: plug the OPS241-A directly into an Android phone or tablet running USB OTG and a terminal program, and you have a radar gun. When mounted on a drone, the OPS241-A can detect objects 5 to 10 m away for collision avoidance.
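
If you'd rather script the module than use a terminal program, a few lines of code will do. Here is a minimal reader sketch for Linux (e.g., a Raspberry Pi): the device path, baud rate, and one-plain-text-reading-per-line output format are our assumptions, to be checked against the OPS241-A API guide, not taken from it.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

#include <cstdio>

int main() {
    // Assumed device path for a USB CDC serial module on a Raspberry Pi.
    int fd = open("/dev/ttyACM0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tty{};
    tcgetattr(fd, &tty);
    cfsetispeed(&tty, B9600);          // assumed baud rate; check the API guide
    tty.c_cflag |= (CLOCAL | CREAD);   // enable receiver, ignore modem lines
    tty.c_lflag |= ICANON;             // read one line (one report) at a time
    tcsetattr(fd, TCSANOW, &tty);

    char line[128];
    for (;;) {
        ssize_t n = read(fd, line, sizeof(line) - 1);
        if (n <= 0) break;
        line[n] = '\0';
        std::printf("speed report: %s", line);   // e.g., "1.4" (m/s, assumed)
    }
    close(fd);
    return 0;
}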

Source: OmniPreSense Corp.

 

New MEMS Accelerometers for Industrial Condition Monitoring Apps

Analog Devices’s new ADXL1001 and ADXL1002 high-frequency, low-noise MEMS accelerometers are designed for industrial condition-monitoring applications. The accelerometers deliver the high-resolution vibration measurements needed for the early detection of machine failure (e.g., bearing faults).

The ADXL1001 and ADXL1002’s benefits, features, and specs:

  • Deliver ultra-low noise density over an extended bandwidth with high-g range
  • Available in two models with full-scale ranges of ±100 g (ADXL1001) and ±50 g (ADXL1002)
  • Typical noise density is 30 μg/√Hz for the ADXL1001 (20 mV/g sensitivity) and 25 μg/√Hz for the ADXL1002 (40 mV/g sensitivity); a quick noise estimate sketch follows this list
  • Operate on a single voltage supply from 3.0 to 5.25 V
  • Electrostatic self-test
  • Overrange indicator
  • Rated for operation over a –40°C to 125°C temperature range
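
As a quick sanity check on those numbers, the expected RMS noise scales with the square root of the measurement bandwidth (noise_rms = noise density × √bandwidth). A minimal sketch; the 10-kHz bandwidth is our assumption, not a datasheet figure:

#include <cmath>
#include <cstdio>

int main() {
    // noise_rms = noise density x sqrt(bandwidth)
    double density_ug_rtHz = 25.0;   // ADXL1002 spec: 25 ug/sqrt(Hz)
    double bandwidth_hz = 10000.0;   // assumed measurement bandwidth
    double rms_mg = density_ug_rtHz * std::sqrt(bandwidth_hz) / 1000.0;
    std::printf("expected noise: %.2f mg RMS\n", rms_mg);   // prints 2.50
    return 0;
}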

The accelerometers cost $29.61 each in 1,000-unit quantities.

Source: Analog Devices

New Thermal Imaging Solution for Benchtop Electronics Testing

FLIR Systems recently launched the FLIR ETS320 thermal imaging solution for benchtop electronics testing. Well suited for testing and analyzing the thermal characteristics of electronic components and printed circuit boards (PCBs), the battery-powered FLIR ETS320 comprises a high-sensitivity thermal camera and an adjustable, hands-free table stand. With more than 76,000 points of temperature measurement, the rechargeable FLIR ETS320 enables you to monitor power consumption, detect hot spots, and identify potential points of failure during product development. The highly accurate camera can visualize small temperature differences so you can evaluate thermal performance, ensure environmental compatibility, and troubleshoot problems.

 

The FLIR ETS320 ships fully assembled and ready to connect to a PC running FLIR Tools software for detailed data analysis, recording, and reporting. The integrated test stand and sliding mount design offer flexibility when imaging electronic components of various sizes.

The FLIR ETS320 costs $2499 and is available through established FLIR distribution partners.

Source: FLIR Systems

Issue 322: EQ Answers

Here are the answers to the four EQ problems that appeared in Circuit Cellar 322.

Problem 1: Some time ago (Issue #274), we discussed how theoretical channel capacity is a function of the bandwidth and the signal-to-noise ratio of the channel. For example, an SNR of 40 dB limits the channel to 100 different symbols at best, or about 6.64 bits per symbol. It is tempting to use just the integer part of that number, and use only 64 states to encode 6 bits per symbol. But there is a way to use all 100 symbols and maximize the information bandwidth of the channel. Describe the general approach of how you’d encode binary data to be transmitted through an N-state channel.

Answer 1: In the most general case, you would treat a binary (base 2) message as one giant number. In order to transmit that message through a channel that can only carry N different symbols, you would convert that number to base N and transmit the resulting digits one at a time. In practice, you would break a long data stream into fixed-length blocks and then transmit those blocks one at a time, using the above scheme, possibly adding extra error detecting and/or correcting bits to each block.


Problem 2: As a specific example, a 24-dB SNR would limit a channel to no more than 15 symbols. What would be the most efficient way to stream 8-bit bytes through such a channel?

Answer 2: For the specific case of 8-bit bytes through a 15-symbol channel, you might pick a block length of 20 bytes, after noticing that 15⁴¹ ≈ 1.6586 × 10⁴⁸ is just a little bit larger than 2¹⁶⁰ ≈ 1.4615 × 10⁴⁸. Each block of 20 bytes would require 41 symbols to be transmitted, achieving an efficiency of 160/41 = 3.90 bits/symbol, which is very close to the theoretical maximum of 3.91 bits/symbol that’s implied by having 15 states.
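
Here is a minimal sketch of that scheme (all names are ours): it converts a 20-byte block, treated as one big-endian integer, into 41 base-15 symbols by repeated long division, exactly the "one giant number" approach from Answer 1.

#include <cstdint>
#include <iostream>
#include <vector>

// Emit the base-N digits of a fixed-length block by repeated long division
// of the big-endian byte string (least significant digit comes out first).
std::vector<uint8_t> toBaseN(std::vector<uint8_t> block, unsigned base,
                             std::size_t nDigits) {
    std::vector<uint8_t> digits;
    for (std::size_t i = 0; i < nDigits; ++i) {
        unsigned rem = 0;
        for (auto &b : block) {              // divide the whole block by `base`
            unsigned cur = rem * 256u + b;
            b = static_cast<uint8_t>(cur / base);
            rem = cur % base;
        }
        digits.push_back(static_cast<uint8_t>(rem));
    }
    return digits;
}

int main() {
    std::vector<uint8_t> block(20, 0xFF);    // worst case: 2^160 - 1
    std::vector<uint8_t> symbols = toBaseN(block, 15, 41);
    std::cout << symbols.size() << " symbols, "
              << 160.0 / symbols.size() << " bits per symbol\n";
}

Since 15⁴¹ exceeds 2¹⁶⁰, 41 digits are always enough, and the receiver reverses the process by multiply-and-add.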


Problem 3: When we talk about Kirchhoff’s Voltage Law (the sum of voltages around a complete loop is zero) and Kirchhoff’s Current Law (the sum of currents into a circuit node is zero), we are ignoring some important real-world effects in order to simplify circuit analysis. What are they?

Answer 3: There are three important effects that the “lumped component” model (the model on which KVL and KCL are based) ignores:

  • The fact that any circuit node of nonzero size has some capacitance (the ability to store charge) relative to the world at large.
  • The fact that any circuit loop of nonzero size has some inductance (the ability to store energy in a magnetic field).
  • The fact that fields propagate at a specific velocity (e.g., at the speed of light in a vacuum).

Note that “parasitic” capacitances and inductances can be explicitly added to a lumped component model (where relevant) in order to bring the analysis closer to reality. However, dealing with propagation speed issues (such as transmission line effects) requires a different kind of analysis. Such effects can only be crudely approximated in a lumped-component model.


Problem 4: When doing a high-level design of a sensor-based system, it is often useful to consider exactly what the “observables” are — quantities that can be measured and acted on. For example, many people are interested in measuring the distance between portable devices based on exchanging radio messages, using a protocol such as Bluetooth or WiFi. What exactly are the observables in such a system, and how might they be used to estimate the distance between the transmitter and the receiver?

Answer 4: There are a number of observables associated with a radio network, including:

  • The contents of a message
  • The time of arrival of a message
  • The direction of arrival of a message
  • The radio signal strength

The contents of a message can be used to calculate distance if the transmitter reports its own position in a mutually agreed-upon coordinate system, and the receiver also knows its own position. The time of arrival can be used to calculate distance if the time that the message was transmitted is also known. Again, you can get this from the contents of the message if the transmitter and receiver have adequately synchronized clocks.

The direction of arrival can be used (assuming the receiver’s antenna is directional enough) to determine the direction of the transmitter relative to the receiver’s orientation. Measurements from multiple transmitters can establish the receiver’s position (and hence its distance) relative to those transmitters. However, this is easily confused by signal reflections in the environment (multipath).

The radio signal strength can also be used to estimate distance, but it is a measurement that depends on many things besides distance that need to be accounted for, such as antenna gain and orientation (at both ends), multipath and RF absorption, and transmitter power-level calibration. This makes it the least useful (and least accurate) way to measure distance.
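
To illustrate that last point, the usual starting model for RSSI ranging is log-distance path loss. A minimal sketch; both calibration constants (RSSI at 1 m and the path-loss exponent) are assumed values that would have to be measured for a real deployment:

#include <cmath>
#include <cstdio>

// Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0).
// Inverting it gives a (very rough) range estimate from one RSSI reading.
double estimateDistanceM(double rssi_dbm, double rssi_at_1m_dbm, double n) {
    return std::pow(10.0, (rssi_at_1m_dbm - rssi_dbm) / (10.0 * n));
}

int main() {
    // Assumed calibration: -59 dBm at 1 m; exponent 2.7 for a cluttered room.
    std::printf("%.1f m\n", estimateDistanceM(-75.0, -59.0, 2.7));  // ~3.9 m
    return 0;
}

Every term that is not distance (antenna gain, absorption, multipath) shows up as error in the exponent or the 1-m reference, which is exactly why RSSI is the least accurate observable of the four.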

Electrical Engineering Crossword (Issue 322)

The answers to Circuit Cellar 322’s crossword are now available.

Across

  1. SNIFFER—Software for monitoring network traffic
  2. ACTUATION—To put into mechanical action
  3. INVERTER—DC to AC
  4. NULL—Valueless character
  5. REPEATER—Receives and amplifies a weak signal before retransmitting it
  6. JITTER—The deviation of some aspect of a digital signal’s pulses
  7. TESLA—Magnetic flux density
  8. PASCAL—1 newton/m²
  9. ANEMOMETER—Measures wind speed and direction
  10. CADMIUM—48 (Cd)
  11. QUIESCENT—Inactive

Down

  1. SHUNT—Diverts a current
  2. NANOVOLT—One billionth of a volt
  3. TELEMETRY—Automatic remote transmission of data
  4. INERTIA—Newton’s 1st
  5. INSULATOR—Highly resistant, nonconductive material
  6. ABEND—Abnormal end
  7. EDDY—Foucault current
  8. GAUSS—G
  9. CLONE—Exact replica

Adaptive Robotics: An Interview with Henk Kiela

The Adaptive Robotics Lab at Fontys University in Eindhoven, the Netherlands, has a high “Q” factor (think “007”). Groups of students are always working on robotics projects. Systems are constantly humming. Robots are continually moving around. Amid the melee, Circuit Cellar interviewed Professor Henk Kiela about the lab, innovations like adaptive robotics, and more.

“Adaptive robotics is the new breed of robots that are going to assist workers on the shop floor and that will take care of a high variety of routine activities. Relieving workers from routine work allows them to concentrate on their skills and knowledge and prevents them from getting lost in details. In a car-manufacturing operation you have a lot of robots doing more or less the same job, a top-down controlled robotization. We recognise that the new generation of robots will act more like an assistant for the worker—a flexible workforce that can be configured for different types of activities.”—Henk Kiela

3-D Object Segmentation for Robot Handling

A commercial humanoid service robot needs the capability to perform human-like tasks. One such task for a robot in a medical scenario would be to provide medicine to a patient: the robot would need to detect the medicine bottle and move its hand to the object to pick it up. Locating and picking up a medicine bottle is trivial for a human, but it is a challenging problem for a robot. A robot makes sense of its environment based on the visual information it receives from a camera. Even then, creating efficient algorithms to identify an object of interest in an image, calculate the location of the robot’s arm in space, and enable the robot to pick the object up is a daunting task. For our senior capstone project at Portland State University, we researched techniques that would enable a humanoid robot to locate and identify a common object (e.g., a medicine bottle) and acquire real-time position information about the robot’s hand in order to guide it to the target object. We used an InMoov open-source, 3-D-printed humanoid robot for this project (see Photo 1).

Photo 1: The InMoov robot built at Portland State University’s robotics lab

SYSTEM OVERVIEW

In the field of computer vision, there are two dominant approaches to this problem—one using pixel-based 2-D imagery and another using 3-D depth imagery. We chose the 3-D approach because of the availability of state-of-the-art open source algorithms, and because of the recent influx of cheap stereo depth cameras, like the Intel RealSense R200.

Solving this problem further requires a proper combination of hardware and software along with a physical robot to implement the concept. We used an Intel RealSense R200 depth camera to collect 3-D images, and an Intel NUC with a 5th Generation Core i5 to process the 3-D image information. Likewise, for software, we used the open-source Point Cloud Library (PCL) to process 3-D point cloud data.[1] PCL contains several state-of-the-art 3-D segmentation and recognition algorithms, which made it easier for us to compare our design with other works in the same area. Similarly, the information relating to the robot arm and object position computed using our algorithms is published to the robot via the Robot Operating System (ROS). It can then be used by other modules, such as a robot arm controller, to move the robot hand.
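
As a sketch of the ROS side, a node might publish a computed object position like this. The topic name, message type, frame, and placeholder coordinates are illustrative choices, not necessarily the ones used in our package:

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

int main(int argc, char **argv) {
    ros::init(argc, argv, "object_locator");
    ros::NodeHandle nh;
    ros::Publisher pub =
        nh.advertise<geometry_msgs::PointStamped>("target_object/position", 1);

    ros::Rate rate(10);                       // publish at 10 Hz
    while (ros::ok()) {
        geometry_msgs::PointStamped msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "camera_link";  // position in the camera frame
        msg.point.x = 0.42;                   // placeholder centroid (meters)
        msg.point.y = 0.00;
        msg.point.z = 0.65;
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}

A downstream node, such as the arm controller, simply subscribes to the same topic and transforms the point into its own frame.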

OBJECT SEGMENTATION PIPELINE

Object segmentation is widely applied in computer vision to locate objects in an image.[2] The basic architecture of our package, as well as many others in this field, is a sequence of processing stages—that is, a pipeline. The segmentation pipeline starts with capturing an image from a 3-D depth camera. By the last stage of the pipeline, we have obtained the location and boundary information of the objects of interest, such as the hand of the robot and the nearest grabbable object.

Figure 1: 3-D object segmentation pipeline

The object segmentation pipeline of our design is shown in Figure 1. There are four main stages in our pipeline: downsampling the input raw image, using RANSAC and plane extraction algorithms, using the Euclidean Clustering technique to segment objects, and applying a bounding box to separate objects. Let’s review each one.

The raw clouds coming from the camera have a resolution that is far too high for segmentation to be feasible in real time. The basic technique for solving this problem is called “voxel filtering,” which entails compressing several nearby points into a single point.[3] In other words, all points in some specified cubical region of volume are combined into a single point. The parameter that controls the size of this volume element is called the “leaf size.” Figure 2 shows an example of applying the voxel filter with several different leaf sizes. As the leaf size increases, the point cloud density decreases proportionally.
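
In PCL, this stage is only a few lines. A minimal sketch, with the 1-cm leaf size as an example value rather than our tuned parameter:

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Downsample a raw cloud: every point inside each leaf-sized cube is
// replaced by the cube's centroid.
pcl::PointCloud<pcl::PointXYZ>::Ptr
downsample(pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud, float leafM) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> vg;
    vg.setInputCloud(cloud);
    vg.setLeafSize(leafM, leafM, leafM);   // e.g., 0.01f for 1-cm cubes
    vg.filter(*out);
    return out;
}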

Figure 2: Down-sampling results for different leaf sizes

Random sample consensus (RANSAC) is a quick method of finding mathematical models. In the case of a plane, the RANSAC method creates a virtual plane that is then rotated and translated throughout the scene, looking for the placement whose data points best fit the model (i.e., inliers). The two parameters used are the threshold distance and the number of iterations. The greater the threshold, the thicker the plane can be. The more iterations RANSAC is allowed, the greater the probability of finding the plane with the most inliers.
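
Both knobs map directly onto PCL's plane-segmentation API. A sketch, with illustrative values:

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the dominant plane (the tabletop) with RANSAC and return its inliers.
void findPlane(pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud,
               pcl::PointIndices &inliers, pcl::ModelCoefficients &coeffs) {
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);   // allowed plane "thickness": 1 cm
    seg.setMaxIterations(600);        // cf. Figure 3: 200 was not enough
    seg.setInputCloud(cloud);
    seg.segment(inliers, coeffs);
}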

Figure 3: The effects of varying the number of iterations of RANSAC. Notice that the plane on the left (a), which only used 200 iterations, was not correctly identified, while the one on the right (b), with 600 iterations, was correctly identified.

Refer to Figure 3 to see what happens as the number of iterations is changed. The blue points represent the original data. The red points represent the plane inliers. The magenta points represent the noise (i.e., outliers) remaining after a prism extraction. As you can see, the image on the left shows how the plane of the table was not found due to RANSAC not being given enough iterations. The image on the right shows the plane being found, and the objects above the plane are properly segmented from the original data.

After RANSAC and plane extraction in the segmentation pipeline, Euclidean Clustering is performed. This process takes the down-sampled point cloud—without the plane and its convex hull—and breaks it into clusters. Each cluster hopefully corresponds to one of the objects on the table.[4] This is accomplished by first creating a kd-tree data structure, which stores the remaining points in the cloud in a way that can be searched efficiently. The cloud’s points are then iterated over, with a radius search being performed for each point. Neighboring points within the threshold radius are added to the current cluster and marked as processed. This continues until every point in the cloud has been marked as processed and assigned to a segment, at which point the algorithm terminates. After object segmentation and recognition have been performed, the robot knows which object to pick up, but it doesn’t know the boundaries of the object.
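
In PCL terms, the clustering stage looks like the following sketch; the tolerance and cluster-size bounds are example values, not our tuned parameters:

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

#include <vector>

// Group the remaining points (plane already removed) into per-object
// clusters using a kd-tree radius search.
std::vector<pcl::PointIndices>
cluster(pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud) {
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(cloud);

    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);   // 2-cm neighbor radius (example value)
    ec.setMinClusterSize(100);      // reject tiny noise clusters
    ec.setMaxClusterSize(25000);
    ec.setSearchMethod(tree);
    ec.setInputCloud(cloud);
    ec.extract(clusters);
    return clusters;
}

Each pcl::PointIndices entry indexes one candidate object, whose bounding box can then be computed in the final pipeline stage.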


Saroj Bardewa (saroj@pdx.edu) is pursuing an MS in Electrical and Computer Engineering at Portland State University, where he earned a BS in Computer Engineering in June 2016. His interests include computer architecture, computer vision, machine learning, and robotics.

Sean Hendrickson (hsean@pdx.edu) is a senior studying Computer Engineering at Portland State University. His interests include computer vision and machine learning.


This complete article appears in Circuit Cellar 320 (March 2017).

New Fifth-Generation Quasi-Resonant Flyback Controller and Integrated Power IC

Infineon Technologies recently announced the fifth generation of its stand-alone quasi-resonant flyback controller and integrated power IC CoolSET family. This generation offers higher efficiency, faster startup, and improved overall performance. The new ICs are designed especially for AC/DC switch-mode power supplies in a wide variety of applications.

The latest 700- and 800-V CoolMOS P7 families are integrated with a fifth-generation controller in a single package. The cascode configuration for the high-voltage MOSFET in combination with the internal current regulator provides fast startup performance. Light load performance is optimized through an Active Burst Mode (ABM) with selectable entry/exit thresholds.

Furthermore, the device incorporates new algorithms that minimize the switching-frequency spread across different line conditions, which also simplifies EMI filter design. Device protection includes input overvoltage protection, brown-in/brown-out detection, pin short-to-ground, and over-temperature protection with hysteresis. All protection features are implemented with auto-restart to minimize interruptions to operation.

The complete fifth-generation quasi-resonant controller and CoolSET product portfolio will be available starting in May 2017. The controller comes in an SMD package (DSO-8). The CoolSET comes in both SMD (DSO-12) and through-hole (DIP-7) packages.

Source: Infineon Technologies

TeraFire Hard Cryptographic Microprocessor

Microsemi Corp. recently added Athena’s TeraFire cryptographic microprocessor to its new PolarFire field programmable gate array (FPGA) “S class” family. The TeraFire hard core gives Microsemi customers access to advanced security capabilities with high performance and low power consumption.

Features, benefits, and specs:

  • Supports additional algorithms and key sizes commonly used in commercial Internet communications protocols such as TLS, IPsec, MACsec, and KeySec
  • The Athena TeraFire EXP-5200B DPA-resistant cryptographic microprocessor is capable of nearly 200 MHz operation
  • Enables high-speed DPA-resistant cryptographic protocols at speeds well over 100 Mbps
  • Integrated true random number generator for generating keys on-chip and for protecting cryptographic protocols
  • The TeraFire crypto microprocessor is extensible with additional object code licensed from Athena or with accelerators attached via the PolarFire FPGA fabric

Microsemi’s PolarFire “S class” FPGAs with Athena’s TeraFire cryptographic microprocessor will be available in Q2 2017. A soft version of the core is available for Microsemi’s SmartFusion2 SoC FPGAs.

Source: Microsemi 

E-Paper Display Modules Drive Batteryless RFID Tags

Pervasive Displays recently announced that Japan-based TOPPAN Printing Co. is using its low-power e-paper display (EPD) module in a batteryless RFID EPD tag. Operating off harvested RF energy, the new solution enables you to update a device’s EPD and RFID tag data at the same time with a smartphone or NFC reader/writer.

Pervasive Displays’s low-power Aurora Mb EPD modules require as little as 2 mA of current during display updates. This means updates can be powered purely by energy harvested from NFC, RFID, solar, or thermal sources. Unlike traditional active-matrix LCDs, e-paper technology is bistable, so no power is required to maintain an image on the tag, and no backlight is necessary.

Source: Pervasive Displays

New Easy-to-Use BOM Tool

Mouser Electronics recently launched FORTE, an intelligent bill of materials (BOM) management tool intended to save time and improve order accuracy in specifying and purchasing electronic components. Free to Mouser account holders, FORTE offers access to more than 4 million part numbers, and it quickly validates part numbers, product availability, and price.

FORTE’s features and benefits:

  • Evaluates the millions of daily interactions of purchasing professionals and engineers
  • Analyzes partial part numbers and descriptions to suggest the best options for customers
  • Offers expanded customization, added intelligence, and enhanced part-match confidence
  • Remembers users’ preferences, spreadsheet layouts, naming conventions, and previous product orders
  • Provides translations for all of Mouser’s supported languages
  • Supports most common file formats, real-time product pricing and availability, and easy purchasing directly from the BOM

 

Source: Mouser Electronics

Bluetooth 5 Low Power SoC with Integrated Microphone Interface

Dialog Semiconductor recently announced the next generation in its SmartBond family. The DA14586 SoC is the company’s first standalone device qualified to support the latest Bluetooth 5.0 specification. Dialog says it delivers the lowest power consumption and unrivaled functionality for advanced use cases.

The DA14586’s features, specs, and benefits:

  • Bluetooth 5 qualified
  • An integrated microphone interface allows manufacturers to add intuitive, intelligent voice control to any cloud-connected product that has a microphone and speaker
  • Enhancements include an advanced power management setup with both buck and boost converters, which enables support of most primary-cell battery types
  • Double the memory of its predecessor for user applications, making it ideal for adding Bluetooth low energy to proximity tags, beacons, connected medical devices, and smart home applications
  • Advanced features make it simple to support mesh-based networked applications
  • Supported by a complete development environment and Dialog’s SmartSnippets software to help engineers optimize software for power consumption

Source: Dialog Semiconductor

New Development Tool for Bluetooth 5

Nordic Semiconductor’s Bluetooth 5 developer solution for its nRF52840 SoC comprises the Nordic S140 v5.0 multi-role, concurrent protocol stack, which brings Bluetooth 5’s long-range and high-throughput modes to developers for immediate use. The Nordic nRF5 SDK offers application examples that implement this new long-range, high-throughput functionality. The existing Nordic nRF52832 SoC is also complemented with a Bluetooth 5 protocol stack.

Bluetooth 5’s high-throughput mode not only enables new use cases for wearables and other applications, but also significantly improves the user experience with Bluetooth products. Time on air is reduced, leading to faster, more robust communication as well as reduced overall power consumption. In addition, at 2 Mbps, audio over Bluetooth low energy becomes possible.

The new Preview Development Kit (nRF52840-PDK) is a versatile, single-board development tool for Bluetooth 5, Bluetooth low energy, ANT, 802.15.4, and 2.4-GHz proprietary applications using the nRF52840 SoC. The kit is hardware compatible with the Arduino Uno Revision 3 standard, making it possible to use third-party-compatible shields. An NFC antenna can be connected to enable NFC tag functionality. The kit provides access to all I/O and interfaces via connectors and has four user-programmable LEDs and four user-programmable buttons.

Source: Nordic Semiconductor

Embedded Software: Tips & Insights (Sponsor: PRQA)

When it comes to embedded software, security matters. Read the following whitepapers to learn about securing your embedded systems, the MISRA coding standard, and using static analysis to overcome the challenges of reusing code.

  • Developing Secure Embedded Software
  • Guide to MISRA Coding
  • Using Static Analysis to Overcome the Challenges of Reusing Code for Embedded Software

VISIT THE DOWNLOAD PAGE

Programming Research Ltd (PRQA) helps its customers to develop high-quality embedded source code—software which is impervious to attack and executes as intended.

Reflections on Software Development

Present-day equipment relies on increasingly complex software, creating ever-greater demand for software quality and security. The two attributes, while similar in their effects, are different. Quality software is not necessarily secure, and secure software is not necessarily of good quality. Safe software is both high quality and secure: it does what it is supposed to do, it prevents hackers and other external causes from modifying it, and, should it fail, it does so in a safe, predictable way. Software verification and validation (V&V) reduces issues attributable to defects, that is, to poor quality, but does not currently address misbehavior caused by external effects.

Poor software quality can result in huge material losses and even loss of life. Consider some notorious examples from the past. An F-22 Raptor flight-control error caused the $150 million aircraft to be destroyed. An RAF Chinook engine-controller fault caused a helicopter crash with 29 fatalities. A Therac radiotherapy machine gave patients massive radiation overdoses, causing the deaths of two people. A General Electric power-grid monitoring system’s failure resulted in a 48-hour blackout across eight US states and one Canadian province. Toyota’s electronic throttle controller was said to be responsible for the deaths of 89 people.

Clearly, software quality is paramount, yet too often it takes a back seat to time to market and development cost. One essential attribute of quality software is traceability. This means that every requirement can be traced via documentation from the specification down to the particular line of code—and, vice versa, every line of code can be traced up to the specification. The documentation process (not including testing and integration) is illustrated in Figure 1.

FIGURE 1: Simplified software design process documentation. Testing, verification and validation (V&V) and control documents are not shown.

The terminology is that of the DO-178 standard, which is mandatory for aerospace and military software. (Similarly, hardware development is guided by DO-254.) Other software standards may use different terminology, but the intentions are the same. DO-178 guides a document-driven process, for which many tools are available to the designer. Once the hardware-software partitioning has been established, software requirements define the software architecture and the derived requirements. Derived requirements are those that the customer doesn’t include in the specification and might not even be aware of. For instance, turning on an indicator light may take one sentence in the specification, but the decomposition of this simple task might lead to many derived requirements.

Safety-Instrumented Functions

While requirements are being developed, test cases must be defined for each and every one of those requirements. Additionally, to increase system safety, so-called safety-instrumented functions (SIFs) should be considered. SIFs are monitors that cause the system to shut down safely if its performance fails to meet previously defined safety limits. This is typically accomplished by redundancy in hardware, software, or both. If you neglect to address such issues at an early development stage, you might end up with an unsafe system and have to redo a lot of work later.

Quality design is also a bureaucratic chore. Version control and a configuration index must be maintained. The configuration index comprises the list of modules and their versions to be compiled for specific versions of the product under development. Without it, a configuration can be lost, and a great deal of development effort with it.

Configuration control and traceability are not just best engineering practices. They should be mandated whenever software is being developed. Some developers believe that software qualification to a specific standard is required only by the aerospace and military industries. Worse, some commercial software developers still subscribe to the so-called iron triangle: “Get to market fast, with all the features planned, and with a high level of quality. But pick only two.”

Engineers in safety-critical industries (such as medical, nuclear, automotive, and manufacturing) work with methods similar to DO-178 to ensure their software performs as expected. Large original equipment manufacturers (OEMs) now demand adherence to software standards: IEC 61508 for industrial controls, IEC 62304 for medical equipment, ISO 26262 for automotive, and so forth. The reason is simple. Unqualified software can lead to costly product returns and expensive lawsuits.

Software qualification is highly labor intensive and very demanding in terms of resources, time, and money. Luckily, its cost has been coming down thanks to a plethora of automated tools now being offered. Those tools are not inexpensive, but they do pay for themselves quickly. Considering the risk of lawsuits, recalls, brand damage, and other associated costs of software failure, no company can really afford not to go through a qualification process.

Testing

As with hardware, quality must be built into the software, and this means following strict process rules. You can’t expect to test quality into the product at the end. Some companies have tried, and the results have been the infamous failures noted above.

Testing embedded controllers often presents a challenge because you need the final hardware before it is finished. Nevertheless, if you give testing due consideration as you prepare the software requirements, much can be accomplished by working in virtual or simulated environments. LDRA (www.ldra.com) is one great tool for this task.

Numerous methods exist for software testing. For example, dynamic code analysis examines the program during its execution, while static analysis looks for vulnerabilities as well as programming errors. It has been shown mathematically that 100% test coverage is impossible to achieve. But even if it were, 35% to 40% of defects result from missing logic paths, and another 40% from the execution of unique combinations of logic paths. Such defects wouldn’t get caught by testing, but they can be mitigated by SIFs.

Much embedded code is still developed in-house (see Figure 2). Is it possible for companies to improve programmers’ efficiency in this most labor-intensive task? Once again, the answer lies in automation. Nowadays, many tools come as complete suites providing various analyses: code coverage, coding-standards compliance, requirements traceability, code visualization, and so forth. These tools are regularly used by developers of avionics and military software, but they are less common among commercial developers because of their perceived high cost and steep learning curve.

FIGURE 2: Distribution of embedded software sources. Most is still developed in-house.

With the growth of cloud computing and the Internet of Things (IoT), software security is gaining unprecedented importance. Some security measures can be incorporated in hardware, while others are in software. Data encryption and password protection are vital parts. Unfortunately, some developers still do not treat security as seriously as they should. Security experts warn that numerous IoT developers have failed to learn the lessons of the past and that a “big IoT hack” in the near future is inevitable.

Security Improvements

On a regular basis, the media report on security breaches (e.g., governmental organization hacks, bank hacks, and automobile hacks). What can be done to improve security?

There are several techniques—such as Common Weakness Enumeration (CWE)—that can help to improve our chances. However, securing software is likely a far more daunting task than achieving comprehensive V&V test coverage. One successful hack proves that security is weak, but how many unsuccessful hacks by test engineers are needed to establish that security is adequate? Eventually a manager, probably relying on some statistics, will have to decide that enough effort has been spent and the software can be released. Different types of systems require different levels of security, but how is this to be determined? And what about the human factor? Not every test engineer has the necessary talent for code breaking.

History teaches us that no matter how good a lock, a cipher, or a password is, someone has eventually broken it. Several security developers in the past challenged the public to break their “unbreakable” code for a reward, only to see it broken within hours. How responsible is it to keep sensitive data and system access available in cyberspace just because it may be convenient, inexpensive, or fashionable? Have the probability and the consequences of a potential breach always been duly considered?

I have used cloud-based tools, such as the excellent mbed, but would not dream of using them for a sensitive design. I don’t store data in the cloud, nor would I consider IoT for any system whose security is vital. I don’t believe cyberspace can provide sufficient security for many systems at this time. Ultimately, the responsibility for security is ours. We must judge whether the use of IoT or the cloud for a given product would be responsible. At present, I see little evidence that the industry is adequately serious about security. It will surely improve with time, but until it does, I am not about to take unnecessary risks.


George Novacek is a professional engineer with a degree in Cybernetics and Closed-Loop Control. Now retired, he was most recently president of a multinational manufacturer of embedded control systems for aerospace applications. George wrote 26 feature articles for Circuit Cellar between 1999 and 2004. Contact him at gnovacek@nexicom.net with “Circuit Cellar” in the subject line.