Issue 286: EQ Answers

Question 1—A divider is a logic module that takes two binary numbers and produces their numerical quotient (and optionally, the remainder). The basic structure is a series of subtractions and multiplexers, where the multiplexer uses the result of the subtraction to select the value that gets passed to the next step. The quotient is formed from the bits used to control the multiplexers, and the remainder is the result of the last subtraction.

If it is implemented purely combinatorially, then the critical path through all of this logic is quite long (even with carry-lookahead in the subtractors) and the clock cycle must be very slow. What could be done to shorten the clock period without losing the ability to get a result on every clock?

Answer 1—Pretty much any large chunk of combinatorial logic can be pipelined in order to reduce the clock period. This allows it to produce more results in a given amount of time, at the expense of increasing the latency for any particular result.

Divider logic is very easy to pipeline, and the number of pipeline stages you can use is fairly arbitrary. You could insert a pipeline register after each subtract-mux pair, or you might choose to do two or more subtract-mux stages per pipeline register. You could even go so far as to pipeline the subtracts and the muxes separately (or even pipeline *within* each subtract) in order to get the fastest possible clock speed, but this would be rather extreme.

The more pipeline registers you use, the shorter the critical path (and the clock period) can be, but you use more resources (the registers). Also, the overall latency goes up, since you need to account for the setup and propagation times of the pipeline registers in the clock period (in addition to the subtract-mux logic delays). This gets multiplied by the number of pipeline stages in order to compute the total latency.
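To put rough numbers on this trade-off, here is a minimal Python sketch; the stage and register delays are assumed values chosen only to show the trend, not figures for any particular technology:

```python
# Rough pipelining arithmetic for a 16-bit divider (16 subtract-mux stages).
# t_stage and t_reg are assumed delays, chosen only to illustrate the trend.

t_stage = 4.0    # ns through one subtract-mux pair (assumed)
t_reg   = 0.8    # ns of register clock-to-out plus setup per pipeline stage (assumed)
n_stages = 16    # one subtract-mux pair per quotient bit

for stages_per_reg in (16, 8, 4, 1):            # logic stages between registers
    t_clk = stages_per_reg * t_stage + t_reg    # minimum clock period
    depth = n_stages // stages_per_reg          # number of pipeline stages
    latency = depth * t_clk                     # total latency for one result
    print(f"{stages_per_reg:2d} stages/register: Tclk = {t_clk:4.1f} ns, "
          f"latency = {latency:5.1f} ns, one result per clock")
```

As the register spacing shrinks, the clock period drops toward a single subtract-mux delay, but the total latency creeps up because the register overhead is paid once per pipeline stage.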

Question 2—On the other hand, what could be done to reduce the amount of logic required for the divider, giving up the ability to have a result on every clock?

 

Answer 2—If you don’t need the level of performance provided by a pipelined divider, you can compute the quotient serially, one bit at a time. You would just need one subtractor and one multiplexer, along with registers to hold the input values, quotient bits, and the intermediate result.

You could potentially compute more than one bit per clock period using additional subtract-mux stages. This gives you the flexibility to trade off space and time as needed for a particular application.
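As a software analogy, here is a minimal Python sketch of such a bit-serial divider, with ordinary variables standing in for the holding registers and one subtract-compare-select per simulated clock (a real implementation would, of course, be written in an HDL):

```python
# Minimal Python model of a bit-serial unsigned divider: one subtractor and one
# mux reused every "clock", with plain variables standing in for the registers.

def serial_divide(dividend, divisor, n_bits):
    remainder_reg = 0                 # intermediate-result register
    quotient_reg = 0                  # quotient shift register
    for clock in range(n_bits):       # one quotient bit per clock period
        bit = (dividend >> (n_bits - 1 - clock)) & 1
        trial = (remainder_reg << 1) | bit      # shift in the next dividend bit
        diff = trial - divisor                  # the single subtractor
        if diff >= 0:                           # the mux, steered by the borrow
            remainder_reg, q_bit = diff, 1
        else:
            remainder_reg, q_bit = trial, 0
        quotient_reg = (quotient_reg << 1) | q_bit
    return quotient_reg, remainder_reg

print(serial_divide(100, 7, 8))   # -> (14, 2) after 8 "clocks"
```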

Question 3—An engineer wanted to build an 8-MHz filter that had a very narrow bandwidth, so he used a crystal lattice filter like this:

[Figure: crystal lattice filter (EQ-fig1-CC287-June14)]

However, when he built and tested his filter, he discovered that while it worked fine around 8 MHz, the attenuation at very high frequencies (e.g., >80 MHz) was very much reduced. What caused this?

Answer 3—The equivalent circuit for a quartz crystal is something like this:

[Figure: quartz crystal equivalent circuit (EQ-fig2-CC287-June14)]

The components across the bottom represent the mechanical resonance of the crystal itself, while the capacitor at the top represents the capacitance of the electrodes and holder. Typical values are:

  • Cser: 10s of fF (yes, femtofarads, 10⁻¹⁵ F)
  • L: 10s of mH
  • R: 10s of ohms
  • Cpar: 10s of pF

The crystal has a series-resonant frequency based on just Cser and L. It has a relatively low impedance (basically just R) at this frequency.

It also has a parallel-resonant (sometimes called “antiresonant”) frequency when you consider the entire loop, including Cpar. Since Cser and Cpar are essentially in series, together they have a slightly lower capacitance than Cser alone, so the parallel-resonant frequency is slightly higher. The crystal’s impedance is very high at this frequency.

But at frequencies much higher than either of the resonant frequencies, you can see that the impedance of Cpar alone dominates, and this just keeps decreasing with increasing frequency. This reduces the crystal lattice filter to a simple capacitive divider, which passes high frequencies with little attenuation.
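To make that behavior concrete, here is a short Python check using assumed element values in the typical ranges listed above (not the parameters of any specific crystal):

```python
# Numerical illustration using assumed element values in the typical ranges above.
import math

C_ser = 27e-15    # motional (series) capacitance, F  (assumed)
L     = 9.4e-3    # motional inductance, H            (assumed)
C_par = 5e-12     # electrode/holder capacitance, F   (assumed)

f_ser = 1 / (2 * math.pi * math.sqrt(L * C_ser))
C_eq  = C_ser * C_par / (C_ser + C_par)        # Cser and Cpar in series
f_par = 1 / (2 * math.pi * math.sqrt(L * C_eq))
print(f"series resonance   ~ {f_ser / 1e6:.3f} MHz")
print(f"parallel resonance ~ {f_par / 1e6:.3f} MHz")

# Far above both resonances the holder capacitance dominates, and its impedance
# keeps falling with frequency, so the lattice degenerates into a capacitive divider.
for f in (10e6, 80e6, 160e6):
    z = 1 / (2 * math.pi * f * C_par)
    print(f"|Z(Cpar)| at {f / 1e6:5.0f} MHz ~ {z:6.0f} ohms")
```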

Question 4—Suppose you know that a nominal 10.000 MHz crystal has a series-resonant frequency of 9.996490 MHz and a parallel-resonant frequency of 10.017730 MHz. You also know that its equivalent series capacitance is 27.1 fF. How can you calculate the value of its parallel capacitance?

Answer 4—First, calculate the crystal’s equivalent inductance, based on the series-resonant frequency:

L = 1 / ((2π × fser)² × Cser)

Next, calculate the capacitance required to resonate with that inductance at the parallel-resonant frequency:

Ctotal = 1 / ((2π × fpar)² × L)

Finally, calculate the value of Cpar required to give that value of capacitance when in series with Cser:

Cpar = (Cser × Ctotal) / (Cser – Ctotal)

Note that all three equations can be combined into one, which reduces to:

Cpar = Cser / ((fpar / fser)² – 1)
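As a sanity check, the whole calculation can be reproduced numerically; this short Python sketch uses the values given in the question and compares the three-step result with the combined formula:

```python
# Numerical check of the three-step calculation and the combined formula,
# using the values given in the question.
import math

f_ser = 9.996490e6     # series-resonant frequency, Hz
f_par = 10.017730e6    # parallel-resonant frequency, Hz
C_ser = 27.1e-15       # equivalent series capacitance, F

# Step 1: equivalent inductance from the series resonance
L = 1 / ((2 * math.pi * f_ser) ** 2 * C_ser)

# Step 2: total capacitance that resonates with L at the parallel frequency
C_tot = 1 / ((2 * math.pi * f_par) ** 2 * L)

# Step 3: the Cpar that, in series with Cser, yields that total capacitance
C_par = C_ser * C_tot / (C_ser - C_tot)

# Combined single formula
C_par_direct = C_ser / ((f_par / f_ser) ** 2 - 1)

print(f"L     = {L * 1e3:.3f} mH")
print(f"C_par = {C_par * 1e12:.2f} pF  (combined formula: {C_par_direct * 1e12:.2f} pF)")
```

Both paths give a Cpar of roughly 6.4 pF for the stated frequencies and series capacitance.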

Issue 284: EQ Answers

PROBLEM 1
Can you name all of the signals in the original 25-pin RS-232 connector?

ANSWER 1
Pins 9, 10, 11, 18, and 25 are unassigned/reserved. The rest are:

Pin Abbreviation Source Description
1 PG - Protective ground
2 TD DTE Transmitted data
3 RD DCE Received data
4 RTS DTE Request to send
5 CTS DCE Clear to send
6 DSR DCE Data set ready
7 SG - Signal ground
8 CD DCE Carrier detect
12 SCD DCE Secondary carrier detect
13 SCTS DCE Secondary clear to send
14 STD DTE Secondary transmitted data
15 TC DCE Transmitter clock
16 SRD DCE Secondary received data
17 RC DCE Receiver clock
19 SRTS DTE Secondary request to send
20 DTR DTE Data terminal ready
21 SQ DCE Signal quality
22 RI DCE Ring indicator
23 - DTE Data rate selector
24 ETC DTE External transmitter clock

 

PROBLEM 2
What is the key difference between a Moore state machine and a Mealy state machine?

ANSWER 2
The key difference between Moore and Mealy is that in a Moore state machine, the outputs depend only on the current state, while in a Mealy state machine, the outputs can also be affected directly by the inputs.
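As a toy illustration (the machine, its states, and its names are invented for this sketch), here are the two styles applied to a one-input rising-edge detector in Python; note how the Mealy version can assert its output in the same cycle as the input, while the Moore output appears one state later:

```python
# Toy rising-edge detector written both ways (states and names invented for this sketch).

def moore_step(state, x):
    """Moore: the output is a function of the current state only."""
    output = 1 if state == "SEEN_RISE" else 0
    if state == "LOW":
        next_state = "SEEN_RISE" if x else "LOW"
    else:                      # "SEEN_RISE" or "HIGH"
        next_state = "HIGH" if x else "LOW"
    return next_state, output

def mealy_step(state, x):
    """Mealy: the output may also depend directly on the input."""
    output = 1 if (state == "LOW" and x) else 0
    next_state = "HIGH" if x else "LOW"
    return next_state, output

for step in (moore_step, mealy_step):
    state, outputs = "LOW", []
    for x in (0, 0, 1, 1, 0, 1):
        state, y = step(state, x)
        outputs.append(y)
    print(step.__name__, outputs)
# moore_step [0, 0, 0, 1, 0, 0]   <- pulse appears one clock after the edge
# mealy_step [0, 0, 1, 0, 0, 1]   <- pulse appears in the same clock as the edge
```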

 

PROBLEM 3
What are some practical reasons you might choose one state machine over the other?

ANSWER 3
In practice, the difference between Moore and Mealy in most situations is not very important. However, when you’re trying to optimize the design in certain ways, it sometimes is.

Generally speaking, a Mealy machine can have fewer state variables than the corresponding Moore machine, which will save physical resources on a chip. This can be important in low-power designs.

On the other hand, a Moore machine will typically have shorter logic paths between flip-flops (total combinatorial gate delays), which will enable it to run at a higher clock speed than the corresponding Mealy machine.

 

PROBLEM 4
What is the key feature that distinguishes a DSP from any other general-purpose CPU?

ANSWER 4
Usually, the key distinguishing feature of a DSP when compared with a general-purpose CPU is that the DSP can execute certain signal-processing operations with few, if any, CPU cycles wasted on instructions that do not compute results.

One of the most basic operations in many key DSP algorithms is the MAC (multiply-accumulate) operation, which is the fundamental step used in matrix dot and cross products, FIR and IIR filters, and fast Fourier transforms (FFTs). A DSP will typically have a register and/or memory organization and a data path that enables it to do at least 64 MAC operations (and often many more) on unique data pairs in a row without any clocks wasted on loop overhead or data movement. General-purpose CPUs do not generally have enough registers to accomplish this without using additional instructions to move data between registers and memory.
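For illustration, here is the MAC-centric inner loop of a simple FIR filter written in plain Python; the structure (one multiply-accumulate per tap) is what a DSP’s register organization and data path are built around, even though a real DSP would execute it without any of Python’s loop overhead:

```python
# MAC-centric inner loop of a simple FIR filter (plain Python, for illustration only).

def fir_sample(coeffs, delay_line, new_sample):
    """Produce one output sample: shift in the new sample, then one MAC per tap."""
    delay_line.insert(0, new_sample)
    delay_line.pop()
    acc = 0.0
    for c, x in zip(coeffs, delay_line):
        acc += c * x        # the multiply-accumulate a DSP executes with no overhead
    return acc

# 4-tap moving average as a tiny example
coeffs = [0.25, 0.25, 0.25, 0.25]
delay = [0.0] * len(coeffs)
print([fir_sample(coeffs, delay, x) for x in (1, 1, 1, 1, 0, 0)])
# -> [0.25, 0.5, 0.75, 1.0, 0.75, 0.5]
```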

Issue 282: EQ Answers

PROBLEM 1
Construct an electrical circuit to find the values of Xa, Xb, and Xc in this system of equations:

21Xa – 10Xb – 10Xc = 1
–10Xa + 22Xb – 10Xc = –2
–10Xa – 10Xb + 20Xc = 10

Your circuit should include only the following elements:

one 1-Ω resistor
one 2-Ω resistor
three 10-Ω resistors
three ideal constant voltage sources
three ideal ammeters

The circuit should be designed so that each ammeter displays one of the values Xa, Xb, or Xc. Given that the Xa, Xb, and Xc values represent currents, what kind of circuit analysis yields equations in this form?

ANSWER 1
You get equations in this form when you do mesh analysis of a circuit. Each equation represents the sum of the voltages around one loop in the mesh.

PROBLEM 2
What do the coefficients on the left side of the equations represent? What about the constants on the right side?

ANSWER 2
The coefficients on the left side of each equation represent resistances. Resistance multiplied by current (the unknown Xa, Xb, and Xc values) yields voltage.
The “bare” numbers on the right side of each equation represent voltages directly (i.e., independent voltage sources).

PROBLEM 3
What is the numerical solution for the equations?

ANSWER 3
To solve the equations directly, start by solving the third equation for Xc and substituting it into the other two equations:

Xc = 1/2 Xa + 1/2 Xb + 1/2

21Xa – 10Xb – 5Xa – 5Xb – 5 = 1
–10Xa + 22Xb – 5Xa – 5Xb – 5 = –2

16Xa – 15Xb = 6
–15Xa + 17Xb = 3

Solve for Xa by multiplying the first equation by 17 and the second equation by 15 and then adding them:

272Xa – 255Xb = 102
–225Xa + 255Xb = 45

47Xa = 147 → Xa = 147/47

Solve for Xb by multiplying the first equation by 15 and the second equation by 16 and then adding them:

240Xa – 225Xb = 90
–240Xa + 272Xb = 48

47Xb = 138 → Xb = 138/47

Finally, substitute those two results into the equation for Xc:

Xc = 147/94 + 138/94 + 47/94 = 332/94 = 166/47
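For anyone who wants to double-check the arithmetic, the system can also be solved numerically; this short sketch assumes NumPy is available and simply confirms the fractions derived above:

```python
# Numerical cross-check of the hand calculation (assumes NumPy is available).
from fractions import Fraction
import numpy as np

A = np.array([[ 21, -10, -10],
              [-10,  22, -10],
              [-10, -10,  20]], dtype=float)
b = np.array([1, -2, 10], dtype=float)

x = np.linalg.solve(A, b)
print(x)                                                       # [3.12766 2.93617 3.53191]
print([Fraction(float(v)).limit_denominator(100) for v in x])  # 147/47, 138/47, 166/47
```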

PROBLEM 4
Finally, what is the actual circuit? Draw a diagram of the circuit and indicate the required value of each voltage source.

ANSWER 4
The circuit is a mesh comprising three loops, each with a voltage source. The common elements of the three loops are the three 10-Ω resistors, connected in a Y configuration (see the figure below).

[Figure: cc281_eq_fig1]

The values of the voltage sources in each loop are given directly by the equations, as shown. To verify the numeric solution calculated previously, you can calculate all of the node voltages around the outer loop, plus the voltage at the center of the Y, and ensure they’re self-consistent.

We’ll start by naming Va as ground, or 0 V:

Vb = Va + 2 V = 2 V

Vc = Vb + 2 Ω × Xb = 2 V + 2 Ω × 138/47 A = 370/47 V = 7.87234 V

Vd = Vc + 1 Ω × Xa = 370/47 V + 1 Ω × 147/47 A = 517/47 V = 11.000 V

Ve = Vd – 1 V = 11.000 V – 1.000 V = 10.000 V

Va = Ve – 10 V = 0 V

which is where we started.

The center node, Vf, should be at the average of the three voltages Va, Vc, and Ve:

(0 V + 370/47 V + 10 V) / 3 = 840/141 V = 5.95745 V

We should also be able to get this value by calculating the voltage drops across each of the three 10-Ω resistors:

Va + (Xc – Xb) × 10 Ω = 0 V + (166 – 138)/47 A × 10 Ω = 280/47 V = 5.95745 V

Vc + (Xb – Xa) × 10 Ω = 370/47 V + (138 – 147)/47 A × 10 Ω = 280/47 V = 5.95745 V

Ve + (Xa – Xc) × 10 Ω = 10 V + (147 – 166)/47 A × 10 Ω = 280/47 V = 5.95745 V

System Safety Assessment

System safety assessment provides a standard, generic method to identify, classify, and mitigate hazards. It is an extension of failure mode effects and criticality analysis and fault-tree analysis that is necessary for embedded controller specification.

System safety assessment was originally called “system hazard analysis.” The name change was probably due to the system safety assessment’s positive-sounding connotation.


Columnist George Novacek (gnovacek@nexicom.net), who wrote this article published in Circuit Cellar’s January 2014 issue, is a professional engineer with a degree in Cybernetics and Closed-Loop Control. Now retired, he was most recently president of a multinational manufacturer of embedded control systems for aerospace applications. George wrote 26 feature articles for Circuit Cellar between 1999 and 2004.

I participated in design reviews where failure effect classification (e.g., hazardous, catastrophic, etc.) had to be expunged from our engineering presentations and replaced with something more positive (e.g., “issues” instead of “problems”), lest we risk the wrath of buyers and program managers.

System safety assessment is in many ways similar to a failure mode effects and criticality analysis (FMECA) and fault-tree analysis (FTA), which I described in “Failure Mode and Criticality Analysis” (Circuit Cellar 270, 2013). However, with safety assessment, all possible system faults—including human error, electrical and mechanical subsystems’ faults, materials, and even manuals—should be analyzed. The impact of their faults and errors on the system safety must also be considered. The system hazard analysis then becomes a basis for subsystems’ specifications.

Fault Identification

Performing FMECA and FTA on your subsystem ensures all of its potential faults are detected and identified. The faults’ signatures can be stored in nonvolatile memory or communicated to a display console, but these analyses alone cannot tell you how the controller should respond to any one of those faults. You need the system hazard analysis to tell you what corrective action to take. The subsystem may have to revert to manual control, switch to another control channel, or enter a degraded performance mode. If you are not the system designer, you have little or no visibility into the faults’ potential impact on system safety.
For example, an automobile consists of many subsystems (e.g., propulsion, steering, braking, entertainment, etc.). The propulsion subsystem comprises engine, transmission, fuel delivery, and possibly other subsystems. A part of the engine subsystem may include a full-authority digital engine controller (FADEC).


Engine controllers were originally mainly mechanical devices, but with the arrival of the microprocessor, they have become highly sophisticated electronic controllers. Currently, most engines—including aircraft, marine, automotive, and utility engines (e.g., portable electrical generator turbines)—are controlled by some sort of a FADEC to achieve the best performance and safety. A FADEC monitors the engine performance and controls the fuel flow via servomotor valves or stepper motors in response to the commanded thrust plus numerous operating conditions (e.g., atmospheric and internal pressures, external and internal engine temperatures in several locations, speed, load, etc.).

The safety assessment mostly depends on where and how essentially identical systems are being used. A car’s engine failure, for example, may be nothing more than a nuisance with little safety impact, while an aircraft engine failure could be catastrophic. Conversely, an aircraft nosewheel steer-by-wire can be automatically disconnected upon a fault. And, with a little increase of the pilots’ workload, it may be substituted by differential braking or thrust to control the plane on the ground. A similar failure of an automotive steer-by-wire system could be catastrophic for a car barreling down the freeway at 70 mph.

Analysis

System safety analysis comprises the following steps:

  • Identify and classify potential hazards and associated avoidance requirements
  • Translate safety requirements into engineering requirements
  • Provide design assessment and trade-off support to the ongoing design
  • Assess the design’s relative compliance with requirements and document the findings
  • Direct and monitor specialized safety testing
  • Monitor and review test and field issues for safety trends

The first step in hazard analysis is to identify and classify all the potential system failures. FMECA and FTA provide the necessary data. Table 1 shows an example and explains how the failure class is determined.

TABLE 1
This table shows the identification and severity classification of all potential system-level failures.
Eliminated: No safety impact.
Negligible: Does not significantly reduce system safety. Required actions are within the operator’s capabilities.
Marginal: Reduces the capability of the system or operators to cope with adverse operating conditions. Can cause major illness, injury, or property damage.
Critical: Significantly reduces the capability of the system or the operator’s ability to cope with adverse conditions to the extent of causing serious or fatal injury to several people.
Catastrophic: Total loss of system control resulting in equipment loss and/or multiple fatalities.

The next step determines each system-level failure’s frequency occurrence (see Table 2). This data comes from the failure rates calculated in the course of the reliability prediction, which I covered in my two-part article “Product Reliability” (Circuit Cellar 268–269, 2012) and in “Quality and Reliability in Design” (Circuit Cellar 272, 2013).

TABLE 2
Use this information to determine the likelihood of each individual system-level failure.
Frequent: Probability of occurrence per operation is equal to or greater than 1 × 10⁻³
Probable: Probability of occurrence per operation is less than 1 × 10⁻³ but greater than 1 × 10⁻⁵
Occasional: Probability of occurrence per operation is less than 1 × 10⁻⁵ but greater than 1 × 10⁻⁷
Remote: Probability of occurrence per operation is less than 1 × 10⁻⁷ but greater than 1 × 10⁻⁹
Improbable: Probability of occurrence per operation is less than 1 × 10⁻⁹

Based on the two tables, the predictive risk assessment matrix for every hazardous situation is created (see Table 3). The matrix is a composite of severity and likelihood, and each combination can subsequently be classified as low, medium, serious, or high risk. It is the system designer’s responsibility to evaluate the potential risk—usually with regard to some regulatory requirements—and to specify the maximum hazard levels acceptable for every subsystem. The subsystems’ developers must comply with and satisfy their respective specifications. Electronic controllers in safety-critical applications must present a low risk due to faults of their subsystem.

TABLE 3
The risk assessment matrix is based on information from Table 1 and Table 2.
Probability / Severity Catastrophic (1) Critical (2) Marginal (3) Negligible (4)
Frequent (A) High High Serious Medium
Probable (B) High High Serious Medium
Occasional (C) High Serious Medium Low
Remote (D) Serious Medium Medium Low
Improbable (E) Medium Medium Medium Low
Eliminated (F) Eliminated
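One way to make the matrix concrete in software is a simple lookup; the sketch below encodes Table 3 using the severity numbers (1–4) and probability letters (A–F) shown above, and is purely illustrative:

```python
# Illustrative lookup encoding Table 3, with severity 1-4 and probability A-F as labeled.

RISK = {
    "A": {1: "High",    2: "High",    3: "Serious", 4: "Medium"},   # Frequent
    "B": {1: "High",    2: "High",    3: "Serious", 4: "Medium"},   # Probable
    "C": {1: "High",    2: "Serious", 3: "Medium",  4: "Low"},      # Occasional
    "D": {1: "Serious", 2: "Medium",  3: "Medium",  4: "Low"},      # Remote
    "E": {1: "Medium",  2: "Medium",  3: "Medium",  4: "Low"},      # Improbable
}

def risk_level(probability, severity):
    """Return the matrix entry; eliminated hazards (F) fall outside the matrix."""
    if probability == "F":
        return "Eliminated"
    return RISK[probability][severity]

print(risk_level("C", 1))   # Occasional + Catastrophic -> High
print(risk_level("E", 4))   # Improbable + Negligible   -> Low
```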

The system safety assessment includes both software and hardware. For aircraft systems, the required risk level determines the development and quality assurance processes as anchored in DO-178 (Software Considerations in Airborne Systems and Equipment Certification) and DO-254 (Design Assurance Guidance for Airborne Electronic Hardware).

Some non-aerospace industries also use these two standards; others may have their own. Figure 1 shows a typical system development process to achieve system safety.
The common automobile power steering is, by design, inherently low risk, as it continues to steer even if the hydraulics fail. Similarly, some aircraft controls are still operated through old-fashioned cables but, like the car’s steering, with power augmentation. If the power fails, you just need more muscle. This is not the case with the more prevalent drive-by-wire and fly-by-wire systems.

FIGURE 1: The actions in this system-development process help ensure system safety.

Redundancy

How can the risk be mitigated to at least a 10⁻⁹ probability for catastrophic events? The answer is redundancy. A well-designed electronic control channel can achieve about a 10⁻⁵ probability of a single fault. That’s it. However, the FTA shows that by ANDing two such processing channels, the resulting failure probability decreases to 10⁻¹⁰, thus mitigating the risk to an acceptable level. An event with a 10⁻⁹ probability of occurring is, for many systems, acceptable as just about “never happening,” but there are requirements for 10⁻¹⁴ or even lower probability. Increasing redundancy will enable you to satisfy the specification.
Once I saw a controller comprising three independent redundant computers, with each computer also being triple redundant. Increasing safety by redundancy is why there are at least two engines on every commercial passenger-carrying aircraft, two pilots, two independent hydraulic systems, two or more redundant controllers, power supplies, and so forth.
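The arithmetic behind that claim is straightforward; this back-of-the-envelope Python sketch assumes a 10⁻⁵ per-operation failure probability for a single channel and, critically, that channel failures are independent (which an FTA AND gate also assumes):

```python
# Back-of-the-envelope redundancy arithmetic: with independent channel failures,
# an AND gate in the fault tree multiplies the per-channel probabilities.

p_channel = 1e-5                        # assumed single-channel failure probability
for n_channels in (1, 2, 3):
    p_system = p_channel ** n_channels  # all n channels must fail
    print(f"{n_channels} channel(s): P(total loss per operation) = {p_system:.0e}")
# -> 1e-05, 1e-10, 1e-15
```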

Human Engineering

Human engineering, to use military terminology, is no less important to safety, yet it is sometimes not given sufficient attention. MIL-STD-1472F, the US Department of Defense’s Design Criteria Standard: Human Engineering, spells out many requirements and design constraints to make equipment operation and handling safe. This applies to everything, not just electrical devices.

For example, it defines the minimum size of controls if they may be operated with gloves, the maximum weight of equipment to be located above a certain height, the connectors’ location, and so forth. In my view, every engineer should look at this interesting standard.
Non-military equipment that requires some type of certification (e.g., most electrical appliances) is usually fine in terms of human engineering. Although there may not be a specific standard guiding its design in this respect, experienced certificating examiners will point out many shortcomings. But there are more than enough fancy and expensive products on the market that make you wonder whether the designer ever tried to use the product himself.

By putting a little thought beyond just the functional design, you can make your product attractive, easy to operate, and safe. It may be as simple as asking a few people who are not involved with your design to use the product before you release it to production.


Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference for directly manipulating virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear Smartwatch and Apple’s rumored iWatch.

This shows the hand tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.
If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes 3-D position, velocity, and hand acceleration as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
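A rough sketch of that per-frame feature extraction might look like the following Python; it assumes a cropped depth image of the hand is already available, uses scikit-image’s HOG and scikit-learn’s PCA, and its parameters are placeholders rather than the ones used in our actual system:

```python
# Rough per-frame feature extraction sketch (parameters are placeholders, not the
# ones used in our actual system); assumes a cropped depth image of the hand.
import numpy as np
from skimage.feature import hog          # scikit-image
from sklearn.decomposition import PCA    # scikit-learn

def frame_features(hand_depth_img, position, velocity, acceleration):
    """Concatenate 3-D motion features with a HOG hand-shape descriptor."""
    shape_desc = hog(hand_depth_img, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([position, velocity, acceleration, shape_desc])

# Stack the per-frame vectors for one recording, then reduce the dimensionality.
frames = [frame_features(np.random.rand(64, 64),              # fake depth crop
                         np.zeros(3), np.zeros(3), np.zeros(3))
          for _ in range(30)]
reduced = PCA(n_components=10).fit_transform(np.vstack(frames))
print(reduced.shape)   # (30, 10)
```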

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is called the hidden Markov model (HMM), which is commonly used to model sequence data. It was earlier used in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
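A bare-bones version of those two steps, assuming the hmmlearn package and one Gaussian HMM per gesture (the sequences below are random stand-ins, not real gesture data), could look like this:

```python
# Bare-bones train-then-classify sketch, assuming the hmmlearn package and one
# Gaussian HMM per gesture. The sequences here are random stand-ins, not real data.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_model(sequences, n_states=5):
    """Fit one HMM on all training sequences for a single gesture."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def classify(models, sequence):
    """Pick the gesture whose model assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))

rng = np.random.default_rng(0)
models = {
    "swipe":  train_gesture_model([rng.normal(0, 1, (40, 12)) for _ in range(5)]),
    "circle": train_gesture_model([rng.normal(3, 1, (40, 12)) for _ in range(5)]),
}
print(classify(models, rng.normal(3, 1, (35, 12))))   # -> circle
```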

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition. It can possibly be applied to gesture recognition to see whether it can further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.