Quality and Cost
Manufacturing tests are vital to ensuring high-quality products. Quality is a factor that no company or individual wants to compromise because quality defines the product and ultimately is the main thing that retains a customer. In this article, Nishant Mittal discusses various techniques to manage quality, cost and “corner case catching” scenarios in the manufacturing test environment of a board fabrication house.
Manufacturing tests are arguably the most important aspect of any hardware design company, be it small or big. These tests are essential for ensuring quality. Beyond quality, cost is one of the major factors that define the profit margin of the hardware. For example, suppose that out of 1,000 manufactured boards, 200 have a defect. Or suppose the manufacturing test setup is so costly that it eats into the profit. Or, here's the important one: What if the manufacturing test misses a defect that a customer finds? That could cost the company a lot.
There are a variety of ways to manage quality and cost. In this article, I'll discuss some of these factors and also look at corner-case catching scenarios in the context of a manufacturing test environment in a board fabrication house. I will also discuss architectures for crafting manual, semi-automatic and automatic manufacturing tests. For these purposes, I'll look at these issues as applied to FPGA- and processor-based boards, but the same principles apply to less complex boards as well.
The manufacturing test design process runs parallel to the board design process. With that in mind, the steps involved are similar, but involve more critical judgement. Manufacturing tests have to consider the cost of development, minutes per board to test, corner case reviews and so on. All these factors are necessary to optimize cost without compromising the quality of the product.
The first step toward designing a manufacturing test is to choose one of three approaches: manual, automatic or semi-automatic. This choice depends on the organization's budget, the complexity and quantity of boards, and the use case. A manual approach has a shorter development time, but its test execution time per board is longer. In contrast, an automatic approach has a longer development time, but the test execution time is much shorter, thereby increasing productivity. Semi-automated systems generally fall between the other two, and are appropriate for situations where some steps require human intervention.
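The trade-off between development time and per-board test time can be framed as a simple break-even calculation. The sketch below illustrates the idea; all dollar figures are assumptions for illustration, not data from the article.

```python
# Illustrative break-even analysis between manual and automated test.
# Total cost of a test approach = development cost + units * per-board cost.

def break_even_units(dev_cost_manual, per_board_manual,
                     dev_cost_auto, per_board_auto):
    """Return the production volume above which automation is cheaper."""
    saving_per_board = per_board_manual - per_board_auto
    if saving_per_board <= 0:
        return None  # automation never pays off per board
    extra_dev = dev_cost_auto - dev_cost_manual
    return extra_dev / saving_per_board

# Assumed numbers: automation costs $40,000 more to develop,
# but saves $20 of operator time on every board tested.
units = break_even_units(dev_cost_manual=5_000, per_board_manual=25.0,
                         dev_cost_auto=45_000, per_board_auto=5.0)
print(units)  # 2000.0 -- above this volume, automated testing wins
```

Below the break-even volume a manual test is the cheaper choice; well above it, the automated system's development cost amortizes away.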
Let’s consider an example of a one-of-a-kind Xilinx Zynq UltraScale+ FPGA evaluation board. This board carries the FPGA along with peripherals such as temperature sensors, infrared sensors, a power supply, an FTDI chip, an I/O header, an SD card and a DIP switch.
In a system like this, we can think of different ways of testing this board. However, when we test a board that is going to thousands of customers, many things need to be documented such as board test coverage, corner cases, time to test and production test cost. Let’s focus on each of these points, and then complete the manufacturing test design.
Board test coverage doesn’t necessarily mean the test should cover each and every component on the board. A standard rule in this kind of environment is divide and conquer. A standard board can be divided into its major sections. A board like this contains a power supply—fed either through a power jack or USB—an analog region, a digital region, filtering circuits, I/Os and communication blocks.
The first step is to create a block diagram of the system as shown in Figure 1. Based on this diagram, we should make a table of coverage showing the number of components that are actually exercised during test and the ones that are not. This gives us a fair idea of the percentage coverage and the failure scenarios. This not only helps in getting an error-free board out of production, but also creates a “database,” which is helpful in the future for debugging the board when the same issue recurs. Figure 2 shows a format of the table that could be used to create a clean database along these lines.
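A Figure 2-style coverage table can be kept as a simple data structure from which the percentage coverage falls out directly. The section names and component counts below are hypothetical examples, not figures from the article.

```python
# Sketch of a coverage table: per board section, how many components
# the manufacturing test exercises versus how many it does not.
sections = {
    # section: (components covered, components not covered)
    "power supply":   (12, 2),
    "analog region":  (8, 4),
    "digital region": (30, 5),
    "I/O headers":    (16, 0),
    "decoupling/RC":  (0, 45),   # typically in the "not covered" category
}

covered = sum(c for c, _ in sections.values())
total = sum(c + n for c, n in sections.values())
coverage_pct = 100.0 * covered / total
print(f"Board test coverage: {coverage_pct:.1f}% ({covered}/{total})")
```

Keeping the table in machine-readable form means the coverage number updates automatically as sections are added or reclassified.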
With the table in Figure 2 in mind, let us consider the design of a typical microcontroller board that contains a lot of decoupling capacitors and RC networks, which are required for proper decoupling of ground noise in the PCB. In a typical manufacturing test environment, it is very difficult to test the presence or absence of each and every decoupling capacitor, so they generally are considered to be in the “not covered” category.
When we say not covered, that doesn’t necessarily mean we are ignoring how critical the presence or absence of that particular component is. The team still needs to judge the criticality of each failure, set coverage goals and decide on the actions to be taken. For this, the team performs a DFMEA (design failure mode and effect analysis). For the DFMEA, an Excel sheet is prepared that looks like the one shown in Figure 3. This is a standard format for DFMEA, with a few details here and there that may differ between organizations.
In this analysis, the design team identifies the potential causes of failure, their impact on the design from a user and board-safety point of view, and the possible workarounds. Based on this, the designer rates all these parameters, and the average of the ratings is then used to determine the critical test coverage to be provided. DFMEA not only makes the manufacturing test foolproof, but also identifies loopholes in the design and even helps you fine-tune your design, if done at an early stage. Once the DFMEA is completed, the next step is to design the test system. The type of test system depends on the complexity of the board. A system could be manual, automated or semi-automated.
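The rating step can be sketched in a few lines. Following the article's description, the ratings are averaged here; note that many DFMEA processes instead multiply them into a risk priority number (severity × occurrence × detection). The 1-10 scales and the threshold value are assumptions for illustration.

```python
# Minimal sketch of the DFMEA rating step: average the ratings for a
# failure mode and flag it for mandatory test coverage above a threshold.

def dfmea_priority(severity, occurrence, detection, threshold=7.0):
    """Each rating is on an assumed 1-10 scale (10 = worst).

    Returns (average score, whether the failure mode must be covered).
    """
    score = (severity + occurrence + detection) / 3.0
    return score, score >= threshold

# Example: a severe, hard-to-detect failure mode gets flagged.
score, must_cover = dfmea_priority(severity=9, occurrence=6, detection=8)
print(round(score, 2), must_cover)  # 7.67 True
```

Failure modes that fall below the threshold can stay in the "not covered" category with a documented justification.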
Manual Tests: Manual tests are done for very low-complexity, lower-volume boards that have fewer interfaces to be tested. A purely manual test involves extensive human involvement, which can lead to human error. Proper documentation is the key to a successful manual test system. That said, these tests require a lot of time per board, putting both efficiency and cost at stake. Generally, manual tests are preferred when some kind of observation or calibration is required.
Automated Tests: The next method is the one that is mostly preferred throughout the industry: automated test. Automated tests are performed to test the board automatically—without human intervention. This is achieved both in the product’s hardware and software.
Figure 4 shows what a typical automated test looks like for hardware. For the board picked as an example, there are metal beads running all around the I/Os, which perform loopback tests between each other. If any of the I/Os presents a short or an open circuit, the result is a fail status. For LEDs, we use light sensors on the test system that detect the light intensity. There are actuators that press the buttons and report the operating status of the buttons. Sensors—such as light sensors, infrared sensors and so on—can be tested by providing suitable stimuli, and the results can then be analyzed in software using the ADC.
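The I/O loopback check can be sketched as follows. The `drive_pin()`/`read_pin()` functions stand in for whatever fixture or bench API the real test system exposes; they are hypothetical placeholders, and the "stuck pin" fault below is simulated purely for illustration.

```python
# Sketch of an I/O loopback test: drive each output pin high and low,
# and require its looped-back partner pin to follow both levels.

def loopback_test(pairs, drive_pin, read_pin):
    """pairs: list of (out_pin, in_pin) tuples joined by the fixture.

    Returns a dict mapping each pin pair to "PASS" or "FAIL".
    """
    results = {}
    for out_pin, in_pin in pairs:
        ok = True
        for level in (1, 0):
            drive_pin(out_pin, level)
            if read_pin(in_pin) != level:
                ok = False  # short, open, or stuck pin
        results[(out_pin, in_pin)] = "PASS" if ok else "FAIL"
    return results

# Simulated fixture for illustration: pin 3 is stuck low (open circuit).
state = {}
def drive_pin(pin, level): state[pin] = level
def read_pin(pin): return 0 if pin == 3 else state.get(pin - 1, 0)

results = loopback_test([(0, 1), (2, 3)], drive_pin, read_pin)
print(results)  # {(0, 1): 'PASS', (2, 3): 'FAIL'}
```

Driving both logic levels is what distinguishes a genuine loopback from a pin that happens to idle at the expected level.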
Software such as MathWorks’ MATLAB, National Instruments’ LabVIEW, or Python and Perl scripts can be used to create UIs that display pass and fail status. The UI is basically used to monitor what’s happening and to trigger the tests. Once the test is completed, the UI reports all the data in a log file, which may be exported to a PDF file.
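The reporting step might look like the sketch below: the UI collects per-test results and writes a timestamped log record. The test names, serial number and JSON log format are illustrative assumptions; PDF export would be a separate step handled by a reporting tool.

```python
# Sketch of the logging step: aggregate per-test results for one board
# into a single log file with an overall verdict.
import json
import time

def write_test_log(serial, results, path):
    record = {
        "board_serial": serial,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "results": results,
        "overall": "PASS" if all(v == "PASS" for v in results.values())
                   else "FAIL",
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return record["overall"]

overall = write_test_log("SN-0001",
                         {"loopback": "PASS", "led": "PASS", "button": "FAIL"},
                         "board_SN-0001.log")
print(overall)  # FAIL
```

One log record per board is what turns the production line into the searchable "database" of failures mentioned earlier.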
Semi-Automated Tests: The next category of test system is essentially the combination of manual and automated tests: the semi-automated system. Semi-automated systems are used in cases where human intervention becomes necessary. Here, human intervention doesn’t simply mean triggering tests, placing the board in the fixture or sitting in front of the system to monitor events. Rather, whenever a human action directly affects the result of a particular test, the system becomes semi-automated.
Let’s look at an example of a board that has a microphone, a capacitive touch sensor and the rest of the interfaces I mentioned earlier. The tester is supposed to check the mic’s sensitivity by feeding it sound from different directions and at different volumes. Meanwhile, the capacitive touch sensor needs to be touched by a human hand to verify that it responds to human touch. These tests can be partially automated, but for optimum results they require some human intervention. Use cases like these can force a designer to make the system semi-automated.
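A semi-automated step of this kind might be structured as below: the sequencing and the measurement are automated, while the operator performs the action the test cannot. The `read_touch_sensor` and `prompt` callables are hypothetical stand-ins, injected so the flow can also be exercised in simulation.

```python
# Sketch of a semi-automated test step: the operator touches the pad,
# then the system reads the sensor and records the verdict automatically.

def semi_auto_touch_test(read_touch_sensor, prompt):
    prompt("Touch the capacitive pad now, then press Enter")
    return "PASS" if read_touch_sensor() else "FAIL"

# Simulated run: the sensor registers the operator's touch.
result = semi_auto_touch_test(read_touch_sensor=lambda: True,
                              prompt=lambda msg: None)
print(result)  # PASS
```

Isolating the human action behind a prompt keeps the rest of the sequence scriptable, so the step can later be fully automated if a suitable actuator becomes available.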
Once the test design is complete, the designer needs to validate whether the coverage really matches what’s theoretically stated. To validate this, the tester removes major components from the board and checks whether the manufacturing test really catches the fault. Figure 5 shows the algorithm that describes the entire design flow.
There are other techniques, such as the JTAG scan chain. This uses system controllers on board that can perform the board testing as well as control the interfaces—either by themselves or alongside traditional test techniques. It’s open for debate whether such controllers can match the cost reduction, board visibility and test coverage of the traditional approach of external test systems. I plan to discuss that question further in future articles.
In this article, we discussed the concepts of how manufacturing tests are developed and analyzed in order to cater to the requirements of cost, efficiency and accuracy. We also discussed how the test system designer would decide whether the board should be tested using a manual, automated or semi-automated approach.
For detailed article references and additional resources go to:
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • NOVEMBER 2019 #352