Exploring the virtuous cycle: Test system design & design for testability

Jason Swoboda, senior program manager at Velentium, explains why device test systems should be developed in parallel with the device itself, and in partnership with its designers.

As developers, we’re comfortable with the idea that design informs testing. Design teams, however, are often less familiar with the ways in which test system development can and should inform design. This concept underpins a key development consideration called “design for testability”: deciding what should be tested and why, as well as when and how those tests will take place, as early as possible in the development lifecycle. Test engineering does its best work when test plans and test systems are developed in parallel with the device itself, and the device design improves when testability principles and test findings are incorporated back into it.

Here are three principles, illustrated with real examples.

First, design the test system in partnership. Our team was tasked with creating a test system to put one element of a device through its paces. The product included a PCB with an onboard chip that drove a motor. Our system was meant to prove the motor could do what the product needed it to do. To accomplish this, we designed a fixture that used digital communication to drive the motor with the same techniques as the control program inside the end product. This meant we didn’t need to wait on the team creating the PCB firmware to perform motor tests.

It also meant that we couldn’t be 100% certain our test software would drive the motor in exactly the same way the device firmware would. We could instead have used an actual controller board from the product as part of the test fixture and exercised the motor using production-released software. It was a missed opportunity, and a lesson learned: if the test plan calls for testing an element of a product system, look for opportunities to incorporate related elements directly into the test fixture. That way, you can test the subsystem as a unit in addition to testing its individual components.
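One way to keep that option open is to write the fixture software against a small driver abstraction. The sketch below is illustrative only: it assumes Python test software, a hypothetical digital link object, and a hypothetical production-board API, and is not the actual fixture design.

```python
# Hypothetical sketch: a motor-exercise abstraction for a test fixture.
# Class names, commands, and the link/board APIs are illustrative assumptions,
# not the actual fixture described in this article.
from abc import ABC, abstractmethod


class MotorDriver(ABC):
    """Common interface so test scripts don't care how the motor is driven."""

    @abstractmethod
    def run_profile(self, speed_rpm: int, duration_s: float) -> None:
        ...


class FixtureDigitalDriver(MotorDriver):
    """Fixture software drives the motor chip directly over a digital link,
    mimicking the command sequence the device firmware would send."""

    def __init__(self, link):
        self.link = link  # e.g., a serial-link wrapper (assumed)

    def run_profile(self, speed_rpm: int, duration_s: float) -> None:
        self.link.write(f"SET_SPEED {speed_rpm}\n".encode())
        self.link.write(f"RUN {duration_s}\n".encode())


class ProductionBoardDriver(MotorDriver):
    """Delegates to the actual controller PCB running released firmware,
    so the motor is exercised exactly as it will be in the product."""

    def __init__(self, board):
        self.board = board  # handle to the production controller (assumed API)

    def run_profile(self, speed_rpm: int, duration_s: float) -> None:
        self.board.command_motor(speed_rpm=speed_rpm, duration_s=duration_s)
```

With an interface like this, the same test scripts can exercise the motor either way, and swapping in the production controller later becomes a configuration choice rather than a rewrite.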

Second, just because it's measurable doesn't mean it's significant. Know your “why.” Test engineers get laser-focused on how precisely we can measure. Test systems have the potential to perform measurements far beyond what is truly necessary, or there can be a sensitivity mismatch between the test fixture hardware and the device under test (DUT). On a high-pressure test system, we were asked to use a particular transducer model that senses up to 40,000 PSI and translates that to a digital signal with 0.25% resolution, meaning its output readings are accurate to within 100 PSI. Yet the test protocol we received called for subjecting the DUT to 17,000 PSI and holding it there for a certain length of time, during which the pressure was not to fluctuate by more than 10 PSI. So the test spec called for a transducer that is “only” accurate to within 100 PSI, while the test protocol demanded an accuracy of 10 PSI.
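The mismatch is easy to show with the numbers from this example. The few lines below are a simple illustration of that arithmetic, not part of any qualification procedure.

```python
# Sanity check on the sensitivity mismatch described above.
# The numbers come from the example in the text; nothing else is implied.
FULL_SCALE_PSI = 40_000
RESOLUTION_PCT = 0.25          # transducer resolution, percent of full scale
REQUIRED_TOLERANCE_PSI = 10    # allowed fluctuation during the 17,000 PSI hold

resolution_psi = FULL_SCALE_PSI * RESOLUTION_PCT / 100   # -> 100 PSI
if resolution_psi > REQUIRED_TOLERANCE_PSI:
    print(f"Mismatch: transducer resolves {resolution_psi:.0f} PSI steps, "
          f"but the protocol requires holding within {REQUIRED_TOLERANCE_PSI} PSI.")
```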

This is a situation where test engineering needs to go back to the design team and ask some questions. What is truly required of this product? What must the test demonstrate? As Velentium’s CTO, Randy Armstrong, frequently advises: “Know your why.”

Third, design test systems for intelligent data capture and reuse from the beginning. Where automated test projects are involved, start logging all test data early and build playback functionality into the control system. This allows you to repeat tests using data “played back” from a prior test or set of tests, instead of always having to use a “live” DUT. Gathering the full testing data set also allows test engineers to refine the control system for the test fixture itself.

As soon as you know what the test inputs and outputs are, start logging. The design and test engineering teams can decide later what is valuable to continue collecting as feedback on the system. This approach gives the software and hardware development teams the ability to test against specific scenarios that hadn't been considered before. When a test fails, it lets you look back, DVR-style, at the cause. The hardest automated test problems to troubleshoot are often those that can’t be replicated for lack of data.
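In practice, record-and-replay doesn’t have to be elaborate. The sketch below assumes a simple CSV log of timestamped input/output samples; the file format, field names, and functions are illustrative assumptions, not a specific fixture implementation.

```python
# Minimal record-and-replay sketch for an automated test fixture.
# Assumes a flat CSV log of timestamped samples; purely illustrative.
import csv
import time


def log_sample(writer: csv.DictWriter, inputs: dict, outputs: dict) -> None:
    """Append one timestamped row of everything the fixture saw."""
    writer.writerow({"t": time.time(), **inputs, **outputs})


def replay(log_path: str):
    """Yield recorded samples so the control software can be exercised
    against a prior run instead of a live DUT."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            yield row
```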

Taking this approach from the outset creates an opportunity for continuous improvement of the test system, a great troubleshooting tool for test engineering, and more granular insight into DUT performance.

By working in tandem from early on, design and test engineering teams develop more efficiently, reducing cost- and schedule-related risks and leading to better final products.
