One in 10 design engineers still relies purely on physical prototypes, potentially adding significant, unnecessary cost to their designs. That’s according to research from 6SigmaET, which surveyed over 350 engineers working in industrial, consumer, and mass-produced electronics.
The research highlights that, rather than running simulations to test their products, many engineers (49%) produce multiple physical prototypes throughout the design process. This, of course, adds significantly to the final production cost of electronics products, components, and designs.
Additional findings from 6SigmaET’s research show that 40% of design engineers use simulation to reduce the number of physical prototypes, while 10% rely purely on simulation and produce no physical prototypes at all, aiming to get the design “right first time.” It is essential to properly validate a design before manufacture to avoid poor-quality products, design issues such as overheating and interference, and costly “back to the drawing board” product recalls.
Some industry experts were asked to share their opinions on prototyping and the role it plays in the design process. Here’s what they said:
Christophe Basso, Technical Fellow, ON Semiconductor, and author of several books on power-supply design
“Prototype testing is an important step in the development cycle of a project. However, in my opinion, a solid theoretical study must precede this experimental phase. The study can include an equations-based approach to calculate component values, stress, and the various constraints the semiconductors or passive components will endure. Solvers like Mathcad, or even Excel to some extent, can be great tools for this method.
“Then, once this phase is over, a simulation template can help in testing a virtual prototype to verify that the calculations lead to the expected behavior: Does the converter deliver the right voltage for the selected input value and load? Is the rms current, and thus the conduction loss in the power switch, in the ballpark of what I expected? And so on. When the system you have developed works on the computer, you can assemble a prototype. Experience shows that you are usually not far from the final result, at least in nominal operating conditions.
“The bench validation is not the final phase. You usually collect practical data to enrich the computer model and make it behave closer to reality. For instance, you add parasitics you did not identify at first (leakage terms, stray capacitance, various losses, and so on). Once the model is refined, you must assess the design’s sensitivity to component variations as defined in the manufacturers’ datasheets.
“You need to take the right actions to shield the product against unavoidable production spreads. For instance, how do the phase and gain margins or the crossover frequency change as the output capacitor’s equivalent series resistance (ESR) drifts, or if the current transfer ratio (CTR) of the selected optocoupler varies widely? Assessing the impact of these deviations is difficult on the bench, and a good model saves you a lot of time by checking these points through parametric simulations or Monte Carlo sweeps. Then you can think of appropriate cures for the issues you identify. If you do not take time to check your design’s robustness before pressing the production button, you may end up patching the product under high pressure when failures occur. Believe me, you want to avoid that.
“As you can see, in my view, the prototype is not the final lap, but rather an intermediate step to validate or correct assumptions you made at the definition stage. Simulation surely helps, but, again, it should be seen as a design aid, letting you virtually test the prototype on the PC and chase gross design errors you may have made. After all, if you calculate the wrong clamp resistor value in your flyback project, the library won’t burn and you’ll avoid smoke and noise in the laboratory at the first power-on. My recipe for success? Analytical analysis, simulation, prototype validation, and numerous reviews with colleagues. Good luck with your next design!”
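A Monte Carlo sweep of the kind Basso describes can be sketched in a few lines. The sketch below samples the output-capacitor ESR and optocoupler CTR across assumed datasheet limits and reports how far the capacitor’s ESR zero and the loop crossover can drift; all component values, and the simplification that crossover scales proportionally with CTR near a −20 dB/decade slope, are illustrative assumptions, not figures from any specific design.

```python
# Hypothetical Monte Carlo sweep over component spreads (values assumed).
import math
import random

C_OUT = 470e-6                      # output capacitance, farads (assumed)
ESR_MIN, ESR_MAX = 0.010, 0.080     # ESR spread, ohms (assumed datasheet limits)
CTR_MIN, CTR_MAX = 0.8, 2.4         # optocoupler CTR spread (assumed)
FC_NOMINAL = 5.0e3                  # nominal crossover at CTR = 1.0, hertz

random.seed(0)
esr_zeros, crossovers = [], []
for _ in range(10_000):
    esr = random.uniform(ESR_MIN, ESR_MAX)
    ctr = random.uniform(CTR_MIN, CTR_MAX)
    # ESR zero of the output capacitor: f_z = 1 / (2*pi*ESR*C)
    esr_zeros.append(1.0 / (2.0 * math.pi * esr * C_OUT))
    # Rough model: with a -20 dB/decade slope near crossover, loop gain
    # (and hence the crossover frequency) scales with CTR.
    crossovers.append(FC_NOMINAL * ctr)

print(f"ESR zero spans {min(esr_zeros):.0f} Hz to {max(esr_zeros):.0f} Hz")
print(f"Crossover spans {min(crossovers):.0f} Hz to {max(crossovers):.0f} Hz")
```

Even this toy sweep shows why a bench check is impractical here: the ESR zero alone moves by almost a decade across the tolerance band, something no single prototype can reveal.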
Steve Sandler, Picotest
“In most cases, and in all of our seminars and workshops, we teach that a measurement is a snapshot of a single instant in time. The measurement tells us what ‘is,’ not what ‘can be.’ We use a combination of measurement and simulation to determine the sensitivities of the operating characteristics and the impact of component and environmental tolerances, and so establish what ‘can be.’
“We also use the simulation models and measurements in concert to optimize the performance characteristics. In the typical case, we make measurements using an evaluation board or a first-draft prototype PCB to obtain the component data that the manufacturers didn’t provide or that appeared suspect. These measurements then support the creation of a high-fidelity simulation model that is used for design optimization. This high-fidelity model and optimization result in a second, improved circuit and PCB design.”
Jeff Smoot, VP of Application Engineering, CUI
“With more and more engineers being asked to wear multiple hats, coupled with limited resources and increasing pressure to get parts to market quickly, there is a temptation to adopt a ‘build and test’ strategy instead of relying on sound engineering analysis or simulation. The trap is easy to fall into.
“Since it is still critical to validate the design with physical samples before releasing it to production, the prototype is often viewed as a ‘shortcut,’ with the assumption that any issues discovered will be minor or easily fixed with a small design tweak. Unfortunately, this strategy often winds up extending the project timeline and increasing costs, as issues that could have been identified through analysis and simulation are not detected until after building a costly prototype and often exhausting hours of testing.”
David Pace, Senior Technologist, Power Management, Texas Instruments
“Verification of hardware designs by analysis, modeling, and simulation has advanced rapidly in the past decade or two. Today, the robustness of new designs can be largely attributed to extensive modeling and simulation. Hardware prototypes are essential for understanding interactions and validating simulation models, but design margins against component and environmental variations are established by simulation.
“Engineers relying solely on hardware prototypes are almost certain to find parametric or functional failures as volume production begins. Using hardware to validate simulation results and extensive simulation to expand the number of test cases can produce robust designs ready for high volume manufacturing.”
Power Electronics Technology has presented several articles on simulation:
“STATESET: SPICE Model Ensures Simulation of Nonlinear Circuits,” by Lawrence Meares, President, Intusoft (July 2011)
A major new “STATESET” SPICE model used in fast average modeling for power supplies and related control systems guarantees dc convergence for nonlinear circuits. Many nonlinear circuits have more than one stable operating-point solution during SPICE simulation, which uses the “nodeset” statement to suggest an initial value. However, the suggestion must be released once the simulator starts iterating in order to account for the branch currents.
STATESET, on the other hand, creates a new circuit branch in which the element output is initialized during the entire dc operating-point calculation. Once the operating point is calculated, the input of the STATESET branch connects to its output so that the ac and transient analyses can proceed. STATESET is essentially a gain element that has two terminals, input and output, where a signal flows from the input to the output in a control-system fashion.
Though STATESET is a simple concept, standard SPICE simulators can’t handle the problem it resolves. Users of XSPICE-enabled simulators can easily add a code model that achieves the desired result; other SPICE simulators (PSpice and HSPICE) need a new primitive element added.
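The underlying convergence problem can be illustrated outside SPICE. The toy sketch below (not SPICE itself, and not the STATESET code model) runs the same Newton-Raphson scheme SPICE uses for its dc solve on a cubic node equation with three equilibria: depending on the initial guess, the iteration settles on different operating points, which is exactly the ambiguity that nodeset hints and the STATESET branch exist to manage. The cubic and its values are illustrative assumptions.

```python
# Toy demonstration: Newton's method lands on different dc operating points
# depending on the initial guess (the role a nodeset/STATESET value plays).
import math

def newton(f, dfdv, v0, tol=1e-9, max_iter=50):
    """Newton-Raphson iteration, the same scheme SPICE uses for the dc solve."""
    v = v0
    for _ in range(max_iter):
        step = f(v) / dfdv(v)
        v -= step
        if abs(step) < tol:
            return v
    raise RuntimeError("no convergence")

# A cubic node equation i(v) = v^3 - 2v with equilibria at -sqrt(2), 0,
# and +sqrt(2), standing in for a bistable circuit (illustrative values).
f = lambda v: v**3 - 2.0 * v
dfdv = lambda v: 3.0 * v**2 - 2.0

high = newton(f, dfdv, 2.0)     # initial guess near +2 converges to +sqrt(2)
low = newton(f, dfdv, -2.0)     # initial guess near -2 converges to -sqrt(2)
print(high, low)
```

Holding the initial value fixed for the whole dc solve, as STATESET does, amounts to pinning the iteration onto one chosen branch instead of merely suggesting a starting point.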
“Compensating the RHPZ in a CCM Boost Converter: Using a Simulator,” Christophe Basso, Director, Product Application Engineering, ON Semiconductor (July 2009)
In the previous article on this subject (Power Electronics Technology, June 2009), we showed how to compensate the boost converter operated in continuous conduction mode (CCM) using an analytical approach. Despite its daunting appearance, the analytical study is extremely important for the design phase, as it unveils the dependency of the pole and zero locations on varying parasitic elements. It is, therefore, the designer’s duty to ensure that the impact of these parasitic elements is well under control and within the right design margins despite unavoidable production spreads.
However, while analytical derivations work well in one operating mode, e.g., CCM, they need to be reworked if the converter transitions into a different mode, e.g., discontinuous conduction mode (DCM). Furthermore, it is extremely difficult to predict the transient response resulting from the adopted compensation strategy.
In that respect, a SPICE model featuring auto-toggling capabilities can do a better job of assessing and testing the choices you have made. However, as with any automated tool, it requires a minimum of engineering judgment to challenge the results. It’s a bit like using a GPS without glancing at a paper map to confirm that the machine has chosen a sensible route.
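The mode check that such an auto-toggling model performs each cycle can be sketched directly. Under ideal lossless assumptions, a boost converter leaves CCM when the inductor valley current reaches zero, i.e., when the average inductor current drops below half the peak-to-peak ripple; the operating point below is illustrative, not from the cited article.

```python
# Minimal CCM/DCM boundary check for an ideal boost converter (illustrative).
def boost_mode(v_in, v_out, inductance, f_sw, i_out):
    duty = 1.0 - v_in / v_out                   # ideal CCM duty cycle
    ripple = v_in * duty / (inductance * f_sw)  # peak-to-peak inductor ripple
    i_l_avg = i_out / (1.0 - duty)              # average inductor current in CCM
    # DCM begins when the valley current hits zero, i.e., when the average
    # inductor current falls below half the ripple.
    return "CCM" if i_l_avg > ripple / 2.0 else "DCM"

print(boost_mode(12.0, 24.0, 100e-6, 100e3, 1.0))   # heavy load: CCM
print(boost_mode(12.0, 24.0, 100e-6, 100e3, 0.1))   # light load: DCM
```

This single inequality is why a compensation strategy derived for CCM cannot simply be carried over: dropping the load by a factor of ten flips the converter into DCM, where the plant dynamics, and hence the poles and zeros, change.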
“Thermal Simulation Boosts Reliability and Shortens Time-To-Market in Power Electronics System Designs,” Tom Gregory, Product Specialist, 6SigmaET (June 2015)
Stated efficiency figures for power semiconductors are nearly always “best case.” Many power systems operate under conditions that don’t reflect the idealized ones in the datasheet. For example, they may be run significantly below maximum load for much of the time—data-center power supplies represent one example of where this is often the case. At these lower loads, efficiency usually declines, sometimes substantially.
Power systems may also be operated with lower-than-optimal input voltages, or in extreme environments, where ambient temperatures do not accord with those in the published data. An electrical control system in a Trans-Mongolian Railway train crossing the Gobi Desert faces very testing thermal conditions, for example. In short, many factors can contribute to real-world efficiency, and hence real-world thermal conditions may be different from those indicated in product specifications. Even if 98% efficiency is achieved, 2% of 2 kW is still 40 W of heat that has to go somewhere.
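The dissipation arithmetic above is worth making explicit. Treating efficiency as P_out / P_in, even a 98%-efficient 2-kW supply sheds roughly 40 W as heat, and a lighter load with sagging efficiency can dissipate a comparable amount; the light-load figures below are illustrative assumptions, not datasheet values.

```python
# Heat dissipated for a given output power and efficiency (eta = P_out / P_in).
def dissipated_watts(p_out, efficiency):
    p_in = p_out / efficiency
    return p_in - p_out

full_load = dissipated_watts(2000.0, 0.98)   # roughly 41 W at full load
light_load = dissipated_watts(400.0, 0.90)   # assumed 90% efficiency at 20% load
print(f"{full_load:.1f} W full load, {light_load:.1f} W light load")
```

The light-load case makes the article’s point numerically: a supply delivering only a fifth of its rated power can still dump more heat than it does at full load once efficiency droops.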
Using thermal simulation early in the design flow greatly reduces the scale and number of changes needed to accommodate thermal factors. Electronic, mechanical, and thermal engineers should work together to fully appreciate the impact of design changes on thermal performance.