
Is Your Circuit Simulator Just A Pretty Face? Five Reasons Why Simulations Are Not Sufficient For Design Validation

Jerry Twomey recently pointed out some pitfalls of math-based circuit analysis (“Academic Simplifications Produce Meaningless Equations,” ElectronicDesign.com, 13 June 2012).

I agree with Mr. Twomey’s general sentiments, but would like to point out that there is a simple way to avoid the pitfalls he mentions: develop equations from component data sheets, not from academic simplifications. This approach is straightforward and will be discussed further in a future post.
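
As a taste of that approach, here is a minimal sketch (Python, with hypothetical part values and data-sheet limits) of a worst-case output-error equation for an inverting op-amp stage, built directly from guaranteed data-sheet maximums rather than from ideal-op-amp simplifications:

    # Worst-case output error of an inverting op-amp stage, built from
    # data-sheet limits rather than ideal-op-amp simplifications.
    # All part values and limits below are hypothetical placeholders.
    R_in = 10e3        # input resistor, ohms (nominal)
    R_f = 100e3        # feedback resistor, ohms (nominal)
    r_tol = 0.01       # 1% resistor tolerance
    V_os_max = 3e-3    # data-sheet max input offset voltage, volts
    I_b_max = 100e-9   # data-sheet max input bias current, amps

    # Noise gain is largest with R_f at its high limit, R_in at its low limit.
    noise_gain_max = 1 + (R_f * (1 + r_tol)) / (R_in * (1 - r_tol))

    # Offset voltage and bias current errors, both referred to the output:
    v_err_max = V_os_max * noise_gain_max + I_b_max * R_f * (1 + r_tol)
    print(f"Worst-case output error: {v_err_max * 1e3:.1f} mV")

Because every term traces back to a guaranteed data-sheet limit, the resulting bound holds for any part that meets its data sheet, with no simplifying assumptions hidden in the math.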

Also, it should be noted that simulations are not some miracle cure-all elixir. Indeed, simulators are also math-based creatures: SPICE and its cousins simply grind out numerical solutions to the multitude of hidden equations that are buried beneath their pretty graphical interfaces.

So what’s the problem with simulators? A lot. For example:

1. Because simulator math is hidden behind the user interface, simulators don’t promote engineering analysis (thinking). To the contrary, they promote lazy tweak-and-tune tinkering.

2. Because simulator component models are typically very complex, the interactions between important variables are usually obscure, if not downright unfathomable. Obscurity does not promote engineering understanding.

3. Simulator results typically do not provide insight into important sensitivities. For example, can your simulator tell you how sensitive your power supply’s thermal stability is to the Rds(on) of the switching MOSFET, including the effects of thermal feedback? (A sketch of this thermal-feedback calculation appears after this list.)

4. A simulation “run” is not an analysis; it is a virtual prototype test. Yes, it’s better to check out crappy designs with a simulator than to waste time and money building and testing crappy hardware. So simulators have their place, particularly when checking out initial design concepts. Eventually, however, hardware testing is required to verify that the simulator models were correct. And you will still need to do a worst-case math analysis to determine performance limits, and to confirm that desired performance will be maintained in the presence of tolerances and aging.

  • Proper Design Validation = Testing/Simulations + Analysis.

5. Simulators don’t really do worst case analysis. Yes, you can use a simulator to implement a bunch of Monte Carlo runs, but valid results require (a) identification of all of the important parameters (such as Rds(on)), (b) assignment of the appropriate distributions to those parameters (such distributions are typically not available), and (c) the generation of enough runs to catch errors out in the tails of the overall resultant distribution (and how many runs should you do? Hmmm…). The sketch after this list contrasts a modest Monte Carlo with a direct worst case.

  • Monte Carlo is not a crystal ball. It only shows you the production performance you will get if all of your assumptions are correct and you do enough runs.
  • Determining the required number of runs demands either an exhaustive study of the circuit’s parameters, distributions, and interrelationships (not practical), or knowledge of the limits of performance.
  • But if you know the limits of performance, then why do you need a Monte Carlo analysis? You don’t. You can skip it altogether and go directly to a math-based Worst Case Analysis.
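
To make point 3 concrete, here is a hedged sketch of the thermal-feedback calculation mentioned above: Rds(on) rises with junction temperature, which raises conduction loss, which raises temperature again. The values (a 50 mΩ part, a +0.4%/°C tempco, 40 °C/W thermal resistance) are hypothetical placeholders; the point is that a simple fixed-point iteration exposes the sensitivity directly:

    # Thermal feedback on a switching MOSFET: Rds(on) grows with junction
    # temperature, raising conduction loss, raising temperature again.
    # All values are hypothetical; iterate to the thermal operating point.
    I_rms = 5.0        # switch RMS current, amps
    theta_ja = 40.0    # junction-to-ambient thermal resistance, C/W
    T_amb = 50.0       # ambient temperature, C
    alpha = 0.004      # Rds(on) tempco, roughly +0.4%/C

    def junction_temp(r_ds_25):
        """Junction temperature for a given 25 C Rds(on); None on runaway."""
        t_j = T_amb
        for _ in range(200):
            r_ds = r_ds_25 * (1 + alpha * (t_j - 25.0))
            t_new = T_amb + theta_ja * I_rms**2 * r_ds
            if t_new > 175.0:             # beyond max junction rating
                return None               # treat as thermal runaway
            if abs(t_new - t_j) < 0.01:   # converged to operating point
                return t_new
            t_j = t_new
        return t_j

    r_nom = 0.050                         # 50 mOhm at 25 C (hypothetical)
    print(f"Tj at nominal Rds(on): {junction_temp(r_nom):.1f} C")
    print(f"Tj at +10% Rds(on):    {junction_temp(1.10 * r_nom):.1f} C")

A 10% shift in Rds(on) moves the operating point by several degrees here, and if theta_ja, I_rms, or Rds(on) is pushed far enough, the loop never converges at all: thermal runaway, exactly the kind of limit a pile of simulation runs can easily miss.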
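
And for point 5, the promised sketch contrasting Monte Carlo with a direct worst case, using a hypothetical 10:1 resistor divider and assumed uniform distributions:

    import random

    # Resistor divider: Vout = Vin * R2 / (R1 + R2).
    # Hypothetical 1% parts; distributions *assumed* uniform.
    V_in, R1_nom, R2_nom, tol = 10.0, 9000.0, 1000.0, 0.01

    def vout(r1, r2):
        return V_in * r2 / (r1 + r2)

    # Monte Carlo: the spread depends on the assumed distributions
    # and on the run count -- rerun with more runs and it widens.
    runs = 1000
    samples = [vout(R1_nom * random.uniform(1 - tol, 1 + tol),
                    R2_nom * random.uniform(1 - tol, 1 + tol))
               for _ in range(runs)]
    print(f"Monte Carlo ({runs} runs): {min(samples):.4f} .. {max(samples):.4f} V")

    # Direct worst case: Vout is monotonic in R1 and R2, so the true
    # extremes sit at the tolerance corners -- no runs, and no
    # assumptions about distributions.
    wc_lo = vout(R1_nom * (1 + tol), R2_nom * (1 - tol))
    wc_hi = vout(R1_nom * (1 - tol), R2_nom * (1 + tol))
    print(f"Corner worst case:        {wc_lo:.4f} .. {wc_hi:.4f} V")

A thousand runs land inside the corner limits every time; the corners themselves come straight from the tolerance math, which is the whole point of a math-based Worst Case Analysis.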

For further insights into math-based Worst Case Analysis versus simulations, please see “Design Master™: A Straightforward No-Nonsense Approach to Design Validation.”

-Ed Walker