Want To Learn Proper Worst Case Analysis Basics? See Our Latest Article at How2Power.com

A new Design Master article, “Use Worst-Case Analysis Tool To Efficiently Validate Your Designs,” is now available in the latest issue of How2Power.com.

Free WCA Software: Design Master Lite Cloud Version

Design Master™, the practical and easy-to-use advanced worst case analysis software used worldwide, provides a fully integrated set of analysis tools, including worst case solutions to design equations, probability estimates of any out-of-spec conditions, sensitivities, and optimized values for design centering.

The Lite version is now available for free at How2Power.com, under Design Notes and Tools / Worst case analysis software. Although the Lite version has some functional restrictions, it is ideal for small projects and academic use. (For the full-featured versions, please click here.)

Also, please be sure to watch for How2Power’s July Newsletter, which will include the application note, “How To Use Design Master for WCA – A Simple Example,” as well as other in-depth design articles for power electronics engineers.

Is Your Circuit Simulator Just A Pretty Face? Five Reasons Why Simulations Are Not Sufficient For Design Validation

Jerry Twomey recently pointed out some pitfalls with math-based circuit analysis (“Academic Simplifications Produce Meaningless Equations,” ElectronicDesign.com, 13 June 2012).

I agree with the general sentiments of Mr. Twomey, but would like to point out that there is a simple solution to avoiding the pitfalls he mentions: develop equations from component data sheets, not from academic simplifications. This is straightforward and will be discussed further in a future post.

Also, it should be noted that simulations are not some miracle cure-all elixir. Indeed, simulators are also math-based creatures: SPICE and its cousins simply grind out numerical solutions to the multitude of hidden equations that are buried beneath their pretty graphical interfaces.

So what’s the problem with simulators? A lot. For example:

1. Because simulator math is hidden behind the user interface, simulators don’t promote engineering analysis (thinking). To the contrary, they promote lazy tweak-and-tune tinkering.

2. Because simulator component models are typically very complex, the interactions between important variables are usually obscure, if not downright unfathomable. Obscurity does not promote engineering understanding.

3. Simulator results typically do not provide insight into important sensitivities. For example, can your simulator tell you how sensitive your power supply’s thermal stability is to the Rds(on) of the switching MOSFET, including the effects of thermal feedback?

4. A simulation “run” is not an analysis, but is instead a virtual prototype test. Yes, it’s better to check out crappy designs with a simulator than to waste time and money building and testing crappy hardware. So simulators have their place, particularly when checking out initial design concepts. Eventually, however, hardware testing is required to verify that the simulator models were correct. And you will still need to do a worst case math analysis to determine performance limits, and to confirm that desired performance will be maintained in the presence of tolerances and aging.

  • Proper Design Validation = Testing/Simulations + Analysis.

5. Simulators don’t really do worst case analysis. Yes, you can use a simulator to implement a bunch of Monte Carlo runs, but valid results require (a) identification of all of the important parameters (such as Rds(on)), (b) assignment of the appropriate distributions to those parameters (such distributions are typically not available), and (c) the generation of enough runs to catch errors out in the tails of the overall resultant distribution (and how many runs should you do? Hmmm…).

  • Monte Carlo is not a crystal ball. It only shows you the production performance you will get if all of your assumptions were correct, and if you did enough runs.
  • Determining the required number of runs demands either an exhaustive study of the circuit’s parameters, distributions, and interrelationships (not practical), or knowledge of the limits of performance.
  • But if you know the limits of performance, then why do you need a Monte Carlo analysis? You don’t. You can skip it altogether and go directly to a math-based Worst Case Analysis (see the toy comparison sketched below).
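
To make point (c) concrete, here is a minimal Python sketch of a hypothetical resistor divider with assumed uniform 1% distributions (the circuit, values, and distributions are mine, purely for illustration), contrasting Monte Carlo sampling with a direct corner check:

```python
# Toy comparison: Monte Carlo vs. direct corner (extreme value) check.
# Hypothetical divider Vout = VIN * R2 / (R1 + R2) with 1% resistors;
# uniform distributions are assumed, which real parts rarely follow.
import random

VIN = 10.0
R1_NOM, R2_NOM = 9000.0, 1000.0
TOL = 0.01

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

random.seed(1)
runs = 10_000
samples = [vout(R1_NOM * (1 + random.uniform(-TOL, TOL)),
                R2_NOM * (1 + random.uniform(-TOL, TOL)))
           for _ in range(runs)]
print(f"Monte Carlo ({runs} runs): {min(samples):.4f} .. {max(samples):.4f} V")

# Direct check of all four tolerance corners (Vout is monotonic in R1 and R2,
# so the true extremes must lie at the corners).
corners = [vout(R1_NOM * (1 + a * TOL), R2_NOM * (1 + b * TOL))
           for a in (-1, 1) for b in (-1, 1)]
print(f"Corner analysis:          {min(corners):.4f} .. {max(corners):.4f} V")
# Even 10,000 runs almost never land exactly on a corner, so the Monte Carlo
# extremes sit inside the true worst case limits -- and that's with the
# distributions known, which in practice they are not.
```

The corner check is instant and exact for this monotonic circuit; the Monte Carlo answer depends on run count and on distribution assumptions that data sheets rarely supply.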

For further insights into math-based Worst Case Analysis versus simulations, please see “Design Master™: A Straightforward No-Nonsense Approach to Design Validation.”

-Ed Walker

You Want To Be A Consultant? Rule #1: The Customer Is Not Always Right

Your clients hire you for your expertise, not to agree with them. If their design approach is clearly flawed, tell them. Better to be unpopular, or even lose your assignment, than to help kill the client’s company.

Corollary: Clients hire you for your advice, but that doesn’t mean they have to agree with you. If your recommendations are ignored or overruled, as long as you’ve provided your best advice based on the data the client provided, your task is complete. Just smile, deposit your fee, and move on.

-Ed Walker

Bulletin: Design Master Analyzer Now Available

The Design Master™ Analyzer (DMA) is a quick and easy fill-in-the-blanks worst case analysis tool. DMA is based on expert templates, allowing less experienced engineers to quickly generate powerful results.

DMA is designed for easy use on the iPad and other compact devices.

The Design Master Analyzer is targeted at specific applications with a simple, easy-to-use format. If you’re an engineering director or project manager, simply provide copies of specific DMA applications to your staff for quick and efficient “fill in the blanks” analyses, and receive design validation in minutes rather than weeks.

Although DMA files are usable as provided and are securely locked, a Professional Edition “master” owner can edit or create DMA templates. The DMA engine can also be used to convert any existing Design Master file into the DMA fill-in-the-blanks format. Please inquire about pricing for the DMA engine.

To order please click here.

Bulletin: Design Master Cloud Version Now Available

The Cloud version of the Design Master™ Professional Edition is now available:

  • Use Design Master whenever you need it from wherever you are, on almost any platform including PCs, Macs, iPads, etc.
  • Order only the number of days you need.
  • The latest version is always available online; no upgrades are ever required.

For more information or to order, please click here.


Reliability Prediction or Magic 8 Ball? You Decide

Many years ago I was contacted by someone who worked for a very large defense contractor. The gentleman (Mr. X) had the responsibility for helping ensure that the electronics modules used by his company met stringent reliability requirements, one of which was a minimum allowable Mean Time Between Failures (MTBF). He had read one of our DACI newsletters that mentioned such reliability predictions, and gave me a call.

“My problem,” he said (I paraphrase), “is that MTBF predictions per Military Handbook 217 don’t make any sense.” He subsequently provided detailed backup studies, including a data collection (using real fielded hardware) showing that the predicted times to failure did not match field experience. The predicted numbers were not just too low (as some folks claim for MIL-HDBK-217); they were also too high, or sometimes about right. In other words, they were pretty random, indicating that MIL-HDBK-217 had no more predictive value than you would get by using a Magic 8 Ball.

But that’s not all. “These reliability predictions,” he continued, “are worse than useless, because engineering managers are cramming in heavy heat sinks, or using other cooling techniques, just to drive up the predicted MTBF numbers. The result is a potential decrease in overall system reliability, as well as increased weight and cost, based on this MTBF nonsense.”

Until I heard from Mr. X, I had prepared numerous MTBF reports using MIL-HDBK-217, assuming (what a horrible word, I’ve learned) that the methodology was science-based. After reviewing the data, however, I agreed with Mr. X that MTBFs were indeed nonsense, and said so in the DACI newsletter. This sparked a minor controversy, including a threat from a representative of a reliability firm (one that did a lot of business with the government) that DACI would be “out of business” because of our stance on the issue.

Well, DACI survived. Sadly, though, my impression is that lots of folks still use MIL-HDBK-217-type cookbook calculations for MTBFs, which are essentially a waste of money, apart from the important side benefit (one that has nothing to do with MTBF predictions) of examining components for potential overstress. But that task can be done as part of a good WCA, skipping all of the costly and misleading MTBF pseudoscience.

Instead of trying to predict reliability, it’s better to ensure reliability by employing “physics of failure,” the scientific process of studying the chemistry, mechanics, and physics of specific materials and assemblies.

Bottom line: Skip the handbook-style MTBF nonsense, and use those dollars instead to keep abreast of materials science, as applicable to your specific products. (If for some reason you absolutely must prepare an MTBF report, use a Magic 8 Ball: it will be much quicker and just as accurate.)

p.s. Prior to my education by Mr. X, I had been deeply involved with the electronics design for a very ambitious spacecraft project. Thinking MTBF to be an important metric, I asked the project manager what the preliminary MTBF was for the system. He smiled and asked me to meet him privately.

Later, alone in his office, I was furtively told that the MTBF calculations indicated the system was doomed to failure, so it had been decreed that the project was not going to use MTBFs. Instead, each system component would be examined on a case-by-case basis to ensure that its materials and assembly were suitable for its intended task. In essence, this was an early example of the physics of failure approach. And yes, the mission was a complete success.

-Ed Walker

Oh, No! We Forgot the Bozo Protection (and other Persistent Design Errors)

We’ve contributed to hundreds of electronics design projects wherein the circuitry was subjected to rigorous WCA+ (WCA+ is our advanced version of Worst Case Analysis; see “Four Costly Myths About WCA”). Our analyses invariably detected various design deficiencies, both stress-related and functional. Unfortunately, like an annoying relative who can’t take the hint to please stop visiting, some common problems that we were finding decades ago still regularly pop up in today’s new designs. These include:

  • Lack of protection from Bozo the Clown: inadequate ESD protection; connectors without reverse-polarity keying; identical connectors for all ports (you don’t expect Bozo to pay any attention to cable labels or connector colors, do you?); no spills/immersion protection, e.g., coffee, Slurpees, beer, or even juice from a steak being thawed on top of a warm electronics unit (no kidding).
  • Transient protection devices (TPDs) not present at circuit interfaces. Not just the AC power and load interfaces, but all the internal interfaces that are exposed to ESD or potentially unruly test equipment during testing, particularly for costly subassemblies. We’ve seen a hugely expensive and schedule-critical board blown up by a test instrument failure, a disaster that could have been prevented by a few bucks’ worth of TPDs.
  • Failure to account for dissimilar power supply voltages, causing interface overdrive and/or latchup. (Sometimes this only occurs during transient conditions, making the deficiency hard to catch during testing. You will typically learn about it after you’ve shipped a few thousand units and your boss is frantically paging you to get back to work after you’ve had too many beers and the last thing you want is to work through the night and the weekend on warranty repairs while angry customers are screaming at you on the phone…but I digress…)
  • Inadequate ratings for AC mains rectifiers and other power components, particularly in switchmode supplies. Hint: Don’t completely rely on SPICE or other simulations to identify realistic worst case performance boundaries for these components. Or do, but then be sure not to provide a warranty with your product. (A crude worst case bound for one such stress is sketched below.)
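
As one example, here is a back-of-the-envelope Python check (all numbers hypothetical) of rectifier inrush stress at the high-line corner, the kind of bound that a typical-values simulation can understate:

```python
# Crude worst case inrush bound for an AC mains rectifier feeding a
# capacitor-input filter. All values are hypothetical illustrations.
import math

VAC_NOM = 230.0      # nominal mains, volts RMS
LINE_TOL = 0.10      # mains tolerance, +/-10%
R_SERIES = 0.5       # total source + wiring + ESR resistance, ohms

# Worst case peak line voltage at high line:
v_pk = VAC_NOM * (1 + LINE_TOL) * math.sqrt(2)

# Pessimistic inrush bound: switch-on at the line peak with the bulk
# capacitor fully discharged, limited only by the series resistance.
i_inrush = v_pk / R_SERIES

print(f"worst case peak line voltage: {v_pk:.0f} V")
print(f"inrush bound: {i_inrush:.0f} A (compare to the rectifier's IFSM rating)")
```

The bound is deliberately pessimistic, but that’s the point: the rectifier’s surge rating (and any inrush-limiting circuitry) must be chosen against the worst case corner, not against a typical simulation waveform.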

For some more tips, see page 210 of The Design Analysis Handbook, still very relevant after all these years. (Note: We’re out of copies of the Revised Edition, but it’s still available from Amazon and Elsevier.)

P.S. We’re considering creating some low-cost mini-modules of our Design Master WCA+ software, configured for common design tasks such as proper TPD selection, op amp gain stage analysis, etc. (If you care to comment, your feedback will be appreciated and will help us make a decision. You can add a comment to this post, or email us at daci@daci-wca.com.)

Thanks.
-Ed Walker

Four Costly Myths About Worst Case Analysis

Myth #1: Worst Case Analysis (WCA) is a rigidly defined mathematical method of determining the limits of performance of a design.

There are actually a few different types of WCA, primarily:

  • Extreme Value Analysis (EVA)
  • Statistical Analysis (Monte Carlo)
  • WCA+

WCA+ is safer than Monte Carlo and more practical than EVA. Monte Carlo can miss small but important extreme values, and EVA can result in costly overdesign. WCA+ identifies extreme values that statistical methods can miss, and then estimates the probability that the extreme value will exceed specification limits, thereby providing the designer with a practical risk-assessment metric. WCA+ also generates normalized sensitivities and optimization, which can be used for design centering. (Ref. http://daci-wca.com/products_005.htm)
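
As a rough illustration of that flow, here is a Python sketch under my own assumptions (a toy divider, a hypothetical 5.1 V upper spec limit, and assumed uniform part distributions; this is not Design Master’s internal algorithm) combining corner extremes, an out-of-spec probability estimate, and normalized sensitivities:

```python
# Toy WCA+-style flow: corner extremes, then a probability estimate,
# then normalized sensitivities. Circuit and spec are hypothetical.
import math
import itertools

VIN, R1, R2, TOL = 10.0, 1000.0, 1000.0, 0.01
SPEC_HI = 5.1  # hypothetical upper spec limit, volts

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

# 1. Extreme values from the tolerance corners.
corners = [vout(R1 * (1 + a * TOL), R2 * (1 + b * TOL))
           for a, b in itertools.product((-1, 1), repeat=2)]
wc_lo, wc_hi = min(corners), max(corners)

# 2. Probability estimate: propagate each tolerance's variance
#    (sigma = tol/sqrt(3) for an assumed uniform distribution) through
#    first-order sensitivities to approximate the output sigma, then ask
#    how likely the upper spec limit is to be violated.
nom = vout(R1, R2)
d = 1e-6
dV_dR1 = (vout(R1 * (1 + d), R2) - nom) / (R1 * d)   # volts per ohm
dV_dR2 = (vout(R1, R2 * (1 + d)) - nom) / (R2 * d)
sigma = math.hypot(dV_dR1 * R1 * TOL / math.sqrt(3),
                   dV_dR2 * R2 * TOL / math.sqrt(3))
p_hi = 0.5 * math.erfc((SPEC_HI - nom) / sigma / math.sqrt(2))

# 3. Normalized sensitivities: % output change per % parameter change.
s1, s2 = dV_dR1 * R1 / nom, dV_dR2 * R2 / nom
print(f"worst case Vout: {wc_lo:.4f} .. {wc_hi:.4f} V")
print(f"P(Vout > {SPEC_HI} V) ~ {p_hi:.2e}")
print(f"normalized sensitivities: R1 {s1:+.2f}, R2 {s2:+.2f}")
```

The corner search gives guaranteed limits, while the probability estimate keeps a rare but tolerable extreme from forcing EVA-style overdesign.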

Myth #2: Worst Case Analysis is optional if you do a lot of testing

To maintain happy customers and minimize liability exposure, the effects of environmental and component variances on performance must be thoroughly understood. Testing alone cannot achieve this understanding, because testing — for economic reasons — is usually performed on a very small number of samples. Also, since testing typically has a short time schedule, the effects of long-term aging will not be detected.
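
A quick calculation (the 0.5% defect rate is an assumed number, used only for illustration) shows why a handful of samples cannot expose tail behavior:

```python
# If 1 unit in 200 (0.5%, an assumed number) drifts out of spec, the chance
# that a test lot of n samples contains at least one offender is 1-(1-p)^n.
p = 0.005
for n in (5, 10, 30, 100):
    print(f"n = {n:3d}: P(at least one bad sample) = {1 - (1 - p) ** n:.1%}")
# Even 100 samples miss the problem about 60% of the time, and no short test
# schedule exposes long-term aging at all.
```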

Myth #3: Worst Case Analysis is optional if we vary worst case parameters during testing

Initial tolerances typically play a substantial role in determining worst case performance. Such tolerances, however, are not affected by heating/cooling the samples, varying the supply voltages, varying the loads, etc.

For example, a design might have a dozen functional specs and a dozen stress specs (in practice, these numbers are usually much higher). To expose worst case performance, some tolerances may need to be at their low values for some of the specs, but at their high or intermediate values for other specs. First, it’s unlikely that a given tolerance will happen to be at its worst case value for even a single spec. Second, it’s impossible for the tolerance to simultaneously be at the different values required to expose worst case performance for all the specs. Therefore it’s not valid to expect a test sample to serve as a worst case performance predictor, regardless of the number of temperature cycles, voltage variations, etc. applied to the sample.
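
Here is a tiny Python illustration (hypothetical divider, hypothetical specs) of the conflict: the same part must sit at opposite tolerance extremes to expose worst case for two different specs:

```python
# Two specs on one hypothetical divider: spec A limits maximum Vout, spec B
# limits maximum input current. Watch where R2 must sit for each worst case.
VIN, R1, R2, TOL = 10.0, 1000.0, 1000.0, 0.01

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

def iin(r1, r2):
    return VIN / (r1 + r2)

# Spec A worst case (max Vout): R1 at -1%, R2 at +1%.
print(f"max Vout: {vout(R1 * (1 - TOL), R2 * (1 + TOL)):.4f} V  (R2 high)")
# Spec B worst case (max input current): R1 at -1%, R2 at -1%.
print(f"max Iin : {1000 * iin(R1 * (1 - TOL), R2 * (1 - TOL)):.4f} mA (R2 low)")
# The same R2 must be at +1% for spec A but at -1% for spec B. No single test
# sample can occupy both corners, no matter how it is cycled or stressed.
```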

Myth #4: Worst Case Analysis is best done by statistics experts

No, it is far better to have WCA performed — or at least supervised — by experts in the design being analyzed, using a practical tool like WCA+ that employs minimal statistical mumbo-jumbo. Analyses (particularly cookbook statistical ones), when applied by those without such expertise, often yield hilariously incorrect results.

-Ed Walker

The Perfect People

“The Perfect People,” posted earlier on my ne-walker blog site, received a good response, so I thought I would post a link to it here; I think a lot of engineers will relate. (I also had a recent encounter with one of these types.)

-Ed