Van Brollini’s new book is an essential addition to the test engineer’s library, as well as the library of any product manager.
The Handbook contains practical advice based on Mr. Brollini’s extensive experience with test development, including unique insights I have not seen elsewhere that will give the test engineer a quantum leap in productivity.
The test engineer will also appreciate that Brollini’s methods — clearly presented as a series of rules, tips, and straightforward equations — are practical and cost-effective, and are illustrated by real-world examples throughout.
The Handbook’s teachings can be applied with basic math and spreadsheet tools, although Brollini does recommend Design Master™ for best efficiency, particularly for more advanced applications.
(I have known Van for many years, as he was one of the first engineers to adopt our Design Master software. From time to time he has offered suggestions for improvements, which were incorporated into the software.)
The Test Engineer’s Measurement Handbook is available through the DACI website.
(C) 2010 Design/Analysis Consultants, Inc.
Newsletter content may be copied in whole or in part if attribution to DACI and any referenced source is prominently displayed with the copied material.
This Issue: NEWS BITE: Man Shocked To Discover Twin Brother Is A Robot! / DM V8 Wish List / NEWS BULLETS: Unintended Consequences Strike Again / DACI’s BLOG: An Engineer Writes A Novel / KEEPING OUT OF TROUBLE: What Every Engineer Should Know About Statistics / OUR VIEW: Using Statistics For High-Quality Designs
NEWS BITE: Man Shocked To Discover Twin Brother Is A Robot!
“The Amazing Androids of Hiroshi Ishiguro” from “Special Report: Robots for Real,” IEEE Spectrum
Design Master™ V8 (Major Upgrade) is planned for release soon, so now’s your chance to send us any suggestions for features you would like to see added. Also, if you have the current version of DM and would like to receive a beta version of V8, please let us know.
For more details on the current version, please click here: Design Master V7
NEWS BULLETS: Unintended Consequences Strike Again
“Rear-end collisions more than doubled and accidents increased overall in the first 70 days of red-light cameras in West Palm Beach compared to the same period of 2009, traffic records reviewed by The Palm Beach Post show.”
-“Rear-end collisions jump at red-light camera intersections in West Palm Beach” By Charles Elmore, 15 July 2010 Palm Beach Post
Think you can figure out what’s happening? Unconventional, but logically consistent. Read about Nexus here.
Update: NEXUS receives “highly recommended” rating from Cindy Taylor, Allbooks Review. Read the full review here.
“Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot…”
“It’s science’s dirtiest secret: The ‘scientific method’ of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.”
-from “Odds Are, It’s Wrong / Science fails to face the shortcomings of statistics” By Tom Siegfried, 27 March 2010 Science News
We have long recommended that statistical inference from limited sampling not be used to predict performance, a practice that leads to the myriad problems discussed in the article referenced above. Instead, we recommend using known performance limits and sensitivities to estimate the probability of success.
Stated another way: statistics, properly employed, can provide a good description of observed performance, but it cannot reliably predict unobserved performance.
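As a rough sketch of the limits-and-sensitivities approach, consider an extreme-value analysis of a simple resistor divider. The circuit, component values, and tolerances below are hypothetical illustrations (not taken from the Handbook); the point is that the output range is computed from the vendors’ guaranteed limits, not inferred from a sample of “typical” parts.

```python
from itertools import product

# Hypothetical example: resistor divider, Vout = Vin * R2 / (R1 + R2).
# We use the data sheets' guaranteed tolerance limits rather than
# measured "typical" values from a handful of sampled parts.
VIN = 5.0
R1_NOM, R2_NOM = 10_000.0, 10_000.0
TOL = 0.01  # +/-1% guaranteed limit

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

# Evaluate every combination of extreme component values
corners = [
    vout(R1_NOM * (1 + s1 * TOL), R2_NOM * (1 + s2 * TOL))
    for s1, s2 in product((-1, +1), repeat=2)
]
print(f"Vout worst-case range: {min(corners):.4f} V to {max(corners):.4f} V")
# -> Vout worst-case range: 2.4750 V to 2.5250 V
```

Because the bounds come from guaranteed limits, every unit that meets its data sheet falls inside this range, with no sampling assumptions required.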
Example: If one examines all of the socks of various colors in a large drawer, one can use that data (analogous to a part vendor’s data sheet) to estimate the probability of blindly pulling out a sock of a certain color. For instance, if there are a few purple socks (unacceptable performance), we know the odds of getting that color.
However, if one only examines a few of the socks (limited experimental data, or a data sheet that only provides “typical” values), one cannot reliably predict much of anything. Such limited data is therefore unsuitable for high-quality designs.
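The sock-drawer analogy can be made concrete with a short simulation. The drawer contents and sample size below are hypothetical: a full census gives the exact odds of an unacceptable (purple) sock, while small samples usually miss the purple socks entirely and so suggest, wrongly, that there is no risk.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Hypothetical drawer: 97 acceptable socks and 3 "purple" (unacceptable) ones
drawer = ["ok"] * 97 + ["purple"] * 3

# Full census (analogous to complete vendor limit data): exact failure odds
p_exact = drawer.count("purple") / len(drawer)
print(f"Census estimate of purple odds: {p_exact:.1%}")  # -> 3.0%

# Limited sampling (analogous to a few bench measurements or
# "typical" data-sheet values): how often does a 5-sock sample
# contain no purple socks at all?
trials = 10_000
missed = sum(
    1 for _ in range(trials)
    if "purple" not in random.sample(drawer, 5)
)
print(f"5-sock samples that saw no purple: {missed / trials:.0%}")
# roughly 86% of small samples see no purple under these assumptions
```

In this setup the vast majority of small samples contain no purple socks, so a team relying on such a sample would conclude the failure mode does not exist, exactly the kind of premature conclusion discussed below.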
In our consulting practice we have observed more than once the natural but very risky tendency of a design team to “see” hoped-for performance from limited experimental results, sometimes leading to premature jubilation. As the team’s official party-pooper, we have always advised keeping the champagne corked until sufficient data have been accumulated to be sure that the performance is properly understood. In every case this advice has served our customers well.