PQ Systems - Quality eLine
>> In this issue:

A primer for improvement for a new generation of busy leaders

Quality Quiz: With a video!

Data in everyday life

Six Sigma and more

Bytes and pieces

FYI: Current releases

>> Be social:

Visit our Quality Blog and follow us on Twitter.

>> Sign up:

Just type in your friend's e-mail below to have them receive Quality eLine:

>> Software:

Quality Gamebox
Six Sigma and more: David Schwinn
David Schwinn offers a word to the ‘why’s’

A few months ago a reader explained that she had gone through Black Belt training and felt that she understood the “how” of the statistical tools associated with Six Sigma, but wanted to better understand the “why” and the “when.” She wanted to understand the context within which the statistical tools should be applied. What a great question! I’ve been struggling with it for some time and am finally ready to offer a little insight. It’s funny how we forget the basics.

First, I want to honor my wise friend and colleague, Jamshid Gharajedaghi, who taught me that objective data is simply widely agreed-upon subjective data. That little sentence continually reminds me that my truth may not be THE truth. For purposes of this little column, I’ll start with two assumptions that I believe are fairly widely held. They work for me. They are:

  1. People engaged in Six Sigma want to delight their customers and other stakeholders.

  2. Everything is one of a kind (W. Edwards Deming. Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology. 1986). For example, if any two products or services appear to be the same, our measuring tools are simply too blunt. While blunt tools may be satisfactory, variation always exists.

Therefore, we seek the perfect product or service, but understand that it is impossible to consistently achieve that condition. Statistics, by giving us an understanding of variation, help us approach our goal of perfection more closely. As an aside, I want to remind us that the perfection we seek is usually not our perfection, but that of our customer. We must operationally define what our customers care about and how they define perfection for those characteristics. We must, therefore, ask our customers these questions in, I believe, two steps:

  1. Engage a sample of customers in an open-ended conversation such as a focus group, a customer idealized design session (Belliveau, Griffin, and Somermyer. The PDMA Toolbook. Hoboken, NJ: John Wiley & Sons, Inc. 2004), or any other rigorous dialogic process. Dr. Deming has reminded us that this kind of investigation must precede a survey, because many surveys don’t ask the right questions (Henry Neave, The Deming Dimension. Knoxville, TN: SPC Press. 1990).

  2. Design, conduct, and analyze a survey, or, better yet, an ongoing series of surveys, to better understand the variation in the definition of perfection among our customers.

Once we have some idea of what the customer wants, we need to ask how we will know when we achieve it. That usually gives us the metrics we need to monitor. We must remember, however, Deming’s popularization of Lloyd Nelson’s observation that, “The most important figures needed for management of any organization are unknown and unknowable” (Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology. 1986). He, of course, is commenting on the difficulty of quantifying things such as success, delight, and happiness.

We can often use a family of surrogate metrics to simulate these attributes, but we must remember that they are merely surrogates. We all know the danger in measuring something we don’t really care about… wasted effort and wrong direction are just two of the many results of pursuing the wrong numbers. One might even observe that when we place too much emphasis on quarterly business performance numbers that have been defined by Wall Street, the economy can go belly up… a phenomenon that Deming called a deadly disease nearly 30 years ago (Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology. 1986).

Once we determine the best metrics, we must gather the data. As I’ve said before, my experience is that many organizations gather data that is easy to gather rather than data that is particularly useful. Before we plan to gather the data, we need to determine the kind of study we want to do.

Deming helped us remember analytic studies (control charts). He reminded us that the more common statistical studies, descriptive and inferential, are enumerative. That means those studies help us understand what we have, and what to do with what we have. We, as managers, can, for example, choose to accept, reject, rework, or scrap the products or services in front of us. Enumerative statistical studies are very powerful for these kinds of decisions. He also reminded us, however, that many, if not most, decisions managers make are designed to influence the future, not just what is in front of us.
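To illustrate the distinction, here is a small hypothetical sketch in Python (the data are invented for illustration): two processes can share an identical enumerative summary (both average 10) while behaving very differently over time, and only the time-ordered, analytic view reveals the drift.

```python
import statistics

# Hypothetical data: two processes with the same enumerative summary
# (both average 10) but very different behavior over time.
stable = [9, 11, 10, 9, 11, 10, 9, 11, 10, 10]
trending = [8, 8, 9, 9, 10, 10, 11, 11, 12, 12]  # drifting steadily upward

for name, data in (("stable", stable), ("trending", trending)):
    print(name, "mean =", statistics.mean(data))

# An enumerative study would treat these two batches alike; an analytic
# study, which respects time order, would flag the trend at once.
```

An acceptance decision based only on the batch average would pass both processes; a decision about the future would go badly wrong for the trending one.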

Analytic studies, by contrast, take place over a period of time. These longitudinal studies track how systems change over time. If we understand how a process, product, or service varies over time, it makes sense that we can be more comfortable and, in fact, more accurate in making decisions today that will influence the future. Control charts help us understand the natural variation inherent in any process. Once we understand that inherent variation, we can more easily see when some unusual variation has occurred. Although, so far as I know, there is no way to quantify this improved comfort and accuracy, it is a very powerful concept. This is why control charts are the most powerful statistical tools for understanding and improving processes that produce products and services over time. The other pragmatic aspect of control charts is how they are interpreted.

Control limits and the other control chart interpretation guidelines were developed through many, many experiments designed to minimize the two kinds of decision errors that decision-makers can make: acting as if there has been a significant change to the system when there has been none, and acting as if there has been no significant change when there has been one. These errors can cause frustration, wasted effort, and even a system performing at a less desirable level than the one it started with. Over the years, control chart interpretation guidelines have proven to minimize both kinds of errors.
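As a minimal sketch of the most basic of those guidelines (this is illustrative Python with invented measurements, not PQ Systems' software), an individuals chart estimates the natural process variation from the average moving range and places three-sigma limits around the center line; points outside those limits signal unusual variation worth investigating.

```python
import statistics

# Hypothetical measurements from a process sampled in time order.
measurements = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.1, 13.0, 10.0]

# Moving ranges between consecutive points estimate natural variation.
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
center = statistics.mean(measurements)
mr_bar = statistics.mean(moving_ranges)

# 2.66 = 3 / d2, where d2 = 1.128 is the bias-correction constant
# for a moving range of two observations.
ucl = center + 2.66 * mr_bar  # upper control limit
lcl = center - 2.66 * mr_bar  # lower control limit

# Flag only points outside the limits: act on unusual variation,
# not on the natural variation the limits enclose.
for i, x in enumerate(measurements, start=1):
    if x > ucl or x < lcl:
        print(f"point {i} ({x}) is outside the control limits")
```

Reacting only to out-of-limit points is what guards against the two decision errors above: treating natural variation as a change (tampering), or missing a real change in the noise.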

There are obviously other "why’s" associated with data gathering and other statistical tools, but I think I have overstayed my welcome for now. As I wrap up this month’s entry, I fear that what I’ve provided may be obvious or of little value. I, therefore, particularly wish to hear from you regarding how useful this has been and if any of you have different "why’s" you’d like to share. I’m at support@pqsystems.com.

PQ Systems  |  Proof of quality.

PQ Systems, Inc.  |  210 B East Spring Valley Road, Dayton, OH 45458  |  800-777-3020

Copyright 2010 PQ Systems, Inc. All rights reserved.