Vol. 9, No. 12

December 2007

PQ Systems
 

MSA with Jackie Graham:
Analyzing data from a typical study

By Jackie Graham, Ph.D.
Managing Director, PQ Systems, Australia

In our last MSA discussion, we addressed setting up a statistical assessment of a measurement system. In this and coming issues we will discuss the analysis of the data generated from a typical study. Remember, the key to a good study is good preparation.

To illustrate the analysis, we will use an example R&R study looking at the length of plastic moulded parts. The study used five samples, and four testers measured the length of each sample twice. Here are the results of the study.

 

            Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
Tester 1       9.0        9.0        9.7       10.1       10.0
               9.1        9.3        9.3       10.3       10.4
Tester 2       9.6        9.1        9.0       10.3       10.5
               9.6       10.1       10.2       10.8       10.6
Tester 3       9.5        9.7        9.9       10.0        9.8
               9.6       10.0        9.9       10.1        9.4
Tester 4       9.1        8.2        9.3        9.1        9.5
               8.8        9.3        9.7       10.3        9.0

(Each tester's first row shows the first measurement of each sample; the second row shows the repeat measurement.)

Just looking over the data before doing any analysis reveals some interesting variation in the results. So, how do we decide whether this variation is significant or acceptable? The question is simpler to answer than it sounds: the best indicator is simply to compare the amount of variation found in the study with the width of the specification for whatever is being measured. To do this we use a statistic called R&R.

R&R stands for repeatability and reproducibility (sorry, it doesn’t stand for rest and relaxation!). Let’s define these terms.

Repeatability: The ability of one tester, using the same equipment and the same sample, to get the same value each time the sample is measured. It is also known as test-retest error. Because it takes no account of differences between testers, it reflects the variation inherent in the equipment or the test method itself and is also known as equipment variation. In summary, it is the ability of the equipment to repeat the same measurement under the same conditions (same person and same sample).

Reproducibility: The ability of different testers to produce the same value using the same equipment and the same sample. If there are differences between the testers, they can take the form of:

One tester being more or less consistent in comparison to another.

One tester getting consistently high or low readings in comparison to another. This is referred to as bias.

We will look at detecting differences between testers in a later issue. Reproducibility is also known as tester or appraiser variation.

Calculating R&R is relatively complex and involves several steps. We completed them using PQ Systems’ GAGEpack. Here are the results for the data shown earlier.

 

                                          Value   % of specification
Equipment variation (EV)                   1.96                65.44
Appraiser variation (AV, tester)           1.61                53.62
Repeatability and reproducibility (R&R)    2.54                84.60

In this case, the values are compared to the specification of the product being tested. The specification for the product is 8 to 11, making a specification width of 3. Each of the values is divided by 3 and expressed as a percentage of the specification width.
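For readers who would like to see the arithmetic behind these figures, here is a rough sketch in Python of the widely used average-and-range method under the 5.15-sigma convention. It is an illustration only: we performed the actual analysis in GAGEpack, and the table constants used below for two trials and four testers are assumed standard values. It arrives at figures within rounding of those in the table above.

```python
# Rough sketch of the average-and-range Gage R&R calculation (5.15-sigma
# convention) for the study data above. Illustration only: GAGEpack did the
# actual analysis, and its exact routine is not shown in the article.

# readings[tester] = [first-measurement list, repeat-measurement list],
# one value per sample
readings = [
    [[9.0, 9.0, 9.7, 10.1, 10.0], [9.1, 9.3, 9.3, 10.3, 10.4]],   # Tester 1
    [[9.6, 9.1, 9.0, 10.3, 10.5], [9.6, 10.1, 10.2, 10.8, 10.6]], # Tester 2
    [[9.5, 9.7, 9.9, 10.0, 9.8],  [9.6, 10.0, 9.9, 10.1, 9.4]],   # Tester 3
    [[9.1, 8.2, 9.3, 9.1, 9.5],   [8.8, 9.3, 9.7, 10.3, 9.0]],    # Tester 4
]
n_samples, n_trials = 5, 2
tolerance = 11 - 8  # specification width (specification is 8 to 11)

# Average of the ranges between the two measurements of each sample
ranges = [abs(t[0][s] - t[1][s]) for t in readings for s in range(n_samples)]
r_bar = sum(ranges) / len(ranges)

# Difference between the highest and lowest tester averages
tester_means = [sum(t[0] + t[1]) / (n_samples * n_trials) for t in readings]
x_diff = max(tester_means) - min(tester_means)

# Assumed standard table constants for the 5.15-sigma convention:
# d2 = 1.128 for 2 trials, d2* = 2.24 for 4 testers
K1 = 5.15 / 1.128
K2 = 5.15 / 2.24

ev = K1 * r_bar                               # equipment variation (repeatability)
av = max(0.0, (K2 * x_diff) ** 2
              - ev ** 2 / (n_samples * n_trials)) ** 0.5  # appraiser variation
rr = (ev ** 2 + av ** 2) ** 0.5               # combined R&R

for name, value in (("EV", ev), ("AV", av), ("R&R", rr)):
    print(f"{name}: {value:.2f} ({100 * value / tolerance:.1f}% of specification)")
```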

So, what does the percentage mean? Quite simply, it is the percentage of the specification taken up by measurement variation: the width of the measurement system's variation curve expressed relative to the specification. So, in the example, 84.6% of the specification is taken up with measurement variation before we even look at any variation in the process! Remember, when you are looking at data from the process you are looking at:

Variation in data = measurement variation + process variation
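Strictly speaking, for independent sources it is the variances (the squared standard deviations) that add, so the relationship is more precisely:

Variance of data = measurement variance + process variance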

When variation in data is observed, we tend to assume that it is all coming from the process when, in fact, it is often coming from the measurement system. If the length results measured with the equipment in the example above are analysed, shifts and changes will be apparent in the data and in the control charts. The origin of this variation will be the measurement system, NOT the process. So, in order to improve the capability of the process, it is necessary to improve the measurement system and leave the production process alone, not the other way around.

So, what value are we looking for in the R&R? The following is a guideline.

Greater than 30%: Not acceptable. The measurement system will not reliably distinguish good product from bad.

Between 10 and 30%: Acceptable for normal measurement.

Less than 10%: Acceptable for use with statistical process control.

Note the tougher requirement for use with statistical process control. If the value is greater than 10%, then many of the trends and spikes found in control charts will be caused by the measurement system, NOT the process. So, changing the production process will only make things worse!
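As a small illustration of applying this guideline in software, here is a hypothetical helper (the function name and the treatment of results falling exactly on the 10% and 30% boundaries are our own choices, not part of any standard):

```python
def rr_acceptability(rr_percent: float) -> str:
    """Classify an R&R% against the guideline above (illustrative only)."""
    if rr_percent < 10:
        return "Acceptable for use with statistical process control"
    if rr_percent <= 30:
        return "Acceptable for normal measurement"
    return "Not acceptable: cannot reliably distinguish good product from bad"

# The example study, at 84.6%, falls into the "not acceptable" band.
print(rr_acceptability(84.6))
```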

In the example, the R&R is 84.6%. This means that the variation in the measurement system is taking up most of the specification. The process has to be excellent under these circumstances; otherwise, any slight change in the process will cause results out of specification due to the poor measurement system. As soon as a measurement system has an R&R in excess of 30%, a question is raised about its ability to detect whether product is in or out of specification.

So, how do you go about reducing the R&R%? Well, that depends on the source of most of the variation. In the example, the equipment variation is slightly higher than the appraiser or tester variation.

If the repeatability or equipment variation is large, it is an indication that the equipment or test method is unsuitable at this stage. In extreme cases, it may be necessary to replace the equipment. Alternatively, it may only require an adjustment to the test equipment or to the method of using it. Investigate in detail how the equipment is used, and address any potential causes of variation.

If the reproducibility or tester variation is high, the data from the study can be analysed further to understand the causes. Is it one individual tester? Is it a bias issue? Was a particular sample hard to measure? This is the next stage in the analysis and will be discussed next time.

 

Copyright 2007 PQ Systems.