

It’s a question that arises with virtually every major new finding in science or medicine: What makes a result reliable enough to be taken seriously? The answer has to do with statistical significance, but also with judgments about what standards make sense in a given situation. An MIT News journalist, with input from MIT professors, explains the Greek letter sigma (σ) and its role in statistical significance.

The unit of measurement usually given when talking about statistical significance is the standard deviation, expressed with the lowercase Greek letter sigma (σ). The term refers to the amount of variability in a given set of data: whether the data points are all clustered together, or very spread out. The deviation is how far a given data point is from the average. In many situations, the results of an experiment follow what is called a “normal distribution.” For example, if you flip a fair coin 100 times and count how many times it comes up heads, the average result will be 50. But if you do this test 100 times, most of the results will be close to 50, but not exactly: you’ll get almost as many cases with 49, or 51, and quite a few 45s or 55s, but almost no 20s or 80s. If you plot your 100 tests on a graph, you’ll get a well-known shape called a bell curve that’s highest in the middle and tapers off on either side.

On this chart of a “normal” distribution, showing the classic “bell curve” shape, the mean (or average) is the vertical line at the center, and the vertical lines to either side represent intervals of one, two, and three sigma. The percentage of data points that would lie within each segment of that distribution is shown.
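The coin-flip example above is easy to check with a quick simulation. A minimal sketch (the sample sizes and variable names are illustrative, not from the article):

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Repeat the experiment: flip a fair coin 100 times, count the heads.
trials = 10_000
heads_counts = [sum(random.random() < 0.5 for _ in range(100))
                for _ in range(trials)]

mean = statistics.mean(heads_counts)
sigma = statistics.stdev(heads_counts)

# Theory: mean = 50, sigma = sqrt(100 * 0.5 * 0.5) = 5.
print(f"mean  = {mean:.1f}")   # close to 50
print(f"sigma = {sigma:.2f}")  # close to 5

# Fraction of results within one sigma of the mean (~68% for a bell curve).
within_1sigma = sum(abs(x - mean) <= sigma for x in heads_counts) / trials
print(f"within 1 sigma: {within_1sigma:.0%}")
```

A histogram of `heads_counts` would trace out exactly the bell curve described above: tall near 50, with almost nothing out at 20 or 80.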

By convention established at Motorola, where the Six Sigma program originated, the Sigma level is adjusted by 1.5 sigma to recognize the tendency of processes to shift over the long term. The following table lists Defects Per Million Opportunities (DPMO) with the corresponding Sigma level. Also shown is a direct conversion to a Cpk level, based on the area under a normal curve.

* The table assumes a 1.5 sigma shift because processes tend to exhibit instability of that magnitude over time. In other words, although statistical tables indicate that 3.4 defects per million is achieved when 4.5 process standard deviations (sigma) lie between the mean and the closest specification limit, the target is raised to 6.0 standard deviations to accommodate adverse process shifts over time and still produce only 3.4 defects per million opportunities. The 1.5 sigma shift may or may not be an accurate estimate of the actual long-term instability of your process. In essence, it means that if you intend to have 3.4 DPMO over the long term, the process must be more capable than the 4.5 sigma (a Cpk of 1.5) indicated by a normal curve, in order to accommodate instability or process shifts that occur over time.

Note: the conversion of Sigma level to Cpk is only an approximation, because Cpk is based only upon the specification limit closest to the process mean. The other side of the process distribution, which may have a tail beyond the farther specification limit, is ignored by the Cpk calculation.
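The arithmetic behind such a table can be reproduced from the area under a normal curve. A minimal sketch (the function names are my own, and the Cpk ≈ Sigma/3 rule is the usual short-term convention, which, as noted above, is only an approximation):

```python
from math import erfc, sqrt

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a given (short-term) Sigma level.

    Counts only the tail beyond the nearest specification limit, after
    shifting the mean by `shift` sigma toward that limit.
    """
    z = sigma_level - shift           # effective distance, e.g. 6.0 -> 4.5
    tail = 0.5 * erfc(z / sqrt(2))    # one-sided area under the normal curve
    return tail * 1_000_000

def cpk(sigma_level: float) -> float:
    # Common short-term convention: Cpk = Sigma level / 3, so 6 sigma ~ Cpk 2.0.
    return sigma_level / 3

print(f"{dpmo(6.0):.1f}")  # ~3.4 defects per million opportunities
print(f"{cpk(6.0):.2f}")
```

With the 1.5 sigma shift, a 6.0 sigma process has only 4.5 sigma left between the shifted mean and the nearest specification limit, which is why `dpmo(6.0)` and an unshifted 4.5 sigma both come out to about 3.4 defects per million.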
