SIX SIGMA FOR PROJECT MANAGERS
Steve Neuendorf
Vienna, VA
About the Author
Steve Neuendorf has more than 25 years of consulting, management, industrial engineering, and measurement experience, including 15 years directly related to the management, measurement, and improvement of software engineering projects and processes. He also has extensive management consulting experience. He holds BA and MBA degrees from the University of Puget Sound, with postgraduate work in information management, and a JD degree from Seattle University. His teaching experience ranges from academic instruction to hands-on workshops.
CHAPTER 1
What Is Six Sigma?
Sigma (σ) is the Greek symbol used in statistics to indicate the statistical property of a set of grouped data called “standard deviation.” If you are fairly well versed in statistics, you would say that is pretty elementary; if not, you would say it’s Greek to you. Either way you would be right.
“Grouped data” refers to any set of data that are somehow related. If you are rolling a pair of dice, for example, the results of every roll, for however many rolls, would be grouped data, because all the data come from the same system of rolled dice. Eventually, after rolling the dice enough times and analyzing the results, you no longer need to roll the dice and collect and analyze the data to predict the likelihood of the next roll or several rolls. We would say that rolling a 7 is the most likely outcome, since there are six ways the two dice can land to give a 7, just as we can say that rolling a 2 or a 12 is the least likely outcome, since there is only one way to roll either of those results.
With some analysis, we could create a “probability distribution” for rolling the dice that shows the probability of rolling any particular result. Many statistics can be derived from this type of data, but the ones we are interested in are the standard deviation, σ, and the average or mean (which, for a symmetric distribution such as this one, is also the most likely result).
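To make the dice example concrete, here is a minimal sketch in Python (our illustration, assuming a standard pair of six-sided dice) that enumerates the 36 equally likely outcomes, builds the probability distribution, and derives the mean and standard deviation just described:

    from collections import Counter
    import math

    # Enumerate all 36 equally likely outcomes of rolling two dice.
    counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
    probs = {total: n / 36 for total, n in counts.items()}

    # Mean and standard deviation of the two-dice distribution.
    mean = sum(total * p for total, p in probs.items())
    sigma = math.sqrt(sum((total - mean) ** 2 * p for total, p in probs.items()))

    print(f"P(7)  = {probs[7]:.3f}")   # 0.167 -- six ways out of 36
    print(f"P(2)  = {probs[2]:.3f}")   # 0.028 -- one way out of 36
    print(f"mean  = {mean:.1f}")       # 7.0
    print(f"sigma = {sigma:.2f}")      # about 2.42

Running this confirms the intuition: 7 is the most likely total, 2 and 12 are the least likely, and the distribution has a mean of 7 with a standard deviation of about 2.42.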
For any given distribution, the percentage of the results that falls within 1 standard deviation of the mean is a constant. Further, the percentage of results that falls within any number of standard deviations is a constant. For a normal distribution, about 68% of the results will fall within 1 standard deviation of the mean (1σ), about 95% will fall within 2 standard deviations of the mean (2σ), and about 99.7% will fall within 3 standard deviations (3σ) (see Figure 1-1). For the mathematical derivation of six sigma, the area under the curve within 6 standard deviations is 99.9999998%; in other words, about 2 parts per billion (ppb) fall outside the curve. To illustrate, 2 ppb of the world’s population (as of early 2003) would be about 12 people.
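These coverage figures can be verified directly. For a normal distribution, the fraction of results within k standard deviations of the mean is erf(k/√2), so a few lines of Python (again, our sketch rather than anything from the six sigma literature) reproduce the percentages above:

    import math

    # Fraction of a normal distribution falling within k standard
    # deviations of the mean: erf(k / sqrt(2)).
    for k in (1, 2, 3, 6):
        coverage = math.erf(k / math.sqrt(2))
        print(f"within {k} sigma: {coverage:.10%}")

    # within 1 sigma: 68.2689492137%
    # within 2 sigma: 95.4499736104%
    # within 3 sigma: 99.7300203937%
    # within 6 sigma: 99.9999998027%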
While six sigma is at its roots all about statistics, in its application and practice it has very little to do with statistics, except for a very few people. For project managers, it is likely to have little, if anything, to do with statistics. By analogy, temperature is all about the average speed of molecules, but making a room warmer has more to do with starting a fire, turning up the thermostat, or shutting the window than it does with the molecules and how they are moving. In the analogy, the result (a warmer room) is what is important; in six sigma, improved quality is what is important.
Nevertheless, we should note that standard deviation is a property of the number set. Since the number set represents the measured results of a process, we can say that standard deviation is a property of the process. Any process will also have “design objectives” as defined by its creators. This property is the “spec limit” or the “performance standard” for that process.
If we look at a manufacturing process, the spec limit might be expressed as a range about a dimension, such as a length of 10” ± 0.1”. If the process is returning a customer call, the spec limit or performance standard might be stated as “by the end of the next business day.” No matter what it is, we can treat the specification limit as independent of the observed statistics (mean and standard deviation) for that process. If the process is designed so that the performance distribution falls well within the spec limit or performance standard, then we would say we have “good quality”; that is, very few defects as defined by comparing the results with the spec limit or performance standard. On the other hand, if the process performs so that many of the results fall outside the spec limit, then we would say we have poor quality in the results.
So what can we do?
Changing the spec would be effective, but tough to do in a world of interchangeable parts or where a good first impression on the customer matters. Changing the results (i.e., fixing all the defects) would be effective but expensive and wasteful. Or we can change the process to make sure that the most probable outcome is a result within the specification limits. Remember, the “tighter” the process (i.e., the smaller the standard deviation), the higher the percentage of the results that falls within the specification limits or performance standards.
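To see what “tightening” the process buys, consider a short sketch (hypothetical numbers, reusing the 10” ± 0.1” length example from above) that computes the fraction of output falling outside the spec limits for a loose process and a tight one:

    import math

    def normal_cdf(x, mu, sigma):
        """Probability that a normal(mu, sigma) result is at or below x."""
        return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

    # The length example from above: target 10.0", spec limits 9.9" to 10.1".
    lsl, usl, mu = 9.9, 10.1, 10.0

    # A "loose" process (spec half-width = 2 sigma) vs. a "tight" one (= 6 sigma).
    for sigma in (0.05, 0.1 / 6):
        out_of_spec = normal_cdf(lsl, mu, sigma) + (1 - normal_cdf(usl, mu, sigma))
        print(f"sigma = {sigma:.4f}: {out_of_spec:.7%} out of spec")

    # sigma = 0.0500: about 4.55% of parts out of spec
    # sigma = 0.0167: about 0.0000002% out of spec (roughly 2 ppb)

The same spec limits yield wildly different defect rates depending only on how tightly the process performs, which is exactly the leverage six sigma aims to exploit.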
Six sigma, or any statistical process control tool for that matter, is really about developing a congruency between the specification limit (performance standard) and process performance, such that the process is designed to perform within the specification limit. Six sigma goes one step further in “normalizing” the measurement of quality for use in any type of process or activity.
Just in the examples we have used so far (the dice, the temperature, the length, and the call return) we have four different units of measure: integers, degrees, inches, and business days. To the people responsible for any one of these processes, the unit of measure has meaning and usefulness, but to someone responsible for all of these processes collectively, the mixed measures would only obscure the information needed to manage these disparate processes effectively.
What six sigma does at the result level is express measurement as “defects,” which can be defined as any failure to meet the specification limit or performance standard, and opportunities for defects, which can be defined as any place where the measurement could have indicated that a defect occurred. Further, the quality level is usually expressed as “defects per million opportunities for defects” (DPMO), or more succinctly as ppm (parts per million). With any activity’s quality performance expressed as defects per million opportunities, one can effectively compare performance between very different activities and make decisions about where and how to focus improvement activity and resources.
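The DPMO figure itself is simple arithmetic, as a brief sketch illustrates (the figures and the function name are made up for illustration):

    def dpmo(defects, units, opportunities_per_unit):
        """Defects per million opportunities for defects."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    # Hypothetical figures: 150 defective fields found on 5,000 forms,
    # where each form has 20 fields (20 opportunities for a defect).
    print(f"{dpmo(150, 5_000, 20):,.0f} DPMO")   # 1,500 DPMO

Because the call-return process and the machining process can each be scored this way, their quality levels become directly comparable despite their different native units of measure.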
As noted, if we measure any process, the results will be a set of grouped data. If we are measuring the number of defects produced by our process, we know quantitatively how much the process produced and how many defects that output contained.
Processes are somewhat like the concept of inertia in physics, which is that an object will remain in a constant state until it is acted upon by an outside force. Processes will tend to reproduce the same results until they are acted upon by an outside force. So, if you have a process that tends to produce so many defects in some measured amount of its output, that process can be expected to continue to produce that same number of defects until someone does something to change it. Once the process is changed and it has a new characteristic for producing defects (or for producing defect-free output), it will continue to operate at that level until it is acted upon again.
Organizations tend to think that projects are the alternative to process. That is, if something is done routinely there is no need to initiate a project to get it done. If we need something done just once or done differently from what we have done in the past, we might charter a project.