The concepts surrounding the drive to Six Sigma quality are
essentially those of statistics and probability. In simple language,
these concepts boil down to, "How confident can I be that what I
planned to happen actually will happen?" Basically, the concept of Six
Sigma deals with measuring and improving how close we come to
delivering on what we planned to do.
Anything we do varies, even if only slightly, from the plan. Since no
result can exactly match our intention, we usually think in terms of
ranges of acceptability for whatever we plan to do. Those ranges of
acceptability (or tolerance limits) respond to the intended use of the
product of our labors: the needs and expectations of the customer.
Here's an example. Consider how your tolerance limits might be
structured to respond to customer expectations in these two
instructions:
"Cut two medium potatoes into quarter-inch cubes." and "Drill and tap
two quarter-inch holes in carbon steel brackets."
What would be your range of acceptability, or tolerances, for the
value quarter-inch? (Hint: a 5/16" potato cube probably would be
acceptable; a 5/16" threaded hole probably would not.) Another
consideration in your manufacture of potato cubes and holes would be
the inherent capability of the way you produce the quarter-inch
dimension: the capability of the process. Are you hand-slicing
potatoes with a knife or are you using a special slicer with preset
blades?
Are you drilling holes with a portable drill or are you using a drill
press? If we measured enough completed potato cubes and holes, the
capabilities of the various processes would speak to us. Their
language would be distribution curves.
Distribution curves tell us not only how well our processes have done;
they also tell us the probability of what our process will do next.
Statisticians group those probabilities in segments of the
distribution curve called standard deviations from the mean. The
symbol they use for standard deviation is the lower-case Greek letter
sigma.
For any process with a normal distribution (something that looks
like a bell-shaped curve), the probability is 68.26% that the next
value will be within one standard deviation of the mean. The
probability is 95.44% that the same next value will fall within two
standard deviations. The probability is 99.73% that it will be within
three sigma, and 99.994% that it will be within four sigma.
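These coverage figures can be checked directly, assuming a normal distribution. The short sketch below uses only Python's standard library; the `coverage` helper is an illustration, not part of any Six Sigma toolkit:

```python
import math

def coverage(k_sigma: float) -> float:
    """Probability that a normally distributed value falls within
    +/- k_sigma of the mean: erf(k / sqrt(2))."""
    return math.erf(k_sigma / math.sqrt(2))

# Reproduces the percentages quoted above.
for k in (1, 2, 3, 4):
    print(f"{k} sigma: {coverage(k):.4%}")
# 1 sigma: 68.2689%
# 2 sigma: 95.4500%
# 3 sigma: 99.7300%
# 4 sigma: 99.9937%
```

Note that 99.9937% at four sigma rounds to the 99.994% quoted in the text.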
If the range of acceptability, or tolerance limit, for your product is
at or outside the four sigma point on the distribution curve for your
process, you are virtually assured of producing acceptable material
every time, provided, of course, that your process is centered and
stays centered on your target value.
Unfortunately, even if you can center your process once, it will tend
to drift. Experimental data show that most processes that are in
control still drift about 1.5 sigma on either side of their center
point over time.
This means that the real probability of a process with tolerance
limits at four sigma producing acceptable material is actually more
like 99.38%, not 99.994%.
To reach near-perfect process output, the process capability curve
must fit inside the tolerances such that the tolerances are at or
beyond six standard deviations, or Six Sigma, on the distribution
curve. That is why we call our goal Six Sigma quality.
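As a sketch of the arithmetic behind these claims, the snippet below computes the in-spec yield for tolerances set at k sigma when the process mean has drifted 1.5 sigma toward one limit (the conventional Six Sigma assumption). At six sigma it reproduces the well-known 3.4 defects per million:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def shifted_yield(k_sigma: float, shift: float = 1.5) -> float:
    """Fraction of output in spec when tolerances sit at +/- k_sigma
    but the process mean has drifted `shift` sigma toward one limit."""
    return phi(k_sigma - shift) + phi(k_sigma + shift) - 1.0

for k in (3, 4, 6):
    dpmo = (1.0 - shifted_yield(k)) * 1e6
    print(f"{k} sigma tolerances, 1.5 sigma drift: "
          f"yield {shifted_yield(k):.4%}, about {dpmo:,.1f} DPMO")
```

At four sigma the drifted yield comes out near 99.38%, matching the figure used in the archery story later in the article; at six sigma the defect rate drops to roughly 3.4 per million opportunities.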
Quality makes us strong
In the past, conventional wisdom said that high levels of quality cost
more in the long run than poorer quality, raising the price you had to
ask for your product and making you less competitive. Balancing
quality with cost was thought to be the key to economic survival. The
surprising discovery of companies which initially developed Six Sigma,
or world-class, quality is that the best quality does not cost more.
It actually costs less. The reason for this is something called
cost-of-quality. Cost-of-quality is actually the cost of deviating
from quality: paying for things like rework, scrap, and warranty
claims. Making things right the first time, even if it takes more
effort to get to that level of performance, actually costs much less
than creating, then finding and fixing, defects.
Shooting for Six Sigma:
An illustrative fable
The underlying logic of Six Sigma quality involves some understanding
of the role of statistical variation. Here's a story about that. Robin
Hood is out in the meadow practicing for the archery contest to be
held next week at the castle. After Robin's first 100 shots, Friar
Tuck, Robin's Master Black Belt in archery, adds up the number of hits
in the bull's eye of each target. He finds that Robin hit within the
bull's eye 68% of the time.
Friar Tuck plots the results of Robin's target practice on a chart
called a histogram. "Note that the bars in the chart form a curve that
looks something like a bell," says the friar. "This is a normal
distribution curve. Every process
that varies uniformly around a center point will form a plot that
looks like a smooth bell curve, if you make a large enough number of
trials or, in this case, shoot enough arrows."
Robin scratches his head. Friar Tuck explains that Robin's process
involves selecting straight arrows (raw material); holding the bow
steady and smoothly releasing the bowstring (the human factor); the
wood of the bow and the strength of the string (machinery); and the
technique of aiming to center the process on the bull's eye
(calibration and statistical process control).
The product of Robin's process is an arrow in a target. More
specifically, products that satisfy the customer are arrows that
score. Arrows outside the third circle on these targets don't count,
so they are defects. Robin's process appears to be 100% within
specification. In other words, every product produced is acceptable in
the eyes of the customer.
"You appear to be a three- to four-sigma archer," the friar continues.
"We'd have to measure a lot more holes to know for sure, but let's
assume that 99.99% of your shots score: that you're a four-sigma
shooter." Robin strides off to tell his merry men.
The next day, the wind is constantly changing directions; there is a
light mist. Robin thinks he feels a cold coming on. Whatever the
reason, his process doesn't stay centered on the mean the way it did
before. In fact, it drifts unpredictably as much as 1.5 sigma either
side of the mean. Now, instead of producing no defects, after a
hundred shots Robin has produced a defect: a hole outside the third
circle. In fact, instead of 99.99% of his shots scoring, only 99.38% do.
While this may not seem as if much has changed, imagine that, instead
of shooting at targets, Robin was laser-drilling holes in turbine
blades. Let's say there were 100 holes in each blade. The probability
of producing even one defect-free blade would not be good. (Because
the creation of defects would be random, his process would produce
some good blades as well as some blades with multiple defects.)
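The blade arithmetic can be made concrete. Assuming, as in the story, that each of 100 holes independently comes out good 99.38% of the time (illustration numbers, not real process data):

```python
# Probability that every hole in a blade is good, per-hole yield 0.9938.
p_hole_good = 0.9938
holes_per_blade = 100

p_blade_good = p_hole_good ** holes_per_blade
print(f"Four-sigma driller, defect-free blade: {p_blade_good:.1%}")

# For contrast, a six-sigma process (about 3.4 defects per million holes).
p_blade_good_6sigma = (1.0 - 3.4e-6) ** holes_per_blade
print(f"Six-sigma driller, defect-free blade:  {p_blade_good_6sigma:.2%}")
```

At the shifted four-sigma rate, only about half the blades come out with all 100 holes good, so nearly every shipment requires inspection, rework, or scrap; at six sigma, more than 99.9% of blades are defect-free.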
Without inspecting everything many times over (not to mention spending
an enormous amount for rework and rejected material), Robin, the laser
driller, would find it virtually impossible to ever deliver even one
set of turbine blades with properly drilled holes.
Not only would the four-sigma producer have to spend much time and
money finding and fixing defects before products could be shipped, but
since inspection cannot find all the defects, she would also have to
fix problems after they got to the customer. The Six Sigma producer,
on the other hand, would be able to concentrate on only a handful of
defects to further improve the process.
How can the tools of Six Sigma quality help? If Robin the archer were
to use those tools to become a Six Sigma sharpshooter instead of a
four-sigma marksman, when he went out into the wind and rain, he would
still make every shot score. Some arrows might now be in the second
circle, but they would all still be acceptable to the customer,
guaranteeing first prize at the contest. Robin the laser driller would
also succeed; he would be making virtually defect-free turbine blades.
The steps on the path to Six Sigma quality:
1. Measurement
Six Sigma quality means attaining a business-wide standard of making
fewer than 3.4 mistakes per million opportunities to make a mistake.
This quality standard includes design, manufacturing, marketing,
administration, service, support-all facets of the business. Everyone
has the same quality goal and essentially the same method to reach it.
While the application to engine design and manufacturing is obvious,
the goal of Six Sigma performance, and most of the same tools, also
applies to the softer, more administrative processes.
After the improvement project has been clearly defined and bounded,
the first element in the process of quality improvement is the
measurement of performance. Effective measurement demands taking a
statistical view of all processes and all problems. This reliance on
data and logic is crucial to the pursuit of Six Sigma quality.
The next step is knowing what to measure. The determination of sigma
level is essentially based on counting defects, so we must measure the
frequency of defects. Mistakes or defects in a manufacturing process
tend to be relatively easy to define: simply a failure to meet a
specification. To broaden the application to other processes and to
further improve manufacturing, a new definition is helpful: a defect
is any failure to meet a customer satisfaction requirement, and the
customer is always the next person in the process.
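Defect counting is usually normalized as defects per million opportunities (DPMO). A minimal sketch, using entirely hypothetical figures for an administrative process (invoices with five fields where a mistake is possible):

```python
# All three figures below are invented for illustration.
defects = 23          # mistakes found across the sample
units = 1500          # invoices processed (the "unit of work")
opportunities = 5     # chances for a mistake on each invoice

dpmo = defects / (units * opportunities) * 1_000_000
print(f"DPMO: {dpmo:.0f}")
```

Dividing by opportunities rather than units is what lets the same measure compare processes as different as drilling holes and issuing invoices.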
In this beginning phase, you would select the critical-to-quality
characteristics you plan to improve. These would be based on an
analysis of your customer's requirements (usually using a tool like
Quality Function Deployment). After you clearly define your
performance standards and validate your measurement system (with gage
repeatability and reproducibility studies), you would then be able to
determine short-term and long-term process capability and actual
process performance (Cp and Cpk).
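The Cp and Cpk indices named here can be sketched in a few lines. The specification limits and sample measurements below are made up for illustration:

```python
import statistics

lsl, usl = 0.240, 0.260          # spec limits in inches (hypothetical)
data = [0.249, 0.251, 0.248, 0.252, 0.250, 0.247, 0.253, 0.250]

mu = statistics.mean(data)
sigma = statistics.stdev(data)   # sample standard deviation

# Cp: potential capability if perfectly centered.
cp = (usl - lsl) / (6 * sigma)
# Cpk: actual capability, penalized for being off-center.
cpk = min(usl - mu, mu - lsl) / (3 * sigma)

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Because this invented sample happens to be centered exactly between the limits, Cp and Cpk coincide; in practice Cpk is lower than Cp by an amount that reflects how far the mean has drifted.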
2. Analysis
The second step is to define performance objectives and identify the
sources of process variation. As a business, we have set Six Sigma
performance of all processes within five years as our objective. This
must be translated into specific objectives in each operation and
process. To identify sources of variation, after counting the defects
we must determine when, where and how they occur. Many tools can be
used to identify the causes of the variation that creates defects.
These include tools that many people have seen before (process
mapping, Pareto charts, fishbone diagrams, histograms, scatter
diagrams, run charts) and some that may be new (affinity diagrams,
box-and-whisker diagrams, multivariate analysis, hypothesis testing).
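One of the simplest of these tools, the Pareto chart, just ranks defect causes by frequency so attention goes to the vital few. A toy sketch with invented defect counts:

```python
from collections import Counter

# Hypothetical defect log for a machining operation.
defect_log = (["misdrilled hole"] * 42 + ["burr"] * 27 +
              ["wrong depth"] * 9 + ["scratch"] * 5 + ["other"] * 3)

counts = Counter(defect_log)
total = sum(counts.values())
cumulative = 0
for cause, n in counts.most_common():
    cumulative += n
    print(f"{cause:16s} {n:3d}  {cumulative / total:6.1%} cumulative")
```

In this invented data the top two causes account for roughly 80% of all defects, which is the classic Pareto pattern the chart is meant to expose.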
3. Improvement
This phase involves screening for potential causes of variation and
discovering interrelationships between them. (The tool commonly used
in this phase is Design of Experiment or DOE.) Understanding these
complex interrelationships then allows the setting of individual
process tolerances that interact to produce the desired result.
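The core idea of a designed experiment can be shown with a toy two-factor, two-level design. The factor names, coded levels, and yield values below are invented; a main effect is just the average response at a factor's high level minus the average at its low level:

```python
# 2x2 full factorial, levels coded -1 / +1, responses invented.
runs = [
    # (temperature, pressure, yield %)
    (-1, -1, 61.0),
    (+1, -1, 72.0),
    (-1, +1, 64.0),
    (+1, +1, 83.0),
]

def main_effect(factor_index: int) -> float:
    """Average response at the high level minus at the low level."""
    hi = [y for *x, y in runs if x[factor_index] == +1]
    lo = [y for *x, y in runs if x[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(f"temperature effect: {main_effect(0):+.1f}")
print(f"pressure effect:    {main_effect(1):+.1f}")

# Interaction: does temperature's effect depend on pressure?
inter = ((runs[3][2] - runs[2][2]) - (runs[1][2] - runs[0][2])) / 2
print(f"interaction:        {inter:+.1f}")
```

The nonzero interaction term is exactly the kind of interrelationship between factors that one-variable-at-a-time testing misses and DOE is designed to reveal.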
4. Control
In the Control Phase, the process of validating the measurement system
and evaluating capability is repeated to ensure that improvement
occurred. Steps are then taken to control the improved processes.
(Some examples of tools used in this phase are statistical process
control, mistake proofing and internal quality audits.)
Words of Wisdom about Quality
If you believe it is natural to have defects, and that quality
consists of finding defects and fixing them before they get to the
customer, you are just waiting to go out of business. To improve speed
and quality, you must first measure them, and you must use a common
measure.
The common business-wide measures that drive our quality improvement
are defects per unit of work and cycle time per unit of work. These
measures apply equally to design, production, marketing, service,
support and administration.
Everyone is responsible for producing quality; therefore, everyone
must be measured and accountable for quality. Measuring quality within
an organization and pursuing an aggressive rate of improvement is the
responsibility of operational management.
Customers want on-time delivery, a product that works immediately, no
early life failures and a product that is reliable over its lifetime.
If the process makes defects, the customer cannot easily be saved from
them by inspection and testing.
A robust design (one that is well within the capabilities of existing
processes to produce it) is the key to increasing customer
satisfaction and reducing cost. The way to a robust design is through
concurrent engineering and integrated design processes.
Because higher quality ultimately reduces costs, the highest quality
producer is most able to be the lowest cost producer and, therefore,
the most effective competitor in the marketplace.
Steven Bonacorsi is a Certified Lean Six Sigma Master Black Belt
instructor and coach. He has trained hundreds of Master Black Belts,
Black Belts, Green Belts, Project Sponsors, and Executive Leaders in
Lean Six Sigma DMAIC and Design for Lean Six Sigma process improvement
methodologies.
Author for the Process Excellence Network (PEX Network / IQPC)
Steven Bonacorsi, President of International Standard for Lean Six Sigma (ISLSS)
Certified Lean Six Sigma Master Black Belt
47 Seasons Lane
Londonderry, NH 03053
Phone: +1 (603) 401-7047
E-mail: sbona...@islss.com
Process Excellence Network: http://bit.ly/n4hBwu
Article Source: http://EzineArticles.com/?expert=Steven_Bonacorsi