Implementing Six Sigma: Success and Failure
Critical analysis of Six Sigma implementation
Kamran Moosa^a,∗ and Ali Sajid^b
^a CEO, PIQC Institute of Quality, Lahore, Pakistan; ^b Institute of Business Management, University of Engineering & Technology, Lahore, Pakistan. ∗Corresponding author. Email: kamranmoosa@piqc.com.pk
The Six Sigma programme has recently gained popularity throughout the world. There are a number of claims of its successes as well as its failures. Claims of success are mostly supported in the literature by the popular case studies of Motorola, GE, and some other American companies. Based on the popularity of these success stories, many companies started implementing this programme. Some did so successfully, while many failed to achieve the desired results. This paper explores and analyses the critical success and failure factors of implementing Six Sigma in organisations, based on lessons drawn from real-life practices and case studies as well as the available literature. The paper also draws useful conclusions and recommendations for strategists, CEOs and quality managers on how to implement Six Sigma effectively.
Keywords: Six Sigma implementation; Six Sigma metric; Six Sigma structure; Six Sigma methodology; total quality management (TQM); ISO 9000
Background
Total Quality Management (TQM) is a sub-discipline of management science which aims
to define, set, control and improve the effectiveness of an organisation within its con-
straints. It has been given different names and labels during its evolution over the last 60 years or so, such as quality control (QC), quality assurance (QA), total quality control (TQC), company-wide quality control (CWQC), TQM, and quality management systems (QMS). Over this period, a paradigm shift has occurred in the core concept of TQM: the process of measurement, control and improvement has expanded from the testing/inspection departments to all departments in all types of firms, whether manufacturing or service. An organisation that has implemented TQM is essentially applying the philosophy of standardisation, customer satisfaction and continual improvement. For this purpose, the field uses many tools, methods, standards and programmes that are continually evolved by top-class practising companies, practitioners and academicians. Six Sigma is the latest entry in this field.
Introduction
Six Sigma is an improvement methodology in the field of Total Quality Management
(TQM). It is defined as ‘a methodology for pursuing continuous improvement in customer
satisfaction and profit that goes beyond defect reduction and emphasizes business process
improvement in general’ (Breyfogle III, 2003). It aims for an error free business environ-
ment (Pyzdek, 2003). It was originally introduced in the US by Motorola in the late 1980s
and became popular elsewhere in the early 1990s. The case studies of Motorola, followed by GE and Allied Signal, generated interest in Six Sigma (Breyfogle III, 2003). The tool became the focus of attention for CEOs and quality managers in the late 1990s, at a time when ISO 9000 was stagnating and criticism of its effectiveness in driving improvement in organisations was rising.
Other TQM tools such as Kaizen and quality circles have also been in use for decades but never achieved comparable popularity in companies. They were initially found in Japanese firms or their associates but were later adopted by firms throughout the world. The literature and real-life observations provide ample evidence of the success of these Japanese tools in all sorts of companies and in all parts of the world, especially Asian countries.
These tools were generally labelled as processes for slow incremental improvements.
They are considered as tools of problem solving and improvement at the level of
workers or supervisors. In terms of complexity, Kaizen is the simplest tool, addressing common-sense problems through a suggestion system, whereas quality circles/teams provide problem-solving platforms that use elementary-level analysis and statistical tools. Six Sigma, on the other hand, is a tool to be practised at the level of management to deal with complex organisational problems that need extensive analysis of data, confirmation of results, and validation of long-term actual benefits. In other words, it offers management greater promise for solving deep-rooted and complex performance issues in their organisations. It is, therefore, usually labelled as a tool that strives for breakthrough improvements rather than slow and simple ones.
This paper will present the key elements of a Six Sigma programme and their critical
analysis. It will include: (1) implementation models; (2) relationship between Six Sigma
and ISO 9001 QMS; (3) Six Sigma – an approach, a methodology, a metric, or a pro-
gramme; (4) a social taboo; (5) problem solving – oversimplified; (6) how much statistics;
(7) misconceptions about 3.4 DPMO; (8) Six Sigma structure; (9) Six Sigma deployment;
and (10) conclusion.
Research methodology
This paper is a critical review of the subject of Six Sigma from an academic, as well as an
application, point of view. It is the result of extensive literature study as well as many real
life Six Sigma implementation processes and observations. Case studies, as such, are not presented in this paper; rather, an exhaustive review of the Six Sigma application phenomenon was carried out in order to identify the key factors involved in it. The paper mainly identifies some important practical phenomena that are usually neglected by academicians in their research. These should help other researchers to plan their own research.
Implementation models
Many professionals, including TQM practitioners, sometimes confuse two things with each other: (1) TQM assessment models and (2) TQM implementation models (Moosa, 2007b). The TQM assessment models are commonly known as Quality Award Criteria, Business Excellence Models, Six Sigma or even ISO 9001 QMS, and provide the contents of what may be considered TQM. Whether it is the DMAIC methodology of Six Sigma (Define, Measure, Analyse, Improve, and Control), the PDCA model of ISO 9000 (Management Responsibility, Resource Management, Product Realisation, and Measurement, Analysis and Improvement) or the Business Excellence Model of the European Quality Award, they all provide a checklist of activities or requirements and of what is
required from a firm or an organisation. These models are prepared from the point of view
of assessors who assess whether these organisations fulfill these requirements.
However, when it comes to the deployment or implementation of these models, practitioners do not find any standard methodology or model that spells out how to implement them. For example, ISO 9001 QMS is a set of requirements, whereas ISO 9000 and ISO 9004 are guidelines that describe what is meant by ISO 9001, not how to implement it. Similarly, DMAIC is the standard problem-solving process of the Six Sigma methodology; it is not a guideline on how to implement the requirements. It may not be possible to come up with a common set of standards on implementation, as every organisation has different resources, context, history, competitive position, skills, technology, leadership, focus of attention, and so forth. Lascelles and Dale (1991) identified six levels for the adoption of TQM: uncommitted, drifter, tool pusher, improver, award winner, and world class. Analysed critically, however, these turn out to be types of TQM implementation rather than stages of TQM implementation. The same is seen in the case of Six Sigma. A number of case studies in the literature (Breyfogle III, 2003; Pyzdek, 2003; McCarty, 2005) describe different styles of Six Sigma implementation; however, no proven or tested model that can be generalised has yet been identified.
The success and failure of most Six Sigma programmes largely depend upon their
implementation rather than their contents. Though not much data is available on the failure and success rates of Six Sigma implementation, it is generally believed that only a small number of the organisations that start this programme succeed, whereas a large number fail. Based on the author's observation of 12 firms adopting Six Sigma, only about a quarter of these programmes achieved some significant improvements in their organisations. Similarly, Eskildson (1994), based on survey results, states that the two main reasons for the failure of quality initiatives are vague definitions and objectives of TQM or its tools, and inappropriate implementation. Thus,
it is an issue of considerable concern to identify why these quality initiatives or pro-
grammes fail so often and how to improve their level of implementation.
Relationship between Six Sigma and ISO 9001 QMS
The second revision of ISO 9001, issued in 2000, also emphasised improvement. It introduced the
‘process approach’ as one of the key ingredients of a Quality Management System
(QMS). This was based on the world-popular Deming Cycle (PDCA). The emphasis
was on ‘Quality Improvement’, rather than just assuring quality. However, no tool or
methodology is prescribed in the standard. As a result, even when they know the requirements and the process of improvement actions, most quality managers, auditors and consultants are neither focused on nor skilled in the tools of quality improvement. Without understanding
and implementing effective tools of quality improvement, for example Six Sigma, the
revised standard has in fact not made any significant impact in organisations. As a
result, while some companies claim benefits from this standard, most do not. It is, there-
fore, also important for companies already implementing ISO 9001:2000 to carefully
integrate their QMS with Six Sigma in order to achieve its full benefits. At the same time, Six Sigma has not proved sustainable in environments where only a weak QMS or QA programme is in place. It is therefore important for companies first to identify the weaknesses in their existing QMS and then to strengthen it by integrating it properly with the Six Sigma methodology, so as to ensure the success of both.
Six Sigma – an approach, methodology, metric, or programme?
This is a common point of confusion for laymen. In fact, all of these terminologies are commonly used and practised. As an approach, it means that the management of a firm agrees to adopt a data-based problem-solving approach when addressing business and quality-related problems, covering all business processes. As a methodology, it means that these problems are solved by management teams through a sequence of steps called DMAIC; these steps are recognised as a scientific method of problem solving (also taught in universities to researchers as a research methodology). Problems are identified as projects and then solved step by step, with a number of statistical and analytical tools defined at each step. As a metric, it uses sigma, DPMO (defects per million opportunities) and RTY (rolled throughput yield), instead of the commonly used DPU (defects per unit), as explained in the next section. When the term Six Sigma programme is used, it implies a Six Sigma
management system which encompasses both the Six Sigma metric and Six Sigma meth-
odology. It is when Six Sigma is implemented as a management system that organisations
see the greatest impact (McCarty, 2005). Six Sigma teams must always be sufficiently trained to ensure appropriate competence in the use of the necessary tools and techniques (trained members are commonly known as green and black belts). If such teams cease to function, the Six Sigma programme stops.
A social taboo
One of the biggest differences between an underdeveloped and a developed country is the use of a systematic approach in everything that people do or do not do. The concept of 'systems' is practically very weak in most underdeveloped countries; it is the application of a systems approach that makes a country developed. We commonly observe the violation of systems as a way of life in underdeveloped or developing countries. This difference usually strikes people from underdeveloped countries when they visit developed countries, where 'systems' are a way of life. Systems are essentially the routines followed by people in general, where reminders are not required and the honesty of commitment is never questioned.
Whether or not systems are followed in daily life is always reflected in management styles. In a society where systems are often violated by people, this by itself becomes a management style (or habit). Such styles are often the cause of the failure, or lack of sustainability, of any management programme introduced in a firm. Most research also neglects to mention such styles as a root cause. Thus, when failures occur, our attention is quite often focused on finding technical flaws in the programmes rather than in our management styles. When this happens, TQM in general and Six Sigma in particular suffer from these management styles, and most implementation programmes end in failure. It is therefore important to recognise this social taboo and focus on it to improve the rate of success, especially in the Middle East and the subcontinent.
People often tackle this issue by looking for individuals who can work effectively in a non-systematic environment. Competent and energetic people are sought who can deal with problems competently and resolve issues on a day-to-day basis without causing any disturbance. Organisations in underdeveloped countries are usually run this way. Although this does solve problems at some level, it also creates more problems elsewhere: the work becomes dependent on these special people (champions), job rotation becomes impossible, creativity is stifled, de-motivation sets in, and so forth. Problems pop up again and again as people leave or change organisations.
Problem solving – often oversimplified!
The term 'problem solving' is very common in our daily life and thus does not attract much attention when used in the context of quality management. Even a child knows how to solve problems. So what is the big deal with Six Sigma?
Delays, rejections, errors, mistakes, losses, and inefficiencies are problems not just of an organisation as a whole; they are found in every department or section of an organisation. When problems are created regularly by one department, they become a regular feature of other departments' processes. For example, regular delays by a Purchase Department are not just a problem in themselves but a cause of many problems for other departments: planning errors, production delays and long stoppages, maintenance delays and long shut-downs, broken promises by the Sales Department, customer complaints, and employee irritation and dissatisfaction. In turn, all of these problems aggravate further and create more problems, such as customer dissatisfaction and disloyalty, vendor dissatisfaction and disloyalty, and employee dissatisfaction and lower retention rates.
Fixing systems and improving the culture of following systems is therefore an important strategy for quality improvement in general. Given the social taboo discussed earlier, problem solving often becomes an act of correcting (or fixing) the bad incident rather than focusing on the systems behind the incident. This is not just a philosophical point that requires the attention of management, but a technical point that requires problem-solving tools, techniques, methods, and management policies and commitment. For example, when our intention is to discover flaws in systems, we need a strong process of measurement and analysis of those systems, such as long-term data collection on problems, statistical tools, statistical software for processing data, and controlled experimentation. Six Sigma provides statistical and other analytical tools to work through complex problems with the help of the DMAIC methodology (Figure 1). It usually requires sophisticated software such as Minitab® or SPSS®. However, if these tools are not properly learnt by all concerned, the ability to analyse and solve problems remains limited. This means that the core competence of managers needs to be upgraded in statistical and analytical tools, as well as in the use of statistical software.
Figure 1. DMAIC process, goals and usual tools.
How much statistics?
Quality is only as good as the information and data behind it. Solving quality problems
requires that large amounts of data are collected, analysed, deciphered, and acted upon.
Product and service quality is only as good as the quality of the process information
and data generated (for example customer feedback reports, inspection reports). Data
and information therefore must be accessible and understandable to management,
quality improvement teams, and all employees. Statistics can be used to make data and
information understandable for quality decision-making.
Statistics involves collecting, processing, and then presenting data in an understand-
able form. Statistical analysis provides techniques and tools for studying variation and
patterns by examining data samples to estimate the characteristics of the underlying phenomena. More often than not, managers are not even trained in the application of basic statistics, which leaves them with insufficient capability to analyse data and draw inferences from it effectively.
Those who adopt Six Sigma develop a data-driven management style and make use of elementary to medium-level applied statistics in all business function units. Green belts are trained to a basic level in these techniques, while black belts are trained to an advanced level of applied statistics (McCarty, 2005). Two types of statistics are addressed in problem solving: (1) descriptive statistics; (2) inferential statistics.
Descriptive statistics is used for summarising and characterising data. It provides quantitative measures of the characteristics of sample data (such as the average and standard deviation). It has useful applications in almost all areas where quantitative data are collected. It can provide information about the product, the process or some other aspect of the QMS, and may be used in management reviews, for example to summarise key measures of product specifications, describe process performance, characterise delivery times or response rates, and display distributions. It usually includes the use of the mean, median, mode, variance, standard deviation, process capability indices, different types of distributions and control charts.
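As a rough illustration (not part of the original paper), the short Python sketch below computes a few such descriptive measures, including a process capability index, for a small sample of delivery times; the data values, specification limits and variable names are hypothetical.

# Illustrative sketch: descriptive statistics for a hypothetical sample of
# delivery times, including a process capability index (Cpk).
import statistics

delivery_hours = [46, 52, 49, 51, 48, 50, 47, 53, 49, 50, 51, 48]  # hypothetical data
lsl, usl = 40.0, 60.0  # hypothetical lower/upper specification limits

mean = statistics.mean(delivery_hours)
stdev = statistics.stdev(delivery_hours)   # sample standard deviation
median = statistics.median(delivery_hours)

# Cpk compares the distance from the mean to the nearest spec limit with 3 sigma.
cpk = min(usl - mean, mean - lsl) / (3 * stdev)

print(f"mean={mean:.2f}, median={median}, stdev={stdev:.2f}, Cpk={cpk:.2f}")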
Inferential statistics, on the other hand, is about studying a sample (customer feedback, employee feedback, process data, experimental data) and then drawing conclusions about the whole phenomenon or population (the long-term process). It also aims to explore relationships (associations), especially causal relations, followed by their validation. Six Sigma extensively utilises these techniques, such as sampling, probability, hypothesis testing, analysis of variance, correlation, regression analysis, and design of experiments. These techniques, if taught to managers, raise the level of their analytical capability tremendously. Six Sigma includes the investigation of causal relations in complex systems through the use of these statistical techniques.
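As an illustration of the inferential side (again a sketch, not the authors' material), the following Python fragment applies a two-sample t-test of the kind a black belt might use in the Analyse phase to check whether a process change genuinely reduced cycle time; the data and the choice of Welch's test are assumptions for demonstration.

# Illustrative sketch with hypothetical data: did a process change reduce
# average cycle time, or is the apparent improvement just random variation?
from scipy import stats

before = [12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.4, 12.2]  # cycle times before change
after = [11.6, 11.4, 11.9, 11.5, 11.7, 11.3, 11.8, 11.6]   # cycle times after change

t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test

# A small p-value (e.g. below 0.05) suggests the observed improvement is
# unlikely to be explained by random variation alone.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")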
However, it is commonly observed that many training programmes throughout the
world that claim to award Six Sigma black/green belt certification are not capable of developing these skills, resulting in qualified but incapable persons. They provide fancy certificates but very little statistical capability that can be applied to real-life problems.
Misconceptions about 3.4 DPMO
The value of 3.4 Defects per Million Opportunities is commonly quoted as an ultimate
goal of quality in Six Sigma. However, very few understand what this really means.
It is important to note that 3.4 DPMO does not mean 3.4 defects per million products or services produced, although most people are found to have this misunderstanding. Defects per Unit (DPU) is the common day-to-day metric (unit of measure), which we use to measure defects in a whole product without considering its constituent parts. In DPMO, we do take into consideration the number of parts (or opportunities) from which a product is made. The following examples will clarify this point:
Example 1
Suppose you produce 100 pens. Each complete pen has two parts, that is the lid and its
refill. After checking, 10 defects are found in the whole lot of 100 pens. The calculations
will be as follows:
Parts per product: 2
Defects: 10
Units: 100 (products)
DPU: 10/100 = 0.1 or 10% (defects per unit or product)
DPO: 10/(2 × 100) = 0.05 or 5% (defects per opportunity)
DPMO: DPO × 10^6 = 50,000 defects per million opportunities
Sigma level: 3.1 (from sigma table)
Example 2
Now suppose the same lot and the same number of defects, but the number of parts in each pen is four instead of two (as in the previous example). The calculations will now be as follows:
Parts per product: 4
Defects: 10
Units: 100
DPU: 10/100 = 0.1 or 10% (defects per unit, same as previous)
DPO: 10/(4 × 100) = 0.025 or 2.5% (defects per opportunity)
DPMO: DPO × 10^6 = 25,000 defects per million opportunities
Sigma level: 3.4 (from sigma table)
Example 3
Now consider the above problem (example 2) but change the number of defects to 100.
The calculations will now be as follows:
Parts per product: 4
Defects: 100
Units: 100
DPU: 100/100 = 1.0 or 100% (defects per unit)
DPO: 100/(4 × 100) = 0.25 or 25% (defects per opportunity)
DPMO: DPO × 10^6 = 250,000 defects per million opportunities
Sigma level: 2.2 (from sigma table)
Interpretation
• Example 1: The DPU was 10% while the DPO was 5%. DPO means the defect rate per opportunity (part), that is, five defects in 100 parts, or 50,000 defective parts in a million parts.
• Example 2: The DPU was still 10% while the DPO was 2.5%. This means 2.5 defects in 100 components, or 25,000 defective parts in a million parts.
• Example 3: The DPU was 100% while the DPO was 25%. This means one defective part in every four parts. Although 100% of the pens are defective, at the level of parts it is only 25%, that is, 250,000 defective parts in a million.
So it is apparent that the sense of DPU is different from the sense of DPMO. In the above examples, the opportunities are the number of parts; however, opportunities are not always parts only (more study may be required to grasp this further). For instance, the opportunities in a service environment may be the number of questions in a customer feedback form, with a defect being any rating of 1 or 2 (on a scale of 5). Detailed discussion is beyond the scope of this paper.
Many people do not distinguish between DPU and DPMO and thus make conceptual errors. This mistake is quite often seen when people translate sigma levels into common-sense terms. Sigma levels are also calculated on a long-term basis (months and years, rather than hours and days).
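To make the arithmetic above easy to reproduce, the following Python sketch recomputes the three pen examples. The conversion from DPO to a sigma level here uses the conventional 1.5-sigma shift (sigma = z(1 − DPO) + 1.5), which is an assumption on our part, so the result can differ slightly from a printed sigma table (Example 2 comes out at about 3.5 here rather than 3.4).

# Illustrative sketch reproducing the pen examples; the 1.5-sigma shift used in
# the sigma-level conversion is a conventional assumption, not taken from the paper.
from scipy.stats import norm

def six_sigma_metrics(defects, units, opportunities_per_unit):
    dpu = defects / units
    dpo = defects / (units * opportunities_per_unit)
    dpmo = dpo * 1_000_000
    sigma_level = norm.ppf(1 - dpo) + 1.5  # conventional 1.5-sigma shift
    return dpu, dpo, dpmo, sigma_level

for parts, defects in [(2, 10), (4, 10), (4, 100)]:  # Examples 1, 2 and 3
    dpu, dpo, dpmo, sigma = six_sigma_metrics(defects, 100, parts)
    print(f"parts={parts}, defects={defects}: DPU={dpu:.0%}, DPO={dpo:.1%}, "
          f"DPMO={dpmo:,.0f}, sigma = {sigma:.1f}")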
From the above examples, another very important issue can be identified: if your product (or service) comprises more than one component (or service parameter), the resulting DPMO will differ. Table 1 clarifies this concept.
The first column of the table gives the average percentage defect rate for each part of the listed products, and the second column the corresponding sigma level. The third, fourth, fifth, sixth and seventh columns are examples of products made from two, five, 10, 100 and 1000 parts respectively (considering these parts as opportunities). These columns give the yield, or percentage of defect-free products, resulting from the average defect rate in the first column. The first row, for example, considers an average defect rate of 20% for each part used in the product. In that case, a shoe manufacturer will have only 64% defect-free final products. For a pen containing five parts, only 33% of products will be defect-free after assembly; for a shirt with 10 parts, only 11% of shirts will be defect-free after production; for a computer, there will be no (0%) defect-free computers after assembly; and for an automobile, there will likewise be no defect-free cars after assembly.
Looking at the table, it can be seen that with a 1% average defect rate for each part in a product, 98% of shoes, 95% of pens, 90% of shirts, 37% of computers and 0% of cars will turn out defect-free after assembly or production. In order to produce 90% of cars without any defect, the automobile manufacturer would have to hold its process and vendor defect rates to 0.001%. This means that a sigma level of 5.2 at the process or vendor level would produce 2.8-sigma cars (the calculations are beyond the scope of this paper). The same type of sigma calculation can also be carried out for services. Different types of product manufacturers or service deliverers cannot compare their products or service
processes by percentage of defects measured in DPU. The number of parts which make up
their products or service is an important parameter which is not addressed in DPUs but
accounted for in DPMO (or sigma). Therefore, DPMO and sigma values are useful
metrics to compare two different products and their processes. For example, a TV manufacturer can compare its process quality with that of a pen manufacturer. The same is true in the case of services.
This suggests that as the number of parts in a product increases, so does the complexity of quality. It is therefore much more difficult to control the quality of an automobile than that of a pair of shoes. In other words, two processes of equal quality (say, a 5% defect rate) will produce different quality levels of products if the number of parts from which the products are made varies. Process improvement is, therefore, of much higher priority for companies whose products or services comprise many constituent parts.
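A minimal sketch of this relationship, assuming independent defects on each part so that the defect-free yield is (1 − per-part defect rate)^number of parts, is given below; the product names and part counts follow the discussion above and are used for illustration only. It reproduces the yields quoted for the 20% and 1% per-part defect rates.

# Illustrative sketch: defect-free yield after assembly as a function of the
# per-part defect rate and the number of parts, assuming independent defects.
# Product names and part counts mirror the discussion above (assumed values).
products = {"shoe": 2, "pen": 5, "shirt": 10, "computer": 100, "car": 1000}

for defect_rate in (0.20, 0.01):
    row = ", ".join(
        f"{name}: {(1 - defect_rate) ** parts:.0%}" for name, parts in products.items()
    )
    print(f"per-part defect rate {defect_rate:.0%} -> {row}")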