The Six Sigma methodology represents an evolution in quality management that has been widely implemented in industry, healthcare, and laboratory medicine. Six Sigma rests on two important principles: 1) structured problem-solving approaches, such as the define, measure, analyse, improve, and control (DMAIC) improvement cycle, and 2) quantitative statistical analysis. Through their combined use, the Six Sigma process aims to achieve an output imprecision so small that 12 standard deviation (SD) units fit between the upper tolerance limit (UTL) and the lower tolerance limit (LTL) (i.e., 6 SD fit between the target and the UTL or LTL). This is quantified through the sigma metric (SM), which can be converted directly into the number of defects per million opportunities (DPMO) (1).
Quantitatively, the performance of ‘‘world-class’’ processes is 6 sigma, which implies that 3.4 or fewer DPMO are expected to occur. Statistically, however, 6 sigma corresponds to 0.002 DPMO rather than 3.4, since 0.002 parts per million is the area under the standard normal distribution curve that lies outside ± 6 SD from the mean. The difference arises from the introduction of a 1.5 SD shift to account for the random variation of the process around its target (Figure 1). In practice, the shift protects the process from underestimating the rate of non-compliances during the development stage. Consequently, a 6 sigma process is deemed to actually be 4.5 sigma in the routine phase (2). It should be noted that the 1.5 SD shift is defined on the normal distribution and should therefore be applied only to normally distributed data. If the distribution is not normal, the SD cannot be used to measure the shift of the mean. Nevertheless, in laboratory medicine the shift has been included in SM calculations for the extraanalytical phase. In this paper, we aim to 1) explain the reason for including a 1.5 SD shift in the analytical phase, and 2) show that including a 1.5 SD shift in the extraanalytical phase makes the estimated process performance deviate greatly from the actual level.
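Both figures can be reproduced from the standard normal tail areas. A minimal sketch in Python (the function names are ours, chosen for illustration only):

```python
from math import erfc, sqrt

def upper_tail_dpmo(z):
    """Defects per million in the upper normal tail, i.e. P(Z > z) * 10**6."""
    return 0.5 * erfc(z / sqrt(2)) * 1e6

def dpmo(sigma, shift=0.0):
    """Two-sided DPMO for a process at `sigma`, with its mean shifted by `shift` SD."""
    return upper_tail_dpmo(sigma - shift) + upper_tail_dpmo(sigma + shift)

print(dpmo(6.0))       # ~0.002 DPMO for a centred 6 sigma process
print(dpmo(6.0, 1.5))  # ~3.4 DPMO once the 1.5 SD shift is applied
```

With the shift, virtually all of the 3.4 DPMO comes from the tail that is only 4.5 SD away from the (shifted) mean, which is why a "6 sigma" process is effectively a 4.5 sigma one.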
Statistical basis of the 1.5 SD shift
The statistical basis of the 1.5 SD shift comes from the application of Shewhart control charts (Scc), like the X-bar and R chart, to monitor processes in real-time (2, 3). In fact, the probability βout that a shift δ of the mean from its target goes unsignalled, when control limits are set to ± Zα/2 times the standard error, is given by Equation 1 (2):

βout = Φ(Zα/2 − δ√n) − Φ(− Zα/2 − δ√n)   (Eq. 1)

where Φ denotes the standard normal cumulative distribution function, n is the subgroup size, and δ is expressed in SD units.
It can be shown through a simple simulation that βout is acceptable regardless of n (i.e. from 1 to 10) only when δ ≥ 1.5 SD for the process (2). From the standpoint of economic control, βout determines the average length of the production run (i.e. the number of items) spent out of compliance before the shift δ is detected and the process is re-calibrated on its target. There is therefore a kind of “blind spot” in real-time process monitoring with Scc, for which 1.5 SD represents a correction factor (2).
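The claim can also be checked without simulation: for each subgroup size n, the probability that a Shewhart chart fails to signal a shift of δ SD, and the implied average run length 1/(1 − βout) spent out of compliance, can be computed directly. A sketch assuming the conventional 3 SD control limits (i.e. Zα/2 = 3):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2)))

def beta_out(delta, n, z=3.0):
    """Probability that an X-bar chart with +/- z SE limits misses a delta-SD mean shift."""
    return norm_cdf(z - delta * sqrt(n)) - norm_cdf(-z - delta * sqrt(n))

for n in range(1, 11):
    b = beta_out(1.5, n)
    # Average run length before the shift is signalled
    print(f"n={n:2d}  beta_out={b:.3f}  ARL={1 / (1 - b):.1f}")
```

For δ = 1.5 SD the miss probability falls from about 0.93 at n = 1 to essentially zero at n = 10; for smaller shifts it stays unacceptably high even at large n, which is the "blind spot" referred to above.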
Equation 1 suggests at least three reflections: 1) the application of the shift depends on the use of control charts to monitor the process, 2) the process must be stochastically normal (i.e. the output must be normally distributed over time) to be measured in SD units, and 3) the actual size of the shift may differ from 1.5 SD (and may well be smaller). In the industrial field, where Six Sigma was originally developed, observation has indicated that the 1.5 SD shift is appropriate for manufacturing processes, where long-term drift usually occurs (3). In clinical chemistry, where the Scc has been in use since the 1950s to control the analytical process (i.e. the production of test results), the correspondence between the SM and the allowable total error (TEa) model has favoured the inheritance of the 1.5 SD shift (4). However, one may object that analytical processes are, ideally, less prone to long-term drift, owing to the higher frequency of their re-calibration and to their being externally controlled through participation in collaborative trials.
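For the analytical phase, the SM referred to above is conventionally derived from the TEa model as SM = (TEa − |bias|) / CV, with all terms in percent. A minimal sketch (the numerical values below are hypothetical, chosen only to illustrate the calculation):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Analytical sigma metric: (allowable total error - |bias|) / imprecision, all in %."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa = 10%, bias = 1%, CV = 1.5%
print(sigma_metric(10.0, 1.0, 1.5))  # 6.0
```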
Sigma metric in the extraanalytical phase
There is, however, another field in laboratory medicine – non-analytical processes – where the use of a 1.5 SD shift looks even more dubious. In 2000, Nevalainen et al. used the SM to rate the quality of processes of the extraanalytical phases (i.e. pre-analytical and post-analytical phases) of laboratories (5). In their work, the SM was obtained directly from the observed DPMO, using the “classical” industrial tables in which the conversion already incorporated a 1.5 SD shift.
In the years that followed, the work of Nevalainen et al. became the template for studies in which a quality indicator (QI) is applied to estimate the performance of a process within the total testing process (TTP). Unfortunately, this seminal work disregarded the fact that the calculation of the SM, as well as the application of the shift, depends on the validity of Eq. 1. Considering that extraanalytical processes are neither likely to be normally distributed nor monitored by Scc, it is concerning that the SM-to-DPMO conversion was accepted without further investigation (2, 6). The reason may lie in the application of a QI that produces a binomial measure (i.e. the number of non-compliances), which follows a probability distribution that becomes nearly normal when the number of observations is adequately large (i.e. N > 20). Notably, this cannot be considered a sufficient condition, as the QI is a collated statistic of the output and not the output itself (7).
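The near-normality invoked here is easy to illustrate: for a large enough sample, the binomial count of non-compliances is well approximated by a normal density with the same mean and SD. A sketch with hypothetical values (N = 100 items and a 5% non-compliance rate); note that, as argued above, this says nothing about the distribution of the underlying output itself:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(k, n, p):
    """Exact binomial probability of observing k non-compliances out of n items."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sd):
    """Normal density used as an approximation to the binomial count."""
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

n, p = 100, 0.05
mu, sd = n * p, sqrt(n * p * (1 - p))
for k in range(2, 9):
    print(f"k={k}  binomial={binom_pmf(k, n, p):.4f}  normal={normal_pdf(k, mu, sd):.4f}")
```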
Remarkably, the error rate of the components within the TTP is heterogeneous, with the highest error rate seen in the pre-pre-analytical phase and the lowest in the analytical phase. In the past decades, the analytical error rate has decreased significantly, and consequently more than 90% of errors are now extraanalytical in nature (8). Therefore, a 1.5 SD shift may be reasonable only for the analytical phase, while including it as a dogma in the other phases would unnecessarily inflate the estimated error rate of the TTP. For example, if the SM of the post-analytical phase is 4, the DPMO is 32, not 6210. In the extraanalytical phases, evidence is needed before a 1.5 SD shift of the process can be included. From a pragmatic point of view, for example, it is not easy to find evidence justifying a 1.5 SD shift in the pre-pre-analytical phase. This should be of particular concern considering the heterogeneous nature of extraanalytical processes, which may give rise to different long-term inflations of the error rate. For instance, in a totally automated laboratory the intra-laboratory turnaround time (TAT) depends on the operation of identical automated units that are subject to mechanical wear and maintenance, whereas the extra-laboratory TAT (i.e. patient-to-laboratory delivery time) mostly depends on the physical activity of human operators and their turnover (8). Thus, it is unwise to assume a priori that the error rates of different extraanalytical processes will inflate in exactly the same way over time.
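The arithmetic behind the 32 versus 6210 figures uses the one-sided normal tail, as in the industrial conversion tables. A sketch (the function name is ours):

```python
from math import erfc, sqrt

def dpmo_one_sided(sm, shift=0.0):
    """One-sided DPMO for sigma metric `sm`, with an optional mean shift in SD units."""
    return 0.5 * erfc((sm - shift) / sqrt(2)) * 1e6

print(round(dpmo_one_sided(4.0)))       # 32   (no shift)
print(round(dpmo_one_sided(4.0, 1.5)))  # 6210 (with the 1.5 SD shift)
```

Applying the shift to a phase that does not actually drift thus overstates its defect rate by roughly two hundred-fold at SM = 4.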
We have concluded that a 1.5 SD shift should no longer be considered a kind of universal constant to be applied to the SM across different fields regardless of their nature. In SM calculation, the inclusion or exclusion of a 1.5 SD shift should be evidence-based, considering each analyte and process independently. To this end, a first step should be to discourage the use of tables in which the “normal” SM with a 1.5 SD shift is provided, because the relationship between DPMO and SM is non-linear, so a small change in the SM can result in a huge inflation of the DPMO (Figure 2) (9). Furthermore, investigation of the dynamics of extraanalytical processes – especially whether, and how much, they can drift – should be encouraged. Otherwise, the risk of continuing to use a constant 1.5 SD shift in laboratory medicine is that users will be misled, and the application of Six Sigma in this field will be blunted.