Q Manage Health Care, Vol. 15, No. 4 (October–December 2006), pp. 221–236. © 2006 Lippincott Williams & Wilkins, Inc.
A Statistical Process Control Case Study

Thomas K. Ross, PhD
Statistical process control (SPC) charts can be applied to a wide number of health care applications, yet widespread use has not occurred. The greatest obstacle preventing wider use is the lack of quality management training that health care workers receive. The technical nature of the SPC guarantees that without explicit instruction this technique will not come into widespread use. Reviews of health care quality management texts inform the reader that SPC charts should be used to improve delivery processes and outcomes, often without discussing how they are created. Conversely, medical research frequently reports the improved outcomes achieved after analyzing SPC charts. This article is targeted between these 2 positions: it reviews the SPC technique and presents a tool and data so readers can construct SPC charts. After tackling the case, it is hoped that the readers will collect their own data and apply the same technique to improve processes in their own organization.
Key words: case study, control charts, quality management, SPC, statistical process control
One of W. Edwards Deming's quality principles is to make quality everyone's job.1 Health care organizations have always assumed that quality is everyone's job and everyone knows the difference between quality care and substandard care. Yet health care, by relying on individualistic definitions of quality without performance evaluation systems, has never made quality everyone's job. Consequently, quality management is left to a quality improvement director, department, and/or committee; fails to achieve widespread support in the organization; and produces few tangible improvements in processes or outcomes.
Implementing Deming's principle requires employees to understand what quality is, be capable of identifying substandard performance, and have the authority to make changes that will improve performance. According to Deming, it is the responsibility of management to suffuse this quality focus into their organizations. Of course, this is the problem in health care; other industries have developed rigorous systems to monitor quality and modify processes when output does not meet expectations. Berwick bluntly addressed this problem: doctors, nurses, receptionists, and other health care workers are not trained in or capable of changing how they work.2
Health care has been implementing quality improvement programs for decades, but efforts to cross into the fertile fields of quality management always seem to flounder. The inability to apply the quality management techniques of other industries is partly due to the more complex nature of health care, but part of the problem is the way health care has approached quality management. Berwick recognizes that health care workers have not been given the tools to systematically incorporate quality measurement in their work.

From the Department of Health Services and Information Management, School of Allied Health Sciences, East Carolina University, Greenville, NC.

Corresponding author: Thomas K. Ross, PhD, Department of Health Services and Information Management, School of Allied Health Sciences, East Carolina University, 600 Moye Blvd, Greenville, NC 27858 (e-mail: email@example.com).
The following case was developed to demonstrate how statistical process control (SPC) can be applied to health care processes. The SPC is a tool that defines what quality is, how performance is measured, and when investigation and possibly correction must occur. The SPC, once understood, empowers the doctor, nurse, department manager, and other health care workers to enter into the quality management process on a more equal basis with those trained in quality management techniques. The initial audience for this article is health care managers, but as Deming and Berwick note, improvement occurs only when all employees understand and embrace these quality improvement techniques. Getting managers and opinion leaders on board is only the first step to institutionalizing quality practices throughout the organization.
This article does not assume that the reader is versed in statistics or quality management techniques; it does, however, assume familiarity with spreadsheet software. The goal is to demonstrate that quality improvement techniques can be used productively by all health care workers. At the end of this case, it is hoped that the reader will be able to identify a use for the SPC in his or her area of responsibility, create SPC charts using Excel, and interpret these charts to improve outcomes.
QUALITY IMPROVEMENT IN HEALTH CARE
One of the greatest obstacles to quality improvement in health care is the belief of providers that quality management tools developed in other industries are not relevant to health care services. Not only are the initiatives of other industries deemed irrelevant to health care, many providers hold that quality improvement mechanisms developed in other health care organizations are not applicable to the institution they work in. This belief is the direct result of the second factor: health care providers are not familiar with the purpose, operation, and interpretation of quality management tools. Without understanding quality management techniques and the ends to which they can be applied, it is understandable that health care providers would think these tools have little to offer them.
Health care quality management texts inform the reader that SPC charts should be used to improve delivery processes and outcomes, often without discussing how they are created. Conversely, medical research frequently reports the improved outcomes achieved after analyzing SPC charts. This article is targeted between these 2 positions: it reviews the SPC technique, presents a tool and an application so readers can learn how to construct SPC charts, and offers suggestions as to when they can be used to improve health care delivery processes.
A case study is developed and followed to conclusion to demonstrate one potential application of the SPC to health care. The intent is to provide the readers with a basic understanding of how to build SPC charts and encourage them to access and analyze the case data using Excel to evaluate performance on their own. The case question is: are medicines delivered in a timely manner to maximize medical effectiveness and patient comfort? The control charts developed are used to draw conclusions regarding the operation of the system, explore potential reasons for the performance observed, and speculate on system changes that could improve the timeliness of medicine delivery. After working through the case, it is hoped that the reader will be motivated to apply the principles and techniques to his or her own work.
One of the major difficulties in advancing health care quality is the lack of specificity in defining health care processes, establishing performance standards, and measuring compliance with standards after they are defined. Attempts to improve operations and outcomes are difficult, if not impossible, when standards and measures are ill-defined or absent. Efforts to define medical processes continue to be hotly debated; opponents argue that there is too much variation in medical practice to establish one way of treating patients, and that dictating to physicians will not improve patient care and may hurt patients.3
On the other hand, countless research has been done to define proper medical practice; much of this work is amenable to and should be tracked by the SPC to improve health care outcomes for patients. McGlynn et al found that patients received 54.9% of recommended care; more important than their conclusion was their technical supplement, which provided the indicators of what should occur and when it should occur for 30 conditions.4
Other industries recognize that poor performance is the result of variation, that is, deviations from how things should be done that adversely impact outcomes. Controlling variation is the key to process improvement. Health care providers are arguably saddled with more sources of variation than producers of other goods and services, but the issue remains: is there too much variation in treatment, and will a more systematic approach to health care delivery processes improve outcomes? The SPC is a tool health care workers can use to determine when variation is a routine part of patient care, necessary and beneficial for patients and the health care system, and when variation is excessive and potentially harmful. Armed with this information, it is the duty of health care providers to reduce inappropriate variation.
IMPROVING HEALTH CARE
Root cause analysis or routine monitoring?
Current quality control efforts revolve around root cause analysis, that is, a comprehensive examination of an adverse event with the goal of reducing harm to patients by preventing the recurrence of the event. JCAHO-accredited health care organizations must conduct a root cause analysis focusing on systems and processes when a sentinel event occurs. Root cause analysis is limited by its initiating cause; it is undertaken after an adverse event has occurred and examines only those processes that were part of the care episode. In arguing for a broader approach, it has been noted that "when multiple sources of variation are present, isolated observations provide insufficient information on which to base objective decision making."5
The SPC is a broader approach that continuously examines processes to identify undesirable trends in performance and institute corrective action before harm arises. Fasting and Gisvold used the SPC in anesthesia to study a range of adverse events that are less severe and more frequent than sentinel events but contain the potential for harm. As a result of their study, they were able to reduce anesthesia accidents.6 The SPC complements root cause analysis and extends process improvement efforts by adding ongoing monitoring of a broad range of events to quality management. JCAHO (LD.5.2) notes that sentinel event analysis is reactive and does not meet the intent of the JCAHO patient safety standard.7 The SPC is not concerned with particular cases but rather with the ongoing function of systems and thus is a proactive approach to operations that monitors performance to detect changes in system performance before problems arise.
The 6 steps of SPC
The goal of the SPC as envisioned by Walter Shewhart is to determine when a system is out of control and requires adjustment to improve its output. An in-control system simply indicates that it is operating close to its historical performance; this performance, however, may fail to meet generally accepted standards or customer expectations. A system that is out of control indicates performance that is significantly different from historical performance. The goals of the SPC are to identify when performance deviates sufficiently to endanger quality and to improve in-control performance to improve outcomes.
Shewhart's second goal was to devise a quality monitoring and improvement system that could be operated effectively by workers whose expertise lies in areas other than statistics.8 Such a system required clear signals to indicate when system performance fluctuates too much and had to be easy to use. Shewhart created his system in a manufacturing environment for those trained in engineering and, as will be demonstrated, it is as applicable for service industries and those trained in medical sciences. The SPC is designed for ease of use; the technique can be described and completed in 6 steps.
The SPC can be used to monitor the behavior of any system that produces outputs that can be measured numerically. The SPC can monitor the percentage of defects in a sample (P charts), the number of defects in a sample (c charts), or output characteristics (average and variance, X̄ and R charts), among other things. These applications require employees to routinely collect simple performance measures to monitor quality: percent defective, number of defects, the average weight or length of a product, time required to deliver a product or service, and the variance in these characteristics. The case uses X̄ and R charts since the concern is the interval between the prescribed medication time and the actual delivery of medicines to patients. An earlier article in this journal provides a concise explanation of when the different control charts should be used.9
Step 1: Data collection
Data collection is the most time-consuming part of the SPC process. A manager wanting to assess performance using the SPC must determine the desired sample size, how often samples are to be drawn, procedures to ensure that the samples are random, and who is responsible for data collection. Managers must realize that they face a trade-off between the cost of collecting data (sample size and sampling frequency) and the accuracy of information obtained from the collected data. Larger samples typically produce more accurate information but are more costly and time consuming to collect. The events sampled must be randomly drawn to ensure that the sample is not biased; that is, the information gained from the sample must be representative of the larger population for it to accurately assess the performance of the nonsampled phenomena. The procedures and responsibility for data collection should ensure that the data collector has no incentive to collect either favorable or nonfavorable performance. After the data are collected, they should be recorded in a spreadsheet.
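Step 1's random-draw requirement can be sketched in a few lines of Python (an illustration added here, not part of the original case; the function name draw_sample and the stand-in population are hypothetical):

```python
import random

def draw_sample(records, n, seed=None):
    """Randomly select n records without replacement so the sample
    is representative of the whole population of medication events."""
    rng = random.Random(seed)
    return rng.sample(records, n)

# Hypothetical population: delivery-time records for one shift.
population = list(range(1, 201))      # stand-in for 200 medication records
sample = draw_sample(population, 50, seed=1)
print(len(sample))                    # 50 observations, drawn at random
```

Fixing the seed makes the draw reproducible for auditing; in practice the sampling procedure, not the code, is what guards against a biased or self-serving selection of cases.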
Step 2: Calculation of descriptive statistics
After the data are recorded, descriptive statistics must be calculated. Descriptive statistics provide the information necessary to understand how the system is operating. The first statistic is the mean. The mean is the measure of central tendency and reports "average" performance. The mean is used to determine whether performance meets the desired standard. The mean for a sample is calculated as follows: X̄ = Σxᵢ/n, the sum of the sampled observations divided by the sample size. If Excel is used to record data, the mean is calculated as follows: =AVERAGE(range of observations). In medication management the mean can be used to determine whether patients received the specified dose; larger or smaller dosage is a problem. Likewise, science and common sense dictate that the most effective interventions are delivered proximate to a medical event; the case seeks to determine whether medicine is delivered to patients within (plus or minus) 2 hours of the prescribed time.
The second statistic, range (R), provides a measure of the variance in the sample. Range is the highest value in the sample minus the lowest value and is used to assess the variability in performance. Excel can be directed to scan a series of numbers, identify the maximum and minimum values, and perform the subtraction with the command: =MAX(range of observations)-MIN(range of observations).
The need for both statistics is demonstrated by assuming that the average turnaround time for a laboratory test is 60 minutes. Those awaiting results prefer a process in which 50% of tests are available in 65 minutes and the remaining 50% in 55 minutes to a process where one half of results are available in 10 minutes and the other half take 110 minutes. Average turnaround time, X̄, for each process is 60 minutes, but the first process has a 10-minute range (65–55 minutes) while the second has a 100-minute range (110–10 minutes). Lack of predictability in the second process is a problem. The person awaiting results from the second process does not know whether he or she will receive them quickly or have to wait almost 2 hours. Processes with the narrowest range are generally superior; the process with a 10-minute range provides users with a clear idea of when results will be available and allows them to schedule work accordingly. The variance in the second process can lead to inefficiency if delays in the availability of information impact the effectiveness of treatment or providers must repeatedly check for results because they do not know when they will be available.

Figure 1. The 6 steps of statistical process control.
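The laboratory-test example can be checked with a short Python sketch (an added illustration; the two hard-coded processes simply mirror the 55/65-minute and 10/110-minute scenarios described above):

```python
def mean(xs):
    """Average performance: the sum of observations divided by the count."""
    return sum(xs) / len(xs)

def value_range(xs):
    """Range: highest observation minus lowest observation."""
    return max(xs) - min(xs)

# Two hypothetical lab-test processes with the same 60-minute mean:
process_1 = [55, 65] * 5    # half the results in 55 min, half in 65
process_2 = [10, 110] * 5   # half in 10 min, half in 110

print(mean(process_1), mean(process_2))   # 60.0 60.0: identical means
print(value_range(process_1))             # 10-minute range: predictable
print(value_range(process_2))             # 100-minute range: unpredictable
```

Identical means with wildly different ranges is exactly why the SPC tracks both statistics.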
Differences in system performance are the result of variance, and all systems are affected by variance. Variance is categorized as natural or special cause; natural variation is inherent to a process and is the result of noncontrollable forces such as the capabilities of labor and equipment, the impossibility of 100% accurate measurements, and/or environmental factors (ie, the weather). Special cause variation can be traced to a controllable cause(s) that should not be present in a properly operating system. Special cause variation may arise from system design, inputs used, fatigue, wear and tear, lack of equipment maintenance, lack of training, etc. Any of these causes could reduce performance below desired or achievable levels.10 When special cause (assignable) variation arises, it is the responsibility of managers to identify its cause and control it to improve outcomes.
Process improvement is built on meeting a desired standard and reducing the variance in performance. Managers must monitor performance on an ongoing basis, using the mean and range to ensure that performance targets are met. The X̄ chart monitors how actual performance varies from historical performance while the R chart monitors uniformity, the difference between the best and worst performance in a sample. Both charts seek to identify when performance should be investigated, that is, when observed performance deviates significantly (is too high or too low) from historical performance.
Step 3: Calculation of control limits
The calculation of control limits requires 4 formulas and provides a quantitative answer to how much variance will be accepted before a process is investigated. The formulas for the upper control limit (UCL) and lower control limit (LCL) of the X̄ chart are as follows:
UCL: ¯̄X + (A × R̄)

LCL: ¯̄X − (A × R̄)

The new terms in these formulas are ¯̄X, average performance across the samples collected (¯̄X = ΣX̄ᵢ/number of samples), which is an average of averages; R̄, the average range across the samples (R̄ = ΣRᵢ/number of samples); and A, the control chart factor. ¯̄X and R̄ establish the historical performance against which individual samples will be judged. ¯̄X specifies what average performance has been over a large number of samples or an extended time period, and R̄ defines how performance has varied across the samples or time period. The control chart factor is multiplied by the average range to determine the acceptable range of performance.
The magnitude of the control chart factor (A) is determined by sample size, that is, the number of items (n) in a sample. A larger sample size generally produces more accurate statistics, and consequently the control chart factor is reduced, producing tighter limits (see Fig 2). When selecting a control chart factor, it is a common mistake for students of the SPC to confuse the number of samples taken with the sample size: control chart factors are determined by the sample size (the number of observations in the sample) rather than by the number of samples drawn.

Figure 2. Control chart factors.11 LCL indicates lower control limit; UCL, upper control limit.
The product of the control chart factor and the average range is added to (or subtracted from) the historical performance (the average of the averages) to determine the upper (or lower) control limit. For example, when samples of 25 are drawn, the average output of a process is expected to routinely fluctuate around its historical performance by ±15.3% of its average range (±0.153 × R̄). Investigation is required whenever the average for a sample varies by more than 15.3% of the average range.
The upper and lower limits for the R chart also require 2 formulas:
UCL: D2 × R̄

LCL: D1 × R̄
D1 and D2 are control chart factors and, similar to A, these factors produce tighter control limits as sample size increases. When samples of 10 are drawn, a sample range could be up to 77.7% above or below the average range (1.000 ± 0.777 × R̄) before investigation is warranted. Using a larger sample of 25, acceptable variation is reduced to 54.1% above or below the average range (1.000 ± 0.541 × R̄). It is only when a sample range exceeds the control chart limits that investigation and potential corrective action are required.
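As a rough sketch of Step 3, the following Python fragment computes both sets of limits for a hypothetical process sampled with n = 25; the factors A = 0.153, D1 = 0.459, and D2 = 1.541 are the n = 25 values cited in the text, while the grand mean of 60 minutes and average range of 20 minutes are invented for illustration:

```python
# Control chart factors for n = 25 (per the values cited in the text;
# a published factor table such as the article's Figure 2 is authoritative).
A, D1, D2 = 0.153, 0.459, 1.541

def xbar_limits(grand_mean, avg_range, a):
    """X-bar chart limits: grand mean plus/minus A times the average range."""
    return grand_mean - a * avg_range, grand_mean + a * avg_range

def r_limits(avg_range, d1, d2):
    """R chart limits: the D factors scale the average range directly."""
    return d1 * avg_range, d2 * avg_range

lcl_x, ucl_x = xbar_limits(60.0, 20.0, A)   # hypothetical process values
lcl_r, ucl_r = r_limits(20.0, D1, D2)
print(round(lcl_x, 2), round(ucl_x, 2))     # 56.94 63.06
print(round(lcl_r, 2), round(ucl_r, 2))     # 9.18 30.82
```

A sample mean outside 56.94–63.06 minutes, or a sample range outside 9.18–30.82 minutes, would trigger investigation under these assumptions.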
Step 4: Graphing performance
Steps 2 and 3 provide all the necessary information to graph actual performance (the average, and the range between the highest and lowest values) against historical and expected performance. Sample averages (or ranges) are graphed as XY charts with the x-axis defining when the sample was collected (reported in chronological order) and the y-axis recording the value of the sample average or range. The centerline (CL), ¯̄X or R̄, and the upper and lower control limits are also graphed to provide the baselines against which actual performance is measured. Routine monitoring of performance, after the control limits are established, requires the relatively simple task of collecting data, calculating the mean and range, and charting the values for new samples.
Step 5: Interpreting performance
Step 5 is the examination of actual and expected performance to determine whether the process is in control or out of control. An in-control or stable process is one where actual performance, X̄ or R, falls within the control limits, with data points lying on either side of the centerline and no pattern evident. Figure 3 presents 6 configurations to look for when examining control charts. Chart 1 shows an in-control process in which data points do not breach the upper or lower control limits and are randomly distributed around the centerline. Chart 2 presents the classic out-of-control process where multiple sample values (samples 2, 5, and 8) breach the established control limits. Charts 3–6 show no samples breaching the control limits, but all have recognizable patterns that indicate instability in the process or a systemic change. Sample values fall only on one side of the centerline in chart 3, an upward trend beginning in period 5 is evident on chart 4, a cyclical pattern occurs on chart 5, and the lack of variance starting in period 6 on chart 6 suggests that none of these systems are operating as they have in the past.
Charts 2–6 suggest that the process has changed; it is the manager's job to identify if and why performance has changed and the impact of this change on patients. SPC charts may record positive or negative changes to system performance. Once the change and its impact are understood, managers need to institutionalize those changes that improve outcomes or initiate corrective action for those that reduce the effectiveness of the health care process.
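The visual tests just described can also be automated. The sketch below (an added illustration, not from the original article) flags limit breaches and one common pattern, a run of consecutive points on one side of the centerline; the eight-point run length is a conventional choice, not a rule stated in this article:

```python
def breaches(values, lcl, ucl):
    """Sample numbers (1-based) whose value falls outside the control limits."""
    return [i for i, v in enumerate(values, start=1) if v < lcl or v > ucl]

def one_sided_run(values, centerline, run=8):
    """True if `run` consecutive points fall on the same side of the centerline
    (points exactly on the centerline are ignored)."""
    signs = [v > centerline for v in values if v != centerline]
    for i in range(len(signs) - run + 1):
        window = signs[i:i + run]
        if all(window) or not any(window):
            return True
    return False

means = [61, 59, 66, 60, 58, 62, 57, 64, 60, 59]   # hypothetical sample means
print(breaches(means, lcl=55, ucl=65))   # [3]: sample 3 exceeds the UCL
print(one_sided_run(means, 60))          # False: points fall on both sides
```

A breach points at a single suspect sample, while a run signals a sustained shift even when no individual point breaches a limit, which is the situation shown in chart 3.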
Step 6: Investigation
Steps 1 through 5 are necessary to identify when a process should be reviewed; control charts clearly indicate when a limit is exceeded or a trend begins, but they do not identify what has changed or the impact of the change on patients. Step 6 is the most challenging part of the process: does breaching a control limit or an identifiable pattern indicate that the process is out of control and requires correction (or does it signal improved performance)? Correspondingly, in the absence of limit breaches or trends, is a process performing at a high enough level? The answers are not straightforward, and the nature of sampling ensures that occasionally nonrepresentative samples are drawn. A particular sample may include a disproportionate number of high (or low) values and breach an upper (or lower) control limit, indicating a process change when no change has occurred. The first task of an employee, after a control limit is exceeded, is to determine whether the system is truly out of control or whether a nonrepresentative sample has been drawn.

Figure 3. Control chart examples. UCL indicates upper control limit; CL, centerline; and LCL, lower control limit.
When a sample exceeds a control limit (Fig 3, chart 2), the first step in the investigation is to increase the sample size to determine whether the out-of-range result holds as more observations are examined. If the expanded sample produces a mean or range that falls within the control limits, the manager can assume that the process is in control, resume monitoring, and avoid tinkering with a stable system. One of the primary goals of the SPC is to focus employee effort on areas that require correction by eliminating processes that do not need attention.
If the out-of-range results persist after the sample size is increased, the employee can assume that nonrepresentative sampling did not produce the control limit breach, and the harder task of determining why performance has changed arises. Breaches of control limits are designed to be rarely occurring events so employees will not spend significant amounts of time or effort investigating trivial variation in performance. The SPC can maximize the effectiveness of improvement efforts by concentrating employee efforts on assignable variation and controllable causes and away from stable processes.
Because breaches of limits and the patterns demonstrated on charts 3–6 in Figure 3 are rare events that indicate processes are not functioning as they have in the past, the manager's and employees' job is to pinpoint the cause(s) of change and enact corrective action if the change has impaired outcomes. The SPC enhances employees' ability to pinpoint causes by providing an early detection system to identify performance that is inconsistent with historical performance. Prompt identification of change may enhance employees' ability to pinpoint the cause of change while the causes are still fresh in their minds.
The uses to which the SPC can be applied in health care are numerous: are generally accepted standards of care being followed, is health care provided differently to different populations, are waiting times appropriate, and does performance vary with the personnel delivering care, the location of service, or the time or day of service?12 The next section applies the 6-step process to a set of hypothetical data to illustrate the SPC technique. The readers are encouraged to calculate the descriptive statistics and control limits and create the control charts for themselves.
A MEDICATION MANAGEMENT CASE
Effective and high-quality medical care requires that medicines be administered on a timely basis, and the performance standard expected in this case is that medicines should be administered within 2 hours of the prescribed time. Rather than examining every dose delivered, the SPC allows the use of a small sample of drug administrations to determine whether the system is in control, that is, meeting the 2-hour window, or out of compliance. A 100% sample may not be particularly informative, as it is unlikely that 100% of drugs are delivered within 2 hours since natural variation is at work; that is, patients have the right to refuse medication and do so, the patient may be receiving other treatments and be unavailable for medication, etc. A 100% sample would also be arduous, if not impossible, to collect given that a 500-bed hospital may dispense 160,000 medications in a month. The 6-step SPC technique described above is followed to analyze medication management.
Step 1: Collect data
A random sample of 50 medications was drawn for the day, evening, and night shifts every day for a month; a total of 4650 observations (50 medications × 3 shifts per day × 31 days) was recorded in a spreadsheet. The sample size of 50 was arbitrarily determined; it is hoped that a sample size of this magnitude would persuade skeptical employees that the data were valid. As the validity of the SPC process is demonstrated, the sample size could and should be reduced. Technical note: with a sample size of 50, an X̄ and SD (standard deviation) chart is recommended, but this case will use the more understandable R chart.9 A copy of the data can be obtained at http://personal.ecu.edu/rossth/QMHCv15i4.xls and pasted into Excel.
Step 2: Calculate descriptive statistics
The first sample (Day 1, Day Shift, Monday) produced an average delivery time of 111.38 minutes, =AVERAGE(B5:B54). The difference between actual and prescribed administration time was recorded as absolute values, so medicines administered prior to and after the prescribed time are both recorded as positive values. Once the Excel formula is entered, it can be copied to the remaining columns (through column CP) to calculate average administration time for each shift each day. The performance of the first sample can be contrasted against the most timely average delivery time of 100.12 minutes (#52) and the least timely, 134.44 minutes (#71). The average time between the prescribed medication time and the actual administration of medicines for all 93 samples (3 shifts per day × 31 days), ¯̄X, is 115 minutes. The Excel formula is =AVERAGE(B56:CP56). The centerline is thus established at 115 minutes on the basis of historical performance; as stated earlier, an industry average or patient expectation, if known, could be used to establish expected performance.
Sample #1 has a range of 98 minutes between the most on-time delivery of medicine and the least timely delivery, =MAX(B5:B54)-MIN(B5:B54). This formula must again be copied across to column CP to calculate the range for each shift each day. Sample #1's range of 98 minutes can be contrasted with the low of 33 minutes (#41) and the high of 100 minutes (#10). The average range between the most on-time and the least on-time administration for the 93 samples is 65 minutes. The average and the range indicate that average medication administration time is 115 ± 65/2 minutes; that is, actual administration of medicine ranges from 82.5 to 147.5 minutes (115 ± 32.5) before or after the prescribed time.
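For readers working outside Excel, the per-sample and grand statistics of Step 2 can be reproduced in a few lines of Python; the tiny data grid below is invented for illustration (the actual case uses 93 samples of 50 observations each):

```python
# Each row is one sample of delivery-time deviations in minutes
# (absolute difference between prescribed and actual administration).
samples = [
    [100, 120, 110, 130],
    [ 90, 115, 105, 110],
    [125, 130, 120, 135],
]

sample_means  = [sum(s) / len(s) for s in samples]   # one X-bar per sample
sample_ranges = [max(s) - min(s) for s in samples]   # one R per sample

# Grand statistics: the average of averages and the average range.
grand_mean = sum(sample_means) / len(sample_means)
avg_range  = sum(sample_ranges) / len(sample_ranges)

print(sample_means)    # [115.0, 105.0, 127.5]
print(sample_ranges)   # [30, 25, 15]
```

The two grand statistics computed here are the centerlines of the X̄ and R charts; the case's actual values (115 and 65 minutes) come from the full 93-sample data set.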
A cursory review of performance, based on the descriptive statistics, provides a manager with a good idea of where she or he should devote her or his attention. For example, the highest mean delivery time occurs on the Monday through Friday evening shifts and the greatest variance occurs on the day shifts during the week. Is this performance acceptable? Should the manager devote his or her time and energy to investigating the delivery processes on these shifts? The SPC will answer these questions; at this point the high mean and wide range suggest that desired performance is not being achieved.
Step 3: Calculate control limits
X̄ UCL: 115 + ((0.75 × 1/√50) × 65) = 121.9 minutes

Centerline (the mean) = 115.0 minutes

X̄ LCL: 115 − ((0.75 × 1/√50) × 65) = 108.1 minutes

R UCL: (1.55 − (0.0015 × 50)) × 65 = 95.9 minutes

Centerline (the range) = 65.0 minutes

R LCL: (0.45 + (0.001 × 50)) × 65 = 32.5 minutes
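The control-limit arithmetic can be verified directly; the Python sketch below simply re-evaluates the four formulas above for n = 50, ¯̄X = 115, and R̄ = 65:

```python
from math import sqrt

n, grand_mean, avg_range = 50, 115.0, 65.0

# Approximate control chart factors as used in the case formulas:
A  = 0.75 / sqrt(n)       # X-bar chart factor
D2 = 1.55 - 0.0015 * n    # upper R-chart factor
D1 = 0.45 + 0.001  * n    # lower R-chart factor

print(round(grand_mean + A * avg_range, 1))   # 121.9 (X-bar UCL)
print(round(grand_mean - A * avg_range, 1))   # 108.1 (X-bar LCL)
print(round(D2 * avg_range, 1))               # 95.9  (R UCL)
print(round(D1 * avg_range, 1))               # 32.5  (R LCL)
```

These match the four limits computed above; for routine work, factors read from a published control chart table (such as the article's Figure 2) would replace the approximations.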
Enter the X̄ control limits and centerline directly below the calculation of the average medication time in the spreadsheet and copy across all columns. Similarly, the R chart limits and centerline should be entered and copied below the range for each sample.
The control limits indicate whether the medication process is stable and subject to only natural variation: average medication time for a sample of 50 should fluctuate between 108 and 122 minutes. Similarly, the range between the most and least on-time delivery should vary from 33 to 96 minutes. If these thresholds are exceeded, the SPC indicates an unstable process or a potential change in performance that requires investigation. Breaches of the upper limit in this case indicate deterioration in performance (less timely administration of medicines) while downward breaches may indicate positive changes in the process and improved performance.
Step 4: Graph actual and expected performance
Once the averages and ranges for each sample have been calculated and the upper and lower control limits and centerlines are entered, Excel can create control charts through the INSERT function. After INSERT is selected, the user selects CHART and LINE (type of chart) and enters the desired data range. For the X̄ chart, the data range must include the sample means (for the R chart, the sample ranges), the upper and lower control limits, and the centerline. In Figure 4, the x-axis reports the sample number, 1 through 93.
230 QUALITY MANAGEMENT IN HEALTH CARE/VOLUME 15, ISSUE 4, OCTOBER–DECEMBER 2006
Figure 4. X̄ and R charts. UCL indicates upper control limit; LCL, lower control limit.
Step 5: Interpret graphs
The X̄ chart demonstrates that medication is routinely delivered outside the desired 2-hour window. The sample means reveal that average performance ranges from 100 to 134 minutes. The R chart demonstrates that there is only a 33-minute difference between the most on-time and the least on-time delivery of medicine on the most uniform shift (#41); on the other hand, shift sample #10 shows a 100-minute difference in performance.
Given average performance of 115 minutes and the average range of 65 minutes, patients are receiving their medication on average anywhere from 50 minutes (115 − 65) to 180 minutes (115 + 65) before or after their prescribed time. At this point the reader should see that we have a potent tool to evaluate how a process has operated and is operating. Is the process operating acceptably? Even though the LCL is set at 108 minutes, the organization in this case should be striving to reduce its average medication time below the current 115 minutes. In addition, the control charts provide an early detection device for changes in system functioning over time; more (or less) on-time delivery of medication or more (or less) consistent delivery of medicines should be reflected in the sample average and range, allowing a manager to recognize positive or negative changes in operations.
In this case, we can clearly see that performance is unacceptable; the X̄ chart shows numerous breaches of the UCL and increasing delivery times on the last 3 days of the sampling period (samples #84–93). The manager should explore both of these issues.
The sample ranges on the R chart generally lie within their control limits, but many samples are located around the upper and lower control limits. There are no data points substantially above or below the calculated control limits, yet the multiple samples at or slightly beyond the UCL and the multiple consecutive samples clustering around 40 minutes suggest the need for investigation. Further investigation will reveal that there are systematic differences in average performance, and in the range between the most on-time and the least on-time administration times, on various shifts.
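Clusters and runs like these are often screened with supplementary run rules. The sketch below implements one common rule of thumb (a Western Electric-style test, not something the article prescribes): measure the longest run of consecutive sample means on one side of the centerline.

```python
def longest_run_one_side(means, centerline):
    """Length of the longest run of consecutive points strictly
    above or strictly below the centerline."""
    longest = run = 0
    prev_side = 0
    for m in means:
        if m > centerline:
            side = 1
        elif m < centerline:
            side = -1
        else:
            side = 0  # a point exactly on the centerline breaks the run
        run = run + 1 if side != 0 and side == prev_side else (1 if side != 0 else 0)
        prev_side = side
        longest = max(longest, run)
    return longest

# Hypothetical sample means checked against the case's 115-minute centerline
means = [110, 112, 118, 119, 121, 117, 116, 120, 118, 113]
print(longest_run_one_side(means, 115))  # seven consecutive points above
```

A long run on one side of the centerline, even with no point beyond a control limit, is itself a signal worth investigating.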
Similar to the finding of the X̄ chart, the range on the last 3 days of the month is significantly different from performance throughout the month. Contrary to the X̄ chart, which showed an upward trend signaling a divergence between actual delivery time and the prescribed time, the R chart shows a downward trend, that is, a reduction in the difference between the most on-time and least on-time delivery of medication. Simple calculations reveal that the average medication time and range were 114.3 and 69.4 minutes during the first 10 days of the month; during the last 10 days they were 117.2 and 60.5 minutes. The conclusion is that medications at the end of the month were less likely to be delivered at the prescribed time, but delivery times became more predictable.
The control charts demonstrate a wide fluctuation in performance across shifts and days, indicating an unstable medication management process. These
differences provide valuable information to understand system performance: which shifts provide the most on-time delivery of medicines, are there differences in performance related to the day of the week, and why is end-of-month performance different from that in the rest of the month?
Step 6: Investigate when indicated and fix as appropriate
Control charts do not judge performance; in this case, both charts indicate that investigation is required and suggest that the medication management system is out of control. The breaches of the UCL on the X̄ chart show that medicines are routinely not administered within the desired 2-hour window. Having received this signal, it is the responsibility of managers and employees to determine whether the system requires fixing. The first question that should be asked is: what factor(s) prevents the timely administration of medicines? This is an open-ended question, but the SPC can make identifying the cause(s) easier by examining performance on different days or shifts. Diagnosing and improving a system are easier when substandard performance can be isolated to a particular shift, day, or unit; that is, can differences be detected between shifts, days, or units with high performance and those with low performance?
DIAGNOSING THE PROBLEM
Given that medication times are failing to meet the desired standard, what is wrong? Are there unique factors occurring on different shifts or days that prevent prompt administration of medicines? At this point the manager could decide to draw a larger sample to see whether the results persist, but we will assume that they are valid and proceed to diagnosing the problem.
Sorting the data allows the performance of individual shifts or days to be graphed against the established control limits. If the data are sorted by shift (use the Excel SORT function: select DATA, SORT, OPTIONS, ORIENTATION, LEFT TO RIGHT, row 4, day, evening, and night shifts), the charts in Figure 5 can be created. The x-axis now reports the day of the month, 1 through 31, rather than the sample number since only 1 shift per day is graphed.
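The same sort-then-graph step can be mirrored outside Excel by grouping samples by shift before charting. The records below are illustrative stand-ins, not the case data:

```python
from collections import defaultdict

# A few illustrative sample records: one mean per shift per day.
samples = [
    {"day_of_month": 1, "shift": "day", "mean": 108.2},
    {"day_of_month": 1, "shift": "evening", "mean": 127.5},
    {"day_of_month": 1, "shift": "night", "mean": 104.1},
    {"day_of_month": 2, "shift": "day", "mean": 111.0},
    {"day_of_month": 2, "shift": "evening", "mean": 124.8},
    {"day_of_month": 2, "shift": "night", "mean": 106.3},
]

# Sort by shift, then by day of month (the left-to-right SORT step),
# and collect one chartable series per shift.
by_shift = defaultdict(list)
for s in sorted(samples, key=lambda s: (s["shift"], s["day_of_month"])):
    by_shift[s["shift"]].append(s["mean"])

print(sorted(by_shift))       # the three shift series
print(by_shift["evening"])    # evening-shift means in day order
```

Each resulting series can then be plotted against the same control limits, exactly as Figure 5 does.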
Medications are routinely delivered in less than 115 minutes on the day shift, and there are only 5 (out of 31) samples in which the medication time is above the centerline, that is, the historical average. Performance according to the X̄ chart meets the established standard, yet the R chart raises concern. Although average administration time is within the established control limits, the variability in performance is troubling. The upper limit is 95.9 minutes; 12 samples breached this limit, suggesting a uniformity problem. The large range indicates that individual patients routinely receive their medications beyond the desired 2-hour window, thus violating the established standard. The greatest variation occurs Monday through Friday while performance on Saturday and Sunday is more uniform (the 2 data points clustering around 40 on the 6th, 7th, 13th, 14th . . . days of the month). What factors that differ between weekdays and weekends could account for this difference in performance?
This finding demonstrates the need for both control charts; the X̄ chart may show performance within control limits, but performance may be unacceptable if the range is large. Assuming that one half of patients received their medication within 1 hour of the prescribed time and the other half within 3 hours, the X̄ chart would report an acceptable average of 2 hours, but an average of 2 hours with a 2-hour range clearly indicates that medications are not delivered within the 2-hour goal.
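That illustration is easy to verify numerically. With half of a hypothetical 50-patient sample medicated 60 minutes from the prescribed time and half 180 minutes from it, the mean looks acceptable while the range does not:

```python
# Half the patients 1 hour from the prescribed time, half 3 hours.
times = [60] * 25 + [180] * 25   # minutes from prescribed time

mean = sum(times) / len(times)   # 120 minutes: looks acceptable
rng = max(times) - min(times)    # 120 minutes: clearly not acceptable
print(mean, rng)
```

The X̄ chart alone would pass this sample; only the R chart exposes that half the patients fall well outside the 2-hour goal.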
Analysis of the X̄ chart for the evening shift shows that the Monday through Friday shifts routinely produce samples above the UCL. Obviously, this should be the chief concern of employees, and graphing the performance of the second shift against historical performance reveals no samples falling under the mean performance time of 115 minutes and 23 samples breaking the UCL. The R chart reinforces the concern as it shows a difference between the most
Figure 5. Performance by shift. UCL indicates upper control limit; LCL, lower control limit.
on-time and least timely delivery of medicines of 80 minutes. Twenty observations are above the centerline, and these are in consecutive runs of 5, again emphasizing that employees working the evening shift Monday through Friday are less likely to deliver medications at the prescribed time and are less uniform in the delivery of medication than employees working on the weekend. As asked in the analysis of the day shift, what factors are at work that account for the different performance of the evening shift between weekdays and the weekend? After isolating evening shift performance, it is clear that performance on this shift is failing to achieve the standards set and further investigation is needed.
The night shift has the promptest medication times, with the majority of samples falling below the LCL and only 3 samples producing an average above the centerline. Breaking the lower limit may indicate superior performance, but it could also reflect a broken reporting system; data may not be recorded, or not recorded properly.13 The 3 samples above the centerline were also above the UCL and occurred on the last 3 days of the month. This may be the result of a system change that requires rectification, or perhaps it was due to an unplanned absence and performance will return to its previous level with the return of the absent employee. The R chart shows very consistent performance on the night shift; its range falls below the centerline, indicating superior performance compared with the day and evening shifts. Once again, the pattern reflects strings of 5 and 2, showing that the weekend shifts are more consistent in their delivery times than the Monday through Friday shifts. It is interesting to note that the end-of-month change that indicated a movement away from the prescribed time also reduced the variation in performance.
While on-time delivery of medication is the goal of the organization, the lack of consistency between the shifts indicates process instability. Unstable in this case means an inability to predict outcomes; the organization cannot predict when medications will be delivered given the differing performance across shifts. The manager should explore the reason(s) for the inability of the evening weekday shift to deliver medicines within 2 hours of their prescribed time and the varying performance of the day shift. There appears to be at least 1 factor that accounts for the lack of on-time administration on the evening shift and the wide range in delivery times on the day shift during the week, given the performance demonstrated on Saturday and Sunday. The difference between weekday and weekend performance may be due to patient volume, staffing, or the assignment of duties.
Comparing performance across shifts
The discussion above was based on analyzing performance on a shift; analyzing performance across shifts indicates that there are 1 or more factors that explain the different performance of the day, evening, and night shifts. Average medication time clusters around 110 minutes on the day shift, 125 minutes on the evening shift, and 105 minutes on the night shift. The night shift provides medicines 20 minutes closer to their prescribed times than the evening shift and 5 minutes closer than the day shift. Similarly, there are pronounced differences in the range between the shortest and longest administration times: 100 minutes on days, 80 minutes on evenings, and 60 minutes on nights. The night shift demonstrates consistently superior performance, that is, more timely delivery of medicines and more uniform delivery times than the other 2 shifts. Obviously the reasons for the differences in performance across shifts must be understood, whether wholly or partially due to the distribution of nursing and pharmacy duties between shifts, to improve performance.
The stratification of the control charts by shift makes it apparent that performance improvement efforts should begin on the evening shift. Managers and employees should embrace process improvement as an ongoing effort (ie, continuous quality improvement [CQI]); hence, over time the goal should be administration of medicines at, or at least closer to, the prescribed time across all shifts and days, but the first step begins by analyzing the process on the shift with the widest gap between desired and actual performance.
The CQI is an ongoing process of striving for better performance; once an organization improves its
Figure 6. Cause and effect diagram.
performance and the SPC indicates these improvements are stable, it should attempt to narrow its control limits and institute tighter control over processes to produce even better patient outcomes. An institution that successfully implements the SPC can expect better patient outcomes, the higher patient and employee satisfaction that accompanies better outcomes, and more effective use of resources.
Identifying the major causes of a problem
The first task in improving outcomes is to identify an area for improvement; in this case, the SPC was used to recognize the untimely delivery of medications on the evening shift on weekdays. The next task is to explore and ultimately identify the potential causes of substandard performance. Why is performance failing to meet expectations? Fishbone, or cause and effect, charts are often employed to explore the potential causes of performance problems. Fishbone charts begin the exploration process by identifying the major reasons why unacceptable performance could occur and then exploring each reason to identify the specific organization practices that could contribute to the problem. Off-schedule medication may be the result of 4 or more major causes; for brevity, 4 causes (staffing, caseload, process design, and pharmacy) are identified in Figure 6.
After the major causes are identified, employees should explore the issues within each major cause to identify if, why, and how it impacts the delivery of medicines. For example, examining staffing as a major potential cause affecting the timeliness of medication may lead the QI team to investigate staffing levels, employee qualifications and training, productivity of personnel, or any number of staffing issues. Examining caseload may lead the team to investigate whether the number of patients and/or the intensity of care required affects the timeliness of drug delivery. Similarly, questions of process design may explore job assignments and scheduling (admissions, surgeries, ancillary tests, discharges, and housekeeping duties), while pharmacy issues may include the delivery of medications from the pharmacy, medication errors (dosage and/or type), illegibility of orders, and adverse drug reactions and/or contraindications.
In this case the reasons for off-schedule medication may arise from too few employees and too many patients, communication problems, poor oversight, or any other factor suggested on the fishbone diagram. Any or all of these issues (plus others not identified) could impact the timeliness of the medication management process.
The role of the manager, quality or process improvement director, and other members of the health
care delivery process is to evaluate and eliminate the causes identified on the fishbone diagram until the most probable factor(s) is identified and corrective action taken. The identification process may involve reaching consensus among the involved parties or running investigations and tests.
After the most likely cause is identified and corrective action taken, the team should determine whether the action produced the intended effect: were medications delivered more promptly? If timeliness does not improve, then further review is required. If corrective action is successful in improving performance, outcomes must continue to be monitored to ensure that the improvement is not lost over time and to serve as a baseline for further improvement, thus initiating a CQI cycle.
The SPC improves management by setting clear performance standards for employees, establishing a consistent evaluation standard for managers and employees to use, and providing a tool to monitor processes when managers are absent; that is, the head nurse has the ability to monitor evening and night processes even though she or he may generally work the day shift (or weekend performance when she or he works Monday through Friday).
Improving patient care is a formidable task, which should not be hampered by a lack or misunderstanding of quality management techniques. This article reviewed the SPC technique, demonstrated how a widely available spreadsheet package can be used to record and analyze data, and analyzed an SPC case. Readers were encouraged to access the case data, perform their own calculations, create their own graphs to complete the case, and consider how a comparable process could be used to monitor and improve care in their organization. This case demonstrates that the SPC can be used anytime timeliness is a factor affecting medical outcomes and/or patient satisfaction, such as the timeliness of discharge, procedure time, test turnaround time, registration time, waiting time, and so on.
The SPC complements and extends current efforts to improve health care processes and outcomes. Analysis of the case demonstrates that the SPC can provide a wealth of information to understand how current processes are performing and a basis to institute improvement. More important than the information generated, however, is how instilling this way of thinking into employees will change how they approach their work. The discussion shows that there can be multiple causes for substandard performance; the SPC does not reveal these causes but rather provides employees with a starting point to apply their analytical and problem-solving skills. Employees will determine how successful the quality improvement process is, and they must see themselves as stakeholders in the process. We know that health care workers are committed to improving the health of their patients; the SPC is simply a tool to assist them in these efforts by quantifying performance and signaling when a process has changed sufficiently to impact outcomes.
The SPC is a tool specifically designed for those not trained in management science or statistics to improve the quality of their work, and it remains underutilized in the health care field. Quality improvement in health care is not going to be the result of a top-down approach; it will become an integral part of the health care delivery system only when all employees embrace quality improvement tools. The first step is to convince health care workers that quality improvement tools are relevant to their work and that they can measure performance themselves. It is only when employees begin to measure and analyze their performance that we can expect to see ongoing and widespread improvement in health care delivery processes and outcomes.
REFERENCES

1. Lighter DE, Fair DC. Quality Management in Health Care. 2nd ed. Sudbury, Mass: Jones and Bartlett Publishers; 2004.
2. Galvin R. Deficiency of will and ambition: a conversation with Donald Berwick. Health Aff. 2005;24(Web Exclusive):W5-1–W5-9.
3. Timmermans S, Mauck A. The promises and pitfalls of evidence-based medicine: non-adherence to practice guidelines remains
the major barrier to the successful practice of evidence-based medicine. Health Aff. 2005;24:18–28.
4. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635–2645.
5. Laffel G, Blumenthal D. The case for using industrial quality management science in health care organizations. JAMA. 1989;262(20):2869–2873.
6. Fasting S, Gisvold S. Statistical process control methods allow the analysis and improvement of anesthesia care. Can J Anesth. 2003;50:767–774.
7. 2001 JCAHO Standards in Support of Patient Safety and Medical/Health Care Error Reduction. Available at: http://www.dcha.org/JCAHORevision.htm. Accessed April 10, 2006.
8. Humble C. Caveats regarding the use of control charts. Infect Control Hosp Epidemiol. 1998;19(11):865–868.
9. Amin S. Control charts 101: a guide to health care applications. Qual Manag Health Care. 2001;9(3):1–27.
10. Shewhart W. Economic Control of Quality of Manufactured Product. New York, NY: D Van Nostrand; 1931.
11. Gaither N, Frazier G. Operations Management. Mason, Ohio: Southwestern Thomson Learning; 2002.
12. Benneyan J. Statistical quality control methods in infection control and hospital epidemiology, part 1: introduction and basic theory. Infect Control Hosp Epidemiol. 1998;19(3):194–214.
13. Finison LJ, Finison KS, Bliersbach CM. The use of control charts to improve healthcare quality. J Health Qual. 1993;15(1):9–23.