Monday, April 15, 2013

04 . 15 . 13 | Broad Based Economic Control

Our previous two posts introduced and explored economic control. The first post, dated March 18th 2013, was an introduction to the principle. The second post, dated April 4th 2013, illustrated it in more detail using the healthcare industry. In this third and final part of the series, we’ll explore how economic control can serve any business, in any industry.

Economic control charts find their own level, meaning they find the stable process (“level playing field”) within the instabilities (spikes etc.). The “tramlines” reveal the inherent stable process. There are strict, objective rules in the mathematics for how this is done. Once set up, no changes are made to the limits (“tramlines”) until the process improves, as determined by the chart. New points are added each day, week or month, without changing the limits. Modern software encourages the limits to be recalculated each time but this is wrong (like moving goalposts around). 

The landmark text that introduced economic control in 1931, in a brilliant 300+ page development, used physics, mathematics, statistics and economics. Suddenly (on page 304) appears the seemingly simple formula: process average ± 3 standard deviations. This became known as the control chart. More correctly, it’s an economic control chart [1].

The important element of this deceptively simple formula (used in most industries for over 80 years now) is that the control limits are set economically, so that management actions will save and make the most money. It is often said that control charts have a false-alarm rate of 0.27%, but this is beside the point: nowhere in the original text does that number appear. The whole point is that the limits are set economically. Further, the 0.27% is inexact in real processes, so it’s a red herring at best.
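As a concrete illustration of the formula, here is a minimal sketch of an individuals (XmR) chart. The data values are invented for illustration, and the moving-range estimate of the standard deviation is the common textbook choice, not something taken from the 1931 text:

```python
def control_limits(data):
    """Return (lower, center, upper) 3-sigma limits for an individuals chart."""
    n = len(data)
    center = sum(data) / n
    # Average moving range between consecutive points
    mr = sum(abs(data[i] - data[i - 1]) for i in range(1, n)) / (n - 1)
    sigma = mr / 1.128  # d2 constant for subgroups of size 2
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_limits(data, lcl, ucl):
    """Indices of points outside the tramlines."""
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

# Made-up monthly values with one spike
data = [52, 49, 51, 50, 53, 48, 51, 50, 68, 52, 49, 51]
lcl, center, ucl = control_limits(data)
print(out_of_limits(data, lcl, ucl))  # → [8], the spike
```

Note that, per the post, the limits are computed once and then frozen; new points are plotted against the existing tramlines rather than triggering a recalculation.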

First impressions of the 3-sigma limits are often that they’re really wide and we should have a tighter standard. In fact they’re not “wide”. They are what they are. This error arises from missing the economic aspect.

It’s also often said that variations inside limits are random. They are not. All fluctuations, large or small, are caused by something. The data within limits do, however, often follow random patterns. So when someone asks, “What if that point outside limits is just a false alarm?”, the answer is that there is no such thing. Since all variations have causes, a large one is by definition worth investigating. The hand-wringing comes because it is thought the point may be pure chance. It isn’t. The causes that conspired to create a large number may have fallen into a perfect storm by chance, but they’re still real and there’s money to be made. This is a little like serendipity: no-one minds that we stumble into breakthroughs serendipitously. Same here.

The way to see this is to think of variations inside limits as having many causes, unknown. These will be impossible or expensive to figure out. So we use statistical design to do that. Variations outside limits will be economic to figure out and fix, ignore (e.g. month effects) or bake in (if good). In other words their cause(s) are easily found out and exploited for improvement.

Points outside limits (and/or a few other patterns) signal that the process is unstable. Their causes are easy to figure out and then remove (or bake in, if the spike is good), making the process stable (standardized).

Here we’re not naïve enough to think that every process can be made stable (standardized), for example when weather affects outside plant. In such cases we use standard workarounds. In general, though, economic control gives a method to assess and standardize processes and their measurement systems.

If an improvement has just been implemented and a single point then crosses the limit (in the improvement direction), it is a very big deal. This is where that crude, approximate rule of thumb of 0.27% comes in handy. If we’re plotting data monthly, we’d expect to cross that limit by chance roughly once every 62 years (100/(0.27/2) ÷ 12 ≈ 61.7 years). So this is like a 62-year flood. That information alone adds management fuel to the emerging improvement.
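The back-of-the-envelope arithmetic behind the “62-year flood” can be checked directly; a minimal sketch, assuming the textbook 0.27% two-sided rate (0.135% per side) and monthly plotting:

```python
# One-sided chance per month of crossing a limit purely by chance
one_sided_pct = 0.27 / 2                # 0.135% in the improvement direction
months_per_event = 100 / one_sided_pct  # ≈ 740.7 months between chance crossings
years = months_per_event / 12           # ≈ 61.7 years
print(round(years, 1))  # → 61.7
```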

There is a deeper aspect to economic control. All of statistics (in the main techniques statisticians call parametric) is based on distribution theory: averages, standard deviations, significance tests, regression and so on all rest on it. Certain mathematical requirements must be met before any of these are used. When we take courses in statistics we see those requirements, but read them much like the small print in a legal contract. Most fundamental in that small print is that the process be stable. If it is not, distribution theory breaks down. So, for example, even a simple pre-post test to see whether something we did improved the process will be wrong if we ignore instabilities. “Wrong” is not a strong enough word; a better one would be “arbitrary”. Economic control is the only way to adjust so that we find the correct answer. This is profound for competitive advantage in a business.

This surprising claim, that all of statistics* breaks down on unstable processes, appears in the literature under the heading of analytic statistics. It is not well known even among qualified statisticians.

Industrial processes are almost always unstable, so statistics will not work unless the adjustments are made. The adjustments are simple, fast and follow rigorous rules that cannot be bent.
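The pre-post pitfall described above can be seen in a toy sketch (our own illustration, not from the text): a process with a steady upward drift makes the “post” period look better than the “pre” period even though no intervention happened at all.

```python
# An unstable (drifting) process: value rises 0.5 per month, no intervention.
pre  = [10 + 0.5 * i for i in range(10)]        # months 0-9
post = [10 + 0.5 * i for i in range(10, 20)]    # months 10-19

def mean(xs):
    return sum(xs) / len(xs)

# A naive pre/post comparison reports a large "improvement"
print(mean(post) - mean(pre))  # → 5.0, caused entirely by the drift
```

A control chart on the same data would flag the instability first, before any pre/post comparison was attempted.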

Anyone can do economic control charts. They are self-teaching devices: experience increases skill in usage. Common sense and a copy of the formulae and rules are enough to get started. Access to an expert speeds the learning curve and avoids stumbling through common mistakes.

This all happens very fast. Economic control does not slow the business but speeds it. If a business were able to keep up with economic control it would be moving fast indeed. With a little practice, firms will find the economic control charts ready and waiting each morning. Of course they are only used on a few big things that make most money fastest.


* Statisticians use a code here: i.i.d. ~ independent and identically distributed




REFERENCES:

1. Shewhart, Walter A. Economic Control of Quality… Van Nostrand (1931)

Wednesday, April 3, 2013

04 . 02 . 13 | Healthcare Economic Control

Building on our previous post which introduced the principles of economic control, this set of charts shows the work that preceded the healthcare case described in Case Studies: Healthcare | Health at Home, Not Hospital.

The first of these shows how measurement error (meaning “noise”, not “mistake”) was initially unstable but was quickly fixed by removing non-applicable cases. The chart’s spike revealed this flaw in the tracking systems, and the flaw was then found throughout all the data (not just the spike). That stabilized measurement error (i.e. the data all then fell inside tighter tramlines). Calls for measurement perfection were advised against, since perfection would have been uneconomic (i.e. a severe drain on resources) and often impossible. This economic aspect is one of the most valuable features of economic control.

 

The second chart shows that the nurses (and the patients they cared for) in the largest of three simultaneous statistical designs are homogeneous in terms of chronic health events, similar to the retail stores in the first post. This assured a “level playing field” for the study that followed.





The next pair of charts is a hybrid. On the top chart, the gray areas are measurement error (i.e. noise) and the outer limits are the process (i.e. chronic events). The gray area offers a simple way to always know measurement error will not get in the way. It is clear the gray portion is not obscuring the chronic events month by month. The gray is about a quarter of the distance between the outer dotted lines. Since statisticians use squared (not linear) distances, only about (¼)² = 1/16 = 6¼% of the process is really obscured. A good rule of thumb here is 25% tops, with 10–15% preferred.

This is surprising, given the visual impression, so the square law clarifies.
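The square law is easy to verify numerically; a minimal sketch, with the one-quarter figure taken from the chart described above:

```python
# If measurement noise spans a fraction f of the distance between the outer
# limits, the share of the process it obscures is f squared, because
# statisticians combine spreads as variances (squared distances), not linearly.
f = 0.25            # gray band is a quarter of the limit-to-limit distance
obscured = f ** 2   # 1/16 = 6.25% of the process
print(f"{obscured:.2%}")  # → 6.25%
```

So a band that visually covers a quarter of the chart obscures only about a sixteenth of the process, which is why the visual impression misleads.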

The lower chart is also measurement error but looks at precision (i.e. how much measurement error varies). The top chart was measurement accuracy (i.e. how close to the true mark it gets and how well it discriminates process shifts).

 This simple hybrid method allows processes to be improved and all questions about measurement error answered (really pre-empted) in real time, in the months ahead.

This case produced about a third improvement (against experimental prediction of a quarter) in a 3-month study plus a couple of months to solve implementation problems. The implementation population was double the size of the random sample used in the study.