Our previous two posts introduced and explored economic control. The first, dated March 18th 2013, introduced the principle; the second, dated April 4th 2013, illustrated it in more detail using the healthcare industry. In this third and final part of the series, we’ll explore how economic control can serve any business and any industry.
Economic control charts find their own level, meaning they find the stable process (the “level playing field”) within the instabilities (spikes and the like). The “tramlines” reveal the inherent stable process. There are strict, objective mathematical rules for how this is done. Once set up, the limits (“tramlines”) are not changed until the process improves, as determined by the chart. New points are added each day, week or month without changing the limits. Much modern software encourages recalculating the limits every time a point is added, but this is wrong (like moving the goalposts).
The landmark text that introduced economic control in 1931 [1] is a brilliant 300+ page development drawing on physics, mathematics, statistics and economics. Suddenly (on page 304) appears the seemingly simple formula: process average ± 3 standard deviations. This became known as the control chart. More correctly, it is an economic control chart.
The important element of this deceptively simple formula (used in most industries for nearly 100 years now) is that the control limits are set economically, so that acting on the signals saves and makes the most money. It is often said that control charts have a false-alarm rate of 0.27%, but this is irrelevant: nowhere in the original text does that number appear. The whole point is that the limits are set economically. Further, the 0.27% is inexact in real processes, so it is a red herring at best.
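As a minimal sketch (plain Python, invented weekly data), the average ± 3 standard deviations recipe might look like this. Note that practitioners normally estimate sigma from moving ranges rather than the overall standard deviation; the overall standard deviation is used here only for brevity.

```python
import statistics

# Hypothetical baseline: 24 weekly measurements from a settled period.
baseline = [52, 48, 50, 53, 47, 51, 49, 50, 54, 46, 50, 52,
            48, 51, 49, 53, 47, 50, 52, 49, 51, 48, 50, 53]

centre = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = centre + 3 * sigma   # upper "tramline"
lcl = centre - 3 * sigma   # lower "tramline"

# New points are judged against the FIXED limits -- no recalculation.
for week, x in enumerate([51, 49, 63, 50], start=25):
    verdict = "investigate" if (x > ucl or x < lcl) else "leave alone"
    print(week, x, verdict)
```

Only the spike at week 27 (63) falls outside the tramlines and earns investigation; the other points are left alone, exactly as the chart's rules demand.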
A common first impression is that the 3-sigma limits are “too wide” and that a tighter standard is needed. In fact they are not wide; they are what they are. The error arises from missing the economic aspect.
It’s also often said that variation inside the limits is random. It is not: all fluctuations, large or small, are caused by something. The data within the limits do often follow random-looking patterns. So when someone asks, “What if that point outside the limits is just a false alarm?”, the answer is that there is no such thing. Since all variation has causes, a large variation is by definition worth something. The hand-wringing arises because people think it may be pure chance. It isn’t. The causes that conspired to create a large number may have fallen into a perfect storm by chance, but they are still real and there is money to be made. This is a little like serendipity: no-one minds that we stumble into breakthroughs serendipitously. The same applies here.
The way to see this is to think of variation inside the limits as having many unknown causes. These will be impossible, or uneconomic, to untangle one by one, so we use statistically designed experiments for that. Variation outside the limits will be economic to figure out and then fix, ignore (e.g. month effects) or bake in (if good). In other words, its causes are easily found and exploited for improvement.
Points outside the limits (and a few other defined patterns) signal that the process is unstable. Their causes are easy to find and remove (or bake in, if the spike is a good one), making the process stable (standardized).
We are not naïve enough to claim that every process can be made stable (standardized); outside plant work affected by weather is one example. In such cases we use standard workarounds. In general, though, economic control gives a method to assess and standardize processes and their measurement systems.
If an improvement has just been implemented and a single point then crosses the limit (in the improvement direction), it is a very big deal. This is where that crude, approximate rule of thumb of 0.27% comes in handy. If we are plotting data monthly, we would expect a point to cross a given limit by chance only about once every 62 years: 100/(0.27/2) ≈ 741 monthly points, and 741/12 ≈ 61.7 years. So this is like a 62-year flood. That information alone adds management fuel to the emerging improvement.
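The 62-year arithmetic can be checked in a couple of lines:

```python
# Rule-of-thumb "62-year flood" arithmetic for monthly charting.
one_sided_rate = 0.0027 / 2          # ~0.135% chance per point on one side
points_between = 1 / one_sided_rate  # ~741 plotted points per chance crossing
years = points_between / 12          # one point per month
print(round(years, 1))               # ~61.7 years
```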
There is a deeper aspect to economic control. All of statistics (in the main techniques statisticians call parametric) is based on distribution theory. Averages, standard deviations, significance tests, regression and so on all rest on it. Certain mathematical requirements must be met before any of these can be used. When we take courses in statistics we see those requirements but read them much like the small print in a legal contract. Most fundamental in that small print is that the process be stable. If it is not, distribution theory breaks down. So, for example, even a simple pre-post test of whether something we did improved the process will be wrong if we ignore instabilities. “Wrong” is not a strong enough word; a better one would be arbitrary. Economic control is the only way to adjust the analysis so that we find the correct answer. This is profound for competitive advantage in a business.
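A toy simulation (invented numbers, plain Python) illustrates why this matters: a drifting, unstable process, with nothing deliberately changed, will still “pass” a naive pre-post significance test.

```python
import random
import statistics

random.seed(0)

# Simulate an UNSTABLE process: a steady upward drift plus noise.
# No improvement has been made -- the process is simply drifting.
data = [10 + 0.1 * t + random.gauss(0, 1) for t in range(60)]
pre, post = data[:30], data[30:]

# Naive pre/post comparison: two-sample t-statistic with pooled SD.
m1, m2 = statistics.mean(pre), statistics.mean(post)
s1, s2 = statistics.stdev(pre), statistics.stdev(post)
n1, n2 = len(pre), len(post)
pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
t = (m2 - m1) / (pooled * (1 / n1 + 1 / n2) ** 0.5)

# A large t-statistic: the test "detects" an improvement
# that is nothing but drift in an unstable process.
print(round(t, 1))
```

The t-statistic comes out far beyond any conventional significance threshold, even though nothing was done to the process; a control chart on the same data would first flag the instability and stop the analysis.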
This surprising claim, that all of statistics* breaks down on unstable processes, appears in the literature under the heading of analytic statistics. It is not well known, even among qualified statisticians.
Industrial processes are almost always unstable, so statistics will not work unless the adjustments are made. The adjustments are simple, fast and follow rigorous rules that cannot be bent.
Anyone can do economic control charts. They are self-teaching devices: experience increases skill in usage. Common sense and a copy of the formulae and rules are enough to get started. Access to an expert shortens the learning curve and avoids common mistakes.
This all happens very fast. Economic control does not slow the business; it speeds it up. A business able to keep up with economic control would be moving fast indeed. With a little practice, firms will find the economic control charts ready and waiting each morning. Of course, they are used only on the few big things that make the most money fastest.
* Statisticians use a code here: i.i.d. = independent, identically distributed.
1. Shewhart, Walter A., Economic Control of Quality…, Van Nostrand (1931).