Sunday, November 10, 2013

11 . 11 . 13 | Efficient Orthogonal Design

The March 2013 government publication Efficient Orthogonal Design (AHRQ 13-0024-EF), authored by Mathematica under the direction of Dr. Randy Brown, is now gaining attention from health plans and the healthcare industry as a whole.

The research drew in part on informal working sessions coming out of the multi-year collaboration between Nobi’s Kieron Dey and Dr. Brown. The report, which also cites published papers by KK Moore, is important for its introduction of large orthogonal designs to test many interventions simultaneously in healthcare (known within the industry as comparative effectiveness studies). The paper highlights an example testing about a dozen interventions in a disability study that worked to reduce hospitalizations in a population with both physical and behavioral disabilities.

One of the publication’s strengths is in adapting the original theory into a practical guide that any organization can adopt using analytical expertise already on staff.

See More:
- The Agency for Healthcare Research and Quality
- AHRQ Efficient Orthogonal Design Article Search

Monday, April 15, 2013

04 . 15 . 13 | Broad Based Economic Control

Our previous two posts introduced and explored economic control. The first post, dated March 18th, 2013, introduced the principle. The second, dated April 2nd, 2013, illustrated it in more detail using the healthcare industry. In this third and final part of the series, we’ll explore how economic control can serve any business, and any industry.

Economic control charts find their own level, meaning they find the stable process (the “level playing field”) within the instabilities (spikes and the like). The “tramlines” reveal the inherent stable process. There are strict, objective mathematical rules for how this is done. Once set up, no changes are made to the limits (“tramlines”) until the process improves, as determined by the chart. New points are added each day, week or month without changing the limits. Modern software encourages recalculating the limits each time, but this is wrong (like moving the goalposts around).

The landmark text that introduced economic control in 1931, in a brilliant 300+ page development, used physics, mathematics, statistics and economics. Suddenly (on page 304) appears the seemingly simple formula: process average ± 3 standard deviations. This became known as the control chart. More correctly, it’s an economic control chart [1].
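As a hedged illustration only (the data below are invented, and this is a sketch rather than anyone’s production method), here is minimal Python for that formula, with the tramlines set once from baseline data and then left alone as new points arrive:

```python
import statistics

def economic_limits(baseline):
    """Set the tramlines once: process average +/- 3 standard deviations."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def judge(value, limits):
    """New points are judged against the frozen limits; the limits do not
    move until the chart itself shows the process has improved."""
    lcl, ucl = limits
    return "stable" if lcl <= value <= ucl else "investigate"

baseline = [102, 98, 101, 97, 103, 99, 100, 104, 96, 101, 99, 102]  # invented weekly data
limits = economic_limits(baseline)
print(judge(103, limits))  # inside the tramlines
print(judge(115, limits))  # outside: a cause worth finding
```

The same two functions serve daily, weekly or monthly plotting; only an improvement confirmed by the chart itself justifies recomputing the limits.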

The important element of this deceptively simple formula (used in most industries for nearly 100 years now) is that the control limits are set economically, so that management actions will save and make the most money. It is often said that control charts have a false-alarm rate of 0.27%, but this is irrelevant; nowhere in the original text does that number appear. The whole point is that the limits are set economically. Further, the 0.27% is inexact in real processes, so it’s a red herring at best.

First impressions of the 3-sigma limits are often that they’re really wide and we should have a tighter standard. In fact they’re not “wide”. They are what they are. This error arises from missing the economic aspect.

It’s also often said that variations inside limits are random. They are not: all fluctuations, large or small, are caused by something. The data within limits do, however, often follow random patterns. So when someone asks, “What if that point outside limits is just a false alarm?”, the answer is that there is no such thing. Since all variations have causes, a large one is by definition worth something. The hand-wringing comes from thinking it may be pure chance. It isn’t. The causes that conspired to create a large number may have fallen into a perfect storm by chance, but they’re still real and there’s money to be made. This is a little like serendipity: no one minds that we stumble into breakthroughs serendipitously. Same here.

The way to see this is to think of variations inside limits as having many unknown causes. These will be impossible or expensive to figure out one by one, so we use statistical design to do that. Variations outside limits will be economic to figure out and then fix, ignore (e.g. month effects) or bake in (if good). In other words, their causes are easily found and exploited for improvement.

Points outside limits (and/or a few other patterns) render the process unstable. They are easy to figure out and then remove (or bake in, if it’s a good spike), making the process stable (standardized).

We’re not naïve here: not all processes can be made stable (standardized), for example where weather affects outside plant. Then we use standard workarounds. In general, though, economic control gives a method to assess and standardize processes and their measurement systems.

If an improvement has just been implemented and a single point then crosses the limit (in the improvement direction), it is a very big deal. This is where that crude, approximate 0.27% rule of thumb comes in handy. If we’re plotting data monthly, we’d only expect to cross that limit by chance roughly once every 62 years, so this is like a 62-year flood ((100/(0.27/2))/12 ≈ 61.7 years). That information alone adds management fuel to the emerging improvement.
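For readers who want to check that arithmetic, here it is spelled out; the 0.27% stays the crude rule of thumb described above:

```python
# Crude "62-year flood" arithmetic behind the rule of thumb
two_sided = 0.27 / 100               # chance a stable process crosses either limit
one_sided = two_sided / 2            # only the improvement direction counts here
points_per_crossing = 1 / one_sided  # ~741 plotted points per chance crossing
years = points_per_crossing / 12     # plotting monthly
print(round(points_per_crossing), round(years, 1))  # 741 61.7
```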

There is a deeper aspect to economic control. All of statistics (in the main techniques statisticians call parametric) is based on distribution theory. Averages, standard deviations, significance tests, regression and so on are all based on it. Certain mathematical requirements precede the use of all these things. When we take courses in statistics we see those requirements, but we read them much like the small print in a legal contract. Most fundamental in that small print is that the process be stable. If it is not, distribution theory breaks down. So, for example, even a simple pre-post test to see whether something we did improved the process will be wrong if we ignore instabilities. “Wrong” is not a strong enough word; “arbitrary” is better. Economic control is the only way to adjust so that we find the correct answer. This is profound for competitive advantage in a business.

This surprising point, that all of statistics* breaks down on unstable processes, appears in the literature under the heading of analytic statistics. It is not well known even among qualified statisticians.

Industrial processes are almost always unstable, so statistics will not work unless the adjustments are made. The adjustments are simple, fast and follow rigorous rules that cannot be bent.
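A hedged sketch of how this bites in practice (invented data; scipy is assumed available): a pre-post significance test run across an unnoticed drift declares an “improvement” the intervention never caused.

```python
import random
from scipy.stats import ttest_ind  # assumes scipy is installed

random.seed(7)
# 48 months with a slow upward drift: an instability, not our intervention
series = [100 + 0.3 * t + random.gauss(0, 3) for t in range(48)]
pre, post = series[:24], series[24:]  # intervention supposedly at month 24

t_stat, p_value = ttest_ind(pre, post)
print(f"p = {p_value:.1e}")  # tiny p-value: "significant", yet the cause is the drift
# An economic control chart on the same series would flag the drift first,
# before any pre-post comparison was attempted.
```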

Anyone can do economic control charts. They are self-teaching devices: experience increases skill in usage. Common sense and a copy of the formulae/rules are enough to get started. Access to an expert speeds the learning curve and avoids stumbling through common mistakes.

This all happens very fast. Economic control does not slow the business but speeds it; a business able to keep up with economic control would be moving fast indeed. With a little practice, firms will find the economic control charts ready and waiting each morning. Of course, they are only used on the few big things that make the most money fastest.


* Statisticians use a code here: i.i.d. ~ independent, identically distributed

REFERENCES:

1. Shewhart, Walter A. Economic Control of Quality of Manufactured Product. Van Nostrand (1931)

Wednesday, April 3, 2013

04 . 02 . 13 | Healthcare Economic Control

Building on our previous post, which introduced the principles of economic control, this set of charts shows the work that preceded the healthcare case described in Case Studies: Healthcare | Health at Home, Not Hospital.

The first of these shows how measurement error (meaning “noise”, not “mistake”) was initially unstable but was quickly fixed by removing non-applicable cases. The chart’s spike revealed this flaw in the tracking systems, and the same flaw was then found throughout all the data (not just the spike). That stabilized measurement error (i.e. the data then all fell inside tighter tramlines). Calls for measurement perfection were advised against, since perfection would have been uneconomic (a severe drain on resources) and is often impossible. This economic aspect is one of the most valuable features of economic control.
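As a minimal sketch of the mechanics (the readings and the cutoff rule below are invented for illustration): removing the non-applicable cases tightens the tramlines, which is what stabilizing measurement error means here.

```python
import statistics

def tramlines(data):
    m, s = statistics.mean(data), statistics.stdev(data)
    return m - 3 * s, m + 3 * s

readings = [2.1, 1.9, 2.0, 2.2, 1.8, 9.5, 2.0, 2.1, 1.9, 2.0]  # 9.5: a non-applicable case
print(tramlines(readings))                   # wide limits: unstable measurement error
applicable = [r for r in readings if r < 5]  # the fix, applied throughout the data
print(tramlines(applicable))                 # tighter limits: stabilized
```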

 

The second chart shows that the nurses (and the patients they cared for) in the largest of three simultaneous statistical designs are homogeneous in terms of chronic health events, much like the retail stores in the first post. This assured a “level playing field” for the study that followed.

The next pair of charts is a hybrid. On the top chart, the gray areas are measurement error (i.e. noise) and the outer limits are the process (i.e. chronic events). The gray area offers a simple way to always know that measurement error will not get in the way. It is clear the gray portion is not obscuring the chronic events month by month. The gray is about a quarter of the distance between the outer dotted lines. Since statisticians use squared (not linear) distances, only about (1/4)² = 1/16 = 6¼% of the process is really obscured. A good rule of thumb is 25% at most, with 10-15% preferred.

This is surprising, given the visual impression, so the square law clarifies.
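The square law is a one-line check (the quarter is read off the chart, so treat it as approximate):

```python
linear = 0.25            # gray band as a fraction of the distance between outer limits
obscured = linear ** 2   # statisticians work in squared (variance-like) distances
print(f"{obscured:.2%} of the process is really obscured")  # 6.25%
```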

The lower chart is also measurement error but looks at precision (i.e. how much measurement error varies). The top chart was measurement accuracy (i.e. how close to the true mark it gets and how well it discriminates process shifts).

This simple hybrid method allows processes to be improved and all questions about measurement error answered (really, pre-empted) in real time in the months ahead.

This case produced about a one-third improvement (against an experimental prediction of one-quarter) in a 3-month study, plus a couple of months to solve implementation problems. The implementation population was double the size of the random sample used in the study.

Tuesday, March 19, 2013

03 . 18 . 13 | Principles of Economic Control

Nobi’s large statistical designs (to test which of 20+ changes improve a business quickly, and quantify by how much) are well understood. Less well known is the novel way we integrate economic control.

Economic control charts simply plot the measurement we’re improving, then place limits (which look like tramlines) showing the extremes within which the process will normally stay. Some examples will speak best.


The chart above ensured a “level playing field” when randomizing retail stores into a statistical design testing 11 changes in stores and marketing, which brought a 9.8% “comps” increase from 2 of the 11 (see also homepage Case Studies: Retail Sales | Innovation During a Recession).
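Here is a hedged sketch of that screening step (store names and comps figures are invented): chart the stores first, and only the homogeneous ones enter the randomization.

```python
import random
import statistics

random.seed(3)
comps = {f"store_{i:02d}": random.gauss(3.0, 1.2) for i in range(40)}  # invented % comps
comps["store_99"] = 12.0  # one store known to be running a local promotion

m = statistics.mean(comps.values())
s = statistics.stdev(comps.values())
lcl, ucl = m - 3 * s, m + 3 * s  # the tramlines

level_field = [name for name, c in comps.items() if lcl <= c <= ucl]
outliers = sorted(set(comps) - set(level_field))
print(f"{len(level_field)} homogeneous stores; investigate first: {outliers}")

random.shuffle(level_field)  # only a level playing field enters the design
```

Any flagged store is investigated (and its cause fixed, ignored or baked in) before the design proceeds.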

The next chart shows an inventory reduction project that started when it did.

Usually there are gaps to close in initial implementation, which typically take 1-2 meetings using the scientific method and economic control. Although little or no adherence monitoring is needed in general, it is used more heavily when closing implementation gaps, in order to know what’s been going on at all times and places. A small, random sample of transactions is inspected for the changes being implemented, as sketched below. The work is completed with people in the trenches. This is quite subtle in the details but brings good business advantage.
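As a small, hedged illustration of that sampling step (the transaction identifiers and sample size are invented):

```python
import random

random.seed(0)
transactions = [f"txn_{i:05d}" for i in range(12000)]  # all transactions in the period
audit = random.sample(transactions, k=30)              # small random sample, cheap to inspect
print(f"inspect {len(audit)} of {len(transactions)}:", audit[:5])
# Each sampled transaction is checked, with the people in the trenches,
# for whether the implemented changes were actually followed.
```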

Economic control is essential when using statistical designs for rapid improvement. It’s simple to use, though deceptively clever. It is important to managers in guiding economic decisions, and to scientists in providing objectivity in unstable business data (where the rules of statistics break down).