Friday, October 24, 2014

10 . 24 . 14 | Dr. Little Shares His Take on Dey’s Book

Dr. Kevin Little, Principal and Founder of Informing Ecological Design, LLC, recently penned a review of Kieron Dey’s Competitive Innovation and Improvement: Statistical Design and Control. We had expected the book to be controversial among academics and researchers, since it contains no mathematical notation and also explains common errors in experimental work caused by inappropriate use of mathematical models in both design and analysis. So we were pleasantly surprised to find an independent review by Dr. Kevin Little (PhD in statistics from the University of Wisconsin–Madison), an experienced consultant with unusually strong technical skills in this field (as well as many others).

Dr. Little provides a balanced review, rightly noting that the statistical control sections contain nothing new, and singling out the parallel use of statistical design and control in unusually large and diverse business settings as especially innovative.

The review also notes that the case in Chapter 1 alone is worth the price of the book. This is good insight, since the approach frees healthcare of the widely perceived need to randomize patients, allowing improvement in live care/disease-management operations. It also shows why measurement error is never an issue in problems of this type (unless "cleaned up"!). The case was an early one in a set of about a dozen similar cases (some of which were of better design or larger improvement) and was chosen in part because it shows how real cases look, with imperfections fixed along the way. The dozen cases met fierce resistance from statistical colleagues, which proved important to sound design and to overcoming similar unfounded concerns in the future. Chapter 1 conveys the sparks caused by pulling the work uphill in this way as a feature of good science.

View the full text of Dr. Little's review on his blog: 
http://iecodesign.com/index.php/our-blog/208-statistical-design-and-control-new-book-by-kieron-dey

Thursday, April 3, 2014

04 . 03 . 14 | New Book: “Competitive Innovation and Improvement”

When asked why the book was written (and a little about what’s between the covers), Kieron Dey said:

“I first got the idea of separating tiny signals from large amounts of noise from time spent in radar design, and wondered why similar methods were not much used in industry to solve problems. Where statistical design was used, it tended to be on a small scale and not much in processes involving lots of people.

The idea to combine statistical design and control came from a book on survey sampling. This fusion was controversial for years among professionals, for no reason. Everything used is in the literature. 

“Intent-to-treat” is also used throughout (which means, roughly, allowing an element of laissez-faire, to get real-world results, not forced ones that don’t hold longer term).

Simultaneous design (where more than one design runs at once, overlaid) was the last addition: the theory was tricky and finally fell into place in 2011, and the method was added in 2012. The simultaneous designs have been important in cross-channel optimization in retail, and in complex healthcare improvements. What had seemed a weakness (that interactions across designs might be a problem) in fact hid a large strength, which is shown in Chapter 8 with real cases. The method had to be simplified so that users could apply it easily.

Finally, the scientific method is used throughout (which folds nicely into comparative effectiveness research, DMAIC, PDCA, etc.), and the book explains what (and how simple) it is. The scientific method allows the same approach to be used for existing and new processes: hence the “innovation and improvement” in the title. Innovation becomes less elusive in this way: it can be designed, rather than arriving only by inspiration. Also, getting back to pure, simple science means using the right brain (creativity) as well as the left (analytic), so more people can contribute, valuably for the enterprise (which can be business, industry, research or government).

There is no mathematical notation, so that anyone can read and use these well-established methods. Scientists and researchers will find Chapter 8 challenging on the scientific method and randomization, so there’s something for everyone. Mathematics is used a lot behind the scenes of the book, but the real world is used more: to understand how businesses work and to make them work better.

There are about 20 exercises peppered throughout the book, to accelerate what would otherwise be learned through field experience and to get the reader started on real competitive business problems.

Surprisingly, it turns out to be a management tool, not one that technical people alone can accomplish; it’s not top-down though, and the book explains why.”

The book is available for pre-order on Amazon at: goo.gl/9QSVMB

Sunday, November 10, 2013

11 . 10 . 13 | Efficient Orthogonal Design

The March 2013 government publication Efficient Orthogonal Design (AHRQ 13-0024-EF), authored by Mathematica under the direction of Dr. Randy Brown, is now gaining attention from health plans and the healthcare industry as a whole.

The research drew in part on informal working sessions coming out of the multi-year collaboration between Nobi’s Kieron Dey and Dr. Brown. Also citing published papers by KK Moore, the report is important for its introduction of large orthogonal designs to test many interventions simultaneously in healthcare (known within the industry as comparative effectiveness studies). The paper highlights an example testing about a dozen interventions in a disability study that worked to reduce hospitalizations within a population with both physical and behavioral disabilities.

One of the publication’s strengths is in adapting the original theory into a practical guide that any organization can adopt using analytical expertise already on staff.
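
To make the core idea concrete, here is a minimal sketch in Python (our own illustration, not taken from the AHRQ report; the 16-run size is arbitrary) of building an orthogonal two-level design from a Hadamard matrix, so that many interventions can be tested at once and their effects estimated independently:

```python
import numpy as np
from scipy.linalg import hadamard

# Build a 16-run orthogonal design able to carry up to 15 two-level
# interventions simultaneously. scipy's hadamard() covers power-of-2
# sizes; other run counts need other constructions.
runs = 16
H = hadamard(runs)      # 16 x 16 matrix of +1/-1 with mutually orthogonal columns
design = H[:, 1:]       # drop the all-ones column, leaving 15 usable factor columns

# Orthogonality check: distinct columns have inner product zero, so each
# intervention's effect can be estimated free of the others.
assert np.array_equal(design.T @ design, runs * np.eye(15, dtype=int))

# Each row is one test cell: +1 = intervention applied, -1 = not applied.
print(design[:3])
```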

See More:
- The Agency for Healthcare Research and Quality
- AHRQ Efficient Orthogonal Design Article Search

Monday, April 15, 2013

04 . 15 . 13 | Broad Based Economic Control

Our previous two posts introduced and explored economic control. The first post, dated March 19th, 2013, introduced the principle. The second post, dated April 3rd, 2013, illustrated it in more detail using the healthcare industry. In this third and final part of the series, we’ll explore how economic control can serve any business, and any industry.

Economic control charts find their own level, meaning they find the stable process (“level playing field”) within the instabilities (spikes etc.). The “tramlines” reveal the inherent stable process. There are strict, objective rules in the mathematics for how this is done. Once set up, no changes are made to the limits (“tramlines”) until the process improves, as determined by the chart. New points are added each day, week or month, without changing the limits. Modern software encourages the limits to be recalculated each time, but this is wrong (like moving the goalposts around).

The landmark text that introduced economic control in 1931, in a brilliant 300+ page development, used physics, mathematics, statistics and economics. Suddenly (on page 304) appears the seemingly simple formula: process average ± 3 standard deviations. This became known as the control chart. More correctly, it’s an economic control chart [1].
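
As a minimal sketch of that formula (with made-up numbers, and with limits frozen from a baseline period, as described above; note that practitioners often estimate sigma from the average moving range rather than the plain standard deviation used here):

```python
import numpy as np

# Shewhart's formula: process average +/- 3 standard deviations,
# computed once from a baseline and then left alone.
baseline = np.array([50., 52, 49, 51, 50, 48, 53, 50, 49, 51])
center = baseline.mean()
sigma = baseline.std(ddof=1)
lcl, ucl = center - 3 * sigma, center + 3 * sigma   # fixed "tramlines"

# New points are added week by week WITHOUT recalculating the limits.
new_points = np.array([52., 47, 68, 50])
signals = (new_points < lcl) | (new_points > ucl)   # worth investigating
print(center, lcl, ucl, new_points[signals])        # flags the 68
```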

The important element of this deceptively simple formula (used in most industries for nearly 100 years now) is that the control limits are set economically (so that management actions will save and make the most money). It is often said that control charts have a false-alarm rate of 0.27%, but this is irrelevant. Nowhere in the original text does that number appear. The whole point is that the limits are set economically. Further, the 0.27% is inexact in real processes, so it’s a red herring at best.

First impressions of the 3-sigma limits are often that they’re really wide and we should have a tighter standard. In fact they’re not “wide”. They are what they are. This error arises from missing the economic aspect.

It’s also often said that variations inside limits are random. They are not. All fluctuations (large or small) are caused by something(s). Now, the data within limits do often follow random patterns. So when someone asks, “What if that point outside limits is just a false alarm?”, the answer is that there is no such thing. Since all variations have causes, a large one is by definition worth something. The hand-wringing comes because the point is thought to be pure chance. It isn’t. The causes that conspired to create a large number may have fallen into a perfect storm by chance, but they’re still real and there’s money to be made. This is a little like serendipity. No one minds that we stumble into breakthroughs serendipitously. Same here.

The way to see this is to think of variations inside limits as having many causes, unknown. These will be impossible or expensive to figure out, so we use statistical design to do that. Variations outside limits will be economic to figure out and then fix, ignore (e.g. month effects) or bake in (if good). In other words, their cause(s) are easily found and exploited for improvement.

Points outside limits (and/or a few other patterns) show the process is unstable. Their causes are easy to figure out, then remove (or bake in, if it’s a good spike), to make the process stable (standardized).

Here we’re not naïve in thinking all processes can be made stable (standardized); weather affecting outside plant is one exception. Then we use standard workarounds. In general though, economic control gives a method to assess and standardize processes and their measurement systems.

If an improvement has just been implemented and a single point then crosses the limit (in the improvement direction), that is a very big deal. This is where that crude, approximate rule of thumb of 0.27% comes in handy. If we’re plotting data monthly then we’d expect to cross one limit by chance roughly once every 62 years. So this is like a 62-year flood. (100/(0.27/2)/12 ≈ 61.7 years.) That information alone adds management fuel to the emerging improvement.
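
A quick check of that arithmetic, under the same rough normality assumption the rule of thumb relies on:

```python
from scipy.stats import norm

p_one_side = 1 - norm.cdf(3)   # ~0.00135: chance of falling beyond +3 sigma
months = 1 / p_one_side        # ~741 monthly points per chance crossing
print(months / 12)             # ~61.7 years: the "62-year flood"
```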

There is a deeper aspect to economic control. All of statistics (in the main techniques statisticians call parametric) is based on distribution theory. Averages, standard deviations, significance tests, regression etc. all are based on this. Certain mathematical requirements precede the use of all these things. When we take courses in statistics we see them, but read them much like the small print in a legal contract. Most fundamental in that small print is that the process be stable. If it is not, distribution theory breaks down. So, for example, even a simple pre-post test to see if something we did improved the process will be wrong if we ignore instabilities. “Wrong” is not a strong enough word; a better one would be arbitrary. Economic control is the only way to adjust so that we find the correct answer. This is profound for competitive advantage in a business.
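
To illustrate with a hypothetical simulation (not a real case): a gentle drift with no intervention at all can make a naive pre-post significance test declare an “improvement”:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
months = np.arange(48)
# Unstable process: a slow drift, with nothing done to it at all.
x = 50 + 0.05 * months + rng.normal(0, 1, size=48)

pre, post = x[:24], x[24:]    # pretend a change was made at month 24
t, p = ttest_ind(pre, post)
print(p)                      # typically "significant": the drift masquerades
                              # as an effect of the (nonexistent) change
```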

This surprising claim, that all of statistics* breaks down on unstable processes, is in the literature under the heading of analytic statistics. It is not well known even among qualified statisticians.

Industrial processes are almost always unstable, so statistics will not work unless the adjustments are made. The adjustments are simple, fast and follow rigorous rules that cannot be bent.

Anyone can do economic control charts. They are self-teaching devices where experience increases skill in usage. Common sense and a copy of the formulae/rules are enough to get started. Access to an expert speeds this learning curve and avoids stumbling through common mistakes.

This all happens very fast. Economic control does not slow the business but speeds it. If a business were able to keep up with economic control it would be moving fast indeed. With a little practice, firms will find the economic control charts ready and waiting each morning. Of course they are only used on a few big things that make the most money fastest.


* Statisticians use a shorthand here: i.i.d., meaning independent, identically distributed


REFERENCES:

1. Shewhart, Walter A. Economic Control of Quality of Manufactured Product. Van Nostrand (1931)

Wednesday, April 3, 2013

04 . 03 . 13 | Healthcare Economic Control

Building on our previous post which introduced the principles of economic control, this set of charts shows the work that preceded the healthcare case described in Case Studies: Healthcare | Health at Home, Not Hospital.

The first of these shows how measurement error (meaning “noise”, not “mistake”) was initially unstable but was quickly fixed by removing non-applicable cases. The chart’s spike revealed this flaw in the tracking systems, and the flaw was then found throughout the data (not just at the spike). That stabilized measurement error (i.e. the data all then fell inside tighter tramlines). Calls for measurement perfection were advised against, since perfection would have been uneconomic (i.e. a severe drain on resources) and often impossible. This economic aspect is one of the most valuable features of economic control.


The second chart shows that the nurses (and the patients they cared for) in the largest of 3 simultaneous statistical designs are homogeneous in terms of chronic health events, similar to the retail stores in the first post. This assured a “level playing field” for the study that followed.

The next pair of charts is a hybrid. On the top chart, the gray areas are measurement error (i.e. noise) and the outer limits are the process (i.e. chronic events). This gray area offers a simple way to always know measurement error will not get in the way. It is clear the gray portion is not obscuring the chronic events month by month. The gray spans about a quarter of the distance between the outer dotted lines. Since statisticians use squared (not linear) distances, only about (1/4)² = 1/16 ≈ 6¼% of the process is really obscured. A good rule of thumb here is 25% at most, with 10-15% preferred.

This is surprising, given the visual impression; the square law clarifies it.
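
The square law in a few lines (the 0.25 ratio is the illustrative figure from the chart above):

```python
# Squared (not linear) distances are what add: var_total = var_process + var_meas.
sigma_ratio = 0.25                  # gray band spans ~ a quarter of the chart width
share_obscured = sigma_ratio ** 2   # so only (1/4)^2 = 1/16 of the variance is noise
print(f"{share_obscured:.1%}")      # 6.2%, within the 25%-at-most rule of thumb
```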

The lower chart is also measurement error but looks at precision (i.e. how much measurement error varies). The top chart was measurement accuracy (i.e. how close to the true mark it gets and how well it discriminates process shifts).

This simple hybrid method allows processes to be improved and all questions about measurement error answered (really, pre-empted) in real time, in the months ahead.

This case produced about a one-third improvement (against an experimental prediction of one quarter) in a 3-month study, plus a couple of months to solve implementation problems. The implementation population was double the size of the random sample used in the study.

Tuesday, March 19, 2013

03 . 19 . 13 | Principles of Economic Control

Nobi’s large statistical designs (to test which of 20+ changes improve a business quickly, and quantify by how much) are well understood. Less well known is the novel way we integrate economic control.

Economic control charts simply plot the measurement we’re improving, then place limits (that look like tramlines) showing the extremes within which the process will normally stay. Some examples will speak best.


The chart above ensured a “level playing field” when randomizing retail stores to a statistical design for testing 11 changes in stores and marketing, bringing a 9.8% “comps” increase from 2 of the 11 (see also homepage Case Studies: Retail Sales | Innovation During a Recession).
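
A minimal sketch of that randomization step (store IDs, counts and the 12-cell layout are invented for illustration; the actual design and store list were of course specific to the client):

```python
import random

stores = [f"S{i:03d}" for i in range(1, 49)]   # hypothetical: 48 comparable stores
random.seed(7)                                  # fixed seed for a reproducible draw
random.shuffle(stores)

cells = 12                                      # e.g. the rows of a 12-run design
assignment = {c: stores[c::cells] for c in range(cells)}  # 4 stores per test cell
print(assignment[0])
```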

The next chart shows an inventory reduction project and pinpoints when it started to take effect.



Usually there are gaps to close in initial implementation, which typically takes 1-2 meetings using the scientific method and economic control. Although little or no adherence monitoring is needed in general, it is used more heavily to close implementation gaps, so as to know what’s been going on at all times and places. A small, random sample of transactions is inspected for the changes being implemented. The work is completed with people in the trenches. This is quite subtle in its details but brings good business advantage.

Economic control is essential when using statistical designs for rapid improvement. It’s simple to use, though deceptively clever. It is important to managers in guiding economic decisions, and to scientists in providing objectivity in unstable business data (where the rules of statistics break down).

Monday, November 26, 2012

11 . 26 . 12 | Continued Push Into Healthcare

2012 cases in healthcare numbered roughly a dozen in:
  • Reducing hospitalizations
  • Reducing re-admits
  • Reducing exacerbations for disabled populations
  • Increasing engagement in CM/DM and in wellness programs 
  • Improving treatments
  • Streamlining utilization management
  • Predictive modeling for CM/DM selection
In all these studies, the device Nobi pioneered of cluster randomizing (e.g. by nurses, not patients) made the findings easy, fast and pure. This has remained controversial (without reason), but with more researchers adopting it, the approach is inching toward mainstream acceptance.
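
A minimal sketch of the idea (nurse and patient IDs invented for illustration): the nurses, not the patients, are randomized, and every patient simply inherits their nurse’s arm:

```python
import random

nurse_patients = {                     # invented data: patients grouped by nurse
    "nurse_01": ["p01", "p02", "p03"],
    "nurse_02": ["p04", "p05"],
    "nurse_03": ["p06", "p07", "p08"],
    "nurse_04": ["p09", "p10"],
}

random.seed(42)
nurses = list(nurse_patients)
random.shuffle(nurses)                 # randomize the clusters, not the patients
half = len(nurses) // 2
arm = {n: ("intervention" if i < half else "control")
       for i, n in enumerate(nurses)}

# Each patient inherits the assignment of their nurse (the cluster).
patient_arm = {p: arm[n] for n, ps in nurse_patients.items() for p in ps}
print(patient_arm)
```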

In most cases, widespread acceptance has required backing out precursors (such as HCC score) analytically, to show the findings do not change (and instead strengthen). No surprise here, since the theory has been around since the 1920s but remains notoriously hard to grasp. These practical exercises, showing users in their own language, have been well received.