First Thoughts
  By Dan Gilmore - Editor-in-Chief  
  June 25, 2009  
  Lessons from Supply Chain Disasters  

A number of readers emailed me after we published our updated list on The Top Supply Chain Disasters of All Time a few weeks ago and asked what lessons a review of that list might hold.


So, this week I thought I would offer some thoughts on “disaster avoidance/risk management,” based both on our list and on the dozens of other challenged projects or initiatives that I have come to know, one way or another, over my career.


When you look at the disasters on our list, it is actually astounding the range of supply chain problems that were involved. Certainly there were several major technology system failures (Foxmeyer, adidas), but we also had outsourcing snafus (Boeing), global sourcing mistakes (Isotoner, Mattel), poor forecasting (Cisco, Apple, Toys"R"Us), an engineering disaster (Denver Airport baggage handling system), supply chain network optimization challenges (Loblaws), a wild business strategy (Webvan) and several other categories.


In fact, as I noted in the original column, it is striking that all of our most recent disasters on the list had little or nothing to do with technology problems. They were all problems resulting from failures of strategy or execution. Technology meltdowns are simply much less likely today.

That noted, below are some thoughts on how to minimize the chance of a severely challenged supply chain project or initiative.


“Big Bang” go-lives are risky business: Several of the disasters on our list were related to “big bang” technology roll-outs in which several systems (ERP, WMS, planning, for example) all went live at once. The risks of this approach are well documented, and given that, I'm amazed how often I still talk to companies that take this approach. I must admit that most of them seem to make it through with some battle scars, but generally OK in the end.


The main issue, of course, is that companies don’t want to do the integration twice, such as first integrating an old existing warehouse system with a new ERP and then later integrating a new WMS that was planned from the beginning. Still, big bang is something I would be very leery of, and if I had to do it, would make sure I had some outside help with successful experience leading a similar endeavor elsewhere.


Being a pioneer often leads to arrows in the back: This is a tough one for me, because if it wasn’t for pioneers, we wouldn’t have any progress, would we? Still, whether it’s a piece of software or a new physical system of some kind, there is no question that being a guinea pig increases the odds of failure dramatically.


Step one is understanding whether or not you are that guinea pig, which, in software at least, isn’t always easy. A long-time software executive once told me: “Every new version of software has a ‘beta’ customer [the first company to implement the software]. The question is whether they know it or not.”


I think being a pioneer can be just fine in some cases, as long as everyone understands the situation and the risk being taken, an appropriate internal approach is adopted (plan on the thing not working at first, whatever it is), and you structure your relationship with any vendor involved appropriately (you are well compensated for the risk you are incurring, and for the benefits the vendor gains from your success).


Do not ignore early warning signs: In many, if not most, of the disasters we listed, as well as in my own experience, a post mortem would usually show there were dozens of indications of emerging problems. However, these warnings were ignored or minimized, usually because someone didn’t want to fess up that things were not going as promised. The message here should really be to executives: do not create an environment in which project leaders or others are reluctant to speak frankly about issues or concerns. Often, a major disaster at the end can be avoided by accepting small slippages in schedule or budget.


Avoid hard cut-offs/transitions: Many disasters, large and small, have been caused by a hard deadline for a project, such as a new system that has to be working for the peak season, or any of a number of other examples. These hard deadlines, especially if the project schedule is tight to meet them, are perhaps the largest contributor to the issue above about ignoring warning signs. That’s because the schedule comes to dominate all the thinking, and creates a sense of urgency that can push aside our common sense and pervert the proper focus on actual project success.


Get some outside perspective: Any of us can get in too deep on a project or strategy to have a truly objective view of where things stand or how they are going. I think most of us would agree that an outside perspective can be hugely valuable in avoiding risk and identifying focus areas.


I would highly recommend finding an outsider to use as a sounding board at the beginning of, and occasionally throughout, a project or strategic initiative. That could be a consultant, though they will likely be angling for some work as part of the deal. You might instead consider an academic, or maybe even better, a supply chain professional at a non-competitive company. You could agree to do this on a reciprocal basis over time, and I am sure both of you would avoid a lot of mistakes as a result.


Beware the ROI trap: Many projects have gone sour because they had trouble meeting the company’s ROI requirements. So, to make the ROI, key elements for success are whittled away: the number of people devoted to the project, projections about when the benefits will really be received (leading to schedule acceleration), investment in training and change management, etc. The temptation to do this can be very powerful and, in my opinion, the best medicine is to vet this scenario, and any changes in resources, in a very open and transparent fashion across the team.
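The arithmetic behind this trap is simple to sketch. With purely hypothetical numbers (nothing here comes from any real project), trimming the cost side makes the projected ROI look better on paper, even though the cuts often erode the very benefits the projection depends on:

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment: net benefit as a fraction of cost."""
    return (total_benefit - total_cost) / total_cost

# Original plan: $1.5M in benefits against $1.0M in cost.
full_plan = roi(total_benefit=1_500_000, total_cost=1_000_000)  # 0.50, i.e. 50%

# Cutting $200K of training and change-management spend "improves"
# the paper ROI, because the benefit estimate is left untouched.
trimmed = roi(total_benefit=1_500_000, total_cost=800_000)      # 0.875

# In practice, the cuts reduce adoption and the benefits actually
# realized, and the real ROI lands below the original plan.
degraded = roi(total_benefit=1_000_000, total_cost=800_000)     # 0.25
```

The point of the sketch is that the benefit estimate rarely gets re-examined when costs are cut, which is exactly why the trimmed projection looks so attractive.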


Be brutally honest about your skill sets: Boeing’s plan to radically redesign the way it builds airplanes for the 787 Dreamliner, through widespread outsourcing of major components, was likely the right one. However, it seems clear now that Boeing simply lacked the experience and skills to make this work nearly from scratch on such a massive scale. We all understand the value of experience; why do we so often forget that lesson? Boeing should have started on a smaller scale.


Limit the number of moving parts: I know from my own experience that there is real danger in having too many “moving parts” in a project or initiative. Rarely do most companies even clearly identify all those moving parts. From the outside, for example, it appears that Loblaw's execution issues with its new network design stemmed largely from too many moving parts that had to be synchronized to make it all work. Document the moving parts and variables, and if there are too many, scale or sequence the project in a new way.


I could go on, but these are the top of the list for me. I’d love your thoughts on keys to avoiding supply chain disasters large and small.


What would you add to our list of lessons for avoiding supply chain disasters? Which of the above seem most common or important to you? Let us know your thoughts at the Feedback button below.
