
Gub’mint (IT) Mule

Sean Gallagher has an interesting piece on Ars Technica. He asks, “Why do government IT projects fail so hard and so often?” Gallagher offers several reasons, most of which are symptoms of any large organization. Let’s examine his list.

1. The government uses antiquated technologies. Its bureaucracy is slow to move and slow to adapt. Older technologies remain in place long after their life cycles expire, largely because the approval process for new ones is long and arduous. You can imagine many frustrating meetings that end with, “Fsck it. We’ll put it on XP.”

2. Its user base is really large. Gallagher cites as one example a DOD email rollout that touched 1.5 million users. That’s an astonishing number for an in-house IT department. Certainly there are web companies with more users (there are 425 million Gmail accounts, for instance), but Google does the web for profit. The Army’s IT department is an expense.

3. Flawed metrics. Gallagher notes that many government IT dashboards are filled with nice metrics that contain a lot of nines. Unfortunately, those nines have little bearing on the end user’s experience. If department CIOs measured the things that actually mattered, those dashboards would be filled with zeros.
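For perspective on what those nines actually promise, the arithmetic is simple: each additional nine cuts the permitted downtime by a factor of ten. A quick sketch (plain arithmetic, no assumptions about any particular dashboard):

```python
# Downtime budget per year for each count of nines.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** -nines  # 0.99, 0.999, 0.9999, 0.99999
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability: {downtime:8.1f} minutes down per year")
```

The trouble is that a server can honestly report five nines of ping uptime while the application running on it is unusable. The dashboard measures the box; the user experiences the app.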

I’m sure these are valid criticisms, but how do they differ from those of any other large organization? I work for a large corporation, and this week my company finally moved my laptop off Windows XP. There’s no way the entire organization will be XP-free by the end of XP’s support life cycle. It took a Great Recession to convince management that Linux was a viable alternative to HP-UX.

Our user base is 15,000, and every internal rollout contains some glitches or problems. The Army rolls out applications to a user base that is two orders of magnitude larger than ours. Where we roll out in a carefully controlled environment, they have to provide service to all corners of the world. Some of those corners are pre-fab barracks on an Afghan mountaintop.

I have yet to meet a person in my company who likes our outsourcing partner. The bean counters like how little the partner costs, but even they aren’t happy with the service. Yet if you look at this partner’s dashboard, you’ll find it filled with as many nines as those government CIOs’. Faulty metrics aren’t limited to the public sector.

The Affordable Care Act website famously crashed during its rollout. Yet on the surface it wasn’t subject to many of government IT’s shortcomings. Most of the work was handled by a private partner. It ran Apache web servers on Linux. The site was fronted by Akamai’s CDN, which greatly reduces load by serving content from caches close to the users. Most importantly, the site wasn’t tied to antiquated government infrastructure. Yet it failed. Why?

When you examine the site, you find it’s simply not optimized for heavy traffic. The pages are too heavy and they contain too many elements. Fifty-six JavaScript files? Really? Then we learn the system was tested under a load an order of magnitude smaller than what it received on October 1st. The site was basically slashdotted.
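You don’t need the source code to spot that kind of bloat; it’s visible from the outside. Here’s a minimal sketch in Python that counts a page’s external script tags, one rough proxy for page weight. The URL is a placeholder, not the actual exchange site:

```python
#!/usr/bin/env python3
# Rough page-weight check: count the external scripts a page pulls in.
from html.parser import HTMLParser
from urllib.request import urlopen

class ScriptCounter(HTMLParser):
    """Tallies <script src=...> tags; each one is another request on first load."""
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.scripts.append(src)

url = "https://www.example.com/"  # placeholder, not the actual exchange site
page = urlopen(url).read().decode("utf-8", errors="replace")
counter = ScriptCounter()
counter.feed(page)
print(f"{len(counter.scripts)} external scripts on {url}")
```

Fifty-six hits from a check like this means the browser makes fifty-six extra round trips just to render the page. Concatenating and minifying those files is boring, well-understood work, which makes it exactly the kind of thing a competent load test should have flagged before October 1st.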

Certainly there are good failures and bad failures. Collapsing under the weight of your own popularity is a good one. Still, with better planning and better coding, the Affordable Care site could have had a more successful rollout. Those operations require a high level of expertise, which brings us to what I suspect is the real reason government IT projects fail: constrained by the taxpayer’s dime, government can’t attract the talent necessary to serve a very large user base. In other words, we get what we pay for.