How did the Obama Administration blow $400M making a website?

Posted by Load Impact on Nov 15, 2013

By doing software development and testing the way it's always been done.

There is nothing new in the failure of the Obamacare site. Silicon Valley has been doing it that way for years. However, new methodologies and tools are changing all that.

There has been a huge amount of press over the past several weeks about the epic failure of the Obamacare website. The magnitude of this failure is nearly as vast as the righteous indignation laid at the feet of the administration about how this could have been avoided if only they had done this or that. The subtext is that this was some sort of huge deviation from the norm. The fact is, nothing could be further from the truth. In fact, there should be a sense of déjà-vu-all-over-again around this.

The record of large public sector websites is one long case study in epic IT train wrecks.

In 2012 the London Olympics ticket website crashed repeatedly, and just this year the California Franchise Tax Board's new online tax payment system went down and stayed down - for all of April 15th.

So, this is nothing new.

As the Monday morning quarterbacking continues in the media, one of my favorite items was a CNN segment declaring that had this project been done in the lean, mean tech mecca that is Silicon Valley, it all would have turned out differently because of the efficiency that we who work here are famous for. And as someone who has been making online software platforms in the Bay Area for the past decade, I found that an interesting argument, and one worth examining.

Local civic pride in my community and industry generates a sort of knee-jerk reaction: of course we would do it better/faster/cheaper here. However, if you take a step back and look honestly at how online Software as a Service (SaaS) has been built here over most of the past 20 or so years that people have been making websites, you reach a different conclusion. Namely, it's hard to fault the Obama Administration. They built a website in a way that is completely in accordance with the established ways people have built and tested online software platforms for most of the past decade in The Valley.

The only problem is it doesn't work. Never has.

The problem, then, isn't that they did anything out of the ordinary. On the contrary. They walked a well-worn path, one very familiar to the people I work with, right off a cliff. However, new methodologies and tools are changing that. The fault is that they didn't see the new path and take that instead.

I'd like to point out from the start that I've got no special knowledge about the specifics of HealthCare.gov. I didn't work on this project. All I know is what I've read in the newspapers. Starting with that premise, I took a dive into a recent New York Times article with the goal of comparing how companies in The Valley have faced similar challenges, and how those challenges would be handled on the path not taken: modern, flexible -- Agile, in industry parlance -- software development.

Fact Set:

  • $400 million
  • 55 contractors
  • 500 million lines of code

$400 million -- Let's consider what that much money might buy you in Silicon Valley. By December of 2007 Facebook had taken in just under $300 million in investment and had over 50 million registered users -- around the upper end of the number of users the HealthCare.gov site would be expected to handle. That's big. Comparisons between the complexity of a social media site and a site designed to compare and buy health insurance are imperfect at best. Facebook is a going concern and arguably a much more complex bit of technology. But it gives you the sense that spending that much to create a very large-scale networking site may not be that extravagant. Similarly, Twitter had raised approximately $400 million by 2010 to handle a similar number of users. On the other hand, eBay, a much bigger marketplace than HealthCare.gov will ever be, only ever asked investors for $7 million in funding before it went public in 1998.

55 contractors -- If you assume that each contractor has 1,000 technical people on the project, you are talking about a combined development organization about the size of Google (54,000 employees according to its 2013 Q3 statement) working on HealthCare.gov. To paraphrase the late Sen. Lloyd Bentsen: "I know Google, Google is a friend of mine, and let me tell you... you are no Google."

500 million lines of code -- That is a number of astronomical proportions. It's like trying to imagine how many matches laid end to end would reach the moon (that number is closer to 15 billion, but 500 million matchsticks will take you around the earth once). Of all the numbers here, that is the one that is truly mind-boggling. So much code to do something relatively simple. As one source in the article points out, “A large bank’s computer system is typically about one-fifth that size.” Apple's latest version of the OS X operating system has approximately 80 million lines of code. Looked at another way, that is a pretty good code-to-dollar ratio. The investors in Facebook probably didn't get 500 million lines of code for their $400 million. Though, one suspects, they might have been pretty appalled if they had.

So if the numbers are hard to mesh with Silicon Valley, what about the process -- the way in which they went about doing this, and the resulting outcome? Was the experience of those developing this similar, with similar outcomes, to what might have taken place in Silicon Valley over the past decade or so? And, how does the new path compare with this traditional approach?

The platform was “70 percent of the way toward operating properly.”

Then -- In old school Silicon Valley there was, among a slew of companies, the sense that you should release early, test the market, and let the customers find the bugs.

Now -- It's still the case that companies are encouraged to release early; the thinking remains that if your product is perfect, you waited too long to release it. The difference is that the last part -- let the customers find the bugs -- is simply not acceptable, except for the very earliest beta software. The mantra with modern developers is: fail early and fail often. Early means while the code is still in the hands of developers, as opposed to the customers. And often means testing repeatedly -- ideally using automated testing, as opposed to the manual tests that were done reluctantly, if at all.
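To make "fail early" concrete, below is a minimal sketch of the kind of automated test a developer might run on every change, so defects surface on a build machine rather than in front of a customer. The plan-comparison helper and its behavior are hypothetical illustrations of the idea, not anything drawn from HealthCare.gov's actual code.

    # A hypothetical unit test, run automatically on every code change,
    # so a defect fails here -- on the developer's machine or the build
    # server -- rather than in front of a customer.
    import pytest

    def cheapest_plan(plans):
        """Return the plan with the lowest monthly premium (hypothetical helper)."""
        if not plans:
            raise ValueError("no plans to compare")
        return min(plans, key=lambda plan: plan["premium"])

    def test_cheapest_plan_picks_lowest_premium():
        plans = [
            {"name": "Bronze", "premium": 250.0},
            {"name": "Silver", "premium": 320.0},
        ]
        assert cheapest_plan(plans)["name"] == "Bronze"

    def test_cheapest_plan_rejects_empty_input():
        with pytest.raises(ValueError):
            cheapest_plan([])

Run under a tool like pytest, tests like these finish in seconds, which is what makes it practical to run them on every single change.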

“Officials modified hardware and software requirements for the exchange seven times... As late as the last week of September, officials were still changing features of the Web site.”

Then -- Nothing new here. Once upon a time there was a thing called the Waterfall Development Method. Imagine a waterfall with different levels, each pouring over into the next. Each level of this cascade represented a different set of requirements, each dependent on the level above it, and the end of the process was a torrent of code and software that would rush out to the customer in all its complex, feature-rich glory, called The Release. The problem was that all these features and all this complexity took time -- often many months for a major release, if not longer. And over time the requirements changed. Typically the VP of Sales or Business Development would stand up in a meeting and declare that without some new feature that was not in the Product Requirements Document, some million-dollar deal would be lost. The developers, not wanting to be seen as standing in the way of progress, or being ordered to get out of the way of progress, would dutifully add the feature or change a requirement, thereby making an already long development process even longer. Nothing new here.

Now -- The flood of code that was Waterfall has been replaced by something called Agile, which, as the name implies, allows developers to be flexible and to expect that the VP of Sales will rush in and say, “Stop the presses! Change the headline!” The Release is now broken down into discrete and manageable chunks of code, delivered in stages on a regular weekly, if not daily, schedule. Software delivery is now designed to accommodate the frequent and inherently unpredictable demands of markets and customers. More importantly, a problem with the software can be limited in scope to a relatively small bit of code, where it can be quickly found and fixed.

“It went live on Oct. 1 before the government and contractors had fully tested the complete system. Delays by the government in issuing specifications for the system reduced the time available for testing.”

Then -- Testing was handled by the Quality Assurance (QA) team. These were often, unfairly, seen as the least talented of developers, and they were viewed much like the Internal Affairs cops in a police drama: on your team in name only, and out to get you. The QA team's job was to find mistakes in the code, point them out publicly, and make sure they got fixed. Not surprisingly, many developers saw little value in this. As I heard one typically humble developer say, “Why do you need to test my code? It’s correct.” The result of this mindset was that as the number of features increased, and the time to release remained unchanged, testing got cut. Quality was seen as somebody else's problem. Developers got paid to write code and push features.

Now -- Testing for quality is everybody's job. Silos of development, operations and QA are being combined into integrated DevOps organizations in which software is continuously delivered and new features and fixes are continuously integrated into live websites. The key to this process -- known by the refreshingly straightforward name of Continuous Delivery -- is automated testing, which frees highly skilled staff from the rote mechanics of testing and lets them focus on making a better product, all the while assuring the product is tested early, often and continuously. Jenkins, a Continuous Delivery tool, is currently one of the most popular and fastest-growing open source software packages.
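To illustrate the gate a Continuous Delivery pipeline enforces, here is a rough, simplified sketch -- not Jenkins itself and not any particular team's setup -- of the basic rule: the automated test suite runs on every build, and only a fully green build is promoted. The test and deploy commands below are placeholders.

    # continuous_delivery_gate.py -- a simplified illustration of the
    # "test automatically, promote only on green" rule that tools like
    # Jenkins orchestrate. The test and deploy commands are placeholders.
    import subprocess
    import sys

    def run_step(name, command):
        """Run one pipeline step and report whether it succeeded."""
        print(f"--- {name}: {' '.join(command)}")
        return subprocess.run(command).returncode == 0

    def main():
        # 1. Run the automated test suite on every change.
        if not run_step("test", ["python", "-m", "pytest", "tests/"]):
            print("Tests failed -- the build stops here, before any customer sees it.")
            sys.exit(1)

        # 2. Only a fully green build moves toward production.
        if not run_step("deploy", ["./deploy.sh", "staging"]):  # placeholder deploy script
            print("Deploy step failed.")
            sys.exit(1)

        print("Build tested and promoted.")

    if __name__ == "__main__":
        main()

The point is not the particular commands but the discipline: no human has to remember to run the tests, and nothing ships past a red build.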

“The response was huge. Insurance companies report much higher traffic on their Web sites and many more callers to their phone lines than predicted.”

Then -- The term in The Valley was “victim of your own success.” This was shorthand for not anticipating rapid growth or a positive response, and not testing the software to ensure it had the capacity and performance to handle the projected load and stress that a high volume of users places on software and the underlying systems. The reason for this was most often not ignorance or apathy, but that the testing software available at the time was expensive and complicated, and the hardware needed to run performance tests was similarly expensive and hard to spare. Servers dedicated solely to testing were a luxury that was hard to justify, and they were often appropriated for other needs.

Now -- Testing software is now often cloud-based, running on leased hardware, which means that anybody with a modicum of technical skill and a modest amount of money can access tools that would once have been out of reach of all but the largest, most sophisticated software engineering and testing teams with extravagant budgets. Now there is no excuse for not doing it; skipping it is, in fact, inexcusable. Software is no longer sold as licensed code that comes on a CD. It is now a service that is available on demand -- there when you need it. Elastic: as much as you need, and only what you need. And with a low barrier to entry: you shouldn't have to battle your way through a bunch of paperwork and salespeople to get what you need. As one Chief Technical Officer at a well-known Bay Area start-up told me, “If I proposed to our CEO that I spend $50,000 on any software, he'd shoot me in the head.” Software is now bought as a service.
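As a toy illustration of how low the barrier has become, the sketch below drives a handful of concurrent simulated users against a placeholder URL using nothing but the Python standard library. It is not Load Impact's service or anyone's production load test -- real cloud-based tools generate far larger and more realistic traffic -- but it shows that basic capacity testing no longer requires racks of dedicated hardware.

    # toy_load_test.py -- a toy concurrency test using only the standard
    # library; a stand-in to illustrate the idea, not a real load test.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "https://example.com/"  # placeholder target
    VIRTUAL_USERS = 25                   # concurrent simulated users
    REQUESTS_PER_USER = 10

    def one_user(user_id):
        """Each simulated user makes a series of requests and times them."""
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.time()
            with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
                response.read()
            timings.append(time.time() - start)
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            results = [t for user in pool.map(one_user, range(VIRTUAL_USERS)) for t in user]
        print(f"{len(results)} requests, average {sum(results) / len(results):.3f}s each")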

It's far from clear at this point in the saga what it will take to fix the HealthCare.gov site, how it will be done, or how much it will cost. What is clear is that while the failure should come as no surprise given the history of government projects and of software development in general, that doesn't mean the status quo need prevail forever. It's a fitting corollary to the ineffective processes and systems in the medical industry that HealthCare.gov itself is trying to fix. If an entrenched industry like Silicon Valley software development can change the way it does business and produce its services faster, better and at a lower cost, then maybe there is hope for the US health care industry doing the same.

By: Charles Stewart (@Stewart_Chas)

Topics: DevOps, New York Times, London Olympic, HealthCare.gov, Obamacare, IT, Silicon Valley, California Franchise Tax Board, Load Testing, capacity testing, CNN, Blog, software development
