Hackers at the Big Iron Gates

By John, Friday, January 18th, 2013

Categories: Analyst Blogs

Tags: Open Compute Project, Open Compute Summit

This week’s Facebook-initiated Open Compute Summit has been depicted by TechCrunch blogger Alex Williams as an event that benefits Facebook first and foremost as it builds out its data business model. His well-written post argues that significant revenue growth for Facebook can come from selling the data its users generate. But to maximize profitability under this data-sales model, Facebook wants to drive the cost of data center infrastructure below that of commodity boxes: servers and storage arrays, for example. How to do that?

One way is to apply the open source software process to hardware, driving down IT hardware costs while driving up the ability to customize IT infrastructure.

The open source software community has traditionally assumed that its software would run on commodity hardware. It has to run on something, so why not write software that runs on the cheapest hardware available? Apache Hadoop is a classic example of this paradigm. Hadoop is open source, Java-based software running on racks of commodity servers wired together with a standard, low-cost 1 Gb Ethernet fabric.
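
To make that paradigm concrete, here is a minimal word-count sketch in the Hadoop Streaming style, which lets plain scripts serve as the map and reduce steps on such a cluster. The jar and HDFS paths named in the comments are illustrative assumptions, not a specific deployment.

```python
#!/usr/bin/env python
# wordcount.py -- a minimal Hadoop Streaming word-count sketch.
# Streaming lets any executable act as the map or reduce step;
# run this script as "wordcount.py map" or "wordcount.py reduce".
#
# Illustrative invocation (jar and HDFS paths are assumptions):
#   hadoop jar hadoop-streaming.jar \
#     -input /data/text -output /data/counts \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#     -file wordcount.py
import sys

def mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

def reducer():
    # Hadoop sorts map output by key, so identical words arrive
    # on consecutive lines; sum the count for each run of lines.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```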

But servers are boxes full of different components: processor modules, interface cards, power supplies, and the list goes on. Suppose data center IT staff could roll their own Hadoop clusters using servers built from a list of standardized parts. Indeed, given that opportunity, would they even build boxes that look like today’s commodity servers?
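
One way to picture that shift: a server stops being a monolithic box and becomes a list of interchangeable parts. Here is a rough sketch of that idea in Python; every part name and wattage is invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    kind: str     # e.g. "cpu", "nic", "psu"
    model: str    # hypothetical catalog name
    watts: float  # illustrative power rating

@dataclass
class Server:
    parts: list

    def swap(self, kind, new_part):
        # Replace every part of the given kind. The Open Compute
        # premise is that standard form factors make this a legal,
        # routine operation rather than a forklift upgrade.
        self.parts = [new_part if p.kind == kind else p
                      for p in self.parts]

# "Roll your own" Hadoop node from a standardized parts list
# (all part names and numbers here are invented).
node = Server(parts=[
    Part("cpu", "OCP-CPU-A", 95.0),
    Part("nic", "1GbE-STD", 5.0),
    Part("psu", "PSU-GEN1", 80.0),
])
node.swap("psu", Part("psu", "PSU-GEN2", 40.0))  # drop-in upgrade
print(sum(p.watts for p in node.parts))          # 140.0
```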

That’s the point of this blog post. The Open Compute Summit, and the Open Compute Project it showcased, are about much more than giving Facebook and other social media giants a lower-cost IT floor. There’s potential for enterprise IT to extract significant value from the Open Compute Project as well by rolling its own data center components. No more racks full of gear displaying the logo plates of the Ciscos, Dells, EMCs, HPs, and IBMs of the world. No logos at all in the enterprise data center.

I can hear those vendors now: “You’re dreaming. That will never happen. Our customers depend on us and will never replace us with stuff they build.” Maybe, but I remember hearing similar pronouncements at the coming out party for open source software.

Here’s why I think resting on one’s own iron is not a response enterprise IT vendors should make to the Open Compute Project. Technology refresh cycles, the process of replacing the outdated with the new, add significant cost to IT budgets. The process becomes even costlier when technologies that progress at different rates are bound together in the same device. Consider the lowly power supply, found in abundance within today’s data centers.

During his opening keynote, Frank Frankovsky of Facebook, chairman of the Open Compute Foundation, used power supplies as an example. Suppose a new power supply technology came along that could dramatically reduce the data center’s electricity bill. Because each device in the data center typically has its own embedded power supply, enterprise data centers would have to replace all of their devices to take advantage of this new energy-saving technology. What would more likely happen is that devices would be replaced on a typical three-year refresh cycle with models featuring lower power consumption. Energy costs would come down only gradually.

But if power supplies were standard parts that could be used interchangeably among devices (servers, switches, storage arrays, etc.) and swapped out at will, there would be no need to wait. The energy savings could be realized almost in full as soon as the more efficient power supplies became available.
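
The gap between those two paths is easy to put numbers on. Here is a back-of-the-envelope sketch; every figure in it (fleet size, per-supply losses, electricity price) is a made-up assumption, not data from the Summit.

```python
# Back-of-the-envelope comparison: swap every power supply at once
# versus waiting out a three-year device refresh cycle.
# Every figure below is a hypothetical assumption.
DEVICES = 10_000
OLD_PSU_KW = 0.080     # per-device loss, old PSU technology (kW)
NEW_PSU_KW = 0.040     # per-device loss, new PSU technology (kW)
PRICE_PER_KWH = 0.10   # dollars per kilowatt-hour
HOURS_PER_YEAR = 8_760

def annual_cost(n_old, n_new):
    kw = n_old * OLD_PSU_KW + n_new * NEW_PSU_KW
    return kw * HOURS_PER_YEAR * PRICE_PER_KWH

# Interchangeable PSUs: every device runs the new supply in year one.
swap_now = 3 * annual_cost(0, DEVICES)

# Embedded PSUs: one third of the fleet refreshed each year.
refresh = sum(annual_cost(DEVICES - done, done)
              for done in (DEVICES // 3, 2 * DEVICES // 3, DEVICES))

print(f"3-year energy cost, swap now:       ${swap_now:,.0f}")
print(f"3-year energy cost, phased refresh: ${refresh:,.0f}")
# Under these made-up numbers, the phased path costs roughly
# a third more over the three years.
```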

Enterprise IT has historically been challenged to justify its very existence. Decades ago, EDS and other IT services outsourcers challenged enterprise data centers with a lower-cost model for IT services. Many enterprise executives concluded that IT did not need to be one of their core business competencies and threw their data center management staff, or in many cases their entire data centers, overboard.

Now, in much the same way, cloud services providers are once again challenging enterprises to a duel over the cost of IT. If a cloud vendor can host enterprise apps for a fraction of the cost charged by internal IT, how do enterprise executives justify internal IT infrastructure? And if internal IT is to compete on a level footing with the cloud, doesn’t it have to lower its cost to provide the same services?

I believe that the cloud services providers will find the Open Compute model compelling for the two reasons for which the Project was started: lower infrastructure cost and greater operational flexibility. But in order to compete with them, enterprise IT executives will have to follow suit by reducing their costs and maximizing their flexibility. Therefore, the Open Compute Project will be worth at least a review by enterprise IT. As one Summit attendee quipped on his Facebook page: “Facebook has just turned a computer server into a Lego-style assembly project.”

Read more by John at Forbes…
