It’s Not about the Protocol

By , Tuesday, February 18th 2014

Categories: Analyst Blogs

Tags: blog, FC, FCoE, Fibre Channel, Infiniband, It's not about the Protocol, Network Protocols, russ fellows,

Recently, I inadvertently became involved in a tweet fest, kicked off because a group of people believed I was attacking their protocol of choice. To its devoted followers, a protocol is akin to a religion, something that must be defended at all costs.

I tend to be more agnostic, particularly where protocols are involved. The real question is not which protocol to use, but rather which options provide the best choices for a particular environment. I often talk with enterprise users about their storage environments and their options for the future, and I nearly always find that the recommendations we make depend on a multitude of factors. The old adage of “It depends” applies in spades.

One of the biggest considerations IT users have is how to evolve their current environment.  Rarely is a specific protocol or technology the magic that solves all of their problems.  Also, there is nothing inherently wrong with FCoE, FC, Ethernet, IP, SCSI, iSCSI, iSER, SDP, InfiniBand or any of the other related technologies.

OK, so if protocols aren’t the most important factor, what is?

There are several items that are important in addition to the obvious considerations of cost and performance. For many enterprises with significant IT investments, compatibility with their existing infrastructure is a significant factor. This can work either for or against a protocol, depending on whether it is already present. Another important consideration is broad industry support: IT users need to have multiple options for every technology choice, including adapters, networking equipment and storage devices. After these considerations come performance, ease of use and cost.

Again, this is a generalization and each customer has his or her own set of priorities.

Network Performance Primer

When it comes to storage networking, one key consideration is that each protocol is dependent on specific networking technology.  That is to say that today, Ethernet does not have a standard that allows it to operate at 16 Gb.  It does have standards for 1, 10, 40 and 100 Gb/s, but those are not equal to 16.  Similarly, Fibre Channel does not have a standard that allows it to run at 40 Gb or 100 Gb/s.

Therefore, it is impossible to compare the FCoE and FC protocol stacks independently of speed. Both the protocol and the transmission speed must be considered together. This is an important fact that gets lost in the protocol wars.

As I explained in a previous blog, the underlying networking transmission methods and speeds have a much bigger impact on performance than many other factors.

It’s not about the protocol; it’s about the transmission speed. So, does one 16 Gb FC connection equal two 8 Gb FC connections? Hint: the answer is “No,” and to see why, read my previous blog, “Why 2 * 8 does not equal 16.”
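To see the shape of that argument, here is a minimal queuing sketch of my own (not taken from that earlier post), assuming Poisson arrivals and exponential service times, i.e. an M/M/c model. The link speeds are used as stand-in service rates and the 70% load is purely illustrative:

```python
import math

def mmc_mean_response(lam, mu_per_link, links):
    """Mean response time of an M/M/c queue: 'links' parallel links, each
    draining at rate mu_per_link, fed by one Poisson arrival stream at rate lam."""
    a = lam / mu_per_link                 # offered load in Erlangs
    rho = a / links                       # per-link utilization
    assert rho < 1, "offered load exceeds total capacity"
    # Erlang C: probability that an arriving frame has to queue
    head = a**links / (math.factorial(links) * (1 - rho))
    tail = sum(a**k / math.factorial(k) for k in range(links))
    p_wait = head / (head + tail)
    wait_in_queue = p_wait / (links * mu_per_link - lam)
    return wait_in_queue + 1.0 / mu_per_link   # queueing delay plus service time

# Illustrative only: treat link speed as a service rate and offer the same
# 70% load to one 16 Gb link and to two 8 Gb links (arbitrary time units).
load = 0.70 * 16
print("one 16 Gb link :", round(mmc_mean_response(load, 16.0, 1), 3))   # ~0.208
print("two  8 Gb links:", round(mmc_mean_response(load,  8.0, 2), 3))   # ~0.245 (slower)
```

At equal total capacity and equal load, the two slower links always show a longer mean response time, because a frame can end up waiting behind another even while one link sits idle.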

Why is 16 Gb FC faster than 8 Gb FC or 10 Gb FCoE?

The answer to this question, once again, is more about the underlying network transport than the protocol. This also holds when comparing 16 Gb FC to 10 Gb FCoE that is then bridged to 8 Gb FC, a common configuration in production deployments that use FCoE as a top-of-rack aggregation technology.

When comparing 16 Gb FC to end-to-end 10 Gb FCoE, the answer is different, but once again not what you may expect. In this case, the total performance of two 10 Gb FCoE connections is nearly identical to that of one 16 Gb FC connection. Again, to understand why queuing delays reduce the performance of multiple links, consult my previous blog.

To reiterate, these results are mathematical, not opinions. They are due to transmission efficiency, not to the protocols involved.
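To put some concrete numbers behind “transmission efficiency,” here is a quick back-of-the-envelope calculation using the commonly published nominal line rates and encodings of each transport (my own illustration, not figures from the earlier post):

```python
# Nominal line (baud) rates and encoding overheads for each transport.
# 8 Gb FC uses 8b/10b encoding; 10 GbE and 16 Gb FC use 64b/66b.
links = {
    "8 Gb FC":  (8.500,   8 / 10),
    "10 GbE":   (10.3125, 64 / 66),
    "16 Gb FC": (14.025,  64 / 66),
}

for name, (baud_gbd, efficiency) in links.items():
    print(f"{name:9s} -> {baud_gbd * efficiency:5.2f} Gb/s of usable payload")
# 8 Gb FC   ->  6.80 Gb/s
# 10 GbE    -> 10.00 Gb/s
# 16 Gb FC  -> 13.60 Gb/s
```

The point is that an “8 Gb” or “16 Gb” label says little by itself; the line rate and the encoding overhead together determine how much payload actually moves.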

What about 40 Gb and 100 Gb Ethernet?

Yes, indeed, these technologies exist. However, to say that 40 GbE is an enterprise standard is a stretch, much less suggesting that end-to-end, server-to-storage 100 GbE exists in production enterprise environments.

Next there is the issue of an entire eco-system supporting end-to-end connectivity. This is a critical factor for enterprise IT professionals, more so than nearly anything else, including performance and price.

Currently, I am not aware of any storage vendor shipping production 40 GbE FCoE units.  Certainly there are target PCIe cards and lab development efforts underway to support this.  However, it is an understatement to say that 40 GbE FCoE target storage devices are not shipping in significant quantities today.

Next is the issue of 40 GbE adapters. Recently a number of system vendors have certified 40 GbE adapters for their systems, although again port shipments are limited to date. As a percentage of total ports, 40 GbE represents a minuscule segment of the market today.

Finally there is the issue of availability of 40 GbE network ports. Although 40 GbE switch ports exist, their deployments are limited and their cost is near $10K per port.

Taken together, the reality is that the number of end-to-end 40 GbE FCoE production deployments is extremely small.

Thus, using 40 GbE with end-to-end FCoE is not a viable option for most enterprises in 2014, and likely will not be viable for several more years.

What about Multiple 10 GbE Ports?

Could multiple 10 GbE links be used to improve performance?

Certainly this could be done. However, there are several problems with this approach. First, additional ports add management overhead and complexity. Part of the argument for converged Ethernet is about reducing the number of connections; if instead the number of connections must be doubled relative to the alternatives, that argument loses some of its appeal.

Could a set of 10 GbE ports be configured to equal or exceed any given number of FC ports? The answer certainly is “Yes.” Of course, there is a “but” here.

The caveat is this:  Multiple slower connections have lower performance than a single faster connection.  This is due to lower queuing efficiency for multiple links vs. a single link.  Again, read my previous blog or Wikipedia for more background on queuing theory.

If two 10 GbE ports are required to meet a performance goal that a single 16 Gb port could meet on its own, which is simpler to manage and cable? The answer is quite clear. And this is the ideal situation, with end-to-end FCoE over 10 Gb links, which has only marginally better performance than a single 16 Gb link (see why 2 * 8 != 16, or consult your local queuing theory practitioner).

However, in the case where the FCoE links are bridged to 8 Gb FC, the performance of these two links gets worse still (due primarily to 8 Gb FC’s less efficient 8b/10b encoding, again NOT because of the protocols). In this case, it would take three links to equal the performance of a single 16 Gb FC connection.
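Running the same hedged M/M/c sketch from earlier with the effective payload rates computed above (and the same arbitrary 70% load) comes out consistent with that three-link figure; again, this is my own illustration of the direction of the effect, not a reproduction of the earlier post’s analysis:

```python
import math

def mmc_mean_response(lam, mu_per_link, links):
    """Same M/M/c helper as in the earlier sketch: mean response time for
    'links' parallel links fed by one Poisson stream at rate lam."""
    a = lam / mu_per_link
    rho = a / links
    assert rho < 1, "offered load exceeds total capacity"
    head = a**links / (math.factorial(links) * (1 - rho))
    tail = sum(a**k / math.factorial(k) for k in range(links))
    p_wait = head / (head + tail)
    return p_wait / (links * mu_per_link - lam) + 1.0 / mu_per_link

# Effective payload rates from the encoding arithmetic above (Gb/s).
fc16_eff, fc8_eff = 13.6, 6.8
load = 0.70 * fc16_eff                  # the same (arbitrary) 70% offered load

print("one 16 Gb FC link       :", round(mmc_mean_response(load, fc16_eff, 1), 3))  # ~0.245
print("two bridged 8 Gb links  :", round(mmc_mean_response(load, fc8_eff, 2), 3))   # ~0.288 (worse)
print("three bridged 8 Gb links:", round(mmc_mean_response(load, fc8_eff, 3), 3))   # ~0.166 (better)
```

Two bridged links match the raw capacity of one 16 Gb FC link, but the queuing penalty leaves them slower; only with a third link does the multi-link configuration pull ahead in this sketch.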

So again, the questions and answers are quite simple. For environments that need more than 10 Gb of storage network connectivity, which choice offers the most bandwidth with the fewest cables and connections?

OK, So What about InfiniBand?

Many of the principles presented here could be applied to multiple technology choices. Again, as I said before, it’s not about the protocol. With IB running at 40 or 56 Gb/s, doesn’t that make it a better choice than either Ethernet with FCoE or FC?

The answer is, “For network performance, yes; but for the overall ecosystem, no.” Again, the rationale comes down not to protocols, or even to networking performance. It is quite clear that 40 or 56 Gb/s IB wins a network performance competition hands down.

To reiterate, the issue is about end-to-end support. While there are initiators (or HCAs) shipping and supported by major system vendors, and while there are some IB networking options, the problem is a lack of storage target devices.

To state that the IB fabric choices are limited is again an understatement.  There is only a single vendor producing IB switching equipment in production quantities.

As for storage target devices, several vendors do ship and support IB targets. However, the storage system choices are very limited and many of the most popular storage products do not support IB connectivity. Additionally, operating system and hypervisor support for IB has its own limitations.

So it comes down to which options are most realistic. While InfiniBand wins the speed battle, for most enterprises it loses the broad product availability war. The eco-system for end-to-end IB is not sufficiently developed to make it a viable option for many enterprise environments.

Where Does that Leave Us?

Now that we have compared several options (40 GbE with any storage protocol, 56 Gb IB with any protocol, and 16 Gb FC with FCP), the benefits of each should be clear. Each may be suitable in a specific situation, but for enterprise customers looking for broad product availability and the highest-speed end-to-end environment with the fewest connections, the answer most enterprises choose today is FC.

(Note: the answer for your environment always depends on multiple issues; there is never one correct answer. We always advise and help our clients investigate the alternatives.)
