Sometimes the ways we expect things to work just don’t match up with the science. Our common sense is a collection of our experiences, providing expectations as to how the world works.
Some phenomena are visible and easily observed. Other times it can be difficult to know what is really happening. Computer systems are incredibly complex systems with many hidden operations. As a result, attempts to apply our common sense to computer systems can mislead us.
A good example of this is comparing networking choices. Common sense may suggest that two networks operating at a given speed are equivalent to one network operating twice as fast. To understand how well that holds up, let's explore two issues: efficiency and waiting.
Networking is a topic that can kill even a sci-fi conference, too dry and technical for even the most technical among us. We just want things to work, not have to figure out the science behind moving 1’s and 0’s between two locations.
Fair enough; however, it is worth understanding that every technology has its inefficiencies, networking included. For many years a line-encoding technique known as 8B/10B was used to transfer binary data. More recently, a more efficient encoding known as 64B/66B has been deployed.
In short, 8B/10B carries 8 data bits in every 10 bits on the wire, making it about 80% efficient, while 64B/66B carries 64 data bits in every 66, making it about 97% efficient. So, everything else being equal, a network using 64B/66B is 21.25% more efficient than one using the older technique. Specifically:
Percentage of Efficiency Improvement: ((97% – 80%) / 80%) * 100 = 21.25%.
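That arithmetic can be checked in a few lines of Python. Note that the exact ratios are 8/10 and 64/66, while the 21.25% figure above comes from the rounded 80% and 97% values:

```python
# Line-rate efficiency of the two encodings and the relative improvement.
eff_8b10b = 8 / 10            # 8 data bits per 10 line bits -> 80%
eff_64b66b = 64 / 66          # 64 data bits per 66 line bits -> ~96.97%

exact_gain = (eff_64b66b - eff_8b10b) / eff_8b10b * 100
rounded_gain = (0.97 - 0.80) / 0.80 * 100   # the rounded figures from the text

print(f"exact: {exact_gain:.2f}%  rounded: {rounded_gain:.2f}%")
# -> exact: 21.21%  rounded: 21.25%
```

Whether you use the exact or the rounded figures, the newer encoding delivers roughly a fifth more payload per bit on the wire.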
An experience we have all had is waiting in line for something. We all hate waiting in line, and computational tasks fare no better. The science of such efficiency questions is broadly known as Operations Research, which applies modeling and statistics along with a mathematical field known as Queueing Theory. For evaluating waiting times specifically, queueing theory provides a useful set of tools.
Without going into the complexities behind the theory, it can be shown that, in general, one line moving twice as fast outperforms two lines each moving at half that speed. That is, 1 + 1 does not add up to 2. The exact improvement depends on the arrival-rate distribution, the service-time distribution, and other factors. The way in which lines, or queues, form while waiting for the resource is also an important consideration.
Compare a situation with a separate queue for each slower network link against an alternative with a single queue feeding the higher-speed network. For M/M/1 queues, the theory shows an improvement of approximately 25%: a single line with one high-speed resource is about 25% more effective than two lines, each waiting for a resource running at half the speed.
Essentially, 1 + 1 ≈ 1.6, and our expected result of 2 is 25% more than that.
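The queueing comparison can be sketched with a small simulation. This is a minimal illustration rather than a rigorous analysis: the arrival and service rates below are arbitrary assumed values, and each link is modeled as a simple FIFO M/M/1 queue.

```python
import random

def mm1_mean_time(lam, mu, n=200_000, seed=1):
    """Mean time in system for a FIFO M/M/1 queue, via Lindley's recursion."""
    rng = random.Random(seed)
    arrive = depart = total = 0.0
    for _ in range(n):
        arrive += rng.expovariate(lam)            # Poisson arrivals
        start = max(arrive, depart)               # wait if the link is busy
        depart = start + rng.expovariate(mu)      # exponential service time
        total += depart - arrive                  # time in system (sojourn)
    return total / n

lam, mu = 1.5, 1.0                     # assumed total arrival rate, per-link rate
two_slow = mm1_mean_time(lam / 2, mu)  # each slow link handles half the traffic
one_fast = mm1_mean_time(lam, 2 * mu)  # one link at twice the service rate

print(f"two slow links: {two_slow:.2f}s each  one fast link: {one_fast:.2f}s")
```

Under these particular assumptions the single fast link does markedly better; as noted above, the size of the gain shifts with utilization and with how the queues are organized.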
That brings us to putting these ideas together. Greater efficiency from reduced encoding overhead, and shorter waits from a faster service rate, compound with one another. Thus, two network connections with separate workloads and a less efficient transmission method do not equal a single, more efficient network operating twice as fast.
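As a hypothetical worked example of the encoding side of this claim, suppose (the link speeds here are assumed purely for illustration) two 8 Gb/s links using 8B/10B are compared against one 16 Gb/s link using 64B/66B:

```python
# Usable payload rate = raw line rate x encoding efficiency.
# Link speeds are assumed for illustration only.
two_slow_payload = 2 * 8.0 * (8 / 10)    # two 8 Gb/s links -> 12.8 Gb/s of data
one_fast_payload = 16.0 * (64 / 66)      # one 16 Gb/s link -> ~15.5 Gb/s of data

print(f"{two_slow_payload:.1f} Gb/s vs {one_fast_payload:.1f} Gb/s")
# -> 12.8 Gb/s vs 15.5 Gb/s
```

And that difference is before any queueing effects; the faster link's shorter waits only widen the gap.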
To this point, no specific networking protocols have been mentioned. While particular technologies may have advantages over one another, the underlying principles of efficiency and reduced waiting time hold true regardless.
Which brings us back to the premise of this discussion. Common sense may tell us that 2 * 8 = 16, but as I have explained, that is not always the case. All other things being equal, one fast network can be significantly better than two slower networks, especially if it is also more efficient in transmitting data. Sometimes appearances are deceiving and sometimes 2 times 8 doesn’t equal 16.