A debate among economists has been brewing over whether humanity will continue to innovate at the pace experienced over the last 150 years, a period that produced the automobile, mass distribution of electricity, radio, television, and the Internet, to name a few. One side argues that we have seen the last of the major innovations and that what follows will be incremental, like the replacement of the PC with the tablet. The opposing side in this debate shouts "absurd": if anything, we've only just begun to innovate.
Few of us who have watched the advance of computing technologies would fall in with the stagnation theorists in this debate. The advances in computing over just the last 20 years have been dramatic. The number of innovative technologies growing from conception to mainstream use has exploded, and the pace at which they appear will only accelerate. Here's why I believe this to be true.
There are, as I see it, at least four major forces driving computing innovation forward: the steady movement of computing power closer to the individual, the shift from historical to real-time information, open compute clouds, and the open source movement. Furthermore, the interplay among these forces is unpredictable and will generate innovations we can't yet imagine.
The early glass-house style of computing was remote from the individual. It was controlled by the few, was outrageously expensive by today's standards, and required specialized training. The advance from mainframes to client-server was mostly incremental, in that it brought humans only a small degree closer to compute power. PCs opened the floodgates, and now little stands in the way. Computing resources that were unimaginable in the glass-house days are available to anyone with a credit card, anywhere, at any time, on a variety of devices that are constantly multiplying and evolving.
Whether or not we realize it, we commonly experience computing as a historical activity. Data is created and saved. Then it is processed and presented in a way that we as humans can consume as information. All of this takes time, and for decades the lag between data creation and information delivery has been acceptable. Yes, time to information has gotten dramatically shorter, but we're still mostly locked into a create-store-process-deliver mode, so much so, I think, that it's hard to imagine any other way. Our PCs constantly reinforce that mindset.
Mobility has broken us out. For example, an app tells me where I am on a map. My dot moves as I move. Points of interest pop up as I get nearer to them. I experience no time lag in the information given to me, and I have other apps that behave similarly.
Mobility has already stimulated demand that changes computing behavior, as the uptake of tablets and smartphones shows. I believe the same phenomenon will jump to applications: I can get real-time apps for the way I want to live, so why not for the way I want to work? Multiple technologies will converge to make real-time personal and work apps a reality, and I'll explore them in a future blog.
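The difference between the create-store-process-deliver mode and the real-time experience described above can be sketched in a few lines of Python. This is purely illustrative; every name here is invented for the example, not drawn from any particular product:

```python
# Batch ("create-store-process-deliver"): data accumulates in a store,
# then is processed and delivered later, so there is a lag between
# creation and consumption.
stored_readings = []

def record(reading):
    stored_readings.append(reading)          # create and store

def nightly_report():
    # process and deliver, hours after the data was created
    return sum(stored_readings) / len(stored_readings)

# Real time: each event is delivered to interested consumers the
# moment it is created, with no store-and-wait step in between.
subscribers = []

def subscribe(callback):
    subscribers.append(callback)

def publish(position):
    for callback in subscribers:             # deliver immediately
        callback(position)

# A hypothetical map app subscribes; its dot moves the instant a new
# position fix arrives.
dot = []
subscribe(dot.append)
publish((40.7128, -74.0060))
```

The batch functions model the historical pattern; the publish/subscribe pair models the no-lag experience of the map dot.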
Open compute clouds have demonstrated the ability to encompass and stitch together (converge) disparate technologies, compute platforms, and data stores. We think of them mostly as large entities, offered as application platforms by cloud service providers or run as private clouds by enterprise IT, with the hybrid cloud model connecting the two. But like their atmospheric brethren, compute clouds could take any form, shape, and size. They could appear, disappear, and move. We could easily see, for example, home-based "microclouds" for mass-market consumption: clouds that unite and control any electronic device in the home, from thermostats to toys and entertainment centers, with the "control plane" presented as a smartphone app. Open compute clouds overcome technological and artificial (read: proprietary) barriers. They converge and unite better than any middleware solution we've yet seen, and we've only seen the beginning.
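As a thought experiment, the home microcloud imagined above reduces to a device registry plus a single control plane that a smartphone app could call into. The sketch below is a toy model under that assumption; the class, device names, and methods are all hypothetical:

```python
class Microcloud:
    """Toy model of a home microcloud: a registry of household devices
    and one control plane through which a smartphone app manages them."""

    def __init__(self):
        self.devices = {}                    # device name -> state dict

    def register(self, name, **initial_state):
        # A new device joins the microcloud and announces its state.
        self.devices[name] = dict(initial_state)

    def set_state(self, name, **changes):
        # The "control plane" call: the app changes a device's state.
        self.devices[name].update(changes)

    def status(self, name):
        # Return a copy so callers can't mutate the registry directly.
        return dict(self.devices[name])


# The smartphone app is just another client of the control plane.
home = Microcloud()
home.register("thermostat", temperature=68)
home.register("entertainment_center", power="off")
home.set_state("thermostat", temperature=72)
```

The point of the sketch is the uniformity: thermostats, toys, and entertainment centers all look the same to the control plane, which is exactly the converging role the paragraph attributes to open compute clouds.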
I admit that I'm a grey-haired hippy from the flower-power days of the late sixties. When I first encountered the open source movement, the "tear down the walls" refrain from that Jefferson Airplane song immediately came to mind. In the words of Eric Raymond, who wrote the first open-source-defining book: "Today, the open source movement is bidding strongly to define the computing infrastructure of the next century." Open source has produced what is now a dominant enterprise OS, Linux. It also dominates Big Data and is now setting its sights on cloud (OpenStack), containerization (Docker), and containerized application management (Kubernetes). But for me, the single word that has defined the open source movement from the start is "unselfish." Unselfishness may not be a great foundation for a business model, but it has to date been a powerful force uniting diverse engineers and problem solvers into creative communities that are now threatening the computing establishment.
These are the drivers of innovation as I see them. Note that they are only partially about technology. That's because the energy that powers these forces is the human spirit.