LightEdge’s VP of Product Development, Mike McHenry, shares his take on the recent Intercloud announcement, and dives into some key trends that led to this advancement in technology.
Earlier this week at Interop, Cisco announced a groundswell of support from over 30 Service Providers around the globe for their new Intercloud initiative. I am very excited that LightEdge was a part of this milestone announcement, but even more importantly, I am eager to see this partnership between Cisco and the Service Providers begin solving some of the major challenges we are all facing in IT.
LightEdge has always believed that the most important part of being a service provider is proximity to our customers. Being close to our customers allows us to provide the customized and personalized support essential to the effectiveness of their vital infrastructure. Having a local cloud also means we can keep customers' data close to them and their end users. Taking several disruptive technologies from recent years into account, it's clear why this is such a high priority for us.
When you look at the past 30 years of technology, you will see two major phases driving the lifecycle of disruptive technologies: initial introduction (high cost, inelastic consumption, low market adoption, distant from consumers) and commoditization (low cost, elastic consumption, high market adoption, proximity to consumers).
First, let’s examine how these phases apply to computing. When computing technology was first introduced, it was expensive, unwieldy, and consisted mainly of large rooms full of computers available only to massive corporations. As the technology evolved, these enormous mainframes shrank to smaller form factors: desktop computers, laptops, tablets, and even smartphones. The commoditization of computing clearly maps the transition toward the hands of the end consumer.
Another example is networking, and on a broader scale, the Internet. Initially designed and used by the military and higher education, it now connects us in a world where everyone is online all the time. Again, a disruptive technology was introduced to the market, and as time went by, it found its way into the hands of end users. When you really stop and think about the growth of technology over the last 30 years, it is staggering. Take an Internet-connected tablet computer, for example. We literally have the knowledge of the entire human race at our fingertips! But even more amazing is its cost. Centuries ago, kings would have gone to war for such knowledge and power, and we’re able to put that technology into the hands of our children to be used as a toy. Utterly amazing!
Data is another extremely disruptive technology, but one that I do not believe has completed its journey to commoditization yet. We have access to amazing quantities of storage at our fingertips; 3.5” hard drives have recently hit the 8TB mark, for example. Yet for all that storage, it’s barely a dent in what the end consumer wants to consume. Look no further than Netflix to see my point.
How does this tie back to Intercloud, you might ask? First, we need to consider the journey of “cloud” over the years. In the broadest sense, I think of “cloud” as data. It is compute and network as well, but cloud is really a creation born out of the necessity of getting data into the hands of end users. I tend to think of this journey as starting much earlier than the term “cloud” was coined. In the early days of technology, users would house their data on mainframes. As personal computers became prevalent, data pushed closer to the end user. Good, right? Not so fast. The growth of data quickly outpaced the capacity of end user devices. The need for elastic consumption of the technology pushed back against the need for proximity of the service, and as a result the data was pulled back toward centralized file servers. Even those could not keep pace with the data explosion. Data centers were born, then virtualization, and finally the “cloud”. Clouds are now reaching a point where they cannot cope with the explosion of all this data. The Private Cloud model attempts to move data closer to end users, but the complexities and limitations of the Hybrid Cloud approach make that difficult.
Intercloud is a major step toward that ultimate goal of getting data into the hands of the end users. In a nutshell, the vision is one of allowing any “cloud” to move data to any other “cloud”. The great thing about this approach is that not only does it help us push data closer to end users, but it also facilitates the next level of scale necessary should the data explosion continue (all signs point to “yes”). Imagine a world where “business data” is sent to your laptop for a flight. After landing, you walk into your local office and that business data is transparently moved to a private cloud at your location. Later in the day, you need to build a complex report and the local private cloud doesn’t have enough horsepower to complete the report. The data is sent to a local data center which has a farm of computers to help you pull information out of your business data…all happening behind the scenes.
This is a tough thing to imagine. Any IT person knows there are a million details behind that seemingly simple vision. This is why I am so excited about Cisco’s direction in this space. They are bringing not only Intercloud to the table, but also technologies such as ACI (network) and VACS (application containers). While other cloud providers have chosen to create closed ecosystems, Cisco is embracing diversity. Whether Cisco or HP compute, EMC or NetApp storage, VMware or Hyper-V virtualization, UCS Director or OpenStack orchestration, Cisco wants to be the one stitching it all together. Cisco’s roadmap aligns well with our vision of providing elastic, local, “best of breed” IT services to our customers, and it highlights the risks facing providers who choose to be a closed-technology shop.