Recent Recognitions

November 11th, 2014

We would like to congratulate Philip Anschutz, founder of the Anschutz Corporation and owner of LightEdge since 2007, on receiving the 2014 International Entrepreneur of the Year Award from the Henry W. Bloch School of Management at UMKC for his innovation and involvement in countless organizations around the globe. We couldn’t be more proud to be tied to such an amazing entrepreneur with an outstanding reputation for making a difference and continuously raising the bar on excellence.


It is in this same spirit that LightEdge makes it a priority to give back to the communities we are a part of. This year, we had the honor of being named a 2014 Economic Impact Award winner by the Greater Des Moines Partnership and the Business Record for our contributions to the economic vitality and quality of life in the Greater Des Moines area. In April, CEO Jim Masterson announced plans to bring over 40 new jobs to our headquarters city within the next year alone, adding to the 33% employee growth rate 2014 has brought to our company as a whole. It’s an exciting time of expansion and evolution for LightEdge, and we are so happy to be able to make a difference in the wonderful Midwestern communities we call home.

LightEdge’s Kansas City office has moved!

October 27th, 2014

KC lobby area

As our Kansas City team continues to grow, we are excited to announce that we have moved our office from Overland Park, KS to Kansas City, MO. Now that we are in a larger office at Briarcliff, we have been able to centralize all of our Kansas City employees into one space. Similar to our recent office move in Des Moines, we are confident that this will continue to result in even better collaboration amongst our teams to consistently improve our product and service offerings for customers. This new location will also bring our team even closer to our Kansas City Data Center located in SubTropolis Technology Center.

Need to send us some snail mail or get our new address dropped into your remittance system? You can now find us at:

4100 N Mulberry Drive
Suite 100
Kansas City, MO 64116

To view LightEdge’s full contact information for all our Midwestern locations, please visit:

The Evolution to Intercloud

October 2nd, 2014

LightEdge’s VP of Product Development, Mike McHenry, shares his take on the recent Intercloud announcement, and dives into some key trends that led to this advancement in technology.

Earlier this week at Interop, Cisco announced a groundswell of support from over 30 Service Providers around the globe for their new Intercloud initiative. I am very excited that LightEdge was a part of this milestone announcement, but even more importantly, I am eager to see this partnership between Cisco and the Service Providers begin solving some of the major challenges we are all facing in IT.

LightEdge has always believed that the most important part of being a service provider is proximity to our customers. Being close to our customers allows us to provide the customized and personalized support essential to the effectiveness of their vital infrastructure. Having a local cloud also means we are able to keep customers’ data close to them and their end users. Taking several disruptive technologies from recent years into account, it’s clear why this is such a high priority for us.

When you look at the past 30 years of technology, you will see two major phases driving the lifecycle of disruptive technologies: initial introduction (high cost, inelastic consumption, low market adoption, distant from consumers) and commoditization (low cost, elastic consumption, high market adoption, proximal to consumers).

First, let’s examine how these phases play out in computing. When computing technology was first introduced, it was expensive, unwieldy, and consisted mainly of large rooms full of computers available only to massive corporations. As the technology evolved, these enormous mainframes shrunk to smaller form factors: desktop computers, laptops, tablets, and even smartphones. The commoditization displayed here clearly maps the transition of technology toward the hands of the end consumer.

Another example is networking, and on a broader scale, the Internet. Initially designed and used by the military and higher education, we now live in a world where everyone is connected all the time. Again, a disruptive technology was introduced to the market, and as time went by, the technology found its way into the hands of the end users. When you really stop and think about the growth of technology over the last 30 years it is staggering. Take an Internet connected tablet computer, for example. We literally have the knowledge of the entire human race at our fingertips! But even more amazing is its cost. Many centuries ago, kings would have gone to war for such knowledge and power, and we’re able to put that technology into the hands of our children to be used as a toy. Utterly amazing!

Data is another extremely disruptive technology, but one that I do not believe has completed its journey to commoditization yet. We have access to amazing quantities of storage at our fingertips; 3.5” hard drives have recently hit the 8TB mark. However, for all that storage, it’s barely a dent in what the end consumer wants to consume. Look no further than Netflix to see my point.

How does this tie back to Intercloud you might ask? First, we need to consider the journey of “cloud” over the years. In the broadest sense, I think of “cloud” as data. It is compute and network as well, but cloud is really a creation out of necessity to get data into the hands of end users. I tend to think of this journey as starting much earlier than the term “cloud” was coined. In the early days of technology, users would house their data on mainframes. As personal computers became prevalent, data pushed closer to the end user. Good, right? Not so fast. The growth of data quickly outpaced the capacity of end user devices. The need for elastic consumption of the technology pushed back on the need for proximity of the service, and as a result the data was pushed back toward centralized file servers. Even they could not keep pace with the data explosion. Data centers were born, then virtualization, and finally the “cloud”. Clouds are now reaching a point where they cannot cope with the explosion of all this data. In addition, Private Cloud attempts to move the data closer to the end users, but complexities and limitations that arise with the Hybrid Cloud approach make it difficult.

Intercloud is a major step toward that ultimate goal of getting data into the hands of the end users. In a nutshell, the vision is one of allowing any “cloud” to move data to any other “cloud”. The great thing about this approach is that not only does it help us push data closer to end users, but it also facilitates the next level of scale necessary should the data explosion continue (all signs point to “yes”). Imagine a world where “business data” is sent to your laptop for a flight. After landing, you walk into your local office and that business data is transparently moved to a private cloud at your location. Later in the day, you need to build a complex report and the local private cloud doesn’t have enough horsepower to complete the report. The data is sent to a local data center which has a farm of computers to help you pull information out of your business data…all happening behind the scenes.

This is a tough thing to imagine. Any IT person knows there are a million details behind that seemingly simple vision. This is why I am so excited about Cisco’s direction in this space. They are not only bringing Intercloud to the table, but technologies such as ACI (network) and VACS (application containers). While other cloud providers have chosen to create closed ecosystems, Cisco is embracing diversity. Whether Cisco or HP compute, EMC or NetApp storage, VMware or Hyper-V virtualization, UCS Director or OpenStack orchestration, Cisco wants to be the one stitching it all together. Cisco’s roadmap aligns well with our vision of providing elastic, local, “best of breed” IT services to our customers, and accentuates the risks facing those providers who choose to be a closed-technology shop.

Cisco Intercloud:

Cisco ACI:

Cisco VACS:

LightEdge joins Cisco Intercloud Partnership Ecosystem

September 29th, 2014

Today’s Intercloud announcement further solidifies our strategic long-term partnership with Cisco. LightEdge customers rely on us for local high-touch network, colocation, consulting, managed hosting, and cloud services. Although LightEdge is aggressively building data centers in the Midwest, we need to be able to serve those customers requiring a national or global footprint. Cisco’s Intercloud offering, with application-centric infrastructure, open standards, data center infrastructure, and a marketplace, allows us to do just that. In addition, partnering on this project allows us to continually innovate on our existing Cisco Powered cloud services. Whether it’s in our data centers, at the customer premises, in the Intercloud, or a hybrid of the three, LightEdge and Cisco are now working together to provide end-to-end managed solutions for our customers.
For additional information on Intercloud, check out the full press release from Cisco here: Intercloud Press Release.


VMworld 2014 Recap

September 12th, 2014

LightEdge Senior Cloud Engineer, Jon Hildebrand, has returned from VMworld in San Francisco, and brought some major highlights back to share. Check out many of the big announcements that were made in his guest blog post below.

VMworld 2014 Recap

The annual U.S. edition of the VMworld conference has come and gone, but it left us with plenty to think about.


One of the biggest announcements VMware made at the conference was that it is getting into the hardware business. Specifically, VMware has chosen to enter the hyper-converged hardware space, in which compute, storage, and network components are all contained in a very small form factor.

In a domain previously occupied by Nutanix and SimpliVity, VMware chose to enter the hardware realm by interweaving a few of its own software technologies with the hardware in EVO:RAIL. While the hardware itself doesn’t look much different from the offerings of the other hyper-converged players, VMware has chosen to tightly integrate EVO:RAIL with many of its own software layers. For example, while other vendors have their own ways of accessing storage, VMware uses its own VSAN technology for the storage layer in the appliance. There is also extremely tight integration between EVO:RAIL’s configuration and the vSphere software layers, which drastically reduces the turnaround time for full deployment of the hardware.

As this is just the first generation of the technology, there are some limitations. However, VMware has EVO:RACK planned, which suggests that any scaling issues RAIL may have on day one of its release should be resolved in future iterations. It will be interesting to watch the back and forth between VMware, Nutanix, and SimpliVity as they vie for dominance in the hyper-converged market.

Embracing OpenSource Projects

Having long been a major contributor to the OpenStack project, VMware announced a fully validated OpenStack architecture. VMware would also be providing full support for this architecture, just like the rest of its product lines.

VMware’s contributions to the OpenStack project are well documented, especially in the realm of software-defined networking (it helps when you acquire the company responsible for much of that initiative: Nicira, now VMware NSX). VMware has now produced a fully validated OpenStack architecture and is offering it to customers as VIO (VMware Integrated OpenStack). VMware hopes to enable broader OpenStack usage while reducing the complexity of building an OpenStack deployment for an organization.

VMware is offering up many of its technology layers to match various projects in the OpenStack initiative: NSX will primarily power Neutron, vSphere will power Nova, and VSAN will power Cinder and Glance. I will be keeping an eye on this as OpenStack seems to be gaining broader adoption in the overall virtualization community.
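To make that mapping concrete, here is a rough sketch of how the Nova-to-vSphere pairing is wired up in a standard OpenStack deployment, using the upstream Nova `vmwareapi` driver options (this is illustrative of the general approach, not necessarily VIO’s exact packaging; the host values are placeholders):

```ini
# nova.conf (compute node) - sketch of the vSphere-backed Nova setup
[DEFAULT]
# Tell Nova to schedule instances through vCenter instead of a local hypervisor
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# Connection details for the vCenter server managing the vSphere cluster
host_ip = <vcenter-ip>
host_username = <vcenter-user>
host_password = <vcenter-password>
# The vSphere cluster whose hosts will run Nova instances
cluster_name = <vsphere-cluster>
```

The same pattern repeats for the other layers: Neutron is pointed at an NSX plugin and Cinder/Glance at VSAN-backed datastores, so each OpenStack project delegates to the corresponding VMware technology underneath.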

One of the most interesting open source partnerships VMware announced was with Docker. For those unfamiliar with it, Docker is “an open platform for developers and systems administrators to build, ship, and run distributed applications.” The two companies are partnering to ensure that the Docker engine gets “first-class citizen” rights across a breadth of VMware products (Workstation/Fusion, vSphere, vCloud Air). VMware will also contribute heavily to Docker’s development, just as it has to OpenStack. Lastly, VMware and Docker will create interoperability between the Docker Hub and VMware’s management tools (vCenter, vCloud Automation Center, vCloud Air).
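For readers who haven’t seen Docker in action, the “build, ship, run” model boils down to describing an application’s environment in a Dockerfile and running it anywhere the Docker engine lives. A minimal, purely illustrative example (the base image version and the application file are hypothetical):

```dockerfile
# Start from a known base image so every deployment is identical
FROM ubuntu:14.04

# Bake the runtime dependencies into the image
RUN apt-get update && apt-get install -y python

# Ship the application inside the image itself
COPY app.py /opt/app/app.py

# Define what "run" means for this container
CMD ["python", "/opt/app/app.py"]
```

Building the image (`docker build -t myapp .`) and running it (`docker run myapp`) produces the same environment on a laptop, a vSphere VM, or a public cloud, which is exactly why tight integration with VMware’s management stack is attractive.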

As applications are being updated to more of a distributed model and written for operating systems that may not be today’s mainstream, it will be interesting to see how this partnership with Docker continues to grow and expand to further take both containers and virtualization to a whole new level. Keep an eye on this, folks!

All-Flash Storage Explosion

Stepping onto the floor of the Solutions Exchange, one could immediately tell that storage companies were going to dominate; in fact, all-flash storage vendors were the dominant force. Having attended VMworld in 2013, I felt that even more all-flash vendors had popped up on the radar in the last 12 months. Regardless of how you feel about brand-new vendors, it was apparent that each had a unique pitch for why its all-flash storage devices were the best choice. At this moment, all-flash storage does seem like overkill for the vast majority of applications out there, but given the quickening pace of application development, all-flash storage will likely end up being needed for all workloads.

It will be interesting to see how many of these storage companies are still around next year. What is more certain is that as flash array prices come down, we will be seeing more and more of these devices in our datacenters.


Currently, vSphere 6.0 is in beta, and while some features were released from NDA to be discussed at VMworld 2014, I look forward to VMworld 2015, where I’m sure the suite will be fully realized. Some of the technologies coming out of this beta look very interesting (VVols and higher-latency vMotion are two that spring to mind). It is worth keeping an eye on this beta as it evolves toward general availability.
