The Evolution To Cloud Computing (How Did We Get Here?)
Most of us know the basics of cloud computing, so in this blog I will focus on why it is important to understand how we arrived at this point in the information technology industry. I won’t spend too much time reminiscing about the past, as too many television documentaries do, but there is value in understanding the origins of cloud computing.
The basic concepts behind cloud computing have been part of the IT industry all along. Dust off an old mainframe concepts book and you will be surprised by the similarities to today’s cloud computing. History does tend to repeat itself and this applies to the computer industry as well — I will explain how historical technology trends have brought us here, how we’ve already “been here” for 30 years, and how these historic principles are still valuable today.
From Mainframe Centralized Compute to Distributed Compute and Back Again
In the early days of computer technology, the mainframe was a physically massive, centralized computing platform accessed by end users through terminals. Those terminals could be compared to the thin client devices of today’s industry, and the mainframe to the centralized cloud computing platform. This centralized mainframe held all of the computing power (CPU), memory, and storage, managed by a small staff for shared use by a large number of users.
There are further similarities when comparing mainframe computing environments to today’s cloud. Although the mainframe was physically enormous, it wasn’t all that powerful in terms of memory and CPU until well into the 1980s. What the mainframe excelled at was input/output throughput: its ability to move data through the system. When managed wisely, the mainframe was supported by a centralized IT staff that maintained security, account management, backup/recovery, system upgrades, and customer support: all components of today’s modern cloud system.
Virtualization is another area that existed 30+ years ago and was heavily utilized in mainframe computing. Multiple customers and users share the overall system, but each physical server hosts multiple virtual machines (VMs), each running its own operating system and applications with its own allocation of memory, CPU, storage, and network. Today’s high-density server hardware is exponentially more powerful than most mainframes were 30 years ago, but the practicality and benefits of virtualization have been well proven over the past three decades.
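The resource-partitioning idea behind virtualization can be sketched in a few lines. This is a purely hypothetical illustration (the class names and capacity figures are invented for the example), not any real hypervisor’s API:

```python
# Sketch of carving one physical host into VMs, each with its own allocation.
# All names and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    vcpus: int
    memory_gb: int

@dataclass
class Host:
    cores: int
    memory_gb: int

    def can_place(self, vms):
        """True if the VMs' combined allocations fit on the physical host."""
        return (sum(v.vcpus for v in vms) <= self.cores and
                sum(v.memory_gb for v in vms) <= self.memory_gb)

host = Host(cores=32, memory_gb=256)
vms = [VM("web", 4, 16), VM("db", 8, 64), VM("batch", 8, 32)]
print(host.can_place(vms))  # the three VMs fit with headroom to spare
```

The same capacity check, whether done by a 1980s mainframe partition manager or a modern hypervisor scheduler, is what lets many tenants safely share one machine.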
For about 20 years, starting in the late 1980s and running into the 2000s, the industry made a huge shift to distributed computing with small servers (compute devices). Each server eventually held more memory, CPU, and storage than even the best mainframe, but offered inferior levels of shared computing, shared management, and I/O performance. After 20 years of deploying countless new, smaller servers across thousands of data centers, compute resources (CPU, memory, storage, networking) and management functions (security, operations, backup/recovery) are now spread out across organizations, and sometimes even across multiple contractors or providers.
By most business models, the cost of managing our systems actually went up; however, the cost of compute power is a fraction of what it once was due to ever-increasing performance at lower prices each year. Combine this with the pace of technology improvements, and parts of the industry began experiencing over-allocation of compute resources: large network servers had more horsepower than needed and spent much of their time idle.
Transforming to Consolidated Computing
Most large organizations have numerous data centers, server farms, storage systems, and applications spread across offices, campuses, and cities worldwide. Organizations have realized that these numerous distributed facilities are too expensive and inefficient for many reasons (availability of skilled labor, physical location and asset costs, maintenance and upkeep, etc.). The consolidation of server farms and data centers is in full swing, both as a cost-savings measure and as a way to better focus an organization’s IT staff. Some organizations have matured to the point of realizing that the huge internal IT resources and costs they carry would be better spent on core customer-facing services, with much of the IT department outsourced. This will eventually bring down operational and management costs, doing more with fewer IT facilities and personnel, which are among the costliest assets.
Servers are being consolidated at an increasing pace, achieving densities within data centers never before thought possible. Using smaller high-density server hardware and storage systems, one rack of equipment in a data center can easily host the equivalent of more than 10 racks of legacy servers. This consolidation of data center facilities, server farms, and storage systems packs so many compute resources into small spaces that the building’s power and HVAC become the limiting factors. Some data centers are now utilizing alternative energy sources, advanced air handling systems, and other “green” energy technologies to supplement the normal power system (not to mention the environmental benefits of drawing less power from the grid).
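The 10-to-1 consolidation claim above is simple arithmetic. A quick sketch, using purely illustrative density and power figures rather than measurements from any specific facility:

```python
# Back-of-the-envelope consolidation math. All figures below are
# illustrative assumptions chosen for the example.
legacy_servers_per_rack = 10    # older 4U servers, one workload each
modern_servers_per_rack = 40    # 1U high-density virtualized hosts
vms_per_modern_server = 3       # a modest virtualization ratio

legacy_workloads_per_rack = legacy_servers_per_rack
modern_workloads_per_rack = modern_servers_per_rack * vms_per_modern_server

consolidation_ratio = modern_workloads_per_rack / legacy_workloads_per_rack
print(f"One modern rack hosts the workloads of ~{consolidation_ratio:.0f} legacy racks")

# The trade-off noted above: packing compute this densely concentrates
# power draw and heat, so building power and HVAC become the limits.
legacy_rack_kw, modern_rack_kw = 4.0, 20.0
print(f"Power per rack: {legacy_rack_kw} kW -> {modern_rack_kw} kW")
```

Even with conservative assumptions the ratio clears 10:1, which is why the constraint shifts from floor space to power and cooling.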
The Evolution to Today’s Cloud Environment
In a relatively short period of time, we have gone from centralized compute processing with thin end-user/edge devices (called terminals) to highly distributed compute environments, and are now headed back to centralized computing once again. History is repeating itself — let’s hope we are making some intelligent decisions and doing it even better this time. Some could argue that mainframes still play a huge role in today’s IT industry, and they may have been the “right” business model all along.
As we consolidate distributed computing platforms and data centers, and occasionally retire a mainframe system, it is important to understand where we are headed and why. Looking at today’s cloud computing environment and our immediate future, decisions are being made that make this shift back to centralized computing better than it was in the early IT days of the 1960s and 1970s.
Here are some of the benefits and challenges faced today in the shift from centralized, to distributed, and back to centralized compute (or at least a hybrid):
- Managed service contracts are being replaced by cloud service providers, at lower cost and lower risk to the consuming organization
- Organizations pay for the cloud usage they actually consume, which is carefully monitored and measured; in past managed services models, it was difficult to tie IT spending to actual results
- Centralized compute resources within the cloud provider are managed by fewer personnel, with heavy use of automation and consistent processes, resulting in lower cost to consumers
- Consuming organizations no longer need a sophisticated IT staff, which is expensive and hard to find and keep; internal technical talent will either focus on mission-critical core business services or transition to work for a cloud provider. This improves quality, maintainability, and security, and reduces cost to the consumers of cloud.
- There are not yet enough proven cloud providers to give customers a truly wide selection to choose from. Numerous legacy IT integrators claim to provide cloud solutions but are essentially still legacy managed service providers.
- Organizations have significant legacy computing resources (servers, data centers, and IT personnel) that will need to be transitioned or eliminated in order to achieve the true cost savings and flexibility that cloud providers and services offer
- Mission-critical applications that are core to the consuming organization’s business must be transitioned to the cloud. This is neither quick nor easy, and will take time. Businesses need to evaluate whether a custom or legacy application is truly needed and worth the re-investment, or whether an alternative, already cloud-enabled service is a better fit in the long term.
- Procurement and budgeting for cloud services is a challenge to some commercial and government organizations. Existing procurement policies may need to be adapted.
- Existing security, operations and other processes within consuming organizations need to adapt to this new cloud computing model.
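The metered, pay-per-use model mentioned in the bullets above can be sketched in a few lines. The meter names and unit rates here are hypothetical, invented for illustration, and are not any provider’s actual price list:

```python
# Minimal sketch of pay-per-use metering: the billing model that
# distinguishes cloud services from fixed-price managed service contracts.
# Meter names and rates are hypothetical assumptions.
RATES = {
    "vm_hours": 0.05,          # $ per VM-hour
    "storage_gb_month": 0.02,  # $ per GB-month stored
    "egress_gb": 0.09,         # $ per GB transferred out
}

def monthly_bill(usage):
    """Sum metered usage times unit rate; unknown meters bill at zero."""
    return sum(qty * RATES.get(meter, 0.0) for meter, qty in usage.items())

usage = {"vm_hours": 720, "storage_gb_month": 500, "egress_gb": 100}
print(f"${monthly_bill(usage):.2f}")  # -> $55.00
```

Because every charge maps to a measured quantity, the consuming organization can see exactly what its IT spending bought, something the old managed-services model made difficult.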
Where Are We Now?
Organizations now understand they may not be in the IT industry, and are taking this opportunity to outsource their IT computing needs to cloud providers. They are not just outsourcing labor to an IT contractor this time, as was the norm over the past 20 years, but are hiring truly established cloud providers offering pay-per-use service at little or no capital expense. The burden of building, maintaining, upgrading, and operating the compute systems falls on the cloud provider, giving the consuming organization ultimate flexibility and choice of providers without being locked in to one. This results in faster deployment of services at a lower cost, so that the consuming organization can focus on its core business functions and customers, not on its IT department. This is the paradigm shift that has taken 30 years to achieve.
So where are we headed in the cloud industry? There will be a reduction in the use of traditional “managed services” and “time and materials” IT contractors providing computer services labor. Both small and large consumers of the cloud will be able to select the provider, pay for the services utilized, and terminate their agreement if finances or priorities of the business change. Organizations won’t be stuck with unneeded computer systems, server farms, and data centers, leading to greater agility in their overall business decisions.