The future of data centers will rely on cloud, hyperconverged infrastructure and more powerful devices.
A data center is the physical facility providing the compute power to run applications, the storage to hold and process data, and the networking to connect employees with the resources needed to do their jobs.
Experts have been predicting that the on-premises data center will be replaced by cloud-based alternatives, but many organizations have concluded that they will always have applications that need to live on-premises. Rather than dying, the data center is evolving.
It is becoming more distributed, with edge data centers springing up to process IoT data. It is being modernized to operate more efficiently through technologies like virtualization and containers. It is adding cloud-like features such as self-service. And the on-prem data center is integrating with cloud resources in a hybrid model.
Once only available to large organizations that could afford the space, resources, and staff to maintain them, today’s data centers come in many forms, including colocated, hosted, cloud and edge. In all of these scenarios, the data center is a locked-away space, noisy and cold, that keeps your application servers and storage devices safe to do their thing 24 hours a day.
What are the components of a data center?
All data centers share a similar underlying infrastructure that enables reliable, consistent performance. Basic components include:
Power: Data centers need clean, reliable power to keep equipment running around the clock. A data center will have multiple power circuits for redundancy and high availability, with backup provided by uninterruptible power supply (UPS) batteries and diesel generators.
Cooling: Electronics generate heat, which, if not removed, can damage the equipment. Data centers are designed to draw heat away while supplying cool air to keep equipment from overheating. This complex balance of air pressure and airflow relies on alternating cold aisles, where chilled air is pumped in, and hot aisles, where the heated exhaust is collected.
Network: Within the data center, devices are interconnected so they can talk to each other. And network service providers deliver connectivity to the outside world, facilitating access to enterprise applications from anywhere.
Security: A dedicated data center provides a layer of physical security far beyond what can be achieved when computer gear is stored in a wiring closet or other location not specifically designed from the ground up for security. In a purpose-built data center, equipment is safely tucked away behind locked doors and housed in cabinets with protocols to ensure only authorized personnel can access the equipment.
What are the types of data centers?
On-premises: This is the traditional data center, built on the organization’s own property with all the necessary infrastructure. An on-premises data center requires a significant investment in real estate and resources, but it is appropriate for applications that can’t move to the cloud for security, compliance or other reasons.
Colocation: A colo is a data center owned by a third party that provides the physical infrastructure and management for a fee. You pay for the physical space, the power you consume, and network connectivity within the facility. Physical security is provided through locked data center racks or caged areas, and access to the facility requires credentials and biometrics to ensure only authorized personnel get in. There are two options within the colo model: You can maintain total control of your resources, or you can go with a hosted option in which the third-party vendor takes responsibility for the physical servers and storage units.
IaaS: Cloud providers such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure offer Infrastructure as a Service (IaaS), giving customers remote access to dedicated slices of shared servers and storage through a web-based interface where they can build and manage a virtual infrastructure. Cloud services are paid for based on resource consumption, and you can dynamically grow or shrink your infrastructure (a short code sketch after this list of data center types illustrates the provisioning model). The service provider manages all equipment, security, power, and cooling; as the customer, you never have physical access to it.
Hybrid: In a hybrid model, resources may be housed in multiple locations yet interact as if they were in the same place. A high-speed network link between the sites facilitates fast data movement. A hybrid configuration is well suited to keeping latency- or security-sensitive applications close to home while treating cloud-based resources as an extension of your infrastructure. A hybrid model also allows for the rapid deployment and decommissioning of temporary capacity, eliminating the need to over-provision purchases to support business peaks.
Edge: Edge data centers typically house equipment that needs to be closer to the end user, such as cached storage devices that hold copies of latency-sensitive data for performance reasons. It is also common to place backup systems in an edge data center, giving operators easier access to remove and replace backup media (such as tape) for shipment to offsite storage facilities.
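To make the IaaS consumption model concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The region, machine image ID, and instance type are placeholder assumptions, and a production deployment would also specify networking, security groups, and key pairs.

```python
import boto3

# Minimal sketch of on-demand IaaS provisioning. The region, AMI ID, and
# instance type below are placeholder assumptions, not values from this article.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small virtual server; billing is based on what is consumed,
# and capacity can later be grown or shrunk on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",          # small, pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```

The same grow-or-shrink idea applies whichever provider or SDK you use; the point is that infrastructure is requested and released on demand rather than purchased up front.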
What are the four data center tiers?
Data centers are built around service level agreements (SLAs) that account for the potential risk of service interruption over a calendar year. To reduce downtime, a data center will deploy more redundant resources for greater reliability (for example, there may be four geographically diverse power circuits feeding the facility instead of two). Uptime is expressed as a percentage, often referred to in terms of nines, reflecting how many 9s appear in the uptime figure, as in “four nines” or 99.99%.
Data centers are classified into four tiers:
- Tier 1: No more than 29 hours of potential service interruption in a calendar year (99.671% uptime).
- Tier 2: No more than 22 hours (99.741%).
- Tier 3: No more than 1.6 hours (99.982%).
- Tier 4: No more than 26.3 minutes (99.995%).
As you can see, there is a big difference between Tier 1 and 4 classifications, and as you would expect, there can be dramatic cost differences between tiers.
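The downtime figures above follow directly from the uptime percentages. Here is a minimal sketch in Python (assuming a 365-day, 8,760-hour year) showing how the nines translate into hours of potential interruption:

```python
# Convert an annual uptime percentage into allowable downtime per year,
# assuming a 365-day year (8,760 hours).
HOURS_PER_YEAR = 365 * 24

def annual_downtime_hours(uptime_percent: float) -> float:
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for tier, uptime in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    hours = annual_downtime_hours(uptime)
    print(f"Tier {tier}: {uptime}% uptime -> {hours:.1f} hours "
          f"({hours * 60:.1f} minutes) of downtime per year")
```

Running this reproduces the roughly 29 hours for Tier 1 down to about 26 minutes for Tier 4.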
What is hyper-converged infrastructure?
The traditional data center is built on a three-tier infrastructure with discrete blocks of compute, storage, and network resources allocated to support specific applications. In a hyper-converged infrastructure (HCI), the three tiers are combined into a single building block called a node. Multiple nodes can be clustered together to form a pool of resources that can be managed through a software layer.
Part of the appeal of HCI is that it combines storage, computing, and networking into a single system to reduce complexity and streamline deployments across data centers, remote branches, and edge locations.
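A rough way to picture the node-and-pool model is as a set of identical building blocks whose capacity the management layer adds up and presents as a single resource pool. This is a minimal sketch with made-up node sizes, not any vendor’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class HCINode:
    """One hyper-converged building block: compute, storage, and network combined."""
    cpu_cores: int
    storage_tb: float
    network_gbps: int

# A cluster is simply a collection of similar nodes (sizes here are hypothetical)...
cluster = [HCINode(cpu_cores=64, storage_tb=20.0, network_gbps=25) for _ in range(4)]

# ...that the management software presents as one pool of resources.
pool = {
    "cpu_cores": sum(n.cpu_cores for n in cluster),
    "storage_tb": sum(n.storage_tb for n in cluster),
    "network_gbps": sum(n.network_gbps for n in cluster),
}
print(pool)  # {'cpu_cores': 256, 'storage_tb': 80.0, 'network_gbps': 100}
```

Scaling out is then a matter of adding another node to the cluster rather than buying separate servers, storage arrays, and switches.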
What is data center modernization?
Historically, the data center was viewed as a distinct collection of equipment serving specific applications. As each application needed more resources, new equipment had to be procured, downtime scheduled to deploy it, and ever more physical space, power, and cooling consumed.
With the development of virtualization technologies, our perspective shifted. Today, we see the data center holistically as a pool of resources to be partitioned logically and, as a bonus, used more efficiently to serve multiple applications. As with cloud services, application infrastructures containing servers, storage, and networks can be configured on the fly from a single pane of glass. More efficient use of hardware allows for more efficient, greener data centers, reducing the need for more cooling and power.
What is the role of AI in the data center?
Artificial intelligence (AI) allows algorithms to take on the traditional data center infrastructure management (DCIM) role, watching power distribution, cooling efficiency, server workload, and cyber threats in real time and making efficiency adjustments automatically. AI can shift workloads to underutilized resources, detect potential component failures, and balance resources across the pool, all with little human interaction.
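The article doesn’t prescribe a particular algorithm, but the core idea of shifting work to underutilized resources can be sketched as a simple rule-based loop. The node names, utilization numbers, and 80% threshold below are illustrative assumptions; a real AI-driven DCIM platform would rely on much richer telemetry and predictive models.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    utilization: float                      # fraction of capacity in use (0.0 - 1.0)
    workloads: list = field(default_factory=list)

def rebalance(pool: list[Node], threshold: float = 0.8) -> None:
    """Move one workload off any node running above the utilization threshold.

    Toy heuristic only: utilization is not recalculated after a move, and a
    real system would predict load rather than react to a static snapshot.
    """
    for node in pool:
        if node.utilization > threshold and node.workloads:
            target = min(pool, key=lambda n: n.utilization)
            if target is not node:
                moved = node.workloads.pop()
                target.workloads.append(moved)
                print(f"Shifted {moved} from {node.name} to {target.name}")

# Hypothetical pool of three nodes with made-up utilization figures.
pool = [
    Node("node-a", 0.92, ["analytics-job"]),
    Node("node-b", 0.35, ["web-frontend"]),
    Node("node-c", 0.55, ["database"]),
]
rebalance(pool)   # prints: Shifted analytics-job from node-a to node-b
```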
Future of the data center
The data center is far from obsolete. CBRE, one of the largest commercial real estate investment and services firms, says new capacity in the North American data center market grew by 17% in 2021, much of it driven by hyperscalers like AWS and Azure, as well as social media giant Meta.
Enterprises are generating more data every day, whether that’s business process data, customer data, IoT data, OT data, data from patient monitoring devices, etc. And they are looking to perform analytics on that data, either at the edge, on prem, in the cloud, or in a hybrid model. Companies might not be physically building brand new, centralized data centers, but they are modernizing their existing data center facilities and expanding their data center footprint to edge locations.
Looking ahead, demand from autonomous vehicle technology, blockchain, virtual reality and the metaverse will only spur increased data center growth.