
Data Center Design in the Age of AI: Integrating AI with Legacy Infrastructure

Chris Sharp, Chief Technology Officer, Digital Realty

This is part 3 of our 3-part blog series about Artificial Intelligence (AI) and AI-ready data center infrastructure.

In the age of artificial intelligence (AI), how can enterprises evaluate whether their existing data center design can fully meet the requirements of running AI? There are major considerations for IT leaders as they develop their AI strategies and evaluate the landscape of their infrastructure.

This blog examines:

  • What is considered legacy IT infrastructure?
  • How to integrate new AI equipment with existing infrastructure.
  • How to evaluate data center design and legacy infrastructure.
  • The art of the data center retrofit.

Though AI workloads raise new challenges and questions around unique power and cooling needs, IT leaders should evaluate whether the state of their data center design fits these emerging and evolving requirements.

What is legacy IT infrastructure?

Identifying legacy infrastructure is part intuition and part experience. From the perspective of IT equipment, we might assume that anything not at the bleeding edge is legacy. However, this is often not the case.

Many of the world’s IT systems do not run on the latest and greatest hardware. This will continue due to the typical budget, spending, and equipment update cycles of everyone from hyperscale cloud providers to small enterprises.

Even in the age of AI, not every rack will be drawing 100 kW or need liquid cooling. Racks full of network, memory aggregation, or storage appliances may still be below 15 kW each and reliant on air cooling.

It becomes challenging to classify IT infrastructure as legacy or not based on its power draw alone. Various industry benchmarks show that new generations of central processing units (CPUs), graphics processing units (GPUs), network equipment, and other IT infrastructure assets are significantly faster than their predecessors, but this alone is often not enough to designate existing equipment as legacy infrastructure.

The best test is to ask whether current infrastructure is holding back the development and operational activities of the organization in ways that new generations of equipment would not.

If it is, it should be classified as legacy infrastructure.

Integrating AI with existing IT infrastructure

In the case of IT equipment, we can think of integrating AI either as utilizing existing servers and their supporting equipment to perform new AI functions, or as augmenting the hardware already deployed with new AI-specific equipment.

An example of the latter is taking an existing rack of CPU-based servers and adding two new GPU-based servers, providing more parallel computing muscle to launch a chatbot for a company’s internal users.

This may seem easier than accommodating a new AI high-density deployment, but it comes with three sets of challenges:

  • Adding GPU-based servers to an otherwise low-density aisle may create hot spots that the building’s cooling system was not originally designed to handle.
  • It may create uneven power loads across the facility and force a re-allocation of backup power resources.
  • It may lead to network congestion as the new equipment multiplies the data transferred per rack.

These factors put new pressure on the data center, which you should consider part of your IT stack in its own right; the rough sketch below illustrates how quickly that pressure can build.
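To make these pressures concrete, here is a minimal back-of-the-envelope sketch in Python. The per-server power figures and the 10 kW per-rack budget are illustrative assumptions, not specifications from any particular facility; the point is simply that a couple of GPU nodes can push a rack well past the power and cooling envelope its row was designed for.

```python
# Back-of-the-envelope check: does adding GPU servers to an existing rack
# exceed the power and cooling envelope the aisle was designed for?
# All figures are illustrative assumptions, not vendor or facility specs.

CPU_SERVER_KW = 0.5        # assumed draw of one existing CPU server
GPU_SERVER_KW = 5.0        # assumed draw of one GPU-based server
RACK_DESIGN_LIMIT_KW = 10  # assumed per-rack power/cooling budget

def rack_load_kw(cpu_servers: int, gpu_servers: int) -> float:
    """Total electrical load of a rack; nearly all of it becomes heat."""
    return cpu_servers * CPU_SERVER_KW + gpu_servers * GPU_SERVER_KW

before = rack_load_kw(cpu_servers=16, gpu_servers=0)
after = rack_load_kw(cpu_servers=16, gpu_servers=2)   # add two GPU nodes

print(f"Rack load before: {before:.1f} kW")
print(f"Rack load after:  {after:.1f} kW")
if after > RACK_DESIGN_LIMIT_KW:
    overload = after - RACK_DESIGN_LIMIT_KW
    print(f"Exceeds the aisle's design budget by {overload:.1f} kW: "
          "a likely hot spot and an uneven power load on the row.")
```

With these assumed numbers, the rack jumps from roughly 8 kW to 18 kW, well beyond a 10 kW design budget, which is exactly the kind of localized stress described in the list above.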

Evaluating data center design and legacy infrastructure

The data center is as much a piece of your IT infrastructure as the servers that you deploy in it, and so we should consider how this concept of legacy infrastructure applies to the data center facility as well.

In technology terms, the data center industry is no spring chicken. Digital Realty alone supports around 2.4 gigawatts of customer IT equipment globally, and this didn’t happen overnight.

Since our founding in 2004, we have incrementally added to our global data center capacity each year, and the customer equipment in those facilities doesn’t simply go away. Many organizations replace all their servers every three to five years, but some servers may stay deployed for up to eight years. When equipment is replaced, it’s done in phases so that the organization’s applications keep running without real downtime.

What this means is that the data center is always on. A data center operator can't simply lift all its customers’ IT equipment out, perform a wholesale upgrade of the facility, and then put it all back in. As time passes, the mix of customer equipment in the data center will typically contain some legacy and some non-legacy equipment.

Additionally, as the data center facility itself ages, some of its own characteristics, such as its airflow design, floor construction, and support for liquid cooling, may not be ideally suited to all the equipment that customers want to deploy.

For example, many data center facilities use a raised floor design. AI equipment drives higher rack densities not just in terms of power draw, but also in terms of weight, and in some cases these racks require a solid concrete slab floor.

This means that for certain use cases, some data centers can fit our definition of legacy infrastructure.

However, a well-designed data center is far more flexible to upgrade over time than a server or a set of IT equipment spread across multiple racks. A data center may last 15 to 20 years or more, depending on how well the operator designs, retrofits, and modularizes it over time.

AI has prompted a sea change in rack densities and other requirements that impact the data center. Often the data center operator can upgrade parts of the facility to accommodate these new needs.

The art of the data center design retrofit

This process is known as retrofitting, and the art of the retrofit is a key part of how effectively a data center operator can design facilities for current and future generations of servers and other IT infrastructure. Imagine an older data center facility originally designed for an average of 10 kW per rack. With the emergence of AI, that same facility may be expected to support 100 kW per rack without the luxury of a total shutdown and ground-up redesign.
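To get a feel for what that jump means for cooling, here is a rough sketch using the standard sensible-heat rule of thumb for air cooling (CFM ≈ 3.16 × watts ÷ ΔT in °F). The 20 °F inlet-to-outlet temperature rise and the rack densities are assumptions chosen for illustration, not figures from any specific facility.

```python
# Rough airflow estimate: how much air would it take to remove a rack's heat?
# Sensible-heat approximation: CFM ≈ 3.16 * watts / delta_t_f, where delta_t_f
# is the allowed inlet-to-outlet air temperature rise in degrees Fahrenheit.
# The 20 °F rise and the rack densities below are illustrative assumptions.

def required_airflow_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Cubic feet per minute of air needed to carry away rack_kw of heat."""
    return 3.16 * (rack_kw * 1000) / delta_t_f

for rack_kw in (10, 30, 100):
    print(f"{rack_kw:>3} kW rack -> ~{required_airflow_cfm(rack_kw):,.0f} CFM")

# ~1,580 CFM for a 10 kW rack is routine; ~15,800 CFM for a 100 kW rack is
# not practical through a single rack, which is why high-density retrofits
# typically bring liquid cooling to the rack rather than more air.
```

Even with generous assumptions, the arithmetic shows why a tenfold jump in rack density is a facility-level problem rather than a rack-level one.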

The flexibility to support these types of changes to the data center over time is a key part of how we design and operate our data centers. For example:

  • Where a raised floor is no longer needed, it can be filled in.
  • Where liquid cooling is required, we can run piping from a new chiller unit and reservoir to the rack.
  • Where new network capabilities are required, we can bring in additional connectivity and optimize all of the network assets inside the facility itself to match.
Video: Digital Realty CTO Chris Sharp, “The Benefits of Digital Realty’s Modular Design Approach”

Today, the data center is as flexible, modular, and tuned to evolve with the needs of its customers as any other part of your IT stack. The requirements to support AI in the data center are certainly challenging, and we analyze all our data centers globally to stay on top of how to evolve our design and operations to accommodate developing AI requirements.

Empower your IT strategy with future-proofed AI-ready infrastructure. Download our AI for IT Leaders whitepaper.

