Society relies on digital applications for work, education, transportation, entertainment, healthcare, and just about every other aspect of modern life. Through these applications, we create and consume massive amounts of data; global data traffic roughly tripled between 2019 and 2022. All that data – all of the information we think of as ‘in the cloud’ – is processed and stored inside a data centre.
While data centres have become cornerstones of the digital world, few people have ever been inside one. You might drive by a data centre on your daily commute and not even know it, but understanding how a data centre works can help explain the value of this emerging asset class.
It’s a well-worn adage in real estate: “location, location, location.” This is true for homes and shops, and it’s also integral to the value of data centres. The importance of location is why leading data centre developers have site selection teams dedicated to identifying properties that meet tenant-specific needs.
Being in a major city is important, but even within a particular urban hub, data centre developers need to find the locations closest to end-users with the highest-grade infrastructure. Dallas, for example, is one of the world’s top data centre markets, with high marks on factors like power cost and network connectivity. Even within the metro region, however, some submarkets are widely considered higher risk than others.
To ensure that a data centre provides fast, stable service for users while yielding reliable returns for investors, data centre operators need to consider a number of elements.
Site Selection Factors
- Economical, stable power supply
- Low risk of natural disaster
- Strong network connectivity
- Favorable tax laws
- Renewable energy availability
- Access to technical talent
Mission Critical Assets
From the outside, a data centre may resemble a high-tech fortress. One hyperscale campus under development in Goodyear, Arizona will be the size of 35 football fields, and these properties can be as large as institutional-grade logistics distribution centres. Many data centres are sited to avoid, or designed to withstand, natural disasters like tornadoes and floods, as well as human-caused risks such as a truck collision or an airplane crash.
The mission of a data centre is to ensure tenants can transfer data between their servers and storage devices and their end users. Delivering on that mission requires three components: network equipment, which manages the data flowing through the ‘pipes’ of the data centre; power infrastructure to keep the network, cooling, and IT equipment running; and cooling infrastructure to remove the heat generated by all those circuits.
In a concurrently maintainable data centre (designed to “Tier III” standards or better) mission-critical equipment is redundant. This requires at least two instances of each critical component, with sufficient spares to keep the network, power, and cooling systems running even if a component is offline due to maintenance or failure.
Network redundancy means at least two separate cable entry points, at least two distinct meet-me room data exchanges, and at least two sets of cable distribution systems. It’s critical to ensure physical network elements (such as a ‘pair’ of dark fiber connections) enter the data centre from independent sources to avoid single points of failure upstream of the data centre.
Redundant power infrastructure means two utility feeds from independent sources, two sets of uninterruptible power supplies (UPS), and two separate power distribution systems. Cooling infrastructure like air handlers, chillers, and pumps likewise need to be redundant.
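The value of duplicating critical components can be illustrated with a simple availability calculation. This is an illustrative sketch only; the 99.9% figure below is a hypothetical assumption, not a number from the article.

```python
# Illustrative sketch: why redundant critical components matter.
# The 0.999 availability figure is hypothetical, not from the article.

def redundant_availability(single: float, copies: int = 2) -> float:
    """Availability of a system that works as long as at least one
    of several independent copies of a component is working."""
    return 1 - (1 - single) ** copies

utility_feed = 0.999  # hypothetical: one feed is up 99.9% of the time

print(redundant_availability(utility_feed, copies=1))  # 0.999
print(redundant_availability(utility_feed, copies=2))  # 0.999999
```

With one feed at 99.9% availability, the expected downtime is measured in hours per year; with two independent feeds it drops to seconds, which is why Tier III designs require at least two of everything critical.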
Data comes in and out of the data centre via fiber-optic cables operated by a network provider or via ‘dark fiber’ dedicated to, and operated by, a single tenant. Most data centres are ‘carrier neutral’ meaning they allow any carrier to deploy their network infrastructure and/or run fiber-optic cables into the facility.
On the power side, electricity typically moves through three stages before reaching tenants’ IT equipment:
- On-site generators – A concurrently maintainable data centre (designed to “Tier III” standards) must be able to continue operating for at least 12 hours if utility power goes out. That requires on-site generating capabilities such as diesel generators and enough fuel stored on site to power them.
- Uninterruptible power supply – Instead of going directly to tenants’ IT equipment, power for the facility passes through a UPS system that protects servers, routers and other gear against disruptions like power surges and also provides temporary emergency power to keep the data centre running in case of a utility outage.
- Distribution – After passing through the UPS, power is distributed directly to the data halls and the tenants’ IT equipment.
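The 12-hour runtime requirement translates directly into on-site fuel storage. A back-of-envelope sizing sketch follows; the facility load and diesel burn rate are hypothetical assumptions for illustration, not figures from the article.

```python
# Back-of-envelope sketch of on-site diesel sizing for the 12-hour
# runtime requirement. Load and burn rate below are hypothetical.

def fuel_required_litres(load_kw: float, hours: float,
                         litres_per_kwh: float = 0.3) -> float:
    """Diesel needed to carry a given electrical load for a given
    duration, at an assumed consumption rate per kWh generated."""
    return load_kw * hours * litres_per_kwh

# Hypothetical 10 MW facility, 12 hours, assumed 0.3 L of diesel per kWh
print(fuel_required_litres(10_000, 12))  # 36000.0 litres
```

Even under these rough assumptions, a mid-sized facility needs tens of thousands of litres of fuel on site, which is why generator yards and fuel tanks are such prominent features of data centre campuses.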
The electricity used in a single data centre building can be enough to power 36,000 homes. The IT equipment drawing all this electrical capacity generates substantial heat, and that heat must be removed.
There are a range of cooling infrastructure technologies on the market and the ‘best’ depends on the type of work the IT equipment is doing, on the local climate, and on tradeoffs between energy efficiency and water efficiency.
With all other factors being equal, closed-loop air-cooled chillers use less water but more energy than water-based evaporative cooling systems. In water-constrained markets and markets where renewable energy is readily available, leading data centre developers are increasingly relying on air-cooled chillers. These systems use water pumped through a closed loop of pipes to extract the heat from the data hall and expel it into the outside air.
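One standard way engineers quantify the energy side of this tradeoff is Power Usage Effectiveness (PUE): total facility power divided by IT power, where lower is better. PUE is an industry metric not named in the article, and the figures below are hypothetical.

```python
# Illustrative sketch of Power Usage Effectiveness (PUE), a standard
# efficiency metric. The kW figures below are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT power (lower is better;
    the ideal lower bound is 1.0)."""
    return total_facility_kw / it_load_kw

# Hypothetical comparison for the same 10 MW of IT load:
print(pue(total_facility_kw=12_000, it_load_kw=10_000))  # 1.2
print(pue(total_facility_kw=13_500, it_load_kw=10_000))  # 1.35
```

Under these assumed figures, the higher-PUE design spends 35% of the IT load again on cooling and overhead versus 20%, which is the kind of gap that drives the choice between evaporative and air-cooled systems in a given market.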
A large-scale data centre houses hundreds of millions of dollars of IT equipment and – even more valuable – the IT systems and proprietary data that are the beating hearts of most companies.
This data lives in servers in the data hall. If you’re standing just inside a data hall, you’ll see a large room with rows and rows of servers stacked in racks.
Chilled supply air can be delivered to the server racks in many ways, including through a raised floor plenum, through ductwork above the racks, or through rows of fans lining the data hall, aptly referred to as ‘fan walls.’
As density within data halls increases, tenants may look to more advanced approaches for removing heat, including liquid cooling in addition to, or instead of, forced air. Often, liquid-cooling equipment such as rear-door heat exchangers, or even direct-to-chip cooling, can be incorporated into traditional forced-air data halls.
Some data centre operators have been pioneering liquid immersion cooling to attain greater efficiency; however, the technology has yet to achieve widespread adoption because it requires specialized servers, equipment, and materials.
How a particular data hall is configured depends on the particular needs of the tenant. Hyperscale companies that operate gigawatts of data centre capacity around the world typically prefer standardized deployments across their portfolio – but the configuration of one company’s data hall may be quite different from its competitors’.
Ensuring that data hall designs support the broadest set of tenants and allow for ready deployment of client-requested configurations without requiring one-off customisation means data centre operators must have deep relationships with tenants and experienced teams that understand operational requirements.
The equipment and expertise required to operate and maintain data centres provide the convenience of cloud services that power Netflix in millions of homes and collaboration tools across hundreds of thousands of enterprises.
Given the complexity involved in designing, building, and operating this type of digital infrastructure, experience, resources, and specialist managers and partners are critical. For investors wanting to participate in this exciting, in-demand sector, we believe data centres represent an attractive long-term investment opportunity.
To find out more about data center innovation and investment visit www.principalam.com
About Principal Asset ManagementSM
With public and private market capabilities across all asset classes, Principal Asset ManagementSM and its specialist investment teams apply local insights with global perspectives to deliver compelling investment opportunities aligned with client objectives. Principal Asset Management is the global investment management business for Principal Financial Group® (Nasdaq: PFG), managing $525.2 billion in assets1.
1 Principal Asset Management AUM as of June 30, 2023.
Principal Asset ManagementSM is a trade name of Principal Global Investors, LLC.