IoT, Machine Learning and OCP: the next level of data centre energy efficiency
Content Copyright © 2018 Bloor. All Rights Reserved.
Also posted on: IT Infrastructure
What with the EU Code of Conduct, The Green Grid and PUEs hovering around 1.1 for the most efficient data centres, you might be forgiven for thinking that there weren’t many more energy efficiencies to be squeezed out. Obviously, there are plenty of older data centres that are not particularly energy efficient, but even in new data centres there is still more that can be achieved.
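For reference, PUE (Power Usage Effectiveness) is The Green Grid’s metric of total facility energy divided by the energy consumed by the IT equipment itself, so a PUE of 1.1 means only around 10% of the power entering the facility goes on cooling, power distribution and everything other than the IT load.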
I don’t want to get into a big debate about the importance, or otherwise, of PUEs. They are a useful indicator of relative energy efficiency, but they are by no means the only measure that needs to be used. More can be done to reduce the power consumption of the servers themselves, and the use of IoT sensors and machine learning can help drive further efficiencies in even the most modern data centre.
One of the key guiding principles of the Open Compute Project (OCP) is to drive greater data centre energy efficiency. The OCP server and rack designs reduce energy consumption by 29% according to a study by CERN. They also allow the aisle behind the servers to run much hotter, because all access is from the front, which makes the hot exhaust air an efficient source of heat for a local housing grid.
But not everyone can switch quickly to OCP designs for servers and the data centres themselves. Not everyone can place their data centres in Iceland or the Nordics to take advantage of lower cooling requirements and cheaper sources of renewable energy. New cooling technologies help. Data Centre Infrastructure Management (DCIM) solutions and energy modelling tools capture a lot of data and help in the design and layout of new facilities but, crucially, they don’t provide the ability to predict server loads and manage cooling and power utilisation proactively.
Internet of Things (IoT) sensors are low cost and simple to install. They can be used on their own, or to supplement existing monitoring devices in data centre equipment that perhaps doesn’t capture the granular, targeted data needed. The trick then is to correlate this sensor data, across all data centre equipment, with data on server loads in near real-time and feed it into machine learning algorithms. A feedback loop and a simple management console then ensure that this constantly updated and refined information can be used to predict power and cooling requirements against server loads, again in near real-time.
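To make that concrete, the sketch below shows the shape of such a loop in Python: sensor readings and server load figures arrive as a stream, a rolling window of recent samples is used to refit a simple model, and that model is then asked to predict the cooling power needed for a forecast load. This is purely illustrative, not any particular vendor’s implementation; the Reading and CoolingPredictor names, the field names and the linear model are all assumptions, and a production system would use far richer features and a more capable learner.

```python
# Illustrative sketch only: correlate IoT sensor readings with server load
# in near real-time, refit a simple model, and predict cooling requirements.
from dataclasses import dataclass
from collections import deque
import numpy as np
from sklearn.linear_model import LinearRegression


@dataclass
class Reading:
    server_load_pct: float    # aggregate IT load for a rack or zone, 0-100 (assumed field)
    inlet_temp_c: float       # IoT sensor: cold-aisle inlet temperature (assumed field)
    cooling_power_kw: float   # metered power drawn by the cooling plant (assumed field)


class CoolingPredictor:
    """Keeps a rolling window of readings and refits a simple model on each update."""

    def __init__(self, window: int = 1000):
        self.readings = deque(maxlen=window)  # constantly updated, bounded history
        self.model = LinearRegression()
        self.fitted = False

    def update(self, reading: Reading) -> None:
        """Feed one near-real-time sample into the window and refit the model."""
        self.readings.append(reading)
        if len(self.readings) >= 30:  # wait for a minimal history before fitting
            X = np.array([[r.server_load_pct, r.inlet_temp_c] for r in self.readings])
            y = np.array([r.cooling_power_kw for r in self.readings])
            self.model.fit(X, y)
            self.fitted = True

    def predict_cooling_kw(self, expected_load_pct: float, inlet_temp_c: float) -> float:
        """Predict the cooling power needed for a forecast server load."""
        if not self.fitted:
            raise RuntimeError("not enough sensor history yet")
        return float(self.model.predict([[expected_load_pct, inlet_temp_c]])[0])


if __name__ == "__main__":
    # Simulate a stream of correlated load/temperature/cooling samples,
    # then ask for a prediction at a forecast 75% load.
    rng = np.random.default_rng(0)
    predictor = CoolingPredictor()
    for _ in range(200):
        load = rng.uniform(20, 90)
        temp = 18 + 0.05 * load + rng.normal(0, 0.3)
        cooling = 5 + 0.12 * load + 0.4 * (temp - 18) + rng.normal(0, 0.2)
        predictor.update(Reading(load, temp, cooling))
    print(f"Predicted cooling at 75% load: {predictor.predict_cooling_kw(75, 22):.1f} kW")
```

In practice the same prediction would be fed back to the building management or cooling control system, which is where the proactive management of power and cooling comes from.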
One of the by-products of such a predictive approach is that mechanical data centre equipment can be used much more effectively. This not only reduces energy usage, but can also cut the running time of the equipment by as much as 35%. Less running time means less wear and tear, lengthening the equipment’s potential life and thereby reducing capital expenditure.
For hyperscale data centre operators, large cloud service providers and enterprises whose value propositions are based largely on electronic, rather than physical, infrastructure, even small reductions in energy usage and capex will have a significant impact on costs and margins. Enterprises with a more traditional mix of business models and reliance on technology often have less efficient facilities. For them, IoT and machine learning solutions can generate substantial savings and may be a simpler, more cost-effective way of reducing energy costs than investing in expensive DCIM and modelling solutions or, worse, in new data centre mechanical equipment that may not have been necessary.
I’ll be returning to this topic later this year to review the market and emerging vendors.