Teradata introduces IntelliFlex
Content Copyright © 2016 Bloor. All Rights Reserved.
Also posted on: The IM Blog
Teradata has announced its next-generation MPP (massively parallel processing) architecture, known as IntelliFlex. To understand how this works, it helps to understand how the existing platform is engineered. Here, each compute node connects directly to its storage devices, and once you've cabled this up you can't change it. Moreover, if that's the configuration for one node, it has to be the configuration for all your other nodes. So the way you scale is constrained by this approach: there is a set relationship between CPUs and storage.
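To make the constraint concrete, here is a minimal sketch of the idea in Python. It is purely illustrative, my own model rather than anything from Teradata: the class and function names are invented, and the point is simply that when storage is cabled to compute, scaling out buys both whether you need both or not.

```python
# Toy model of the classic architecture: storage is physically cabled to
# each compute node, so the CPU-to-storage ratio is fixed at build time.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the cabling cannot change once built
class ClassicNode:
    cores: int
    disks: int  # directly attached; cannot be reassigned later

def scale_out(cluster: list[ClassicNode], extra_nodes: int) -> list[ClassicNode]:
    """Scaling adds identical nodes: more CPU always drags more storage along."""
    template = cluster[0]  # every node must match the existing configuration
    return cluster + [ClassicNode(template.cores, template.disks)] * extra_nodes

cluster = [ClassicNode(cores=24, disks=12)] * 4
cluster = scale_out(cluster, 2)  # wanted CPU only, but acquired 24 more disks too
```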
With IntelliFlex you still have a shared-nothing architecture, but instead of storage connecting directly to compute nodes, both connect through the interconnect (BYNET). And you can, more or less, have as many storage devices connecting to the interconnect as you want, and as many compute nodes as you want. In other words, you can now scale these independently. That's quite a big deal: if you're processor bound you don't have to buy more storage, and if you're I/O bound you don't have to buy more processing power.
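Continuing the same toy model (again, with invented names, not Teradata's actual BYNET interfaces), decoupling the two pools through the interconnect means each can grow on its own:

```python
# Toy model of the IntelliFlex-style layout: compute nodes and storage
# devices both attach to the interconnect rather than to each other.
from dataclasses import dataclass, field

@dataclass
class Fabric:
    compute_nodes: list[int] = field(default_factory=list)    # cores per node
    storage_devices: list[int] = field(default_factory=list)  # capacity per device

    def add_compute(self, cores: int) -> None:
        self.compute_nodes.append(cores)         # no storage purchase implied

    def add_storage(self, capacity_tb: int) -> None:
        self.storage_devices.append(capacity_tb)  # no CPU purchase implied

fabric = Fabric()
fabric.add_compute(24)   # processor bound? add compute only
fabric.add_storage(10)   # I/O bound? add storage only
```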
Another direct consequence of this architecture concerns maintaining consistent system performance. Historically, you would typically implement one standby node for every three, or even every two, active nodes. With IntelliFlex it is perfectly reasonable to have one standby node for a dozen active nodes, all connected through the same interconnect.
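The economics of that are easy to check with back-of-the-envelope arithmetic: the fraction of purchased nodes sitting idle as standby drops sharply. A quick sketch:

```python
# Standby overhead: the fraction of all purchased nodes held in reserve.
def standby_overhead(active: int, standby: int = 1) -> float:
    return standby / (active + standby)

print(f"1 standby per 2 active:  {standby_overhead(2):.0%}")   # 33%
print(f"1 standby per 3 active:  {standby_overhead(3):.0%}")   # 25%
print(f"1 standby per 12 active: {standby_overhead(12):.1%}")  # 7.7%
```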
It is worth commenting here that, from a Teradata perspective, this is not new technology. I gather that the company knew how to do this way back in the last century but it is only now that disk manufacturers are catching up.
There are some other ramifications. For example, you can triple both your memory capacity and your memory bandwidth. Again, you need to understand how Teradata environments have previously worked. Suppose, for example, you had a node with 24 cores and 512GB of memory. You can now split this into three nodes, each with 8 cores and each with 512GB. You get the same degree of parallelism (you still have 24 cores) but three times the memory and three times the aggregate bandwidth. Of course, this may feel like a cheat, but that doesn't mean it isn't real or valuable.
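The arithmetic behind that worked example, spelled out (bandwidth here is simply assumed to scale with the node count, since each node brings its own memory controllers):

```python
# The example from the text: one 24-core node versus three 8-core nodes.
before = {"nodes": 1, "cores_per_node": 24, "gb_per_node": 512}
after  = {"nodes": 3, "cores_per_node": 8,  "gb_per_node": 512}

for label, cfg in (("before", before), ("after", after)):
    cores = cfg["nodes"] * cfg["cores_per_node"]   # parallelism is unchanged
    memory = cfg["nodes"] * cfg["gb_per_node"]     # total memory triples
    print(f"{label}: {cores} cores, {memory} GB, {cfg['nodes']}x memory bandwidth")
```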
There are other implications of this new level of flexibility. Some of these, such as reduced downtime for planned upgrades, have been announced along with IntelliFlex, but there are other potential consequences that have not yet been publicly aired. I'll leave you to work out what these might be.