As an OCP-certified solutions provider and manufacturer, StackVelocity is dedicated to creating cost-effective pathways that help our customers transition from the 19” EIA standard to the 21” OCP standard.
Making that transition is no small challenge. Many have found that moving to OCP forces big trade-offs in the datacenter: either you don’t have enough power to fully populate the rack, or you leave much of the rack empty, which wastes space. It’s the classic Catch-22: you can add more power to the rack, but that leaves less space for compute; you can add more compute, but then you don’t have enough power. Either way, you end up needing more racks in your datacenter to match the configuration you’ve been running in 19”. That defeats the whole point of OCP: efficiency!
This Catch-22 exists because most of the equipment configurations contributed to the OCP community to date have come from large OEMs and companies with vast datacenters. These huge operations often deploy in large, 10 OU chunks, and there’s a very practical reason why. Let’s take a minute to build out an OCP rack with the equipment that has been available until now.
Typically, compute servers come three to a sled, side by side, but JBODs are available in pairs. So when you converge compute and storage, you start with three compute nodes, pair them with a pair of JBODs, and one compute node is left stranded. You can add another pair of JBODs, which covers your compute nodes, but now a JBOD is stranded. So you continue to alternately add compute and JBODs until you reach the least common multiple, where compute and storage finally match up one to one, and what you have is a 10 OU block that gets deployed as a unit. Building out your OCP datacenter in 10 OU increments is all well and good if you run a humongous datacenter, but for smaller datacenters, deploying in 10 OU units is not a very efficient use of space!
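The stranding argument above is just least-common-multiple arithmetic. Here is a minimal sketch, assuming the form factors described above (a 2 OU sled holding three compute nodes, and JBODs arriving as a 2 OU pair):

```python
from math import lcm  # Python 3.9+

# Assumed form factors, per the description above.
NODES_PER_SLED, SLED_OU = 3, 2   # three compute nodes per 2 OU sled
JBODS_PER_PAIR, PAIR_OU = 2, 2   # two JBODs per 2 OU pair

# Smallest count at which compute nodes and JBODs match one-to-one.
n = lcm(NODES_PER_SLED, JBODS_PER_PAIR)

# OU consumed by the sleds and JBOD pairs needed to reach that count.
block_ou = (n // NODES_PER_SLED) * SLED_OU + (n // JBODS_PER_PAIR) * PAIR_OU

print(n, block_ou)  # 6 of each, in a 10 OU block
```

Six compute nodes and six JBODs is the first point with nothing stranded, and the hardware to get there occupies exactly 10 OU.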
Last year at OCP Summit, StackVelocity announced a new storage architecture designed to address this problem: HatTrick Storage, which is now in production. HatTrick Storage puts three JBODs in 2 OU, which matches up nicely with the 2 OU three-compute sled. So with HatTrick, you now have a one-to-one compute-to-storage ratio in 4 OU instead of 10 OU. That means you can devote far more of your rack to workloads, and you can power a lot more servers in your rack. HatTrick also increases your storage density in the JBOD by 50 percent, but what’s really impressive about HatTrick is that it enables you to deploy compute more efficiently.
Moreover, each of the JBODs in HatTrick Storage operates independently at 12 Gbps SAS, and each is dual-zone capable. Why is that important? Well, if you don’t need 15 or 30 drives per compute node, you can go to a dual-zone configuration, which enables what we call “16th drive support.” The 15 drive bays are split into two zones of seven, and in the last bay (the red one in Figure 1) we put two 2.5” SSDs, which means each of your compute modules now gets seven drives plus an SSD. So, with HatTrick Storage, StackVelocity has made a new configuration available to the OCP community: in addition to 30-drive and 15-drive ratios, you now have a 7-drive ratio at your disposal.
[Figure 1: HatTrick Storage Dual-Zone]
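The dual-zone split described above can be sketched as a simple bay map. This is an illustrative sketch (the function and key names are mine; the bay count and zone sizes come from the text):

```python
def dual_zone_layout(bays: int = 15) -> dict:
    """Split a 15-bay JBOD into two zones of seven HDDs, with the last
    bay carrying two 2.5-inch SSDs, one presented to each zone."""
    hdd_per_zone = (bays - 1) // 2  # 14 HDD bays split evenly
    return {
        "zone_a": {"hdd": hdd_per_zone, "ssd": 1},
        "zone_b": {"hdd": hdd_per_zone, "ssd": 1},
    }

print(dual_zone_layout()["zone_a"])  # {'hdd': 7, 'ssd': 1}
```

Each zone then serves one compute module with seven spinning drives plus an SSD, which is where the 7-drive ratio comes from.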
When you compare this new configuration to the 19” EIA standard (see Figure 2), it comes very close to the densities you can get in 1U configurations, and it far exceeds what you can get in 2U (such as a Hadoop-type configuration). At the rack level, HatTrick is much denser, which means fewer racks in your datacenter to do the same amount of work. That saves you money. It also conserves power, which saves you MORE money.
[Figure 2: The Impact of HatTrick Storage]
But that’s not all! We have more news!
Just last week at OCP Summit, we announced HatTrick Compute in our Executive Track and Expo Hall speaking sessions. This is a way to reduce the size of your basic OCP building block even further. Last year, HatTrick Storage shrank the building block from 10 OU down to 4; now, with HatTrick Compute, we can deploy in just 2 OU. In this configuration, two compute servers each take one zone from a dual-zone HatTrick Storage JBOD, so each gets seven drives and a 2.5” SSD (Figure 3).
[Figure 3: HatTrick Compute Single]
Alternatively, we can deploy one compute server with two JBODs, devoting 30 storage drives to a single compute node (Figure 4).
[Figure 4: HatTrick Compute Single]
And so now, thanks to HatTrick Compute, instead of deploying servers in sets of three, you can deploy them as you need them: one or two servers at a time.
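Taken together, the deployment options above reduce to a small lookup. A hypothetical helper (the function name and descriptions are mine; the drive ratios and OU sizes come from the text):

```python
# Map a desired drives-per-compute ratio to the HatTrick building block
# described above. The descriptive strings are illustrative paraphrases.
BLOCKS = {
    7:  "2 OU: two compute nodes, each owning one zone of a dual-zone JBOD",
    15: "4 OU: three compute nodes matched one-to-one with a tri-JBOD",
    30: "2 OU: one compute node paired with two JBODs",
}

def pick_block(drives_per_node: int) -> str:
    """Return the building block for a supported drives-per-node ratio."""
    if drives_per_node not in BLOCKS:
        raise ValueError(f"supported ratios: {sorted(BLOCKS)}")
    return BLOCKS[drives_per_node]

print(pick_block(7))
```

The point of the sketch is simply that every supported ratio now fits in a 2 OU or 4 OU increment, rather than a 10 OU one.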
Allow me to summarize what all of this means to you in one word: *FLEXIBILITY*. HatTrick Storage and HatTrick Compute are examples of how StackVelocity is delivering flexibility to the market and truly making OCP work for you.
If you’ve looked at OCP in the past and had a hard time figuring out how to make it work for your configuration and workload, take another look. StackVelocity can make your configuration OCP-capable.