TCO Comparison of Flash-Powered Cloud Architectures vs. Traditional Approaches

Recently, GridIron and Brocade announced a new joint Reference Architecture for large-scale, cloud-enabled clustered applications that delivers record performance and energy savings.  While the specific configuration validated by Demartek was for clustered MySQL applications, the architecture and its benefits apply equally to other cluster configurations such as Oracle RAC and Hadoop.  The announcement is available here: GridIron Systems and Brocade Set New 1 Million IOPS Standard for Cloud-based Application Performance, and Demartek’s evaluation report is available online at: GridIron & Brocade 1 Million IOPS Performance Evaluation Report.

Let us take a closer look at the Total Cost of Ownership (TCO) profile of the Reference Architecture vis-à-vis the alternatives.  For the OpEx component, we’ll use power consumption as the sole metric.


Requirements:

  • Total IOPS needed from the cluster = 1 Million Read IOPS and 500,000 Write IOPS
  • Total capacity of the aggregate database = 50 TB


Assumptions:

  • Cost of a server with the requisite amount of memory, network adapters, 4x HDDs RAIDed, etc. = $3,000
  • Number of Read/Write IOPS out of a server with internal/local disks = 500
  • Power consumption per average server = 500 Watts
  • It takes a watt to cool a watt: if a server consumes 500 Watts, it takes another 500 Watts to cool it
  • Cost of Power: USA commercial pricing average of $0.107/kWh
  • The cost of the many Ethernet switch ports vs. the few Fibre Channel switch ports is assumed to be equivalent and will be excluded from the calculations.

Option 1: Traditional Implementation Using Physical Servers

In this scenario, the IOPS requirement, rather than the total database capacity, determines the number of servers required.

  • Number of servers (with spinning HDDs) required to hit 1 Million IOPS = 1,000
  • Assuming 40 servers per rack, total number of Racks = 1,000/40 = 25 Racks
  • Cost of the server infrastructure = 1,000 * 3,000 = $3,000,000
  • Power consumed by the servers = 500 * 1,000 = 500 kW
  • Power required for cooling = 500 kW
  • Total power consumption = 1000 kW
  • Annual OpEx based on power consumption = $0.107 * 1000 * 24 * 365 = $937,320
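
The Option 1 arithmetic can be sanity-checked with a few lines of Python (all figures are the assumptions stated in this post, not measurements):

```python
# Option 1 TCO sketch -- traditional servers with spinning disks.
SERVERS = 1_000            # servers needed to hit 1M IOPS
COST_PER_SERVER = 3_000    # USD per server
WATTS_PER_SERVER = 500     # draw per server; cooling doubles it
POWER_PRICE = 0.107        # USD per kWh, US commercial average

racks = SERVERS / 40                               # 40 servers per rack
capex = SERVERS * COST_PER_SERVER                  # server infrastructure
total_kw = SERVERS * WATTS_PER_SERVER * 2 / 1_000  # "a watt to cool a watt"
annual_opex = POWER_PRICE * total_kw * 24 * 365    # power cost per year

print(f"{racks:.0f} racks, CapEx ${capex:,}, {total_kw:.0f} kW, OpEx ${annual_opex:,.0f}/yr")
```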

Option 2: Traditional Implementation Using Physical Servers AND PCIe Flash Cards in Each Server

In this scenario, the total database capacity (limited by the flash capacity of the PCIe cards) determines the number of servers required, rather than the IOPS available from each server.

  • Capacity of each PCIe flash card = 300 GB
  • Two PCIe cards per server, mirrored (RAID 1)
  • Number of servers required to get to 50TB total = 167
  • Assuming 40 servers per rack, total number of racks required = 5 Racks (167/40, rounded up)
  • Cost of the server infrastructure = 167 * 3,000 = $501,000
  • Cost of the PCIe flash cards ($17/GB) = 2 * 167 * 300 * 17 = $1,703,400
  • Total cost of server infrastructure including flash = $2,204,400
  • Power consumed by the servers = 500 * 167 = 83 kW
  • Power required for cooling = 83 kW
  • Total power consumption = 166 kW
  • Annual OpEx based on power consumption = $0.107 * 166 * 24 * 365 = $155,595
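
Option 2 follows the same pattern, except that capacity, not IOPS, sets the server count (again using only the post’s assumptions):

```python
import math

# Option 2 TCO sketch -- servers with mirrored PCIe flash cards.
DB_GB = 50_000             # 50 TB aggregate database
CARD_GB = 300              # usable flash per server (one mirrored pair)
COST_PER_SERVER = 3_000    # USD
FLASH_PER_GB = 17          # USD per GB of PCIe flash
POWER_PRICE = 0.107        # USD per kWh

servers = math.ceil(DB_GB / CARD_GB)                 # servers needed for 50 TB
capex_servers = servers * COST_PER_SERVER
capex_flash = 2 * servers * CARD_GB * FLASH_PER_GB   # two cards per server
capex = capex_servers + capex_flash
total_kw = 166                                       # 83 kW servers + 83 kW cooling (post's rounding)
annual_opex = POWER_PRICE * total_kw * 24 * 365

print(f"{servers} servers, CapEx ${capex:,}, OpEx ${annual_opex:,.0f}/yr")
```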

Option 3: Implementation Using GridIron-Brocade Reference Architecture

Two GridIron OneAppliance FlashCubes will be used for a mirrored HA configuration.  Each FlashCube has 50TB of Flash.

  • Number of servers required = 20
  • Rack Units of the two FlashCubes = 2 * 5 = 10 RU
  • Total number of Racks = 1 Rack
  • Cost of the server infrastructure = 20 * 3,000 = $60,000
  • Cost of the FlashCubes = 2 * 300,000 = $600,000
  • Total cost of the server infrastructure including flash = $660,000
  • Power consumption per FlashCube = 1,100W
  • Power consumed by the servers and FlashCube = 20 * 500 + 2 * 1,100 = 12.2 kW
  • Power required for cooling = 12.2 kW
  • Total power consumption = 24.4 kW
  • Annual OpEx based on power consumption = $0.107 * 24.4 * 24 * 365 = $22,871
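
And Option 3, with the FlashCube pricing and wattage given above:

```python
# Option 3 TCO sketch -- GridIron-Brocade Reference Architecture.
SERVERS, CUBES = 20, 2
capex = SERVERS * 3_000 + CUBES * 300_000   # servers + FlashCubes
watts = SERVERS * 500 + CUBES * 1_100       # equipment draw in Watts
total_kw = watts * 2 / 1_000                # doubled for cooling
annual_opex = 0.107 * total_kw * 24 * 365   # power cost per year

print(f"CapEx ${capex:,}, {total_kw} kW, OpEx ${annual_opex:,.0f}/yr")
```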

Comparison Summary of Different Approaches

                             Traditional     Traditional with  GridIron-Brocade
                                             PCIe Flash        Reference Architecture
Number of servers            1,000           167               20
Number of Racks              25              5                 1
CapEx of Infrastructure      $3,000,000      $2,204,400        $660,000
Power Consumption (kW)       1,000           166               24.4
OpEx* (just based on power)  $937,320        $155,595          $22,871

*The difference in the management costs of 1,000 servers vs. 20 servers would be equally dramatic, but is not included in the calculations above.

Normalized Comparison of Different Approaches

By normalizing the values in the comparison table (the traditional approach is set to 100% and the other values are shown relative to it), we get the following graph.  It is very clear from the graph that both the CapEx and OpEx are dramatically lower with the GridIron-Brocade Reference Architecture.
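
The normalization itself is just division by the traditional option’s value; for example:

```python
# Normalize each metric so the traditional approach = 100%.
options = ["Traditional", "Trad. + PCIe Flash", "GridIron-Brocade"]
metrics = {
    "CapEx":      [3_000_000, 2_204_400, 660_000],
    "Power (kW)": [1_000, 166, 24.4],
}
for name, values in metrics.items():
    base = values[0]
    pcts = [round(100 * v / base, 1) for v in values]
    print(name, dict(zip(options, pcts)))
```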

[Graph: Normalized Comparison of Different Approaches to Building Large Clusters]

Hot Rods are Cool Again for Big Data

Back in the 1930s, and lasting well into the 1950s, an innovative and brave group of tinkerers took it upon themselves to take stock roadsters and modify their engines in an unchecked passion of good old American ingenuity, all in the name of speed.  High-compression heads, overhead-cam conversions and radical cams became part of the hot rod lexicon.  Going faster was fun, and if you had the fastest car, you got the social benefits of being popular.  But taking technology risks to make something better is what set the real innovators apart from the rest of the crowd.  Many of these innovations actually made it into the mainstream and became common practice in production cars.

I can’t help but see a parallel universe emerging for Big Data.  You have this existing infrastructure of database applications, servers, fabrics and storage that you need to “tune up” for the fastest execution time.  You need to “hot rod” your data.  And THAT was the inspiration for our product: how can you take a data roadster and turn it into a data hot rod?!!  IT innovators who can transform their legacy infrastructure to do MORE with their data are the next breed of innovators.  But how do you go to your custom shop and find the right parts, or tool your own in your machine shop?  That’s where we come in.

The GridIron TurboCharger is just that: the magic tuning ingredient you need to create your own data hot rod.  To make sure you recognize it in your IT environment, we actually did paint it racing orange and add the flames.  It really stands out, not only visually but as a vital component in creating your data hot rod.

What are the hot rod problems we solve, the equivalent of high-compression heads, cam conversions and the like?  First, we address the toughest problem in front of IT today: where do I put my flash?!!  Flash can go in so many places it makes your head spin, and the more places you put it, the harder it is to manage.  Fusion-io is telling you to put it in your servers; your storage array companies are saying to add it to their already overtaxed arrays and use tiering software to figure out your hot data and migrate it there.  We say something much simpler: put SSD at the heart of your engine, between the server and the storage, and offload BOTH.  Let us figure out your hot data in real time and place it in SSD, immediately available to speed up your applications.

Customers who have deployed our appliances have seen application run times and reporting times on their data sets improve anywhere from 2x to 10x.  How much is it worth to YOUR business if you can drive THAT much faster?!!  I’m betting it is worth a lot.  But we’re not charging a fortune for this technology, to make it easier for you to see the real benefits and to make your infrastructure a data hot rod.  And as a side benefit, if you want to see how your app is tuned up, we have a graphical engine in our GridIron Management System (GMS) that lets you plot all kinds of data to see what benefit you are getting from front end to back end.  You might learn a lot more about your application than you ever thought you could.

So come to our shop, get a TurboCharger installed in your data engine.  If you don’t believe us, we’ll even let you test drive one.  But trust me, if you see what our customers see, you won’t be putting it back in its box.  Join our growing roster of customers and Put the Pedal to the Metal!!!

Herb Schneider
VP of Engineering and TurboCharger Enthusiast

Warp Speed Big Data – 1 Million IOPS using MLC

For anyone who ever doubted that MLC could deliver high performance – welcome to a new frontier! Here at GridIron, we have boldly gone where no company has gone before by being the first company to use MLC to drive one million IOPS. This is good news for us, obviously, but it is also good news for all those IT folks out there who are struggling to balance the performance challenges of Big Data and databases with efficiency and cost savings.

When we look at simple economics, it is clear that MLC, not SLC or eMLC, is the direction in which high volume Flash technology is headed. Already we see the falling price of MLC bringing it into alignment with the price of hard disk. That’s why we here at GridIron think it makes perfect sense to boldly direct engineering resources to developing Big Data solutions that incorporate MLC.

To ensure we are not delusional, we invited some independent third parties in to take a look at what we have accomplished. They have helped us confirm that we have indeed made MLC history. You’ll hear more about the specifics in upcoming posts.

We have repeatedly verified that we can run systems with production database loads at one MILLION IOPS (LOVE that number). Users can expect server consolidation of at least 10:1 and reduce power consumption by a staggering 60%. We are excited about what this performance breakthrough means for MLC technology and for the value it will bring to Big Data.

Warp speed for Big Data is here!

MLC Flash for Big Data Acceleration

Big data analysis demands bandwidth and concurrent access to stored data. Write load will depend on data ingest rates and batch processing demands. The data involved will typically be new data and updates of existing data. Indices and other metadata may be recalculated, but this is generally not done in real time. The economics of supporting such workloads focus on the ability to cost-effectively provide bulk access for concurrent streams. If only a single stream is being processed, spinning disk is fine. However, providing highly concurrent access to the dataset requires either a widely-striped caching solution or a clustered architecture with local disk (Hadoop). Because write lifetimes for flash are not stressed in this environment, utilizing wide stripes of MLC for caching is the most cost-effective way to provide highly concurrent access to the dataset in a shared-storage environment.

Now, a lot of the SLC versus MLC debate centers on blocking and write performance – specifically write latency and the blocking impact on reads. With traditional storage layouts, data can be striped over only a few disks (four data disks for RAID 5/6 stripes). This creates a high read-blocking probability for even the smallest write loads. By distributing the data over very wide non-RAID stripes (up to 40 disks wide), the effect of variable write latency can be mitigated: least-read disks are dynamically selected for new cache data, greatly reducing the impact of writes on the general read load. The wider the striping across physical disks in a caching medium, the greater the support for concurrent access and mixed read and write loads from the application. MLC is an excellent media choice, both technically and economically.
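
The “dynamically selecting least-read disks” idea can be sketched in a few lines. This is a toy model, not GridIron’s actual placement logic; all names are invented:

```python
# Toy model: place new cache data on the least-busy device in a wide stripe,
# so writes avoid the disks currently serving reads.
NUM_DISKS = 40
reads_in_flight = [0] * NUM_DISKS   # outstanding reads per SSD

def place_cache_block():
    """Return the index of the disk with the fewest outstanding reads."""
    return min(range(NUM_DISKS), key=reads_in_flight.__getitem__)

# Suppose a read burst is hammering disks 0-4...
for d in range(5):
    reads_in_flight[d] = 10 + d

# ...new cache data then lands on an idle disk, keeping writes out of the read path.
target = place_cache_block()
print(target, reads_in_flight[target])
```

Contrast this with a four-disk RAID 5 stripe, where any write necessarily lands on disks that reads are also using.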

By employing affordable MLC as a write-through caching layer that is consistent with the backend storage, the effect of even multiple simultaneous flash SSD failures can be removed. Most traditional storage systems cannot survive multiple concurrent drive failures and suffer significant performance degradation when recovering (rebuilding) from a single device failure. Cache systems can continue operation in the face of cache media failures by simply fetching missing data from the storage system and redistributing to other caching media. However, it’s important to note that placing the cache in front of the storage controller is critical to achieving concurrency. The storage controller lacks the horsepower necessary to sustain performance – but that’s a topic for another day.

MLC is driving the price point of Flash towards that of enterprise high-performance spinning disk. The constant growth in the consumer space means that MLC will continue to be the most cost-effective flash technology and benefit the most from technology scaling and packaging innovations. Lower volume technologies such as eMLC and SLC do not share the same economic drivers and thus will continue to be much more expensive. The ability to utilize MLC efficiently and adapt the technology to meet the performance and access needs of Big Data will be hugely advantageous to customers and the vendors who can deliver intelligent, cost-effective solutions that utilize MLC – such as the GridIron TurboCharger™!

Unstructured, structured and relational data – how big is big?

So now that we are definitely in love with big data…how big does it have to be before we really consider it big?

Well…it depends.

Something is really not that big if it’s just sitting there and you are not hauling it around.

See – when Mr. von Neumann laid down the seminal architecture for stored-program computers, he definitely chose sides! The ‘program’ was the quarterback – and data played a decidedly subservient role, always at the beck and call, to be hauled and mauled as programs saw fit.

Programs ‘fetch’ data – at their leisure, at their chosen time.

Even the term sounds more fitting for your Corgi than for someone – or something – more serious!

So we have been writing code that merrily ‘fetches’ data and processes it. Works for most programs. Except when data grows. And grows. And grows…

It grows until it starts to be a real problem to just ‘fetch’ it. And it becomes a real pain to move it around. Now you have to think about:

  1. perhaps changing roles and sending the ‘program’ to the data instead of the other way around
  2. how to be smart about moving ONLY the required amount of data

For a PC-XT with a whopping 10MB hard drive, big data was just 10MB. That was the entire drive! The little 8088 CPU, running at 4.77MHz on an 8-bit bus, could scream along at 4.77 MB/sec and could finish scanning the disk (theoretically) in about 2 seconds.

My desktop is running on an i7-2600 CPU with 4 hyperthreaded cores at 3.4 GHz. This beastie can scan my 2TB hard drive at a little under 100GB/sec (again, theoretically) – taking 20 seconds to do the scan.
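
The scan-time arithmetic in both examples is simply capacity divided by bandwidth:

```python
# Theoretical full-drive scan times from the two examples above.
xt_seconds = 10 / 4.77            # 10 MB drive at 4.77 MB/s
desktop_seconds = 2_000 / 100     # 2 TB (2,000 GB) drive at ~100 GB/s

print(f"PC-XT: {xt_seconds:.1f} s, desktop: {desktop_seconds:.0f} s")
```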

Let’s take a look at that workhorse of enterprise relational data crunching – Oracle RAC. A state-of-the-art 4-node RAC system should be able to scan in data at 4 to 5 GB/sec per node – or in excess of 20 GB/sec in aggregate. At that rate, a database can load at 60+ TB/hour. Throw in scheduling overhead, network latency, and error checks and you are looking at 10 to 20 TB/hour. That’s a very impressive number – giving you head-spinning bragging rights in Oracle OpenWorld data warehouse tutorial sessions…

Now consider a 50TB data warehouse; not too extreme by Oracle standards, but now we are talking about 3 hours or more JUST TO LOAD THE DATABASE.
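
At the effective rates above, the load time works out as follows (15 TB/hour is the midpoint of the 10 to 20 TB/hour range):

```python
DB_TB = 50               # warehouse size
RATE_TB_PER_HOUR = 15    # effective load rate after overheads (midpoint)
hours = DB_TB / RATE_TB_PER_HOUR

print(f"~{hours:.1f} hours just to load the database")
```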

We’re not just ‘fetching’ data anymore, are we?

50TB is “big data” for Oracle RAC, even more so for single-instance Oracle installations.

Even the ne plus ultra of NoSQL – Hadoop – is not used in isolation. Typically a Hadoop processing stage is followed by Hive or other structured databases – even MySQL.

So a big ‘unstructured’ data setup may just as easily feed into a big ‘structured’ data analysis stage. So how big do they typically get before the big data characteristics start to show (difficulty fetching the entire dataset, sending the program to stationary data, etc.)? Here is my take:

Hadoop – The top dogs may sneer at something below a petabyte, but in reality a 100TB Hadoop/NoSQL cluster is getting big. You can’t deal with it casually; it demands attentive care and feeding.

MySQL cluster – A 100-node cluster in the 100TB size range is certainly getting there.

Oracle, including RAC – 50TB and up…especially DSS (Decision Support Systems) and warehouses. Folks at Amazon and eBay run some very impressive big data warehouses on Oracle. Then there are installations at “those who shall not be named.”

Hadoop is loved because it’s (supposedly) an open-ended framework when it comes to data size. Petabytes of data pouring in like concrete? No problem – just add more nodes as your data grows – no need to change your program; the same Java code works. But remember the story of the war elephants crossing the Alps – just because Mr. H. Barca decided to do it does not mean that you should consider it easy. Tilting at a 1,000-node cluster with Hadoop is a day’s work for Google, but not for a typical enterprise CIO.

We’ll explore challenges unique to big structured/relational data and big unstructured data in the coming posts…