The Bandwidth Imbalance between Server Memory and Server PCIe Flash

Is flash in the server better suited as:

  1. memory addition or
  2. fast local disk?

While there is excitement in the industry about using PCIe flash as memory, here are some facts to consider:

  • The memory system of a Xeon CPU sustains 10 million+ IOPS at latencies around 0.1 µs, pushing 30+ GB/s
  • A typical PCIe flash card delivers 100,000+ random IOPS at latencies around 100 µs, pushing about 1.5 GB/s

As you can see, fronting server memory with a flash card actually REDUCES the effective speed of memory in the server.  Additionally, operating systems don't know how to deal with a NUMA architecture that is non-uniform by

  1. 1:10 in access performance, and
  2. a READ/WRITE asymmetry of 1:10 to 1:1000
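The gaps behind these ratios fall straight out of the figures quoted above. A minimal back-of-the-envelope sketch (the numbers are the rough, order-of-magnitude values from the bullets, not measurements):

```python
# Rough DRAM-vs-PCIe-flash gaps, using the figures quoted above.
dram  = {"iops": 10_000_000, "latency_us": 0.1,   "bw_gbps": 30.0}
flash = {"iops":    100_000, "latency_us": 100.0, "bw_gbps": 1.5}

iops_ratio    = dram["iops"] / flash["iops"]                # DRAM does ~100x the IOPS
latency_ratio = flash["latency_us"] / dram["latency_us"]    # flash is ~1000x slower to access
bw_ratio      = dram["bw_gbps"] / flash["bw_gbps"]          # DRAM moves ~20x the bytes/sec

print(f"IOPS gap:      {iops_ratio:.0f}x")
print(f"Latency gap:   {latency_ratio:.0f}x")
print(f"Bandwidth gap: {bw_ratio:.0f}x")
```

Even taking the most charitable axis (bandwidth), flash sits an order of magnitude or more behind DRAM, which is why treating it as memory drags the memory system down.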

Sandy Bridge-class servers are ratcheting up server I/O demands further, making the bandwidth imbalance between server memory and server flash even more drastic.

So, faced with the choice of (a) adding flash to server memory, slowing it down and confusing the OS, or (b) adding flash as a very fast local store, the obvious choice is (b)!

This blog post is a summary of a post on the Flash Tech Talk blog.  Read the full original post here:
