Wednesday, April 13, 2016

Why IOPS don't matter

The commonly accepted measure of performance for storage systems has long been IOPS. The first storage system I ever installed at a customer site had controllers rated for 20K IOPS, and we all thought it was awesome!
Over the years, I’ve developed an extensive knowledge of storage IO benchmarking tools (I’m somewhat of an IOmeter geek, I have to admit) and have demonstrated hero IOPS numbers many times on a number of different storage systems.
However, after 18 months of working at Pure Storage, I’ve finally learned that IOPS don’t matter.

Who needs all those IOPS anyway?

These days, you would struggle to find a storage system on the market that cannot deliver at least 100K IOPS. Some vendors even brag about million-IOPS, or multi-million-IOPS, numbers.
We at Pure rate our FA-400 series at 400K IOPS. Do many customers really need that? No…
In the generic, multi-purpose enterprise storage market (aka “Tier 1”), used for your typical mix of enterprise applications on physical and virtual servers, my experience is that finding a customer who needs even 100K IOPS is very rare. I used to ask customers how many IOPS their current array was doing at peak time, and the answer was invariably 30 to 40K IOPS, rarely more (the exceptions being VDI environments and some very specific database workloads, of course). I’ve stopped asking. I know that any one of our systems can deliver more IOPS than 99% of customers need.

How to measure IOPS? And what about Latency?

When testing a storage system, the standard practice has long been to use an industry-standard benchmark tool such as IOmeter or vdbench to find out how many IOPS a system can deliver with different IO profiles.
Unfortunately, these IO profiles are usually based on outdated assumptions, and my personal opinion is that they are not realistic.
Why is that? Because most of the profiles used in these benchmarks are based on small 4KB or 8KB IOs, whereas the average block size commonly observed on customer arrays in mixed-workload environments is 32KB to 64KB.
The following chart shows the average block size across all the Pure Storage systems dialling home as of April 2014. As you can see, only a very small percentage (<5%) of systems have an average block size of less than 10KB, and fewer than 15% have an average block size below 20KB…
[Chart: distribution of average block size across Pure Storage arrays phoning home, April 2014]
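To see why block size matters so much for the headline number, here is a quick back-of-the-envelope sketch (my own illustration, not data from any specific array): assume a hypothetical system that sustains roughly 3 GB/s of bandwidth regardless of IO size (real systems are not this linear). The same system produces wildly different “IOPS” figures depending on the block size the benchmark happens to use.

# Purely illustrative numbers: the assumed 3 GB/s bandwidth is an arbitrary example
THROUGHPUT_BYTES_PER_S = 3 * 1024**3  # hypothetical sustained array bandwidth

for block_size_kb in (4, 8, 32, 64):
    block_size_bytes = block_size_kb * 1024
    iops = THROUGHPUT_BYTES_PER_S / block_size_bytes
    print(f"{block_size_kb:>2} KB IOs -> ~{iops:,.0f} IOPS at the same bandwidth")

At 4KB the hero number is close to 800K IOPS; at a more realistic 64KB it drops below 50K. Same array, same bandwidth, very different marketing slide.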
Even a single application rarely does just one type of IO: a single database instance will have different IO profiles for the different components of the engine (data files, logs, indexes…).
So while these synthetic benchmark tools will let you extract a number, that number unfortunately has no relationship with what you can expect in your environment.
What about latency, can you use the latency measured with these benchmark tools to evaluate and compare storage systems?
Well… not really :-)
Even if we ignore the fact that these tools tend to measure average latencies and miss outliers (a single IO taking longer than the others in the same transaction can slow down the whole transaction), the latency of an IO varies with block size. Since the block size used in these IOPS benchmarks is not realistic, the latency measured during them is pretty much useless too.
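As a toy illustration of the outlier point (my own sketch, with invented numbers, not measurements from any real system), consider a workload where most IOs complete in about half a millisecond but roughly 2% stall around 20 ms. The average latency still looks excellent; the 99th percentile, which is what a user stuck behind a stalled transaction actually feels, does not.

import random

random.seed(42)
# ~98% of IOs around 0.5 ms, ~2% stalling around 20 ms (all numbers invented)
latencies_ms = [
    random.gauss(20.0, 2.0) if random.random() < 0.02 else random.gauss(0.5, 0.1)
    for _ in range(100_000)
]

latencies_ms.sort()
mean_ms = sum(latencies_ms) / len(latencies_ms)
p99_ms = latencies_ms[int(0.99 * len(latencies_ms))]

print(f"average latency : {mean_ms:.2f} ms")  # roughly 0.9 ms - looks great on a slide
print(f"99th percentile : {p99_ms:.2f} ms")   # roughly 20 ms - the outliers the average hides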
OK, so if neither IOPS nor latency is a good measure of the performance of a storage system, what is?

Run the app, not a benchmark tool

The only real way to understand how fast an application will run on a given storage system is to run the application on this storage system. Period.
When you run a synthetic benchmark tool such as IOmeter, the only application you are measuring is IOmeter.
Ideally, move your production applications to the storage system you are evaluating. If you can’t move the app, move a copy of it, or the test/dev servers/instances, and run the exact same tasks your users would run on your production app.
Measure how the app behaves with your real data and your real workload.

Measure application metrics, not storage metrics

And what is the point of measuring IOPS and latency anyway? After all, these are metrics that are relevant only to the storage admin.
Will your application owners and end users understand what IOPS mean to them? Does your CIO care about storage latency?
No, outside of the storage team, these metrics are useless…
The metrics that application owners and users really care about are the ones that relate to their applications, and it’s those application and user metrics that should be measured (one simple way to capture the first of these is sketched after the list):
  • How long does this critical daily task take to execute in the application?
  • How fast can your BI platform make data available to decision makers?
  • How often can you refresh the test and dev instances from the production database?
  • How long does it take to provision all of these virtual servers the dev team needs every day?
  • How many users can you run concurrently without them complaining about performance issues?
  • How quickly can this OLAP cube be rebuilt? Can it now be rebuilt every day instead of every week?
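As a minimal sketch of what measuring the first of these could look like (my own example; the script path below is a placeholder for whatever your critical job actually is), time the task itself, end to end, instead of watching array counters:

import subprocess
import time

TASK = ["/usr/local/bin/nightly_batch.sh"]  # placeholder for your real critical daily task

start = time.perf_counter()
subprocess.run(TASK, check=True)  # run the job exactly as your users or scheduler would
elapsed_s = time.perf_counter() - start

print(f"critical daily task completed in {elapsed_s / 60:.1f} minutes")

Run the same measurement on your current array and on the system under evaluation, with the same data, and you have a number the application owner actually understands.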

Take the time to test properly and measure what really matters

Testing a storage system in your environment with your applications is the only responsible way of evaluating it. Don’t just believe spec sheets or vendor claims: test a real system in a Proof of Concept, in your environment, with your data.
But just as important is measuring the right metrics. At the end of the day, a storage system’s job is to serve data to applications, so it is the impact on those applications that should be measured.
If you want to evaluate a great all-flash array, contact your Pure Storage representative today.
We would love the chance to prove to you that our systems do what we say they do, and to work with you to understand what really matters to your users, your application owners and, ultimately, your business.
See more at: http://blog.purestorage.com/why-iops-dont-matter/
