I want to present an excellent article authored by Vicom's CTO, Amitava Das. In addition to being responsible for Vicom's strategic direction in terms of the technology and solutions we represent, Das, like many of us, constantly has to deal with the practical constraints of the solutions our customers adopt. If funding, time, space, etc. were no issue, the sky would be the limit in terms of the type, size, and quantity of technology and services they would procure. Since that is never the case, we have to work within those constraints. We see this today very much around storage, as data growth, use, and retention continue to increase. A big piece of it is Tier 1 storage and the fact that users want their applications and workloads to run on it.
Tier 1 storage is the most robust, highest-performing, and most highly available tier, and therefore also the most expensive. It is meant for the most demanding, mission-critical workloads. But here's the thing: every workload is important to its organization, and it is certainly perceived that way by its owners and users. As a result, everyone feels they deserve (need, some would say) to have their workloads run on Tier 1 storage. In organizations with clear chargeback models this can sort itself out: Tier 1 storage is much more expensive than Tier 2, and so is its chargeback, so workload owners may not want to, or be able to, pay for Tier 1. Many organizations, however, either lack the ability to implement chargeback based on tiering models or choose not to.
What Das discusses below is a methodology that can be used to aid the decision making around placing workloads on Tier 2 storage versus Tier 1. He gives guidance on how to evaluate workloads using IOPS/TB as the de facto criterion. This brings a definitive, data-driven way to determine where workloads could or should sit in terms of their appropriate tiers. So when users ask why they can't have their workloads on Tier 1 storage, they may not like the answer, but they will understand it, because it will be based on their actual performance needs and not just on what they want.
Andy
Here is the article by Amitava Das, Vicom’s CTO:
IOPS PER TERABYTE
The number of input/output (I/O) operations per second (IOPS) at a given block size, read/write ratio, and within certain latency limits is currently a common way of evaluating the performance of a storage system, from a single hard disk to a complex storage solution.
Capacity is commonly measured today in terabytes, or for larger clients in petabytes. Availability (reliability), performance, and capacity are three common vectors used to compare storage systems and determine fitness for purpose. While availability requirements are fairly standardized (five nines or more, i.e. more than 99.999% uptime, or roughly five minutes of downtime per year), customers usually use measured capacity and performance requirements to determine the storage solution to use. Now that tiering is available and commonly used, it is common for customers to choose a lower-performance, higher-capacity system for some data. According to Gartner, "Many applications and data management products such as backup and archival systems require large amounts of low-cost storage, and IOPS performance is not critical."
A common problem arises in determining rules for data placement on the various tiers/pools. Cost drivers indicate that a lower tier be used/considered first, unless some business or technical driver dictates the use/consideration of the next higher tier, and so on. Outside of business reasons, from a PERFORMANCE perspective it is difficult to set application-based boundaries since:
Any application that can be served by a lower tier can presumably be served just as easily by a higher tier
Evaluating an application by IOPS alone is problematic: if a storage system is filled to capacity with applications, each individual application's IOPS requirement may be within bounds while the aggregate performance requirement of all the applications exceeds the array's performance capability, as the sketch below illustrates
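As a simple illustration of that aggregate effect, consider a hypothetical array that is filled to capacity by applications whose individual demands each look modest. All of the numbers and names below are invented for this sketch, not taken from the article:

```python
# Hypothetical illustration: each application's IOPS demand looks modest on its
# own, but once the array is filled to capacity the aggregate demand exceeds
# what the array can deliver. All figures are invented for this sketch.

ARRAY_MAX_IOPS = 50_000      # assumed performance ceiling of the array
ARRAY_CAPACITY_TB = 500      # assumed usable capacity

# (capacity_tb, iops_demand) for each application placed on the array
applications = [(25, 3_000)] * 20   # 20 apps x 25 TB = 500 TB, fills the array

total_tb = sum(tb for tb, _ in applications)
total_iops = sum(iops for _, iops in applications)

print(f"Capacity used: {total_tb} TB of {ARRAY_CAPACITY_TB} TB")
print(f"Aggregate IOPS demand: {total_iops} vs array ceiling {ARRAY_MAX_IOPS}")
# 20 x 3,000 = 60,000 IOPS demanded, yet the array delivers only 50,000:
# every application was "within bounds" individually, but the array is overrun.
```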
Thus it is recommended to establish an IOPS/TB metric for gating applications. The relevant gating number should be determined from the Tier 2 platform acquired by the organization as follows (a worked sketch appears after the steps):
a. Determine the total usable capacity of the system
b. Reduce this by 20% to account for flash/PiT/snap overhead area; call this EFFECTIVE TERABYTES
c. Determine the total IOPS delivered by the acquired configuration, using benchmark data or vendor assurances
d. Divide the IOPS from step c by the EFFECTIVE TERABYTES from step b
e. This number is the gating metric for that tier
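A minimal sketch of that calculation, assuming a hypothetical Tier 2 platform with 400 TB usable capacity rated at 40,000 IOPS (the figures and the function name are illustrative assumptions, not part of the article):

```python
def gating_iops_per_tb(usable_tb: float, rated_iops: float, overhead: float = 0.20) -> float:
    """Compute the gating IOPS/TB metric for a tier.

    usable_tb  : total usable capacity of the acquired configuration (step a)
    rated_iops : total IOPS the configuration delivers, from benchmark data
                 or vendor assurances (step c)
    overhead   : fraction reserved for flash/PiT/snap overhead (step b)
    """
    effective_tb = usable_tb * (1 - overhead)   # EFFECTIVE TERABYTES
    return rated_iops / effective_tb            # steps d/e: the gating metric

# Hypothetical Tier 2 platform: 400 TB usable, 40,000 IOPS
print(gating_iops_per_tb(400, 40_000))   # 40,000 / 320 = 125.0 IOPS/TB
```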
Now, as applications come in and need to be evaluated for placement (or are observed via storage resource management platforms), their performance/capacity ratio can be compared against the gating metric from the previous step. If the application's ratio is higher, the next higher tier should be used/considered. By consistently placing applications whose IOPS/TB requirements are lower than what the system affords, we can ensure that we never reach a state where the performance of the underlying storage array is overrun. Properly designed storage virtualization solutions will allow the transparent migration of workloads between tiers should an initially placed workload develop performance characteristics that demand placement on a different tier.
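Building on the sketch above, the placement decision for an incoming or observed workload could look something like the following; the function name, tier labels, and example workloads are again assumptions made for illustration:

```python
def place_workload(app_capacity_tb: float, app_iops: float, gating_metric: float) -> str:
    """Suggest a tier for a workload based on its IOPS/TB ratio.

    If the workload demands more IOPS per TB than the Tier 2 platform affords,
    it is a candidate for the next higher tier; otherwise Tier 2 suffices.
    """
    ratio = app_iops / app_capacity_tb
    if ratio > gating_metric:
        return "Tier 1 (or next higher tier)"
    return "Tier 2"

gating = 125.0                               # from the previous sketch
print(place_workload(10, 2_000, gating))     # 200 IOPS/TB > 125 -> higher tier
print(place_workload(50, 2_000, gating))     # 40 IOPS/TB <= 125 -> Tier 2
```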
Amitava Das, CTO