Go update your iPhone Syncplicity client now

Wow. Don’t read this. Just go get the new EMC Syncplicity iPhone client now. It’s amazing. It isn’t so much a sync and share app as it is a sexy remote control for enterprise document management policies.

It’s better than Finder.

Sure, it does enterprise grade sync and share, allowing users to securely collaborate with each other ad hoc, and corporate to push down their own folders of content.

Sure, without having to rely on “the magic folder” into which you put all of the stuff to be synced, it acts very much like the easiest backup and recovery client around.

Sure, it’s at the top right of Gartner’s Magic Quadrant (TM Gartner). (Dropbox was not top-right)

Sure, it allows your teams to find each other’s content by Gravatar (if you can’t quite remember their name).

Sure, it adds native PDF and M$oft document annotation capabilities, and lets me add new slides to my PowerPoint while traveling.

Sure, it adds contextual menu options to keep clutter out of the interface, while exposing an extremely powerful set of document management tools.

Sure, it provides intelligent insights, recognizing (for instance, by looking at your calendar) that you were just in a meeting and have been creating documents. Would you like to share this document with the meeting participants? (It even pre-populates the email with their addresses.)

Sure, it tracks who’s viewing the shared links you provide, and lets you see who is downloading your content and from where in the world (plotted on a map).

Yes, it has all that, but, man is it a sexy app to use! I just loaded it onto my phone, and I swear I think I can navigate my folder tree on my phone better than I can in Finder on the Mac.

What it does is provide a continual context for where you are in the folder tree by using the “cards” interface more commonly in use these days. All parent folders are cards that can be viewed by “riffling the deck” from above or by “fanning the cards” left to right.

I love that when you “fan the cards” the parent card UI stays active, in that you can scroll down and jump straight into another folder without first having to “select that card, and then start scrolling”.

A few screenshots don’t really do it justice, so check out this UI video.

Great job Syncplicity.  You got this one right!

How I installed OpenStack at a Service Provider in 20 mins

I’ve been playing around with my skillz in Python, Puppet, OpenStack, deployment methods, etc. for a few weeks.  Here’s a small example of deploying OpenStack in a non-production environment (no hardening or customization of any kind) in 20 mins.  Maybe it’ll help you in your self-education.

For testing purposes I use a service provider called Digital Ocean. I’m sure what they do is competitive with all things EMC, so this is not an endorsement. They do, however, have VERY CHEAP pricing if you’re just doing testing (on the order of less than a penny per hour) with Linux-based services (again, YMMV, I have no idea how good they are).

Now when I say 20 mins, that’s how long it takes for the procedures to run once you figure out what to do. I’ve spent a couple of days playing with the site, VM’s, python, devstack, etc. to get the procedure in place. That said, it took 20 mins of wall clock time to get OpenStack running from the devstack.org package on a single machine deployment (ie: not production ready).

At the end of this post is the python code to talk to the Digital Ocean API and deploy the instance. Since it’s not very sophisticated code, I just edit it directly to do what I want to do as opposed to passing command line arguments (list, deploy, destroy).

So, to get a VM I called ‘devstack’ I executed my little script with settings hard-coded:

./go.py

The script returned the following output, parsed from the JSON response:

107.170.89.157 devstack 1790811

Then I did the following to install OpenStack:

$ echo "107.170.89.157 devstack" | sudo tee -a /etc/hosts
$ ssh root@devstack
# adduser stack
# echo "stack ALL=(ALL:ALL) ALL" >> /etc/sudoers
# apt-get install git
# su - stack
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack && ./stack.sh
Output: lots of output from stack.sh ... <snip>
Horizon is now available at http://107.170.89.157/
Keystone is serving at http://107.170.89.157:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: NOTFORYOUREYESTOSEE
This is your host ip: 107.170.89.157
stack.sh completed in 794 seconds.

Here is the Python script I mentioned above. It’s not great, but maybe it’ll give you some idea of how to talk JSON to an online API and parse the results.

#!/usr/bin/python

import requests, json, pprint, time, socket, sys


################
## make the json call to the public api
################
def jsonRequest(targetUrl):
    # set the headers for how we want the response
    headers = {'content-type': 'application/json', 'accept': 'application/json'}

    # make the actual request
    r = requests.get(targetUrl, headers=headers, verify=False)

    # take the raw response text and deserialize it into a python object.
    responseObj = None
    try:
        responseObj = json.loads(r.text)
    except:
        print "Exception"
        print r.text
    #print json.dumps(responseObj, sort_keys=False, indent=2)
    return responseObj


################
## return list of instances
################
def getDroplets(apiKey, clientId):
    # set the target url for the query
    targetUrl = "https://api.digitalocean.com/v1/droplets/?client_id=%s&api_key=%s" % (clientId, apiKey)

    # make the actual request
    resultList = jsonRequest(targetUrl)
    return resultList['droplets']


################
## Deploy a new instance
################
def deployDroplet(apiKey, clientId, hostName):
    sizeId = '66'               # smallest; 62 = 2GB RAM
    imageId = '3240036'         # ubuntu 64bit
    regionId = '4'              # NY region
    sshKey1 = '153816'          # brighs ssh key
    sshKey2 = ''
    privateNetworking = 'true'  # create private network

    # set the target url for the query
    targetUrl = "https://api.digitalocean.com/v1/droplets/new?client_id=%s&api_key=%s&name=%s&size_id=%s&image_id=%s&region_id=%s&ssh_key_ids=%s,%s&private_networking=%s" % (clientId, apiKey, hostName, sizeId, imageId, regionId, sshKey1, sshKey2, privateNetworking)

    # make the actual request
    responseObj = jsonRequest(targetUrl)
    return responseObj['droplet']


################
## destroy an instance
################
def destroyDroplet(apiKey, clientId, dropletId):
    # set the target url for the query
    targetUrl = "https://api.digitalocean.com/v1/droplets/%s/destroy/?client_id=%s&api_key=%s" % (dropletId, clientId, apiKey)

    # make the actual request
    responseObj = jsonRequest(targetUrl)
    return responseObj

######################################

apiKey = "GOGETYOUROWNAPIKEY"
clientId = "GOGETYOUROWNCLIENTID"


droplets = getDroplets(apiKey, clientId)
for droplet in droplets:
    #for key in droplet:
    #    print key
    print "%s %s %s" % (droplet['ip_address'], droplet['name'], droplet['id'])
    #print "destroying droplet %s" % (droplet['name'])
    #destroyResult = destroyDroplet(apiKey, clientId, droplet['id'])
    #print destroyResult['status']

#droplet = deployDroplet(apiKey, clientId, 'devstack')
#print "%s %s" % (droplet['name'], droplet['id'])
#deployDroplet(apiKey, clientId, 'tester4')
#deployDroplet(apiKey, clientId, 'tester5')
#destroyDroplet(apiKey, clientId, '1786240')
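
By the way, if you’d rather pass list/deploy/destroy on the command line instead of editing the script each time, something like the following could replace the hard-coded calls at the bottom. This is just my own sketch (not part of the original procedure), using Python 2.7’s argparse and the functions defined above:

# Hypothetical alternative to the hard-coded calls above: a tiny argparse
# front end so the script can be driven as ./go.py list|deploy|destroy <arg>
import argparse

parser = argparse.ArgumentParser(description="Tiny Digital Ocean droplet helper")
sub = parser.add_subparsers(dest="command")
sub.add_parser("list")
deploy_p = sub.add_parser("deploy")
deploy_p.add_argument("name")
destroy_p = sub.add_parser("destroy")
destroy_p.add_argument("droplet_id")
args = parser.parse_args()

if args.command == "list":
    for droplet in getDroplets(apiKey, clientId):
        print "%s %s %s" % (droplet['ip_address'], droplet['name'], droplet['id'])
elif args.command == "deploy":
    droplet = deployDroplet(apiKey, clientId, args.name)
    print "%s %s" % (droplet['name'], droplet['id'])
elif args.command == "destroy":
    print destroyDroplet(apiKey, clientId, args.droplet_id)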

EMC ViPR Leverages 3rd Party OpenStack Cinder Plugins

EMC ViPR 2.0 was just announced today at EMC World.  It brings a distinct level of maturity to a product that has only just been brought to market, but is already the most advanced storage controller for heterogeneous arrays.  One of the most interesting aspects of the announcement was the leveraging of Cinder to give ViPR much wider support for heterogeneous arrays!

If you haven’t heard of ViPR, it’s not “storage virtualization” in the sense that you’re familiar with.  It’s a control-plane product like Virtual Center rather than a virtualization product like the hypervisor ESXi.  ViPR discovers storage, SAN, and hosts and controls the provisioning and connectivity of those assets together.  It virtualizes (in a sense) heterogeneous storage “southbound” and provides a common, higher-function API “northbound” to any framework product that wants an easy way to communicate with the Software Defined Storage layer.

Enough similar products and management frameworks have been created over the years for the SNIA storage standards organization to create an API standard called SMI-S.  EMC VMAX and VNX have SMI-S providers, for instance, that ViPR can use to manage provisioning on these arrays.  Other vendors do not so easily provide or so fully support the SMI-S standard, and as such ViPR can’t use a standards-based approach to managing them.  Thus the EMC ViPR team was left with creating plugins for each and every array to provide provisioning support, a daunting task indeed.

Enter the OpenStack community.  I’m sure as we were throwing our weight into the OpenStack community, some smart-gal-or-guy in our Advanced Software Division said, “Hang on a sec.  All these vendors are producing Cinder plugins for OpenStack.  Why can’t ViPR just leverage them?”  And where SMI-S couldn’t deliver, a viable open source community has created a widely adopted API for OpenStack.

Let’s make sure you understand this.  We’ve had Cinder API support for “Northbound” into OpenStack.   This provides OpenStack the ability to provision block storage against ViPR-as-a-virtual-storage-array.  This new announcement leverages every other vendor’s Cinder support to provide ViPR the ability to communicate “Southbound” into the array and manage it with ViPR-as-a-SDS-controller.  This is a VERY different animal and instantly provides ViPR a wide range of heterogeneous storage management and automation capabilities.

Added to this common industry storage support, the ViPR team has now announced plugin support (with rich data services) for Hitachi arrays, with more on the way.  Great stuff from the EMC ViPR team.

Size Does Matter

A colleague of mine, Sean Cummins, recently posted a very good summary of VMAX FAST VP best practices, which drew a little derision from some of the competition.

Much of the competitive argument comes from those who sell an all-flash array that can store about 100TB after deduplication. That’s a single RAID type.  That’s a single tier.  That’s a single price per GB.  That’s a single IO per GB. That’s great, but what about the other 90%+ of the capacity of the VMAX, and all of the other workloads that don’t require the highest of IOPS and lowest of latencies? EMC marketing has focused on the “optimization of flash” within the VMAX, but the flip side of that point-of-view is to drive costs out by a massive introduction of SATA without suffering the performance hit.

For the last several years, and for some time to come, flash has been a means to an end. The real value of flash has been to enable the placement of as much capacity as possible onto cheap, high capacity, slow SATA. Customers can’t afford to place petabytes of data on all-flash solutions. There aren’t all-flash solutions that can accommodate that scale. And the petabyte workloads have not required all-flash performance.

This too shall pass. Every major storage vendor has an all-flash product. There are even customers today (who shall not be named) that purchase all-flash VMAX! We’re starting to see how latency is becoming the name of the game as opposed to IOPS. This will drive all-flash array consumption for those smaller workloads that can fit on an all-flash solution and don’t require enterprise federated consistency protection for BC/DR (that’s a whole other topic).

The world is changing rapidly. Data growth is exploding. Companies want to store everything in the hopes that they’ll be able to mine their data for customer insights. We’ll see which storage architectures emerge as the most effective for each of the various workloads in the world.  My warning is this: whoever is selling you “only one way” of doing things is SELLING YOU on the idea that all of your workloads are the same. They have the proverbial hammer, and they’re trying to convince you all your problems are nails. Don’t buy it.

EMC’s OpenStack strategy

Like most things at EMC, the OpenStack integration strategy is focused on customer choice. It’s an attempt to meet customers where they are, leveraging IT assets, specific workloads, and integration points into all of our products. Additional automation and deeper integration are provided by our software-defined storage control-plane software, ViPR. Obviously all of the deep reporting, SRM tools, and storage data protection services will remain and be brought into the OpenStack integration.

To jump right into EMC integration with OpenStack Cinder: we provide native plug-ins for all of our block storage products, including iSCSI for Isilon and ScaleIO. Several major distributions such as Rackspace, Red Hat, and Canonical distribute the plug-in for VNX and VMAX, and versions of the Cinder plug-in that support Fibre Channel are available from GitHub or the EMC web site. Third parties have written the Cinder plug-ins that provide iSCSI and Swift support for Isilon and iSCSI support for ScaleIO.

EMC’s recommended approach would be to use the Cinder plug-in for our ViPR storage control-plane product, since through that single plug-in iSCSI and Fibre Channel support are provided across all our products, as well as object storage support for Swift, S3, and our own Atmos APIs. That plug-in is available from GitHub and will be coming to the major distributions soon. The advantage of using ViPR, especially in Fibre Channel environments, is that ViPR will automate the zoning and the definition of rich service pools that provide fully automated storage tiers, replication settings, and high availability configuration.

Not only is there a single integration point with the ViPR Cinder plug-in for block storage, but with ViPR data services we bring Swift API and Amazon S3 API support. Finally, ViPR brings this automation support to all consumers of block storage, not only OpenStack but VMware and other virtualized or bare metal hosts.

Community for Sharing Storage Resource Management (SRM) Custom Reports

I was in an EMC and EMC Partner training class on SRM 3.0 (Storage Resource Management Suite) today, and came away very excited about one of the latest software assets in EMC’s portfolio (SRM home page).  SRM 3.0 is a comprehensive tool for reporting on capacity and performance for the largest storage enterprises in the world.  Using passive or standards-based active discovery, SRM can provide comprehensive capacity and performance reporting and trending across your entire IT stack: VM’s, physical servers, SAN infrastructure, and storage elements.

EMC is becoming very good at providing a simple and easy out-of-the-box experience for its software products while preserving a very rich underlying functionality and flexibility that elevates EMC products above the competition. This balance is very evident in SRM 3.0, where predefined active dashboards show summary data, and allow click-through to the underlying metrics and detailed reports. (Watch a demo video here).

During the class lecture (and the first hands-on experience I had with the tool), I was able to modify one of the standard storage pool reports to show the data reduction associated with Thin Provisioning on VMAX and VNX. I discussed some of these formulas in a previous post (Some EMC VMAX Storage Reporting Formulas).

One of the best parts of SRM 3.0 is the ability to export and import Report Templates. As such, I exported the report I customized, and posted the XML here for you to download and import into your own SRM 3.0 installation.

I’d like to invite all of you to share the reports you have customized for your environments with the community, so that we can engage the power of everyone to advance the art of storage resource management.

My first python and JSON code

I’m not a developer. There I said it.

I’m a presales technologist, architect, and consultant.

But I do have a BS in Computer Science. Way back there in my hind brain, there exists the ability to lay down some LOC.

I was inspired today by a peer of mine, Sean Cummins (@scummins), another Principal within EMC’s global presales organization. He posted an internal writeup of connecting to the VMAX storage array REST API to pull statistics and configuration information. Did you know there is a REST API for a Symmetrix?! Sad to say, most people don’t.

He got me hankering to try something, so I plunged into Python for the first time, and as my first example project, I attached to Google’s public geocoding API to map street addresses to Lat/Lng coordinates (since I don’t have a VMAX in the basement).

So here it is. I think it’s a pretty good first project to learn a few new concepts. I’ll figure out the best way to parse a JSON package eventually. Anyone have any advice?

###################
# Example usage
###################
$ ./geocode.py
Enter your address:  2850 Premiere Parkway, Duluth, GA                 
2850 Premiere Parkway, Duluth, GA 30097, USA is at
lat: 34.002958
lng: -84.092877

###################
# First attempt at parsing Google's rest api
###################
#!/usr/bin/python

import requests          # module to make HTTP calls
import json              # module to parse JSON data

addr_str = raw_input("Enter your address:  ")

maps_url = "https://maps.googleapis.com/maps/api/geocode/json"
is_sensor = "false"      # do you have a GPS sensor?

payload = {'address': addr_str, 'sensor': is_sensor}

r = requests.get(maps_url, params=payload)

# store the json object output
maps_output = r.json()

# create a string in a human readable format of the JSON output for debugging
#maps_output_str = json.dumps(maps_output, sort_keys=True, indent=2)
#print(maps_output_str)

# once you know the format of the JSON dump, you can create some custom
# list + dictionary parsing logic to get at the data you need to process

# store the top level dictionary
results_list = maps_output['results']
result_status = maps_output['status']

formatted_address = results_list[0]['formatted_address']
result_geo_lat = results_list[0]['geometry']['location']['lat']
result_geo_lng = results_list[0]['geometry']['location']['lng']

print("%s is at\nlat: %f\nlng: %f" % (formatted_address, result_geo_lat, result_geo_lng))

Enterprise Storage Scale Out vs Scale Up

A lot of what we hear in the cloud-washed IT conversation today is that scale out architectures are the preferred method of deployment over scale up designs. As the theory goes, scale out architectures provide more and more compute, connectivity, and storage capacity as you add nodes.

Most vendor marketing has jumped onto the scale out bandwagon, leaving implementations that also scale up indistinguishable from the rest. Properly conceived scale out architectures allow us to add additional discrete workloads to an existing system by adding nodes to the underlying infrastructure. The additional nodes accommodate the processing required to support the additional separate workloads. This kind of share-nothing design allows us to stack a large quantity of discrete workloads together and manage the entire system as one entity, even though it can be very hard to balance workload across the (silo’ed) nodes.

But what about the large quantity of monolithic workloads that need scalable performance but cannot be distributed across independent nodes? Take for example a single database instance that begins life with just a few users and is supported well on a single node of a scale out architecture. As this instance grows and more users are added, the database grows beyond the confines of the single node. In many pure scale out implementations, adding additional nodes is ineffectual. The additional nodes do not share in the burden of the workload from the first node. They are designed merely to accommodate additional discrete workloads.

EMC scale out-and-up products such as VMAX and Isilon provide the cost benefits of a scale out model with the performance benefits of a scale up model. Purchasing only a few nodes initially, IT gains the cost benefit of starting small, and as workloads increase, additional nodes can be brought online, with data and workload automatically redistributed across all nodes. This creates a share-everything approach that supports the ability to scale up discrete applications while also providing the ability to scale out, stacking additional applications into the array along with the first. Software-based controls provide for multi-tenancy, tiering and performance optimization, workload priorities, and quality of service. This is as opposed to scale out architectures where rigid hardware-defined partitions separate components of the solution.

I hope this helps demystify “scale out” a little. Just because you can manage a cluster of independent nodes doesn’t mean it’s an integrated system. Unless workload tasks are shared across the entire resource pool, scale out does not confer the benefits of scale up, whereas scale up-and-out designs provide the best of both worlds.

Some EMC VMAX Storage Reporting Formulas

What goes around comes around. Back in the day everything was done on the command line. Storage administrators had their scripts, their hands never had to leave the keyboard, they could provision storage quickly and efficiently, and life was good. Then the age of the easy-to-use GUI came along. Non-storage administrators could deploy their own capacity, and vendors espoused how much easier and more effective IT generalists could be in managing multiple petabytes of capacity with the mouse. Now we’re into the age of the API. Storage, along with all other IT resource domains, has its reporting API, and developers are now writing code to automatically report on and provision storage in the enterprise. And many shops have realized that the command line is as effective an API for automating reports and doing rapid deployments as any REST-based API could be.

So I’m getting a lot of questions again about how to do some basic reporting on a VMAX storage array from command-line-generated or API-generated data. Customers are writing their own reporting dashboards to support multiple frameworks.

The most fundamental question is: how much of my storage array have I consumed? Assuming one is inquiring about capacity and not performance, there are two aspects to that question. The first question is from the business: “How much capacity has been allocated to my hosts?” In other words, how much capacity do my hosts perceive they have been assigned? In an EMC VMAX array this has to do with the Subscription rate. For better or for worse, we report the percentage of each pool that has been Subscribed, as opposed to an absolute value of capacity.

To calculate how much of your array has been Subscribed, sum the products of each pool’s Subscribed % and its Total GB to get the total capacity Subscribed to the hosts.

The pool figures below come from the symcfg thin pool detail listing; the example output table isn’t reproduced here.

EFD Pool Subs % * EFD Pool Total GBs = EFD Subscribed GB
FC Pool Subs % * FC Pool Total GB = FC Subscribed GB
SATA Pool Subs % * SATA Pool Total GB = SATA Subscribed GB
Sum the totals above = Array Subscribed GB

Using the example above, this would be:

0% * 4,402.4 = 0.0  EFD Subscribed GB
177% * 77,297.4 = 136,816.398 FC Subscribed GB
73% * 157,303.7 = 114,831.701 SATA Subscribed GB
251,648.099 Array Subscribed GB

Then compare that number to the sum of the Total GBs provided by each of your three pools to reveal the ratio of the subscribed array capacity versus the total available capacity.

If your Array Subscribed GB is larger than your Array Total GB, then tread carefully, since you have oversubscribed your array’s capacity!

Subscribed GB / Array Total GB = Array Subscription Rate
251,648.099 / 239,003.5 = 105.29% oversubscribed in this case!

To understand how much capacity has been physically allocated on the array after all data reduction technologies have done their job (thin provisioning, compression, de-dupe), simply sum the Used GBs capacity from each of your pools to show the overall array consumed capacity. Compare this number to the previously mentioned subscription capacity to show the benefit of data reduction technologies in the array.

Sum of Used GBs above is ( 3855.6 + 38708.4 + 43987.6 ) = 86,551.6 GB
251,648.099 Subscribed GB / 86,551.6 Consumed GB = 2.9 : 1 Data Reduction provided by Thin Provisioning!  Not bad at all!

Likewise sum the free capacity measures from each pool together to get the available capacity remaining in the array.

There you have it. If you’re not into the GUI, just a little math provides you the dashboard data you need to show your executives the capacity utilization of the array.
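
And if you’d rather script the math than punch it into a calculator, here’s a minimal Python sketch of the calculations above, with the example pool figures hard-coded (in practice you’d parse them out of the CLI or API output):

#!/usr/bin/python
# Minimal sketch of the array capacity math above. Pool figures are the
# example numbers from this post, assumed to map to EFD/FC/SATA in that order.
pools = {
    # name: (subscribed_pct, total_gb, used_gb)
    'EFD':  (0,   4402.4,   3855.6),
    'FC':   (177, 77297.4,  38708.4),
    'SATA': (73,  157303.7, 43987.6),
}

subscribed_gb = sum(pct / 100.0 * total for pct, total, used in pools.values())
total_gb = sum(total for pct, total, used in pools.values())
used_gb = sum(used for pct, total, used in pools.values())

print("Array Subscribed GB:     %.1f" % subscribed_gb)
print("Array Total GB:          %.1f" % total_gb)
print("Array Subscription Rate: %.1f%%" % (subscribed_gb / total_gb * 100))
print("Array Consumed GB:       %.1f" % used_gb)
print("Thin savings:            %.1f : 1" % (subscribed_gb / used_gb))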

Happy hacking!

Array Sizing based on ITaaS Principles

There are two hats we wear when creating an IT service offering.  The first is the customer-facing marketing hat where customer needs are defined, and service offerings are advertised at various price levels in a catalog.  The second hat is the backend product management hat where the front-facing service catalog is translated to various reference architectures at well defined cost levels.

The goal when running IT as a business (aka: ITaaS) is that the fully burdened costs associated with the reference architecture are recouped over a reasonable period of time by the pricing set in the service catalog.

A few rules to live by in this world view related to enterprise storage:

  • The customer (business user) only has visibility to the Service Catalog.  All communications with the customer are based on the language used in the Catalog.
  • The customer (business user) never sees the Reference Architecture that supports each of the Services listed in the Catalog.  Never have customer communications about arrays, drive mix, RAID type, methods, processes, etc.
  • If the customer is unhappy with the service they are receiving, an evaluation must be made as to whether or not they are receiving the expected Service Level Objective as defined in the Service Catalog.  If not… it’s IT’s problem.  If they are and are still unhappy, then a discussion must follow about moving them to a different Tier of service as defined in the Service Catalog.
  • If a differentiated Tier of service does not exist in the Service Catalog, and there is a defined customer need for that class of service, then an end-to-end service definition must occur.  IT must create and price a new offering in the Service Catalog.  IT must also create a corresponding new definition in the Reference Architecture to explain how that service will be accomplished and at what cost.

What does this have to do with sizing a storage array?

Whether or not IT is living out the utopian vision of ITaaS, some concrete realities emerge from the ITaaS vision that show up in array sizing, namely:  Subscribed vs. Allocated capacity planning, and performance planning measured in IOPS/GB.

Warning:  There are lots of different considerations when it comes to array performance:  Write%, Skew, Cache Hit Rate, Sequentiality, Block Size, app sensitivity to Latency vs. Throughput, advanced data protection services etc.  Let’s take these for granted for now and focus on IOPS/GB, because frankly that’s how the Service Providers do it.

How much capacity do I need to purchase?

Customer facing Service Catalog capacity planning should start with the Subscribed capacity.  This defines the promise IT makes to the user about how much capacity is available to store data and at what tier. This is the advertised capacity to which pricing is set.

IT internal Reference Architecture capacity planning should worry more about the Consumed capacity after any available data reduction technologies do their work (thin, de-dupe, compress, etc).  This is the actual capacity (plus buffer) to be purchased to which costs are allocated.

What kind of performance do I need?

Customer facing Service Catalog performance planning can be done by SLA or SLO.  Do you want to establish a worst case scenario floor that you promise to exceed every time (SLA)?  Or do you establish a best case performance target ceiling you promise to try and deliver (SLO)?  The world is moving away from SLA’s to SLO’s.  The IOPS/GB performance level is defined as the IOPS to the Host divided by the Subscribed GB (IOPS / GBs).

IT internal Reference Architecture performance planning should be done against IOPS to the Host divided by the Thin Consumed GB (IOPS / GBc).

An IOPS / GB sizing example that leads to poor performance

A Business customer has an existing application in a legacy storage environment.  They’ve got 40TB of capacity assigned and are doing about 4,000 IOPS to the array.

That’s 4,000 / 40,000 = .1 IOPS/GBs (Host IOPS per Subscribed GigaByte).

A purchase is made for 56 x 1TB SATA drives RAID 6 6+2 providing ~40TB capacity, and  4,480 IOPS (56 * 80 IOPS) to fit the .1 IOPS/GB workload profile rather well (ignoring too many factors).

But … after Thin Provisioning, only 40% of the capacity is consumed (16TB), and the IT Finance Dept says, wow, I’ve still got another 24TB of capacity available.  Let’s use it!!  The problem is, they’ve consumed all their IOPS, leaving a lot of capacity stranded and unable to be used.

An IOPS / GB sizing example that leads to good performance

A Business customer has an existing application in a legacy storage environment.  They’ve got 40TB of capacity assigned and are doing about 4,000 IOPS to the array.  Planning for Thin Provisioning, we expect only the first 40% (16TB) will actually be Consumed.

That’s 4,000 / (40,000 * 40%) = .25 IOPSh/GBc (Host IOPS per Consumed GigaByte).

A purchase is made for 42 x 600GB 10K drives RAID 5 3+1 providing ~17TB capacity, and  ~5,000 IOPS (42 x 120) to fit the workload profile rather well (ignoring many factors, see “Disk IOPS” discussion further down).  This also assumes the risk that Thin Provisioning or other data reduction technologies are able to do their job effectively and allow you to purchase less capacity than in a “thick” world.

Over time, the app performs well, and is more properly balanced in both the IOPS and TB dimensions.

What if your boss says you must purchase the Thick capacity?

Some pointy-haired bosses who don’t trust that Thin Provisioning will provide real benefit will say that you must purchase the “Thick” capacity amount due to various sociological / political / economic conditions within your organization.

In this case, make two assumptions:

  1. Your goal is to reach 80% capacity consumed over the life of your array
  2. Thin will cause 40% Consumption of every Subscribed GB

Use the formula Host IOPS / ( Thick GB * 40% ) to arrive at your target IOPS/GB for the entire array’s worth of capacity!  This will be a much bigger number than before, and will equate to a much higher performance solution.  The trick is, when Thin Provisioning (or other data reduction) kicks in, you’ll have so much headroom that someone will want to stack additional applications onto the array to consume all the capacity.

You’ll need the additional performance to accommodate this inevitable consolidation.

Use the above calculated IOPS/GB * (Thick GB * 80%) to reveal the target Host IOPS needed to achieve full efficient use of the installed capacity.  It’s never a good idea to consume more than 80% of a presumably over-subscribed resource.  It’s ok to target your performance to equate to 80% capacity consumption.

Let’s add RAID Type and Read/Write % into the mix

So far, I’ve been focused on “Host IOPS” when discussing IO Density.  Now that we’ve calculated the Host IOPS and Capacity properly, how do we configure an array to achieve a certain Host IOPS Density?  We need to translate Host IOPS down into “Disk IOPS.”
By Disk IOPS, I mean the amount of work the underlying Flash or HDD Spindles must do to accomplish the Host IOPS goal.  I won’t have room to discuss all of the potential factors that go into a model like this, but we’ll get started down the path.

FYI, some of those factors are: block size, cache benefits, data reduction (benefit or overhead), replication, etc.

Most implementations of the traditional RAID types (RAID-1, RAID-5, and RAID-6) carry with them some form of write IO penalty. When a host writes to a RAID-1 mirrored device, 2 IOs are generated by the backend storage: the write to the first spindle and the second write to its mirror pair.  So we say that the write penalty for RAID-1 is 2.  When a host writes to a RAID-5 protected device and the write I/O is not the full stripe width, then potentially 4 IOs must happen on the backend of the array: one read I/O of the parity portion, a second read I/O of the data stripe, parity calculations done in memory, then a third IO to write the parity portion, and a fourth and last IO to write the data stripe. So we say that RAID-5 has a 4 IO write penalty. For RAID-6, there must be a read of the first parity, a read of the second parity, and a read of the data stripe, and after the parity calculations are done, three writes must occur to finish the IO. So we say that RAID-6 carries a 6 IO write penalty.

Coupled with RAID type, we must know the read/write percentage of our workload. We assume that reads do not carry any penalty to the backend spindles, so for every host read there is a single disk I/O.  Obviously at this point we are not factoring in any cache benefits.  We must multiply every host write by the write IO penalty previously described. So knowing that we need to support a 10,000 IO workload which is 60% writes, we can calculate how many IOPS the backend disks must be able to provide.

IOPS to Disk  = %READS * Host-IOPS + %WRITES * Host-IOPS *  RAID-Penalty

Where the RAID-Penalty is 2, 4, or 6 as described above.
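
For example, the 10,000 IO, 60% write workload mentioned above, sitting on RAID-5, would need:

40% * 10,000 + 60% * 10,000 * 4 = 28,000 IOPS to Disk

Writes get expensive on parity RAID in a hurry.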

Now that we know how many disk IOPS our spindles must be able to produce, how many spindles do we need? Use the following chart as a reference for how many IOPS each spindle technology type can provide with no more than one outstanding queued I/O to the spindle (aka:  good response times).

2500 IOPS per EFD (200GB EFD = 12.5 IOPS/GB)
180 IOPS per 15K RPM spindle (2.5″ or 3.5″ doesn’t matter)  (300GB = .6 IOPS/GB)
120 IOPS per 10K RPM spindle (2.5″ or 3.5″ doesn’t matter)  (600GB = .2 IOPS/GB)
80 IOPS per 7.2K RPM spindle (typically SATA)  (2TB = .04 IOPS/GB)
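
If you like to play with these numbers, here’s a minimal Python sketch of the sizing arithmetic, using the per-spindle figures from the table above (the function names and structure are my own, not from any EMC tool):

import math

# IOPS each spindle type can deliver at low queue depth (table above)
SPINDLE_IOPS = {'EFD': 2500, '15K': 180, '10K': 120, '7.2K': 80}

def disk_iops(host_iops, write_pct, raid_penalty):
    # Back-end disk IOPS: reads pass straight through, writes are multiplied
    # by the RAID write penalty (2, 4, or 6)
    return (1.0 - write_pct) * host_iops + write_pct * host_iops * raid_penalty

def spindles_needed(host_iops, write_pct, raid_penalty, spindle_type):
    # Spindle count required to service the back-end IOPS
    return int(math.ceil(disk_iops(host_iops, write_pct, raid_penalty)
                         / SPINDLE_IOPS[spindle_type]))

# The final example below: 8,000 host IOPS, 60% writes, RAID-5 (penalty 4),
# on 600GB 15K spindles.
print(disk_iops(8000, 0.60, 4))               # 22400.0 disk IOPS
print(spindles_needed(8000, 0.60, 4, '15K'))  # 125 (the text rounds ~124 up to 128)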

A Final Example

A Business customer has an existing application in a legacy storage environment.  They’ve got 40TB of capacity assigned and are doing about 4,000 IOPS to the array.  Planning for Thin Provisioning, we expect only the first 40% (16TB) will actually be Consumed.  Our pointy haired boss is going to require that we purchase all 40TB of capacity… hmph.

That’s 4,000 / (40,000 * 40%) = .25 IOPSh/GBc (Host IOPS per Consumed GigaByte).

Now factoring in the inevitable growth that we know will happen due to all of the free space that will remain in the array, we say:

.25 IOPS/GB * (40,000 * 80%) = 8,000 Host IOPS that the array solution will need to support over its lifespan

Ignoring the complexity of multi-tiered solutions, let’s look for a single spindle type that will meet our needs.  Our IOPS/GB target is .25.  From the previous table, it looks like a 600GB 10K is .2 and a 300GB 15K is .6… Hmmm, what about a 600GB 15K drive?  That would be:

180 IOPS / 600GB = .3 IOPS/GB.  It looks like a match!

Since this is a consolidated general purpose VMware farm that we’re supporting, let’s assume the Read/Write ratio will be 40% Reads and 60% Writes (this is VERY consistent across VMware clusters, FYI).  Also, since we’re using fast FC spindles, let’s see what a RAID-5 3+1 solution will look like, shall we?

Here’s our formula repeated from above:
IOPS to Disk  = %READS * Host-IOPS + %WRITES * Host-IOPS *  RAID-Penalty

40% * 8,000 + 60% * 8,000 * 4 = 22,400 IOPS to Disk
Wow, that’s a lot of write penalty, since we only needed 8,000 Host IOPS.  Our 600GB 15K RPM drive can do 180 IOPS per spindle, so 22,400 / 180 = ~124 spindles… let’s round up to 128, since that’s a nice power of 2, to achieve the performance needed to support our hosts.

Now, does this provide enough capacity?  Well, 128 * 600GB * 3/4 (RAID Overhead) = 57,600 GB.  It looks like we’ve over-shot our capacity numbers a little.  Perhaps a 300GB drive would be more efficient?  Maybe a 300GB 10K?  Maybe we should look at a 3 tier approach with a little EFD, some FC, and most of the capacity in SATA?

And this, dear reader, is an exercise for you ;-)

Take this to the extreme with all flash arrays

Always-on in-line deduplication provided by quality All Flash Arrays takes this notion to the extreme.  These arrays don’t allocate anything unless it’s globally unique data.  A single EMC XtremIO X-brick provides 7.5TB of usable capacity with 70TB of logical de-duped capacity.  This roughly 10X increase in the potential IOPS/GB density requires Flash at the core and innovative workload management algorithms to enable extreme performance against a very limited set of allocated data regions.

What about the benefits of cache?

It is a dangerous and complex game to try and figure out the benefit of cache in any enterprise storage array. Cache serves two purposes. First, cache provides a buffer to accept spikes in host-generated workload for asynchronous destage to the backend media after acknowledgment to the host. This function provides good latency to the host while leveling out the workload characteristics seen by the backend spindles.  It does not necessarily reduce the amount of work that the spindles must do overall.  The second function of cache is to legitimately reduce the amount of work the backend spindles must accomplish. This is done by caching host writes for subsequent reuse, and by acting as a prefetch buffer for reads that are predicted for near-future use.  In my experience prefetched reads reduce backend spindle work to about 30% of what it would normally be.  Accommodating host rewrites to active cached data before it is destaged to the backend saves the spindles from having to do any of that work.  The question is: how do you predict the amount of read cache hits, read cache misses, prefetched reads, writes to backend, and re-writes from any given host workload?  These are array-specific implementation details that depend on your specific host workload profile, and they make it very dangerous to assume much cache benefit.  Just let it come naturally and be the icing on the cake of your good-performing solution.