Reading, wRighting and Recording – Measure how your applications hit your disks!

I’ve spent the last week thinking more about storage than I usually would, particularly in the light of some of the conversations I’ve been having at Tech Field Day with the other delegates and sponsors, who have had varying levels of interest and expertise within the storage world. If, like me, you have a basic appreciation of storage but want to get in that little bit deeper, a good primer would be Joe Onisick’s storage protocols guide at DefinetheCloud.net.

Admins working in smaller shops probably have closer control over the storage they buy, as they are likely to be the ones specifying, configuring and crying over it when it goes wrong. One of the cons of working for a large enterprise is that the storage team tends to be separate – they guard their skills and disk shelves quite closely, sometimes a little too closely – I do wonder if their school reports used to say “does not play well with others”. The SAN is seen as a bit of a black box by the rest of the department, and generally, as long as the required capacity is available when someone asks for it, be it a LUN or a VMware datastore, everyone is happy to let them get on with it.

As soon as there is a performance issue, however, that happy boat starts to rock. The storage team gets defensive, casting forth whitepapers and best practice guides as if they were a World of Warcraft character making a last stand. At some point you may well find that you have hit the underlying peak performance of the SAN, no matter how well tuned it is. You are then left in a bit of a quandary about what to do; in the worst case you have to bite the bullet and move the application that looked like the lowest of the low-hanging fruit back onto a physical server with direct-attached storage, where it’ll smugly idle at 5% utilisation for the rest of its life, forever drawing reproachful looks when you walk past it in the datacenter.

How do you avoid the sorry tale above? In a nutshell: “Know your workload!” When you start to map what your applications are actually using, you can start to size your environment accordingly. One of the bigger shocks I’ve come across when doing such an exercise is a much heavier proportion of writes than the industry would have us expect. This causes a big problem for storage vendors who rely on flash-based cache to hit their headline performance figures. When reading from a cache, of course the performance will be great, but under a write-intensive load the performance of the underlying disk starts to be exposed, and it comes down to the number and speed of spindles. Running a system that uses intelligent tiering to write hot blocks in the fastest way, then cascade them down the array as they get cooler, could help in this instance. Depending on your preference for file- or block-level storage, there are a number of vendors who could help you with this, for example Avere Systems, 3PAR, or the next generation of EMC’s FAST technology.
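To put a number on that read/write mix, one low-tech option on a Linux host is to sample the kernel’s cumulative per-device I/O counters twice and compare. A minimal sketch, assuming a Linux box with `/proc/diskstats` – the device name `sda` and the 10-second interval are illustrative, not a recommendation:

```python
# Sketch: estimate a host's read/write mix by sampling /proc/diskstats
# twice. Linux-specific; device name and interval are illustrative.
import time


def parse_diskstats(text, device):
    """Return cumulative (reads_completed, writes_completed) for a device."""
    for line in text.splitlines():
        fields = line.split()
        # /proc/diskstats layout: field 3 is the device name,
        # field 4 is reads completed, field 8 is writes completed.
        if len(fields) > 7 and fields[2] == device:
            return int(fields[3]), int(fields[7])
    raise ValueError("device %r not found in diskstats" % device)


def rw_mix(device="sda", interval=10):
    """Sample twice; return the fraction of I/Os that were reads and writes."""
    with open("/proc/diskstats") as f:
        r1, w1 = parse_diskstats(f.read(), device)
    time.sleep(interval)
    with open("/proc/diskstats") as f:
        r2, w2 = parse_diskstats(f.read(), device)
    reads, writes = r2 - r1, w2 - w1
    total = reads + writes
    return (reads / total, writes / total) if total else (0.0, 0.0)
```

Run something like this during your peak workload window rather than overnight – it’s exactly this kind of sampling that tends to reveal a write fraction far higher than the cache-friendly figures on the datasheet.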

At Tech Field Day, NetApp, VMware and Cisco presented on their FlexPod solution for a scalable and secure multi-tenant virtualised infrastructure. If you’d like to watch the recording of the presentation, it’s available here. What would appear to differentiate the FlexPod from other products is that it is not a black-box device designed to drop into a data centre to provide X number of VMs, where, when you have X+1 VMs, you just go out and buy another device.

While you can approach a VAR and order a FlexPod as a single unit, the design and architecture are what make it a “FlexPod” – a single bill of materials that can be put together to give a known configuration. The idea is that this offers greater design agility, for example using a NetApp Vserver head to present storage from another vendor to the solution.

To me, this seems a little bit like buying a kit car.

You get a known design and a list of components you have to source – although the design may well recommend where to source them. Sometimes you can get them part-built or pre-built, and if you want to run it with a different engine, you can drop one in should you so desire.


The VBlock from the VCE guys is a different kettle of fish – it’s not a design guide, it’s a product. You choose the VBlock that suits the size of deployment you want to do, order it, and sit back and wait for the ready-built solution to arrive on the back of a lorry (truck, to our US friends 😉 ). This is like ordering a car from a dealership.


Of course, you could just go to any reseller, buy a bunch of servers, networking and storage hardware, and then install ESX on it. The stack vendors might compare this to trying to hand-cast your car from a single block of metal!


At the moment, many of us who can already design a solution from scratch are at that hand-casting level, and while I won’t deny we’ve been through a few pain points, we’ve usually been able to fix them. It’s part of the skill that keeps us employed. By going for an off-the-shelf product, the pain of that part of the system design is divorced from the solution, perhaps allowing focus on the next part of the design, at the service and application level – don’t worry about building a car, worry about driving it! If you need a car to drive to work and do the weekly shopping in, you buy one from a dealership – but if you have a specific need, then you may have to get into the workshop and build a car that meets those needs.

When a prebuilt solution develops a problem that requires support, the offerings from the major vendors seem to differ a little. If you have a VBlock, you have one throat to choke (presumably not your own – it’s only a computer problem, don’t let it get to you 😉 ) and one number to call. They will let the engineers from the different divisions fight it out and fix your problem, which is ultimately the only thing of concern to you as an owner.

The situation with a FlexPod seems a little less intuitive. As it’s not a single SKU, you would require a separate support contract with each vendor (though this may be marshalled by the VAR you purchase through). You would initiate contact with the vendor of your choice; they then have a channel under the skin to work with the engineering functions of the other partners – network, storage, compute and hypervisor – as required. I would like to think this does not mean the buck gets passed around for a couple of rounds before anyone takes ownership of the problem, but I’ve yet to hear of anyone requiring this level of support. If you have, and had a positive or negative experience, please get in contact.

If you have “rolled your own” solution, then support is up to you! Make sure that you have a similar SLA across the stack, or you could find yourself in a situation where you have a very fast response from your hypervisor people, but when they work out it’s your storage at fault, they might make you wait until the next day or the end of the week. If this does happen to you, then I’m sure you’ll have plenty of time to clear your desk…