Archive for November, 2010

It looks like the last post generated some interest around the networking side of things – particularly the use of vShield firewalls (or my complete lack of them). I’ve done a little more digging and, while it wasn’t immediately obvious to a newcomer to Cloud Director, there is a way of using the vShield firewalls within a deployment – that’ll serve me right for not fully RTFMing 🙂


By deploying an additional network for a given vApp, I am able to connect that to the internet connection and specify some NAT & firewall rules to publish services from that application to the network. It also makes the vApp diagram look pretty.


Note that the Management network (as I’ve called it) is a vApp-specific network rather than an organisation-wide one, which is why I still have an internal network connection to the VM so that it can talk to other VMs within the VDC. The firewall VM I configured earlier is organisation-wide, so any machine in the VDC could be published via it. For larger deployments I wonder if it would make sense (although it’s not really within the spirit of “the cloud”) to use hardware devices for edge networking – for example an F5 load balancer. While they do have a VM available which would offer a per-vApp LTM instance, some shops may want the functionality of the physical hardware (for example SSL offload). There may also be licence considerations when it comes to deploying the edge layer as multiple virtual instances.
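For illustration, the NAT publishing rules above boil down to a mapping from an external address and port to an internal VM and port. Here’s a toy sketch of that idea – the addresses, ports and services are invented, not taken from my actual deployment:

```python
# Toy model of vApp NAT publishing: each rule maps an external
# (IP, port) on the internet-facing network to an internal VM (IP, port).
nat_rules = {
    ("203.0.113.10", 80):   ("192.168.10.5", 80),    # hypothetical web server
    ("203.0.113.10", 443):  ("192.168.10.5", 443),
    ("203.0.113.10", 3389): ("192.168.10.6", 3389),  # hypothetical RDP pinhole
}

def translate(external_ip, external_port):
    """Return the internal (ip, port) a connection would be forwarded to,
    or None if no rule publishes that service (the firewall drops it)."""
    return nat_rules.get((external_ip, external_port))
```

Anything not explicitly published simply has no translation, which is the “deny by default” behaviour you want at the edge.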


Still to come in subsequent posts – deploying a “real” application to a public vCloud Director instance.


I was recently selected to take part in a public beta for the London-based hosting provider Stratogen. The beta is based on their vCloud Director offering and it has been great to take a look at the “cloud” from a consumer’s point of view.

In terms of how the trial has been put together, I’ve been set up as an Organisation with a single VDC, allocated a fixed resource pool of compute, memory and storage. Networking-wise, I’ve been given an internal and an external network, with a pool of IPs on each.

My aim for the beta was to see, from a virtual machine administrator’s perspective, how easy it was to set up an application in the cloud from scratch. What I would really have liked to have done was build an application in my home lab and then federate that up to the cloud, but sadly that was beyond scope for the moment. Perhaps in the future I’ll be able to give that a go.

Stratogen haven’t currently put their own UI on top of Cloud Director, so it currently looks like the usual Cloud Director interface. I must stress that the beta is still at a pretty early stage, so anything can (and possibly will) change.


So, after logging in, I was presented with the pretty default-looking screen above. In my usual style when I get my hands on a product, I tend to have a little click around to see what I can see without having to delve into any setup guides.

It all looks pretty locked down on the Organisation / Resources side of things. I can only see my own VDC, and aside from changing machine leases & the comments about my Organisation, there’s not a whole lot to change.

With nothing deployed, I had no vApps to manage, so it seemed sensible to try and deploy a machine. Thankfully Stratogen had put a few sample vApps in a public catalog – mostly vApps containing a single VM of varying operating systems: Windows 2008 R2, CentOS Linux & RHEL.

Having spent many of my formative years as a Windows admin, Windows makes a good common denominator, so I chose to deploy a Windows VM to see what would happen. After a short wizard allowing me to name the VM and set its lease, I had to select the network to put the VM on. I wasn’t keen on putting this VM directly onto the public network – as an admin, I wasn’t too sure of its patching level, nor did I know how open the public network was, so I erred on the side of caution and selected the private network to home the VM.

While I wouldn’t have said the VM provisioning was instant, it was pretty fast, along with a guest customisation that set a random admin password for me. Because I’d put the VM on the private network, I wasn’t able to RDP directly into it from my workstation, so initially I was restricted to the embedded VM console application – which in Server 2008 R2 can be a little bit painful to use. I suspect the WDDM drivers weren’t in use – however, this is an easy fix that I’d probably have had to do anyway. Improved though the connection now was, on my home DSL line, which isn’t all that fast to begin with, performance was a little lacklustre. I needed RDP.

I dropped the Stratogen chaps a mail about what my options were from a security point of view in the beta – it seems that in a fully managed service there would be a lot more control over the hardware firewalls available, but as a beta customer, and in the interests of keeping it virtual, I would probably be better off deploying my own firewall.


Had this been a “real” deployment, I would have looked at something like vShield App or Checkpoint VE Edition; however, given that my beta test is on a zero budget, I’m going to look a little cheaper. I would have loved to have deployed a m0n0wall appliance, but because the appliance is delivered as a VMDK, I’d have had to somehow convert it to an OVF file with a way to import the VMDKs from a public web server, which at this point wouldn’t be practical. What I was able to locate was a firewall deployed from an ISO image of the Endian Community Edition. This is a turnkey Linux install that will allow me some basic firewall functionality. I am able to use this to open up pinholes to my private network and publish any services from within.


Coming up in Part 2 – Deployment of a load balanced multi tier application in a public cloud.


As a virtualisation professional, there seems to be an almost limitless choice of 3rd-party software you can bolt into your environment. While VMware covers many of the bases with its own product lines in capacity planning, lifecycle management & reporting, some of them are missing a feature or two, or are just too complex for your environment. Many vendors seek to address this problem with a multi-product offering, but so far I’ve only come across a single vendor who aims to address issues like these with a single product.

I spoke with Jason Cowie & Colin Jack from Embotics a few months ago, but was only able to secure a product demo last week. In some ways I wish I’d waited until the next release, as it sounds like it’s going to be packed with some interesting features. I don’t really like blogging about what is “coming up in the next version”, so I will concentrate on what you can get today (or, in a couple of cases, the minor release due any time). This isn’t something specifically levelled at the Embotics guys, who are most likely internally immersed in the “vNext” code, so to them it is the current product. As an architect, I’m just as guilty of evangelising about features of a product that is several months away from deployment. Many vendors do the same to whip up interest around a product (Hyper-V R2 is a great example of this), but it doesn’t really make for a level playing field to compare a roadmap item with an item that’s on the shelves today. When the 4.0 version of V-Commander is released, I look forward to seeing all of the mentioned features for myself!


So what is it?

The website really does define the V-Commander product as being all things to all men – that is to say, if those men are into virtualisation management! They show how the product can be used to help with: Capacity Management, Change Management, Chargeback and IT Costing, Configuration Management, Lifecycle Management, Performance Management and Self Service.

That’s a lot of strings to its bow – and certainly enough to make you wonder if it’s a jack-of-all-trades, master-of-none type product. After a good look at the offering, I can safely say that’s not the case, but it’s definitely stronger in some of those fields than others.

The “secret sauce” of the V-Commander product is its policy engine. Policies drive almost every facet of the product and they are what allows it to be as flexible as it is. Once connected to one or more vCenters, it will start gathering information right away. This is what they refer to as “0-Day Analysis”. For a large environment, the information-gathering cycle for some capacity management products can take quite some time (I’ve seen up to 36 hours) as the appliance tries to pull some pretty granular information from vCenter. I wasn’t able to run the Embotics product against a large environment to see if this is the case; however, I have it from the Embotics guys that, as an example, pulling the information for 30 months of operation for a vCenter with 1200 machines took a couple of hours, which to me is more than acceptable. The headline report that Embotics shows off as being a fast one to generate is one showing the number of deployed VMs over time, which is a handy way of illustrating potential sprawl.
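As a rough illustration of what that headline sprawl report boils down to, here’s a minimal sketch: take each VM’s provisioning date and turn it into a cumulative count per month. The dates and field names are invented for the example, not the V-Commander schema:

```python
from collections import Counter
from datetime import date

# Hypothetical inventory records: each VM with its provisioning date.
vm_provision_dates = [
    date(2010, 7, 1), date(2010, 7, 15),
    date(2010, 8, 3), date(2010, 9, 20), date(2010, 9, 21),
]

def vms_over_time(provision_dates):
    """Cumulative count of deployed VMs per month - the shape of a
    'VM sprawl' report."""
    per_month = Counter(d.strftime("%Y-%m") for d in provision_dates)
    total, series = 0, {}
    for month in sorted(per_month):
        total += per_month[month]
        series[month] = total
    return series

# vms_over_time(vm_provision_dates)
# -> {"2010-07": 2, "2010-08": 3, "2010-09": 5}
```

A steadily climbing curve with few decommissions is exactly the sprawl signal the report is designed to surface.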


The next key thing that V-Commander does is provide some more flexible metadata about a virtual machine. Entry of this data can be enforced by policy – for example, you might want to say that all machines must have an end-of-use or set review date before they can be deployed. This really enforces the mantra of a cradle-to-grave lifecycle management application: the VM is tracked from its provisioning, through its working life and finally through the decommission phase. Virtual machine templates can be tracked in the same way as machines themselves – this sounds like an appealing way of ensuring you are not trying to deploy a machine from an old template. What is interesting is that the metadata for an object can come in from other 3rd parties, so there is potential to track patching / antivirus status, should the appropriate integration be available.


Policy enforcement is real time, so, for example, even if I attempted to power on a VM via an rCLI command that V-Commander policies would not allow to be powered on, the product is fast enough to power it back off again before it left the BIOS. In addition, an alert would be generated for the rogue activity.
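Conceptually, that enforcement loop is simple: watch for the event, check it against policy, and undo it if it’s not allowed. The sketch below is purely illustrative of the idea – the event shape and callbacks are my own invention, not Embotics code:

```python
# Illustrative sketch of real-time policy enforcement: react to a
# power-on event, and if policy forbids it, power the VM off and alert.
def enforce(event, approved_vms, power_off, alert):
    """event: {'vm': name, 'action': 'power_on'}.
    power_off and alert are callbacks standing in for the actions the
    management platform would take against vCenter."""
    if event["action"] == "power_on" and event["vm"] not in approved_vms:
        power_off(event["vm"])
        alert(f"Rogue power-on blocked: {event['vm']}")
        return "blocked"
    return "allowed"
```

The point of doing this against the event stream, rather than on a polling cycle, is precisely the speed described above: the correction lands before the guest has finished booting.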

The web GUI of the product splits into two main views – in addition to the administrator’s view, there is also a “self service portal”. I put this in quotes for the very good reason that other self-service portals currently hitting the market are more about self-provisioning. At this point in time the product does not provide self-provisioning, but it is thought to be a high priority for the 4.0 release. What the portal does allow is very fine-grained control that could be passed directly to VM owners without requiring any underlying access to vCenter, which is a feature that has some legs. They can currently request a machine, complete metadata and manage specific groups of machines within an easy-to-use interface.


It is also possible to pull the data from V-Commander into the VI Client via a plugin – this is definitely aimed at the administrator rather than the VM owner.



Automation is the key here and there are many areas where the product highlights that very well. While there is a degree of automation currently within the product, I think the next version will sink or swim on how well that ability is provided. For example, when it comes to rightsizing a virtual machine, identifying those machines that may need a CPU added or removed is great; being able to update the hardware on those machines automatically is what would actually get used, particularly in a large environment. Smaller shops may have a better “gut feeling” for their VMs, and hence will quite possibly manually tune the workloads more often. The product doesn’t have a whole lot in terms of analytics of virtual machine performance – the capacity management policies are pretty simple metrics at the moment, so it’s certainly another area with potential to put that policy-based automation engine to use.

V-Commander is slated to support Hyper-V in the 3.7 release, which is out any time now. I shall be interested to see how it will interact with the Self Service Portal in the upcoming versions of SC:VMM. From what I’ve seen of the product, it could sit quite neatly behind the scenes of your <insert self service portal product here> and provide some of the policy-based lifecycle management – all it would need is a hook in from that front end so that those policies can be selected accordingly.

You get a lot of product for your money – which, depending on how you want to spend it, could cost you a fixed fee + maintenance, or an annual “rental” fee. I’ve been weighing up the pros and cons of each licensing model and it looks like the subscription-based model is the easier one to justify. It also means that should there be a significant change in the way you run your infrastructure, you won’t be left holding licences that you’ve paid for but can’t really use.
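To make that trade-off concrete, here’s a back-of-the-envelope comparison of the two models. All of the figures are made up for illustration – plug in the real quotes you’re given:

```python
def cumulative_cost(years, upfront=0, annual=0):
    """Total spend after a number of years for a licensing model:
    perpetual = upfront licence + yearly maintenance;
    subscription = no upfront cost, yearly rental only."""
    return upfront + annual * years

# Hypothetical figures: 10k perpetual licence + 2k/yr maintenance,
# versus a 4k/yr subscription.
perpetual = [cumulative_cost(y, upfront=10_000, annual=2_000) for y in range(1, 6)]
subscription = [cumulative_cost(y, annual=4_000) for y in range(1, 6)]
```

With these invented numbers the subscription only catches up with the perpetual licence in year five – which is why flexibility, rather than raw cost, tends to be what decides it.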


So is this the only management software you’ll ever need? At the moment, no it isn’t. That said, it’s got some really strong features which, aligned with a good service management strategy, could help bring your virtual infrastructure in line with the rest of your business.

NB. I’ve just had some clarification on the release schedule for Hyper-V support:

“Given priorities and customer feedback (lower than expected adoption rates of Hyper-V), we decided to do only an internal release of Hyper-V (Alpha) with 3.7 (basic plumbing), with a GA version of Hyper-V coming in the first half of 2011. At the beginning of 2011 we will begin working with early adopters on beta testing.”

If you have a Hyper-V environment and would like to take advantage of the Embotics product, I’m sure they would be keen to hear from you.

Reading , wRighting and Recording – Measure how your applications hit your disks!

I’ve spent the last week thinking more about storage than I usually would, particularly in the light of some of the conversations I’ve been having over Tech Field Day with the other delegates & sponsors, who have had varying levels of interest & expertise in the storage world. If, like me, you have a basic appreciation of storage but want to get in that little bit deeper, a good primer would be Joe Onisick’s storage protocols guide.

Admins working in smaller shops probably have a little closer control over the storage they buy, as they are likely to be the ones specifying, configuring and crying over it when it goes wrong. One of the cons of working for a large enterprise is that the storage team tends to be separate – they guard their skills and disk shelves quite closely, sometimes a little too closely – I do wonder if their school reports used to say “does not play well with others”. The SAN is seen as a bit of a black box by the rest of the department, and generally, as long as the required capacity is available to someone when they ask for it, be it a LUN or a VMware datastore, everyone is happy to let them get on with it.

As soon as there is a performance issue, however, that happy boat starts to rock. The storage team starts to get defensive, casting forth whitepapers & best practice guides as if they were a World of Warcraft character making a last stand. At some point you may well find that you hit the underlying performance limit of the SAN, no matter how well tuned it is. You are then left in a bit of a quandary about what to do; in the worst case you have to bite that bullet and move the application which looked like the lowest of the low-hanging fruit back onto a physical server with direct-attached storage, where it’ll smugly idle at 5% utilisation for the rest of its life, forever drawing reproachful looks when you walk past it in the datacenter.

How do you avoid the sorry tale above? In a nutshell: “Know your workload!” When you start to map what your applications are actually using, you can start to size your environment accordingly. One of the bigger shocks that I’ve come across when doing such an exercise is a much heavier proportion of writes than the industry would have us expect. This causes a big problem for storage vendors who rely on flash-based cache to hit their headline performance figures. When reading from a cache, of course, the performance will be great, but under a heavy write-intensive load the performance of the underlying disk starts to be exposed, and it comes down to the number and speed of spindles. Running a system that uses intelligent tiering to write hot blocks to the fastest tier, then cascade them down the array as they cool, could help in this instance. Depending on your preference for file- or block-level storage, there are a number of vendors who could help you with this, for example Avere Systems, 3PAR or the next generation of EMC’s FAST technology.
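Getting that read/write split doesn’t need anything exotic: operation counters from perfmon, esxtop or iostat, sampled over a representative period, will do. As a minimal sketch, this works out the write proportion per disk from raw counters – the sample numbers are invented:

```python
def write_ratio(reads, writes):
    """Fraction of I/O operations that are writes, from raw counters
    sampled over the same interval (e.g. from iostat or perfmon)."""
    total = reads + writes
    if total == 0:
        return 0.0
    return writes / total

# Invented samples: (read ops, write ops) gathered over one interval.
samples = {"db_disk": (12_000, 28_000), "web_disk": (9_000, 1_000)}
ratios = {disk: write_ratio(r, w) for disk, (r, w) in samples.items()}
```

The arithmetic is trivial; the habit is the point – measure each application’s disks before believing any cache-backed headline figure, because a 70%-write workload behaves nothing like the read-heavy mix the datasheets assume.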

At Tech Field Day, NetApp, VMware and Cisco presented their FlexPod solution for a scalable and secure multi-tenant virtualised infrastructure. If you’d like to watch the recording of the presentation, it’s available here. What would appear to differentiate the FlexPod from other products is that it is not a black-box device designed to drop into a data centre to provide X number of VMs, where, when you have X+1 VMs, you just go out and buy another device.

While you can approach a VAR and order a FlexPod as a single unit, the design and architecture is what makes it a “FlexPod” – a single bill of materials that can be put together to give a known configuration. The idea being that it offers greater agility of design – for example, using a NetApp VServer head to present storage from another vendor to the solution.

To me , this seems a little bit like buying a kit car.

You get a known design and a list of components you have to source – although the design may well recommend where you source the components. Sometimes you can get them part-built or pre-built, but if you want to run it with a different engine, you can drop one in should you so desire.


The VBlock from the VCE guys is a different kettle of fish – it’s not a design guide, it’s a product. You choose the VBlock that suits the size of deployment you want to do, order it, and sit back and wait for the ready-built solution to arrive on the back of a lorry (truck, to our US friends 😉 ). This is like ordering a car from a dealership.


Of course, you could just go to any reseller and buy a bunch of servers, network & storage hardware and then install ESX on it. The stack vendors might compare this to trying to hand-cast your car from a single block of metal!


At the moment, many of us who can already design a solution from scratch are at that hand-casting level, and while I won’t deny we’ve been through a few pain points, we’ve usually been able to fix them. It’s part of the skill that keeps us employed. By going for an “off the shelf” product, the pain of that part of a system design is divorced from the solution, and perhaps it would allow focus on what may be the next part of the design, at the service and application level – don’t worry about building a car, worry about driving it! If you need a car to drive to work and do the weekly shopping in, you buy one from a dealership – but if you have a specific need, then you may have to get into the workshop and build a car that meets those needs.

When a prebuilt solution develops a problem that requires support, the offerings from the major vendors seem to differ a little. If you have a VBlock, you have one throat to choke (presumably not your own – it’s only a computer problem, don’t let it get to you 😉 ) and one number to call. They will let the engineers from the different divisions fight it out and fix your problem, which is ultimately the only thing of concern to you as an owner.

The situation with a FlexPod seems a little less intuitive. As it’s not a single SKU, you would require a separate support contract with each vendor (of course, this may be marshalled by the VAR you purchase through). You would initiate contact with the vendor of your choice – they then have a channel under the skin to work with the engineering functions of the other partners at the network, storage, compute & hypervisor arms as required. I would like to think this does not mean that the buck gets passed around for a couple of rounds before anyone takes ownership of the problem, but I’ve yet to hear of anyone requiring this level of support. If you have, and had a positive or negative experience, please get in contact.

If you have “rolled your own” solution, then support is up to you! Make sure that you have a similar SLA across the stack, or you could find yourself in a situation where you have a very fast response from your hypervisor people, but when they work out it’s your storage at fault, they might make you wait till the next day / end of the week. If this does happen to you, then I’m sure you’ll have plenty of time to clear your desk…



I’ve almost recovered from my hectic week of jet-setting for this year, starting with the VCAP-DCD beta exam in Amsterdam and culminating in a few days of visiting vendors for talks and roundtables in Silicon Valley. It was my first visit to the west coast, so I was initially star-struck by it all – seeing names you only ever see as a URL on actual buildings really brings home how close you are to the technology, and it’s not hard to get caught up in the buzz of it – I lost count of the number of startup ideas I heard over the course of the event!

For those of you who haven’t heard of the Tech Field Day concept before, here is a brief guide. Following on from a concept launched by HP, the field day brings a number of delegates from the user community together with a vendor or vendors for a session that should be a little bit more in depth than your average marketing pitch. The delegates are not there to buy anything, and are in no way obliged to write about their experiences, although food & drink, travel & accommodation expenses are covered by the sponsoring vendors.

This particular event marked a new direction for TFD in that it was streamed live over the web. This potentially changed things in a couple of ways – the cameras were far from hidden, and I wonder if the fact that they were being broadcast affected some people’s candour; in a couple of circumstances the sponsors were prepared to say things off camera that they were not prepared to say while the cameras were rolling. That said, the greater audience did mean that a few questions were asked that might not have been brought up had someone watching the stream not mentioned them on Twitter. I would like to think that I was equally honest on and off camera!

I think the event is possibly better suited to the smaller vendors with a less refined marketing function – of the larger vendors that we saw, the sessions felt a little pre-canned, with PowerPoint hitting critical mass at one particular site. Making use of an “Executive Briefing Centre”, while it gives you access to nice comfy rooms with wireless internet access, does nudge conversations towards that more marketing side of things. Just using a regular conference room facilitated a more in-depth discussion and two-way communication. Perhaps there is a case for presentations to be done “in the round”, to use a theatrical example, with delegates sitting in a “doughnut” around the presenter.

Presenters that had a real passion for their product held the audience much better, a prime example of which was Dave Hitz, founder of NetApp. He was only booked in for a 15-minute slot, but stayed for most of the 4-hour session, which is a lot of time to dedicate for a guy in his position. Outside of his own slides he was active in the discussions around the topics. It was a shame he wasn’t able to stay for lunch, where I believe the best dialogue with the NetApp guys occurred.

In my next few blog posts I’m going to try and write about subjects that came up during the sessions, rather than a summary of each session, which you would be better off getting from watching the excellent recordings made by the PrimeImage Media guys.


For those that missed it, have a look at the following video from the day (my wonderful piece to camera is at about 1:41).

Tech Field Day 4 – Day 1 – NetApp 2 from Stephen Foskett on Vimeo.


One last thing – you may well have noticed my fledgling upper-lip furniture – I’m growing a moustache this month as part of Movember, donating my face to men’s health. If you would like to donate to help men who have problems growing good facial hair like myself, then my MoSpace page is at

I’m currently sat in a lounge at Schiphol Airport trying in vain to get onto the wireless network, even offering to pay, but to no avail. Thankfully, due to the wonders of Windows Live Writer, I can rant now and upload later!

As you may have guessed, the reason I’m sat here is that I took the beta exam for the VMware Certified Advanced Professional – Datacenter Design certification. When I was invited to take the exam, the list of dates was pretty short, and in order not to clash with my outbound trip to Tech Field Day tomorrow I had to sit it today – but alas, there were no seats available at the London test centre.

I’d almost given up hope of being able to sit the beta when I noticed that the Global Knowledge test centre in Amsterdam had a plethora of slots available, so I checked some prices with easyJet and realised it’s not much more expensive to travel from Milton Keynes to London at peak time than it is to travel from Milton Keynes to Amsterdam! A plan was rapidly forming, which led to having to get up at 4:30 am this morning to jump on a flight.

As I get older, I get earlier – this time arriving two hours before my four-hour exam was due to start; thankfully the nice lady at the desk let me start early. I’d been doing my last-minute revision on the flight and at the airport, so there really wasn’t any point delaying the inevitable!

On to the exam itself: I’m restricted by NDA as to how much I can say beyond what has already been released, which is that the exam consists of a …. number of questions, split into three types – multiple choice, drag & drop, and design/Visio(ish). In contrast to the DCA exam, this felt much more like an extended VCP test, and I suspect I got the whole question deck thrown at me. I’m going to take a wild guess and assume the live exam would consist of a subset of the questions posed.

@hany_micheal was the first tweep I noticed to have taken the beta, and his feedback was along the lines of having problems finishing the exam due to the amount of reading. I suspect that not having English as a first language didn’t help there. He is right, there is a lot to read, but I felt the skill was in working out which bits of the text were relevant. If you read the same exhibits over and over again from start to finish, I can see how time would be a problem. Of course, if you skim-read them too much then you may well miss a key item, which I think I may well have done on a number of occasions.

I completed the question set with about 35 minutes to spare, which I felt was plenty of time to go back and check any answers & add additional comments (as beta exam takers are often encouraged to do); however, when I got to the end, the only option was to end the exam. No review stage meant that a) I was not able to add additional feedback and b) a couple of questions that I had flipped through, intending to come back to if I’d had more time, went unanswered. I don’t know if this was just a beta “feature” or not.

In terms of “features” I felt the exam was pretty good – it certainly didn’t have any of the technical challenges that the DCA exam had. The design interface was actually pretty good to use once you had got the hang of it, though it did highlight my lack of visio-diagram-making-pretty skills!

I recognised a few faces coming into the exam room as I left – notably Duncan Epping of Yellow Bricks fame & also Frank Denneman of VMware – so I look forward to seeing how their feedback compares.



After the successful release of the Capacity Management Suite product at VMworld, it’s all been pretty quiet on the VKernel front, which usually means they are up to something. In addition to coding away like the clever chaps they are, they’ve also been growing the company – always a handy thing to do if you’d like to put food on the table. It’s been a bumper year and a record quarter for them, with the key metric of their client sizes continuing to grow, showing that people are taking the problem of optimisation, planning & chargeback seriously. When I was invited onto a call with Bryan Semple, CMO of VKernel, last week, I was looking forward to something new. Little did I know that I’d actually seen a sneak peek of it back in July with the Chargeback 2.0 release.


One of the key features within the new version of the chargeback product is that it supports chargeback for environments running on Microsoft’s Hyper-V platform, and specifically support for the Virtual Machine Manager Self Service Portal toolkit (MSVMMSSP). This allows the creation of self-service portals that not only provision machines according to a quota, but can also collect metrics for static or utilisation-based chargeback of those machines. This becomes increasingly relevant as enterprises move towards a “cloud” model (presumably private, with Hyper-V, at the moment), and VKernel has been selected as the primary chargeback vendor. Other partners providing support for the toolkit include IBM, Dell, EMC, NetApp and HP.
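As an illustration of the difference between the two charging styles, here’s a toy calculation – the rates and usage figures are invented for the example and are not VKernel’s actual model:

```python
def static_charge(allocated_gb_ram, rate_per_gb):
    """Static chargeback: pay for what is allocated, used or not."""
    return allocated_gb_ram * rate_per_gb

def utilisation_charge(sampled_gb_ram, rate_per_gb):
    """Utilisation-based chargeback: pay for average measured usage
    over the billing period's samples."""
    return (sum(sampled_gb_ram) / len(sampled_gb_ram)) * rate_per_gb

# Invented example: a VM allocated 8 GB RAM at 5 currency units/GB/month,
# but averaging only 2 GB of actual use across the period's samples.
fixed = static_charge(8, 5.0)                       # charged on allocation
metered = utilisation_charge([1.5, 2.0, 2.5], 5.0)  # charged on usage
```

The gap between the two numbers is exactly why utilisation-based billing tends to drive rightsizing on its own: oversized VMs stop being free.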


OK, so I almost went two paragraphs without using the “C” word – it could have been a lot worse! When looking at the kind of product that VKernel offers from a cloud provider’s perspective, the importance of the three sub-products (Capacity Analysis, Optimisation & Chargeback) gets juggled around a bit. A service provider doesn’t really care as much about VM rightsizing, as the end users are going to pay for it either way. A public cloud is also going to be looking at capacity from a slightly different point of view, so while it’s important, I would imagine they may well use a different toolset.


VKernel has integrated with Microsoft’s “cloud” product, but what will it do with VMware beyond the existing integrations? I would suspect they are keeping a very careful eye on the vCloud Director API and how they can best plug into it – for example, to track the costs of a vApp in a hybrid cloud situation as it moves from the private to the public datacenter.

…and I still don’t own an iPad. I’m trying to work out if I *really* need a tablet right now or whether it’s just an exercise in e-manhood-waving at meetings – after all, it’s the content you create, not what you create it on, that counts!


As Stephen from Gestalt recently blogged, there isn’t much out there that can take on the iPad when it comes to functionality. I was passing a local electronics outlet today and noticed they actually had a couple of non-iPads out on demo, so I thought I’d try and get a little hands-on with them.


The first was the Toshiba Folio 100 tablet – it’s got a 10” capacitive screen and runs Android 2.2. I have to admit to being an Android fan; I had tried one of the early Chinese-made tablets running 1.6 on a 7” resistive screen and was hugely underwhelmed. The Toshiba wasn’t that much better. The screen still required a touch heavier than a mason’s chisel to actually get anything to respond, and the “touch sensitive” buttons appeared to operate by committee… of 1970s British Leyland workers (i.e. infrequently). When a member of staff noticed I was having a play around with it, he came over to demo a few features, but the device crashed when he put a USB key into it. Not the best demo in the world.

Next to the Toshiba was a Samsung Galaxy Tab – from a construction point of view this looked really well made, other than the fact that the designers seemed to have scaled the plans down a bit. It really isn’t that much bigger than my HD2 phone. The interface looked slightly different to the regular Android screen I’m used to, and while I could have operated the keyboard with my thumbs, it really wouldn’t have been worth the £500 investment.

I guess the smart money is just going to have to wait until Android 3.0 for a capable tablet experience. I will instead concentrate on producing some great content from the meetings and presentations I’ll get to take part in at San Jose with the rest of the #TechFieldDay team.


And finally… because it’s Friday, it’s time for a lolcat for that brief giggle before you go home for the weekend. The model for this belongs to Mike Laverick of fame. I present to you: Molly.