Category: VMworld



Everyone who is anyone knows that the social highlight of the VMworld conference is the Veeam Party – it gets better and better every year, and this year's in Copenhagen is no exception.

Numbers are limited, so it's an invite-only gig – but if you are going to Copenhagen and haven't received an invite yet, don't give up hope…


I am shamelessly self-promoting myself on Twitter to break the magic 1,000-follower barrier – somehow a viral rumour started that I would wear a mankini at the Veeam Party if this happened. This plan has been vetoed by both of my managers – domestic and employer (thankfully) – so I'll be saving you all a fortune in therapist fees!


However, if I do hit 1,000 followers before the Veeam Party, I will choose one of the new followers at random and they will get an invite to join myself, the Veeam team and a host of VMworld rockstars at an exclusive venue in Copenhagen for a great night out! I will also be making a donation to a worthy & geeky cause (the vCommunity Trust).


Admin note: you must be able to make it to Copenhagen on the 18th for the party – chances are you will be at VMworld, but if not, I'm not paying for your transport 🙂


So for your chance to join in the fun (and find out what I'm up to), all you have to do is follow @chrisdearden on Twitter, and if I hit 1,000 before the 18th I'll choose a winner and get in touch!

I got wind of a new project launched by Veeam from one of the many, many, many tweets that flooded my TweetDeck during VMworld US week – when my fellow vExpert, blogger & colleague Rick Vanover hinted that Veeam was due to launch another free community resource, I was keen to find out more!


Having made the transition to the vendor world, finding content as a blogger can be a little bit more of a search. For some reason, many of Veeam's competitors don't seem to want to give me a sneak peek of their products 😉 – however, in this case the product in question isn't one that Veeam will be selling!


When we shifted into virtualisation, many of us from the physical server world had to make a little bit of a leap of faith into the new mindset around virtualisation; now that we've made it, it's almost second nature to us. If you cast your mind back to those days of 7U file servers, imagine how alien the concept would have seemed that they could be represented as a handful of files running on a single half-height blade. Fast forward that concept to today, and many people have yet to make a similar leap of faith when it comes to image-based backup of VMs.


The Backup Academy was developed by Veeam to provide administrators with the foundations and fundamentals of virtual-machine-level backups, no matter whose solution you choose. Veeam are by no means the first vendor to produce vendor-neutral training – EMC, for example, paved the way with their cloud certifications last year.

The site consists of a series of videos produced by well-known community contributors and trainers, such as David Davis, Eric Siebert & Greg Shields. The Academy professors will be an ever-growing list of subject matter experts on the backup and management of virtual machines.

Users of the site will be able to take an exam based on the content and even get a certificate for passing. I personally see the Academy as a great way for current backup admins & virtualisation specialists to move to the next level – now that you have applied game-changing strategies to your production infrastructure, why not do the same for your backups?


To find out more, head to


Details have just been released for the next meeting of the London VMware User Group. In the few years since I started attending, this event has continually got better and better, with May's meeting blossoming into a full day's event with a couple of different tracks and two labs hosted by COLT, based around consumption of resources with vCloud Director & administering vCloud Director. If these labs are up to the quality of those offered at VMworld Europe, then it'll be worth taking the day off just for those!

Did I mention this is all free? After the glow of the projector bulb has died down, the event concludes with a social reception at a nearby pub to give you a chance to meet, greet and drink with like-minded individuals who actually want to listen to you talk about virtualisation all night (my usual drinking buddies tend to fall asleep after the first hour or so!).


Head over to and register! Follow #LonVMUG on Twitter for updates and tweets from other attendees.

[Centrix Software logo]

I was invited to a call with Rob Tribe from Centrix Software last week – following advice from marketing guru & London VMUG ring-mistress Jane Rimmer for vendors to get in touch with bloggers. Fine advice if you ask me 🙂

I was aware of the existence of Centrix, having seen their booth at VMworld Europe, but I'm sorry to say I didn't get a chance to speak to them there – so aside from a brief look at their site, the product was new to me.

We started off with a brief look at how the company came to be: it grew from a number of VMware staff who recognised the challenges of taking the same approach to a VDI implementation as to a server consolidation project. When an application is installed on a server, there is a fairly good chance that it's going to be used. When you have a VDI image with *every* application that's been put into the requirement specification for that desktop golden image, it's going to a) get quite bloated, b) require some considerable licensing, and c) possibly not get all that heavily utilised.


Knowing what is currently deployed in your environment is not a new concept by any means. There are armfuls of packages available to collect an inventory from your clients, ranging from the free to the costly and across many different platforms. To get the best from an inventory, many of these applications will deploy an agent to the endpoint in question.


Centrix can deploy its own agent, or take a feed from your existing systems management package and apply some nifty analytics to it in order to give you a more accurate picture of the environment. Currently the key metric is executable start and stop times. This of course gives you the best kind of data when your environment is rich in fat clients or installed software, and gives you a true measure of not only total usage, but of what times of day those products are used, enabling you to build up a map of utilisation across your estate.
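To make that idea concrete, here is a minimal sketch of how executable start/stop events can roll up into a time-of-day utilisation map. The event records, executable names and timestamps are all invented for illustration – the real Centrix schema is far richer than this.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical agent records: (executable, start, stop).
events = [
    ("winword.exe",  datetime(2011, 3, 1, 9, 5),  datetime(2011, 3, 1, 11, 40)),
    ("winword.exe",  datetime(2011, 3, 1, 14, 0), datetime(2011, 3, 1, 15, 30)),
    ("powerpnt.exe", datetime(2011, 3, 1, 16, 0), datetime(2011, 3, 1, 16, 20)),
]

def usage_by_hour(events):
    """Minutes of use per (executable, hour-of-day) bucket."""
    buckets = defaultdict(float)
    for exe, start, stop in events:
        t = start
        while t < stop:
            # Split each run at hour boundaries so minutes land in the right bucket
            hour_end = t.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
            span_end = min(hour_end, stop)
            buckets[(exe, t.hour)] += (span_end - t).total_seconds() / 60
            t = span_end
    return dict(buckets)
```

Summing those buckets across the whole estate is what gives you the concurrency and time-of-day picture described above.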


This kind of information would be of great help when planning your VDI environment. Not only will you know about the concurrency of the current landscape, but also which applications are most frequently used. Planning which applications should be a core part of your master images, and which ones can be deployed via an application virtualisation layer, would be made a lot smoother.


Having been given an overview of the product, I was looking forward to seeing a bit more under the hood of how this “workspace intelligence” is achieved.


Rob started with the raw data as shown below – this isn't anything massively new to write home about, but it's interesting to know that the BIOS date of a physical machine is considered a reasonable metric for the age of the deployment, given that few machines have a BIOS upgrade after they have been deployed to a user.




The unique users field is also an indication of boxes that have not been logged into during a given monitoring period. This data is of less importance for servers – after all, if a server has remained functional without anyone having to log into it, that's definitely a good thing! An unused workstation, on the other hand, could be put onto a watch list of systems to decommission.
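A decommission watch list like that is simple to derive once you have per-device logon data. The inventory rows, names and the 90-day window below are all assumptions for the sketch – Centrix's "unique users" field serves a similar purpose in practice.

```python
from datetime import date, timedelta

# Invented inventory rows with each device's last interactive logon.
inventory = [
    {"name": "WKS-001", "type": "workstation", "last_logon": date(2010, 11, 1)},
    {"name": "WKS-002", "type": "workstation", "last_logon": date(2010, 6, 3)},
    {"name": "SRV-001", "type": "server",      "last_logon": date(2010, 1, 15)},
]

def decommission_watch_list(inventory, today, idle_days=90):
    """Workstations (servers are exempt) with no logon inside the window."""
    cutoff = today - timedelta(days=idle_days)
    return [d["name"] for d in inventory
            if d["type"] == "workstation" and d["last_logon"] < cutoff]
```

Note the server exemption: an unvisited server is healthy, an unvisited workstation is a candidate for reclamation.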

For each device you can drill down to further information, picking up installed hardware and other common metrics.




So that's the raw data – how do you go about presenting it? The example graph below shows the number of applications a given workstation is running over the capture period.



We can look at this data in a number of ways to help build up our application “map”. For example, looking at some metrics of MS Office utilisation…



Based on the above data, would you deploy PowerPoint to your VDI base image? The number of times an application is opened doesn't give you much of a picture of how heavy-hitting that application is. For this metric, Centrix have elected to use average CPU time (rather than % utilisation, given the heterogeneous nature of CPUs across the estate).
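The reasoning behind that choice can be shown in a few lines: raw CPU-seconds per launch compare sensibly across machines with very different processors, where a percentage would not. The sample figures below are made up purely to illustrate the aggregation.

```python
# Hypothetical per-launch samples of (application, CPU-seconds consumed).
samples = [
    ("excel.exe", 120.0), ("excel.exe", 300.0), ("excel.exe", 60.0),
    ("powerpnt.exe", 15.0), ("powerpnt.exe", 25.0),
]

def avg_cpu_time(samples):
    """Mean CPU-seconds per launch, per application."""
    totals, counts = {}, {}
    for app, secs in samples:
        totals[app] = totals.get(app, 0.0) + secs
        counts[app] = counts.get(app, 0) + 1
    return {app: totals[app] / counts[app] for app in totals}
```

On these toy numbers, Excel averages 160 CPU-seconds per launch against PowerPoint's 20 – a much better signal of "heavy-hitting" than launch counts alone.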




In addition to looking at software utilisation across your workstations, the same details can be used to help with licence management. Rob was clear to point out that Centrix isn't a licence management product, but it could certainly help make some decisions around the deployment of per-instance licensed software.




With the main features of the product suitably demonstrated, we talked a little about the product itself. It is a regular two-tier application, running on Windows with a Microsoft SQL Server back-end. We didn't have too much time to talk about scaling the product up to enterprise space, but I'd like to see how it would cope with estates of over 40 or 50,000 workstations.

The agent itself is pretty streamlined – they have managed to get it down to a 500 KB install, which seems a small enough footprint, though I'd like to see what impact an agent actively collecting would have on a workstation's resources.

Out of the box, the product ships with 40 different reports on the raw data, to enable you to pull out the common detail with ease. While I don't generally harp on too much about vNext features, version 5 of the product, due for launch early in the new year, will feature a community reporting portal – hopefully along the lines of Thwack, the portal from SolarWinds that enables content exchange between users of the Orion products.

I think the product is quite niche, and the sell of its features to budget holders isn't necessarily about the technology/cost side of the product (approx. real cost of £20/desktop) but about the compliance/political side – stakeholders have a tendency to object to user management via IT, and to the perceived “Big Brother” feeling that inventory/software-metering agents tend to invoke in a user population.

I look forward to seeing how the product develops, with v5 proposed to go beyond analysis of thick-client usage into looking at how thin-client applications accessed via a browser are used. Matching this kind of data up with traditional monitoring data from the application back-end would be most interesting.



After the successful release of the Capacity Management Suite at VMworld, it's all been pretty quiet on the VKernel front, which usually means they are up to something. In addition to coding away like the clever chaps they are, they've also been growing the company – always a handy thing to do if you'd like to put food on the table. It's been a bumper year and a record quarter for them, with the key metric of their client sizes continuing to grow, showing that people are taking the problem of optimisation planning & chargeback seriously. When I was invited onto a call with Bryan Semple, CMO of VKernel, last week, I was looking forward to something new. Little did I know that I'd actually seen a sneak peek of it back in July with the Chargeback 2.0 release.


One of the key features within the new version of the chargeback product is that it supports chargeback for environments running on Microsoft's Hyper-V platform, and specifically support for the Virtual Machine Manager Self-Service Portal Toolkit (MSVMMSSP). This allows the creation of self-service portals not only to provision machines according to a quota, but also to collect metrics for static or utilisation-based chargeback of those machines. This becomes increasingly relevant as enterprises move towards a “cloud” model (presumably private, with Hyper-V, at the moment). VKernel has been selected as the primary chargeback vendor for the toolkit; other partners providing support include IBM, Dell, EMC, NetApp and HP.
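The difference between static and utilisation-based chargeback is easy to show with a toy model. The rates, VM shape and utilisation figures below are entirely made up – VKernel's real rate model is far richer – but the shape of the two calculations is the point.

```python
# Assumed hourly rates, purely for illustration.
RATES = {"vcpu_hour": 0.05, "gb_ram_hour": 0.02}

def static_charge(vm, hours):
    """Bill the allocated footprint, whether it is used or not."""
    return hours * (vm["vcpus"] * RATES["vcpu_hour"]
                    + vm["ram_gb"] * RATES["gb_ram_hour"])

def metered_charge(vm, hours, cpu_util, ram_util):
    """Bill only the measured fraction of the allocation actually consumed."""
    return hours * (vm["vcpus"] * cpu_util * RATES["vcpu_hour"]
                    + vm["ram_gb"] * ram_util * RATES["gb_ram_hour"])

vm = {"vcpus": 2, "ram_gb": 4}
# A mostly idle VM (10% CPU, 50% RAM) over a 720-hour month costs far less
# under utilisation-based billing than under static allocation.
```

Static billing encourages users to request small VMs; metered billing encourages them to tolerate rightsizing – which is exactly why the choice of model matters to a self-service portal.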


OK, so I almost went two paragraphs without using the “C” word – I could have been a lot worse! When looking at the kind of product that VKernel offers from a cloud-provider perspective, the relative importance of the three sub-products (Capacity Analysis, Optimisation & Chargeback) gets juggled around a bit. A service provider doesn't really care as much about VM rightsizing, as the end users are going to pay for it either way. A public cloud is also going to be looking at capacity from a slightly different point of view, so while it's important, I would imagine they may well use a different toolset.


VKernel has integrated with Microsoft's “cloud” product, but what will it do with VMware beyond the existing integrations? I would suspect they are keeping a very careful eye on the vCloud Director API and how they can best plug into it – for example, to track the costs of a vApp in a hybrid cloud situation as it moves from the private to the public datacenter.

I've finally recovered from the non-stop week that was VMworld Europe 2010. My feet no longer hurt, I've come down from the caffeine rush, and I can purchase a pint of beer for less than a week's salary.

It's been a very different experience this year for me, I think because it's the first year that I've been more active within the community and as a blogger (with the badge to prove it), so the list of people to talk to has snowballed.

[ID badge with various “pins”]

I attempted to compensate for this by extending my stay slightly, arriving on the Sunday and leaving on the Friday, giving me what I thought would be more than enough time to meet, speak to and listen to what people had to say. I think I could have been there another week and still missed someone! I was also able to spend a little time with some of my colleagues from around the world, which I'm sure will pay dividends in both the short and long terms.

I left my planner pretty open this year, having made the mistake in previous years of trying to fit too many sessions in, and instead concentrated on the things that I wouldn't be able to catch up on online a few weeks later, such as the interactions at the Solutions Exchange, bloggers' lounge & lab sessions.



There are 1,001 other reports of what people did during their time in Copenhagen, so I won't dilute the blogosphere with yet another one, but will leave you with a few things that I saw that hopefully will be worthy of some further investigation.


Cisco UCS Express – the branch-in-a-box issue is one that crops up time and time again. This could well be the perfect solution. I really want one to have a play with, and if it comes in at the price point mentioned I'll be impressed!

VAAI integration – since the 4.1 release, the major storage vendors have been working hard to get this in place and working. Some great demos of this from IBM, EMC and 3PAR.

Veeam Backup Version 5 – I usually get a little annoyed at marketing of stuff you can't actually buy yet, but with an announced release date of the 20th of October, I'll let them off. If you've managed to be on the web for more than a month without hearing about these guys, you probably don't have your router switched on.


As well as plenty of things that got me excited, a few things left me baffled, confused and a little irate. They can be summed up by the Infrastructure Center from

I'm still a little bit confused by what this actually offers over using vCenter & a decent backup product. It claims to be able to manage Xen and Hyper-V in the next release, but not under the same pane of glass. “In the next release” was a phrase used in almost every second sentence, leaving me to wonder what actually was in the current product. The guys on the stand were not native English speakers, but I have a hunch the pitch makes just as little sense in German as it did in English. Sorry to single you out, guys – perhaps you just accosted me at the wrong time 🙂


The party, as ever, was immense in its scale, though like others I felt the venue was a little on the monolithic side – the Palais at Cannes offered the multi-level, multi-room experience that added the wow factor the Copenhagen party was slightly lacking. I was impressed by the various acts, including a good old-fashioned breakdance-off. As fellow bloggers have said, it's the people that make the party, and there were plenty of opportunities for networking with the other attendees.

It's been a hectic week in Copenhagen, but as I checked my mail before I went to bed last night, I discovered that the VCAP-DCA beta exam I took back in June had actually been scored. I had previously thought the beta had been abandoned and had a voucher for a free retake as a result.

As you can imagine, I'm over the moon at passing the exam – it was the most challenging technical exam I've ever sat. This will allow me to focus more on the VCAP-DCD design exam, due for beta release quite soon.


(As a side note, I'm sure this is completely unrelated to meeting the certification team, including Jon Hall, at the Copenhagen show!)

I arrived into a very foggy Copenhagen yesterday evening after a good flight. It initially looked as if early arrivals such as myself were going to have to brave the Metro in order to make it to our hotels from the airport, but thanks to EMC's sponsorship of the airport shuttles, I was able to take a coach to the Bella Centre (even if I was one of only two on board – which goes a little against the green aims of this year!).

Once dropped off at the Bella Centre, I was able to register and get my rucksack and badge. In addition, I also got a Metro pass valid until Thursday. A short Metro journey took me from the conference to my hotel, shared with some of the VMware guys running the lab setup – I don't think I've seen such a concentration of VCDXs on public transport before 😉 I have to say, on first impressions, I really like the Copenhagen Metro. It's clean, easy to navigate and runs 24/7. I'll be making good use of it to get between the various events over the next few days. I was also able to meet up with some fellow virtualisation fanatics at Mike Laverick's tweetup to celebrate 10-10-10 (or 42 in binary).

Today looks to be a pretty busy one, even for those of us not on the Partner/VCI/Developer track – the labs have been opened up early, so I'm going to try and get a quick one in before taking a photo walk with Scott Herold.

In what seems to have become a bit of a theme on JFVI, I've been taking a peek at a recently released product: listening to what the marketing & sales ladies & gents have to say, then having a poke around with the product to see if they've been truthful (allegedly, sales & marketing people have sometimes been a little economical with the truth over the ages – I'm sure it happens much less now, but it's always good to check, don't you think?).

I have only recently become aware of the Kaviza solution since VMworld, where a number of people seemed to rate the offering pretty highly, notably winning the Best of VMworld 2010 Desktop Virtualisation award, which isn't to be sneezed at. It's also won awards from Gartner and CRN, and at the Citrix Synergy show, winning the Business Efficiency award.

That seems a fair amount of silverware for a company that launched its first product in January 2009, but being a new player in the market does not seem to have put off Citrix, who made a strategic investment in Kaviza in April of this year.

I spoke with Nigel Simpson from Kaviza to find out a little bit more. The key selling point of the VDI-in-a-box solution is cost. All too often you hear that switching to VDI does not save on CapEx – it's only in the OpEx savings that you can realise the ROI of virtualising client desktops. If you are looking at a desktop refresh then you can get that ROI, but it's not the case for every client. Kaviza aims to provide a complete VDI solution for under £350 (US$500) per desktop. That cost includes all hardware & software at the client and server end. The low cost of the software, and the fact that it's designed to sit on standalone, low-cost hypervisors using local storage, means that particularly for smaller-scale or SMB solutions you are not getting hit by the cost of additional brokers or management servers. It's also claimed to be scalable without a cluster of hypervisors, thanks to the grid architecture used by the Kaviza appliance itself.


The v3.0 release of the product adds some extra functionality to improve the end-user experience. Part of the investment from Citrix has allowed Kaviza to use the Citrix HDX technology for connecting to the client desktops. This allows what Citrix define as a “high definition” end-user experience, including improved audio-visual capabilities & WAN optimisation. This is supported in addition to the conventional RDP protocol to the client VMs.

I will freely admit that I'm a bit of a VDI virgin. While I knew a bit about the technology, my current employer hasn't until very recently seen a need for it within our environment, so I've tended to wander off for a coffee whenever someone mentioned it. At a recent London VMware User Group meeting, Stuart McHugh presented on his journey into VDI, and I was so impressed I thought I'd take a closer look. I've not had a chance to play around with View much, so I can't comment on how HDX compares to PCoIP; however, from reading other people's opinions, it seems that HDX holds its own. (Source:,289142,sid194_gci1374225,00.html)

The kMGR appliance central to VDI-in-a-box will install on either ESX or Xen, on 32- or 64-bit hardware. I'm told that Hyper-V support is due pretty soon – having the appliance sit on the free Hyper-V Server would definitely be good. It'll also run on the free version of XenServer, but sadly for VMware fans such as myself it will not currently run on the free version of ESXi – according to Kaviza, this only bumps up the projected costs by around £30 per concurrent desktop.

The proof of the pudding will always be in the eating, so rather than talk about the live demo I got from Nigel, I'll dive right into my own evaluation of the product. Kaviza claim that the product is so easy to use you can deploy an entire environment in a couple of hours. I would agree with this – even with the little snags I introduced through a minimal reading of the documentation, and a quick trip to the shops, I managed to get my first VM deploying surprisingly quickly.

A quick background on my test lab: I don't have the space, cash or a forgiving enough partner to be able to run much in the way of a full-scale setup from home, so my lab is anything I can run under VMware Workstation. Thankfully I have a pretty quick PC with an i7 quad-core CPU & 8 GB of memory – enough for a couple of ESXi hosts.

I downloaded a shiny new ESXi 4.1 ISO from VMware after a quick update to Workstation, and as ever, within a few minutes I had a black box to deploy the Kaviza appliance to. After a pretty hefty download and unpack (to just over 1 GB), the product deployed via an included OVF file. While I was waiting for the appliance to import, I started the build of what was to be my golden VM with a fresh Windows XP ISO. The kMGR appliance booted up to a pretty blank-looking Linux prompt.

As the next step in the configuration involves hitting a web management interface, I think a quick reminder at the console (“to manage this appliance, browse to …”) wouldn't have gone amiss.

I was able to grab the IP of the appliance from the VI client, so I hit the web management page to start building the Kaviza grid.


At this stage I hit the first gotcha, with a wonderful little popup that very politely explained that ESXi 4.1 was not supported, and would I like to redeploy the appliance? After the aforementioned trip to the shops to calm down, I trashed the ESXi 4.1 VM and started again with an older 4.0 ISO I had handy.

This time I was able to build the grid, providing details of the server, whether I was going to use an external database, and whether I was using vCenter. (In a production deployment, even though you would not require the advanced functionality of vCenter, I think there is a chance it would be used if you had an existing one, so that you could monitor hardware alerts etc.) Kaviza best practice states that you should put your VDI hosts into a different datacenter to avoid any naming conflicts.

With a working server, I needed to define some desktop images, so I took the little XP desktop VM I'd built in the background (please note: I did pretty much nothing to this VM other than install Windows from an ISO that had been slipstreamed with SP3) and started the process of turning it into a prepared image for desktop deployment.

The first image is built from a running VM that you could have deployed, or recently P2V'd, to the host server. I was hoping that the process would have been a little more automated than it was, and as a non-manual-reader it was not immediately obvious – I can confirm that creation of subsequent images is a much more straightforward process. At the image creation stage I became aware of the second little feature that caused a delay: the golden VM requires the installation of the Kaviza agent (this isn't automated, but it is pretty straightforward), and this agent requires version 3.5 of the .NET Framework, which took a little bit of time to download and deploy. I'm sure those of you with a more mature desktop image will most likely not hit this little snag. After testing a sysprep of the image, I was finally able to save it so that it would become an official image.

From the image, you can create templates. A template represents a set of policies wrapped around a given machine, enabling a lot of the customisation (for instance the OU that the machine will be joined to, the amount of memory it has, and which devices can map back to the end user).

This is also where you specify the size of the pool for this particular desktop – the total number of machines in the pool and the number to keep ready, pre-deployed. The refresh cycle of the desktops can also be set up: if you have a good level of user and application abstraction, then you can have a desktop refresh as soon as a user logs out. I gave this a test, and even with the very small-scale setup and tiny XP VMs I was using, I was able to keep the system pretty busy with a few test users logging in to see how quickly desktops were spawned and reclaimed. With a large-scale deployment I can see this possibly causing some issues with Active Directory if you had a particularly high turnover of machines and a long TTL on AD records.
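The Active Directory concern boils down to spotting computer accounts that outlive their rapidly recycled desktops. Here is a minimal sketch of that housekeeping check – the record layout, names and 30-day window are all invented; in practice you would read attributes such as lastLogonTimestamp or pwdLastSet from the directory itself.

```python
from datetime import datetime, timedelta

# Invented directory rows for pool desktops.
accounts = [
    {"name": "VDI-POOL-0001", "last_seen": datetime(2010, 11, 19)},
    {"name": "VDI-POOL-0002", "last_seen": datetime(2010, 9, 1)},
]

def stale_accounts(accounts, now, max_age_days=30):
    """Computer accounts not seen inside the cleanup window."""
    cutoff = now - timedelta(days=max_age_days)
    return [a["name"] for a in accounts if a["last_seen"] < cutoff]
```

With a high machine turnover, running a sweep like this regularly keeps the directory from filling up with orphaned pool accounts.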

To test the user experience, I deployed a smaller number of slightly larger XP machines and installed the optional Citrix client to see what HDX was all about. I have to admit to being pretty surprised that a remote connection to an XP session, inside a nested ESX host under Workstation, was able to play a TV show recorded on my Windows Home Server at full screen with the audio completely in sync. I would seriously consider it for the extra $30 per concurrent user licence. I understand the HDX protocol does need a proper VPN or Citrix Access Gateway to be fully available over the internet, and that the supplied Kaviza Gateway software, which publishes the Kaviza desktop over an SSL-encrypted link without the use of a VPN, is for RDP only. It's not the end of the world, but it's something to think about.

I was very impressed with the ease with which I was able to start deploying desktops – and with the simplicity of the environment needed to do so. While the product would scale up on its own, I believe there is likely to be a sweet spot beyond which a traditional VDI solution would work out cheaper. For SMB/SME/branch office/small-scale deployments, this really is an ideal solution from a cost point of view. This was of course only at the pre-proof-of-concept stage, but going to a production solution wouldn't necessarily be much harder at the infrastructure level. The same level of work would need to be done to produce the golden desktop image regardless of the choice of VDI technology. If you'd like to try the product yourself, head over to and grab a trial.
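That sweet-spot argument can be sketched with a back-of-envelope amortisation model. Only the roughly £350-per-desktop headline comes from Kaviza; the £40k of shared kit (SAN, brokers, management servers) and £250 variable cost per desktop for a traditional stack are pure assumptions chosen to illustrate the shape of the trade-off.

```python
def per_desktop_cost(fixed_infrastructure, variable_per_desktop, desktops):
    """Fixed infrastructure cost spread over the estate, plus per-seat cost."""
    return fixed_infrastructure / desktops + variable_per_desktop

def cheaper_option(desktops):
    # Traditional stack: big fixed outlay, lower per-seat cost thereafter.
    traditional = per_desktop_cost(40_000, 250, desktops)
    # Kaviza-style: negligible fixed outlay, everything scales per seat.
    kaviza_style = per_desktop_cost(0, 350, desktops)
    return "traditional" if traditional < kaviza_style else "kaviza-style"
```

At small scale the fixed kit dominates the traditional model's per-desktop price; at a few thousand seats it becomes noise, which is where the crossover sits on these toy numbers.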

DISCLOSURE : I have received no compensation and used trial software freely available on the Kaviza website to conduct the testing on this blog post.

It's less than a month to go before VMworld Europe at the Bella Centre, Copenhagen. The buzz on Twitter and the blogosphere is working its way up – possibly not to the level of the recent VMworld conference in San Francisco, but I'm sure the event will be just as good, with its many, many breakout sessions and mammoth cloud-based lab infrastructure.

This will be my third year attending the European conference, and I have to say I'm really looking forward to it – mainly as it'll be the first year where, due to my interactions within the online and user group community, I'm actually going to know people there 🙂

Given the great number of bloggers that'll be attending the show, I thought of an idea that would be a bit of fun & generate some traffic for each other's sites: a shirt exchange.


Here's how I see it working – I've had a batch of t-shirts printed up with the wonderful logo below. While I will be giving a couple away in the run-up to VMworld, I thought that, just like we often do a link exchange, we should do a low-tech version and have a shirt exchange. If you as a blogger get some shirts printed too, I'm sure we can arrange a mini swap-shop in the bloggers' lounge!


So once again, if you would like to sport one of these, then get some shirts printed up and come find me! If you'd like to do a swap but are not coming to VMworld, mail me at and we'll work something out. I'll advertise yours if you advertise mine 🙂