Tag Archive: VDI

OK, so I’ll take my tongue out of my cheek. I have never heard Xangati’s summer refresh of their performance monitoring dashboard called by its Three Letter Acronym (TLA) before, but I was lucky enough to be given a preview of the Xangati Management Dashboard (XMD) and to be shown some of the new ways in which it can gather information and metrics relevant to a virtualised desktop deployment.

When I first came across the product about 12 months ago, its main strength was in the networking information it could surface to a VI admin. By use of small appliance VMs sitting on a promiscuous port group on a virtual switch, it was able to analyse NetFlow data going in and out of a host, and this was aligned with metrics coming from Virtual Center. The product’s “TiVo”-like recording interface was able to capture what was happening to an infrastructure either side of an incident, be it a predefined threshold or an automatically derived one. Where a workload was suitably predictable for a given length of time, the application was able to create profiles for a number of metrics and record behaviour outside that particular profile. As with other products that attempt dynamic thresholding, the problem comes in the form of an environment that is not subject to a predictable workload, where it is possible to miss an alert while the software is still “learning”. It also assumes that you have a good baseline to start with: if you have a legacy issue that becomes incorporated into that profile, then it can be difficult to troubleshoot. To this extent I’m glad that more traditional static thresholds can still be put in place. When monitoring environments with vSphere version 5, there is no longer a requirement for the network flow appliances; the NetFlow data is provided directly to the main XMD appliance via a vCenter API. With a single API connection, the application is focused on much more than just the network data, allowing a VMware admin to see a wide view of the infrastructure, from the Windows process to the VMware datastore.
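The profile-based alerting described above can be sketched as a simple statistical band: train on a window of samples from a predictable period, then flag anything outside mean ± k standard deviations. This is a hypothetical illustration of the general technique (with invented numbers), not Xangati’s actual implementation:

```python
from statistics import mean, stdev

def build_profile(samples, band=3.0):
    """Derive a (low, high) acceptance band from a training window."""
    mu, sigma = mean(samples), stdev(samples)
    return (mu - band * sigma, mu + band * sigma)

def outside_profile(value, profile):
    """True if a new sample falls outside the learned band."""
    low, high = profile
    return value < low or value > high

# Train on a predictable workload (e.g. IOPS samples), then flag anomalies.
training = [48, 52, 50, 49, 51, 50, 53, 47]   # mean 50, stdev 2
profile = build_profile(training)              # -> (44.0, 56.0)
assert not outside_profile(50, profile)
assert outside_profile(120, profile)
```

The “legacy issue baked into the baseline” problem is visible here too: if the training window already contained bad samples, the band widens and genuine faults stop triggering.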


What interested me about the briefing was the level of attention being paid to VDI. I think Xangati is unique in terms of their VDI monitoring dashboard, and the latest release reinforces that. In addition to the metrics you would expect around a given virtual machine in terms of its resource consumption, Xangati have partnered with Teradici, the developers of the PCoIP protocol, in order to provide enhanced metrics at the protocol layer of a VDI connection. This offers a welcome alternative to the current method of having to utilise log analysers like Splunk.


VDI users are, in my opinion, much more sensitive to temporary performance glitches than consumers of a hosted web service. If a website is a little slow for a few seconds, people might look at their network connection or any number of alternative issues, but for a VDI consumer that desktop is their “world”, and a glitch affects every application they use. Thus when it runs poorly they are much more liable to escalate than the aforementioned web service consumers. Use of the XMD within a VDI environment allows an administrator to troubleshoot those kinds of issues (such as storage latency, or a badly configured AV policy causing excessive IO) by examining the interaction between all of the components of the VDI infrastructure, even if the problem occurred beyond the rollup frequency of a more conventional monitoring product. This is what Xangati views as one of its strengths. While I don’t think it is a product I would use for day to day monitoring of an environment (there is a lot of data onscreen, and without profiles or adequate threshold tuning it would require more interaction than most “wallboard” monitoring solutions), I can see it being deployed as a tool for deeper troubleshooting. There is also a facility that allows an end user to trigger a “recording” of the relevant metrics to their desktop while they are experiencing a problem (although if the problem is intermittent network connectivity, this could prove interesting!)


As a tool for monitoring VDI environments it certainly has some traction, notably being used to monitor the lab environments for last year’s and this year’s VMworld cloud based lab setups, as well as some good sized enterprise customers. With this success I’m a little surprised at the last part of the launch: “No Brainer” pricing. In a market where prices seem to be on an upward trend, Xangati have applied a substantial discount to theirs, with pricing for the VDI dashboard starting at $10 per desktop for up to 1000 desktops. I’m told there is an additional fee for environments larger than that. I’m no analyst, but I’d love to explore the rationale behind this. Was the product seen as too expensive (although as with many things, the list price and the price a keen customer pays can often be pretty different)? Or is this an attempt to make software pricing a little more “no nonsense”? I guess time will tell!


For more information on the XMD, and to download a free version of the new product, good for a single host, check out http://Xangati.com


Look closely at the photo above and you’ll notice something missing from a usual workstation setup: the PC itself. It’s not hidden under the desk, but in fact behind the phone.


I’ve been able to get my hands on one of the Cisco VXC 2111 Series thin clients. These clients are part of Cisco’s Virtualisation Experience Infrastructure (VXI) strategy and come in a number of flavours. The one above is designed to integrate with the 9971 phone it is attached to and, should you have a suitably juicy PoE+ setup, be powered from it too. Cisco are able to supply the clients with PCoIP or HDX firmware in order to connect to your VMware View or XenDesktop based solution.

For those without a suitable phone, there is a micro tower form factor available. I’ve only had time to build a View environment for the client to talk to, but it’s been very painless so far (short of finding a USB keyboard!) What would be great is to see some tighter integration between the phone and the client, but I don’t believe that is available at this stage. I can see this being great for the contact centre business, although the key differentiator is currently only the physical integration with the phone to reduce desktop clutter; with a more conventional thin client solution you could use softphones and eliminate the phone itself, so the benefit is possibly a little clouded. It’s almost certainly a big step in the right direction and illustrates Cisco’s commitment to desktop virtualisation. The “year of VDI” might not be here, but with big vendors getting behind it like this, it surely can’t be far away.

Centrix Software

I was invited to a call with Rob Tribe from Centrix Software last week, following advice from marketing guru and London VMUG ring-mistress Jane Rimmer for vendors to get in touch with bloggers. Fine advice if you ask me 🙂

I was aware of the existence of Centrix, having seen their booth at VMworld Europe, but I’m sorry to say I didn’t get a chance to speak to them there, so aside from a brief look at their site, it was all something new to me.

We started off with a brief look at how the company came to be: it grew from a number of VMware staff who recognised the challenges of taking the same approach to a VDI implementation as to a server consolidation project. When an application is installed on a server, there is a fairly good chance that it’s going to be used. When you have a VDI image with *every* application that’s been put into the requirement specification for that desktop golden image, it’s going to a) get quite bloated, b) require some considerable licensing, and c) possibly not get all that heavily utilised.


Knowing what is currently deployed in your environment is not a new concept by any means. There are armfuls of packages available to collect an inventory from your clients, ranging from the free to the costly, across many different platforms. To get the best from an inventory, many of these applications will deploy an agent to the endpoint in question.


Centrix can deploy its own agent, or take a feed from your existing systems management package, and apply some nifty analytics in order to give you a more accurate picture of the environment. Currently the key metric is executable start and stop times. This of course will give you the best kind of data when your environment is rich in fat clients or installed software, and will give you a true measure of not only total usage, but at what times of day those products are used, thus enabling you to build up a map of utilisation across your estate.
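As a sketch of how such a utilisation map might be assembled from start/stop events, the session can be split at clock-hour boundaries and each chunk attributed to its hour of day. The event tuples and application names here are invented for illustration, not Centrix’s actual schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def usage_by_hour(events):
    """Minutes of runtime per (application, hour-of-day) from (app, start, stop) events."""
    usage = defaultdict(float)
    for app, start, stop in events:
        t = start
        while t < stop:
            # End of the current clock hour, capped at the stop time.
            hour_end = t.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
            chunk_end = min(hour_end, stop)
            usage[(app, t.hour)] += (chunk_end - t).total_seconds() / 60
            t = chunk_end
    return dict(usage)

# Hypothetical agent output: Excel open 09:00-11:30, PowerPoint 14:00-14:20.
events = [
    ("EXCEL.EXE", datetime(2011, 9, 5, 9, 0), datetime(2011, 9, 5, 11, 30)),
    ("POWERPNT.EXE", datetime(2011, 9, 5, 14, 0), datetime(2011, 9, 5, 14, 20)),
]
```

Summing the resulting buckets across the whole estate gives the concurrency-over-time picture that the rest of the post leans on.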


This kind of information would be of great help when planning your VDI environment. Not only would you know about the concurrency of the current landscape, but also which applications are most frequently used. Planning which applications should be a core part of your master images, and which ones can be deployed via an application virtualisation layer, would be made a lot smoother.


Having been given an overview of the product, I was looking forward to seeing a bit more under the hood as to how this “workspace intelligence” was achieved.


Rob started with the raw data, as shown below. This isn’t anything massively new to write home about, but it’s interesting to learn that the BIOS date of a physical machine is considered a reasonable metric for the age of the deployment, given that few machines would have a BIOS upgrade after they have been deployed to a user.




The unique users field is also an indication of boxes that have not been logged into during a given monitoring period. This data would be of less importance for servers; after all, if a server has remained functional without anyone having to log into it, then that’s definitely a good thing! An unused workstation, however, could be put onto a watch list of systems to decommission, perhaps?
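Such a watch list is trivial to derive once the unique-user counts are in hand. A minimal sketch, with invented hostnames and field names, that deliberately skips idle servers for the reason given above:

```python
# Hypothetical inventory rows: (hostname, role, unique_users_in_period).
inventory = [
    ("LONWKS042", "workstation", 0),
    ("LONWKS043", "workstation", 3),
    ("LONSRV001", "server", 0),      # idle server: fine, leave it alone
]

def decommission_watchlist(inventory):
    """Workstations with no logins in the monitoring period."""
    return [host for host, role, users in inventory
            if role == "workstation" and users == 0]
```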

For each device you can drill down to further information, picking up installed hardware and other common metrics.




So that’s the raw data; how do you go about presenting it? The example graph below shows the number of applications a given workstation is running over the capture period.



We can look at this data in a number of ways to help build up our application “map”. For example, looking at some metrics of MS Office utilisation…



Based on the above data, would you deploy PowerPoint to your VDI base image? The number of times an application is opened doesn’t give you much of a picture of how heavy hitting that application is. For this metric, Centrix have elected to use average CPU time (rather than % utilisation, given the heterogeneous nature of CPUs across the estate).
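As a rough illustration of why launch counts alone mislead: aggregating CPU seconds per session ranks applications by how heavy they actually are, independently of how often they are opened. The session figures below are invented for the example, not Centrix’s data or calculation:

```python
# Hypothetical per-session samples: (app, cpu_seconds_consumed).
sessions = [
    ("POWERPNT.EXE", 12.0), ("POWERPNT.EXE", 8.0),   # opened often, used lightly
    ("EXCEL.EXE", 240.0), ("EXCEL.EXE", 300.0),      # genuinely heavy hitting
]

def avg_cpu_time(sessions):
    """Average CPU seconds per session for each application."""
    totals, counts = {}, {}
    for app, cpu in sessions:
        totals[app] = totals.get(app, 0.0) + cpu
        counts[app] = counts.get(app, 0) + 1
    return {app: totals[app] / counts[app] for app in totals}
```

On this data PowerPoint averages 10 CPU seconds a session against Excel’s 270, which is the kind of gap that would inform the base-image decision.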




In addition to looking at software utilisation across your workstations, the same details can be used to help with licence management. Rob was clear to point out that Centrix isn’t a licence management product, but it could certainly help make some decisions around the deployment of per-instance licensed software.




With the main features of the product suitably demonstrated, we talked a little about the product itself. It is a regular two-tier application, running on Windows with a Microsoft SQL back-end. We didn’t have too much time to talk about scaling the product up to enterprise level, but I’d like to see how it would cope with estates of over 40,000 or 50,000 workstations.

The agent itself is pretty streamlined: they have managed to get it down to a 500 KB install, which seems a small enough footprint, though I’d like to see what impact an agent actively collecting would have on a workstation’s resources.

Out of the box, the product ships with 40 different reports on the raw data, to enable you to pull out the common detail with ease. While I don’t generally harp on too much about vNext features, version 5 of the product, due for launch early in the new year, will feature a community reporting portal, hopefully along the lines of Thwack, the portal from SolarWinds that enables content exchange between users of the Orion products.

I think the product is quite niche, and the sell of its features to budget holders isn’t necessarily around the technology or cost side of the product (approx. real cost of £20 per desktop) but around the compliance and political side. Stakeholders have a tendency to object to user management via IT, and to the perceived “Big Brother” feeling that inventory and software metering agents tend to invoke with a user population.

I look forward to seeing how the product develops, with v5 proposed to go beyond analysis of thick client usage into looking at how thin client applications accessed via a browser are used. Matching this kind of data up with traditional monitoring data from the application back-end would be most interesting.


This week saw the last London VMUG of 2010. It featured not only the main session, with a mix of updates from the mothership and presentations from users and sponsoring vendors, but also a choice of two breakout sessions in the morning beforehand.

For the last three or four VMUGs, Alan Renouf of virtu-al.net (and lately vSpecialist) fame has very kindly been running a PowerShell session, graduating from a presentation to a lab, this time hosted in the “cloud” (also known as Al’s house!). I’m told by the guys who attended that it was yet another slice of command line gold.

While I enjoy PowerShell as much as the next man, the alternative breakout session being trialled this time appealed more to me: what Alaric Davies described as a bit of “beard-strokey high level strategy stuff”, namely a roundtable discussion with slightly less of a technical focus than some of the other VMUG subject matter.

As it turned out, the subject of the roundtable was VDI. It is not something I have a huge day to day exposure to, as I’ve said before, but it interests me because it represents some tough challenges for the virtualisation professional. In terms of workload numbers, a good sized virtual server estate could have 1000 workloads running, each reasonably predictable and trended. An equivalent sized VDI implementation could be tens of thousands of much less predictable workloads, raising a significantly different challenge.

In addition to the technical challenges offered by VDI, there is a far greater barrier from the finance department. Several of the delegates at the table found the case hard to justify to their management line. Not only could additional hardware need to be purchased for the server room, but potentially a whole new set of hardware for the desktop in the form of thin clients. It would seem there are potential pitfalls in other areas too, such as licensing, where if you are not careful you could end up attempting to licence your desktop twice, which is something we’d all rather avoid. One of the delegates at the table deliberately avoided the thin client route, citing the reason that should the VDI project “crash and burn”, at least he would not have to go round and buy everyone a new PC (presumably shortly followed by clearing his desk? 😉)

In addition to user contributions from the table, we were also lucky enough to have a scattering of VMware staff who were able to offer clarifications and advice when required. On a personal note, I felt they gave just the right level of participation; it would have been all too easy for them to run away with the conversation. Mike Laverick was acting as an unofficial compere for the roundtable, and I’m sure he’d have interjected if he felt things were losing sight of their goals.

If you work in virtualisation anywhere around the south east, then I really do urge you to come and take part in the London VMUG. It really is a fantastic source of knowledge, networking and occasionally beermats! 🙂





If you’d like more information on the London VMware User Group then check out the VMware Community pages or the LinkedIn Group.



In what seems to have become a bit of a theme on JFVI, I’ve been taking a peek at a recently released product: listening to what the marketing and sales ladies and gents have to say, then having a poke around with the product to see if they’ve been truthful. (Allegedly, sales and marketing people have sometimes been a little economical with the truth over the ages. I’m sure it happens much less now, but it’s always good to check, don’t you think?)

I have only recently become aware of the Kaviza solution, since VMworld, where a number of people seemed to rate the offering pretty highly, notably winning the Best of VMworld 2010 Desktop Virtualisation award, which isn’t to be sneezed at. It’s also won awards from Gartner and CRN, and at the Citrix Synergy show it won the Business Efficiency award.

That seems a fair amount of “silverware” for a company that launched its first product in January 2009, but being a new player to the market does not seem to have put off Citrix, who made a strategic investment in Kaviza in April of this year.

I spoke with Nigel Simpson from Kaviza to find out a little bit more. The key selling point of the VDI-in-a-box solution is cost. All too often you hear that switching to VDI does not save on CapEx, and that it’s only in the OpEx savings that you can realise the ROI of virtualising the client desktop. If you are looking at a desktop refresh then you can get that ROI, but that’s not the case for every client. Kaviza aims to provide a complete VDI solution for under £350 ($500 US) per desktop. That cost includes all hardware and software at the client and server end. The low cost of the software, and the fact that it’s designed to sit on standalone, low cost hypervisors using local storage, means that particularly for smaller scale or SMB solutions you are not getting hit by the cost of additional brokers or management servers. It’s also claimed to be scalable without a cluster of hypervisors, thanks to the grid architecture used by the Kaviza appliance itself.


The v3.0 release of the product adds some extra functionality to improve the end user experience. Part of the investment from Citrix has allowed Kaviza to use the Citrix HDX technology for connecting to the client desktops. This allows what Citrix define as a “high definition” end user experience, including improved audio-visual capabilities and WAN optimisation. This is supported in addition to the conventional RDP protocol to the client VMs.

I will freely admit that I’m a bit of a VDI virgin. While I knew a bit about the technology, my current employer hasn’t until very recently seen a need for it within our environment, so I’ve tended to wander off for a coffee whenever someone mentioned it. At a recent London VMware User Group meeting, Stuart McHugh presented on his journey into VDI, and I was so impressed I thought I’d take a closer look. I’ve not had a chance to play around with View much, so I can’t comment on how HDX compares to PCoIP; however, from reading other people’s opinions, it seems that HDX is as good. (source: http://searchvirtualdesktop.techtarget.com/news/article/0,289142,sid194_gci1374225,00.html)

The kMGR appliance central to VDI-in-a-box will install on either ESX or Xen, on 32 or 64 bit hardware. I’m told that Hyper-V support is due pretty soon; having the appliance sit on the free Hyper-V Server would definitely be good. It’ll also run on the free version of XenServer but, sadly for VMware fans such as myself, it will not currently run on the free versions of ESXi. According to Kaviza, this will only bump up the projected costs by around £30 per concurrent desktop.

The proof of the pudding will always be in the eating, so rather than talk about the live demo I got from Nigel, I’ll dive right into my own evaluation of the product. Kaviza claim that the product is so easy to use you can deploy an entire environment in a couple of hours. I would agree with this: even with the little snags I introduced by a minimal reading of the documentation, and a quick trip to the shops, I managed to get my first VM deploying surprisingly quickly.

A quick background on my test lab: I don’t have the space, cash or a forgiving enough partner to be able to run much in the way of a full scale setup from home, so my lab is anything I can run under VMware Workstation. Thankfully I have a pretty quick PC with an i7 quad core CPU and 8 GB of memory, so enough for a couple of ESXi hosts.

I downloaded a shiny new ESXi 4.1 ISO from VMware after a quick update to Workstation, and as ever, within a few minutes I had a black box to deploy the Kaviza appliance to. After a pretty hefty download and unpack (to just over 1 GB), the product deployed via an included OVF file. While I was waiting for the appliance to import, I started the build of what was to be my golden VM from a fresh Windows XP ISO. The kMGR appliance booted up to a pretty blank looking Linux prompt.

As the next step in configuration involves hitting a web management interface, I think a quick reminder (“to manage this appliance, browse to https://xxx.xxx.xxx.xxx”) wouldn’t have gone amiss.

I was able to grab the IP of the appliance from the VI client, so I hit the web management page to start building the Kaviza grid.


At this stage I hit the first gotcha, with a wonderful little popup that very politely explained that ESXi 4.1 was not supported, and would I like to redeploy the appliance? After the aforementioned trip to the shops to calm down, I trashed the ESXi 4.1 VM and started again with an older 4.0 ISO I had handy.

This time I was able to build the grid, providing details of the server, whether I was going to use an external database, and whether I was using vCenter. (In a production deployment, even though you would not require the advanced functionality of vCenter, I think there is a chance it would be used if you had an existing one, so that you could monitor hardware alerts etc.) Kaviza best practice states that you should put your VDI hosts into a different datacenter to avoid any naming conflicts.

With a working server, I needed to define some desktop images, so I took the little XP desktop VM I’d built in the background (please note I did pretty much nothing to this VM other than install Windows from an ISO that had been slipstreamed with SP3) and started the process of turning it into a prepared image for desktop deployment.

The first image is built from a running VM that you could have deployed or recently P2V’d to the host server. I was hoping that the process would have been a little more automated than it was, and as a non manual-reader it was not immediately obvious; I can confirm that creation of subsequent images is a much more straightforward process. At the image creation stage I became aware of the second little feature that caused a delay. The golden VM requires the installation of the Kaviza agent (this isn’t automated, but it is pretty straightforward), and this agent requires version 3.5 of the .NET Framework, which took a little bit of time to download and deploy. I’m sure those of you with a more mature desktop image will most likely not hit this little snag. After testing a sysprep of the image, I was finally able to save it so that it would become an official image.

From the image, you can create templates. Templates represent a set of policies wrapped around a given machine, enabling a lot of the customisation (for instance the OU that the machine will be joined to, the amount of memory it has, and which devices can map back to the end user).

This is also where you specify the size of the pool for this particular desktop: the total number of machines in the pool, and the number to keep ready for pre-deployment. The refresh cycle of the desktops can also be set up; if you have a good level of user and application abstraction, then you can have a desktop refresh as soon as a user logs out. I gave this a test, and even with the very small scale setup and tiny XP VMs I was using, I was able to keep the system pretty busy with a few test users logging in to see how quickly desktops were spawned and reclaimed. With large scale deployments I can see this possibly causing some issues with Active Directory, if you had a particularly high turnover of machines and a long TTL on AD records.
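The pool behaviour described above (a fixed cap plus a number of pre-deployed desktops kept ready) boils down to a small replenishment calculation. This is a hypothetical sketch of that logic, with invented parameter names, not Kaviza’s code:

```python
def desktops_to_spawn(pool_size, in_use, ready, ready_target):
    """How many fresh desktops to clone so `ready_target` stay pre-deployed.

    Never exceeds the pool cap: in_use + ready + spawned <= pool_size.
    """
    deficit = ready_target - ready          # how far below target we are
    headroom = pool_size - in_use - ready   # slots left under the cap
    return max(0, min(deficit, headroom))

# 20-desktop pool, 12 in use, 3 ready, target of 5 ready -> spawn 2 more.
```

Run on every logout (with the refreshed desktop re-entering as “ready”), this is also where the AD churn comes from: each spawn is a new machine account if the image joins the domain on deploy.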

To test the user experience, I deployed a smaller number of slightly larger XP machines and installed the optional Citrix client to see what HDX was all about. I have to admit to being pretty surprised that a remote connection to an XP session, inside a nested ESX host under Workstation, was able to play a TV show recorded on my Windows Home Server at full screen with audio completely in sync. I would seriously consider it for the extra $30 per concurrent user licence. I understand the HDX protocol does need a proper VPN or Citrix Access Gateway to be fully available over the internet, and that the supplied Kaviza Gateway software, which publishes the Kaviza desktop over an SSL encrypted link without the use of a VPN, is for RDP only. It’s not the end of the world, but it’s something to think about.

I was very impressed with the ease with which I was able to start deploying desktops, and with the simplicity of the environment needed to do so. While the product would scale up on its own, I believe there is likely to be a sweet spot beyond which a traditional VDI solution would work out cheaper. For SMB, SME, branch office or other small scale deployments, this really is an ideal solution from a cost point of view. This was of course only at the pre-proof of concept stage, but going to a production solution wouldn’t necessarily be much harder at the infrastructure level. The same level of work would need to be done to produce the golden desktop image regardless of the choice of VDI technology. If you’d like to trial the product yourself, head over to http://www.kaviza.com and grab a trial.

DISCLOSURE : I have received no compensation and used trial software freely available on the Kaviza website to conduct the testing on this blog post.