Category: Virtualisation


It looks like the last post generated some interest around the networking side of things – particularly the use of vShield firewalls (or my complete lack of them). I’ve done a little more digging and, while it wasn’t immediately obvious to a newcomer to Cloud Director, there is a way of using the vShield firewalls within a deployment – that’ll serve me right for not fully RTFMing 🙂

 

By deploying an additional network for a given vApp, I am able to connect that to the internet connection and specify some NAT & firewall rules to publish services from that application to the network. It also makes the vApp diagram look pretty.
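Out of curiosity, the same publish operation should be scriptable against the vCloud API rather than clicking through the UI. The sketch below is only that – a sketch: the endpoint paths and XML payload are hypothetical stand-ins, so treat the real vCloud API programming guide as the authority before pointing this at a live cloud.

```python
# Illustrative only: the endpoint paths and XML below are hypothetical
# stand-ins, not the real vCloud API schema - check the API programming
# guide before trying this against a live cloud.
import requests

VCD = "https://vcloud.example.com/api"  # hypothetical cloud endpoint
session = requests.Session()

# Log in with org credentials and keep the token vCD hands back.
resp = session.post(VCD + "/login", auth=("user@MyOrg", "password"))
resp.raise_for_status()
session.headers["x-vcloud-authorization"] = resp.headers["x-vcloud-authorization"]

# One DNAT rule publishing RDP on the vApp network's external IP to an
# internal VM, plus a firewall rule letting the traffic through.
services_xml = """
<NetworkServices>
  <Nat>
    <PortForward externalPort="3389" internalIp="192.168.10.10"
                 internalPort="3389" protocol="TCP"/>
  </Nat>
  <Firewall>
    <Rule action="allow" destinationPort="3389" protocol="TCP"/>
  </Firewall>
</NetworkServices>
"""
resp = session.post(VCD + "/network/1234/action/configureServices",  # hypothetical path
                    data=services_xml,
                    headers={"Content-Type": "application/xml"})
resp.raise_for_status()
```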

 

Note that the Management network (as I’ve called it) is a vApp-specific network rather than an organisation-wide one, hence why I still have an internal network connection to the VM so that it can talk to other VMs within the VDC. The firewall VM I configured earlier is organisation-wide, so any machine in the VDC could be published via it. For larger deployments I wonder if it would make sense (although it’s not really within the spirit of “the cloud”) to use hardware devices for edge networking – for example an F5 load balancer. While they do have a VM available which would offer a per-vApp LTM instance, some shops may want the functionality of the physical hardware (for example SSL offload). There may also be licence considerations when it comes to deploying the edge layer as multiple virtual instances.

 

Still to come in subsequent posts – deploying a “real” application to a public vCloud Director instance.


I was recently selected to take part in a public beta for the London-based hosting provider Stratogen. The beta is based on their vCloud Director offering, and it has been great taking a look at the “cloud” from a consumer’s point of view.

For the trial, I’ve been set up as an Organisation with a single VDC, allocated a fixed resource pool of compute, memory and storage. Networking-wise, I’ve been given an internal and an external network, with a pool of IPs on each.

My aim for the beta was to see, from a virtual machine administrator’s perspective, how easy it was to set up an application in the cloud from scratch. What I would really have liked to have done was build an application in my home lab and then federate that up to the cloud, but sadly that was beyond scope for the moment. Perhaps in the future I’ll be able to give that a go.

Stratogen haven’t currently put their own UI on top of Cloud Director, so it currently looks like the usual Cloud Director interface. I must stress that the beta is still at a pretty early stage, so anything can (and possibly will) change.


So, after logging in, I was presented with the pretty default-looking screen above. In my usual style when I get my hands on a product, I tend to have a little click around to see what I can see without having to delve into any setup guides.

It all looks pretty locked down on the Organisation / Resources side of things. I can only see my own VDC, and aside from changing machine leases & the comments about my Organisation, there’s not a whole lot to change.

With nothing deployed, I had no vApps to manage, so it seemed sensible to try and deploy a machine. Thankfully Stratogen had put a few sample vApps in a public catalog – mostly vApps containing a single VM of varying operating systems: Windows 2008 R2, CentOS Linux & RHEL.

Having spent many of my formative years as a Windows admin, it makes a good common denominator, so I chose to deploy a Windows VM to see what would happen. After a short wizard allowing me to name the VM and set its lease, I had to select the network to put the VM on. I wasn’t keen on putting this VM directly onto the public network – as an admin, I wasn’t too sure what patching level it was at, nor did I know how open the public network was, so I erred on the side of caution and selected the private network to home the VM.

While I wouldn’t say the VM provisioning was instant, it was pretty fast, along with a guest customisation that set a random admin password for me. Because I’d put the VM on the private network, I wasn’t able to RDP directly into it from my workstation, so initially I was restricted to the embedded VM console application – which in Server 2008 R2 can be a little bit painful to use. I suspect the WDDM drivers weren’t in use – however this is an easy fix that I’d probably have had to do anyway. Improved though the connection now was, on my home DSL line, which isn’t all that fast to begin with, performance was a little lacklustre. I needed RDP.

I dropped the Stratogen chaps a mail about what my options were from a security point of view in the beta – it seems that in a fully managed service there would be a lot more control over the hardware firewalls available, but as a beta customer, and in the interests of keeping it virtual, I would probably be better off deploying my own firewall.

 

Had this been a “real” deployment, I would have looked at something like vShield App (http://www.vmware.com/products/vshield-app/) or Check Point VE edition (http://www.checkpoint.com/products/security-gateway-virtual-edition/index.html); however, given that my beta test is on a zero budget, I’m going to look a little cheaper. I would have loved to have deployed a m0n0wall appliance (http://www.vmware.com/appliances/directory/628223), however because the appliance is delivered as a VMDK, I’d have had to somehow convert it to an OVF file with a way to import the VMDKs from a public web server, which at this point wouldn’t be practical. What I was able to locate was a firewall deployed from an ISO image of the Endian Community Edition (http://www.endian.com/en/community/overview/). This is a turnkey Linux install that gives me some basic firewall functionality. I am able to use this to open up pinholes to my private network and publish any services from within.
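Once the pinholes are in place, it’s worth checking from the outside that only the ports you meant to publish actually respond. A quick throwaway check (the address and port list are made-up examples):

```python
# Quick reachability check for published pinholes; the IP and ports are
# invented examples - substitute your firewall's public address.
import socket

PUBLIC_IP = "203.0.113.10"          # hypothetical public address of the firewall
EXPECTED_OPEN = {80, 443, 3389}     # the pinholes I intended to publish

for port in sorted(EXPECTED_OPEN | {22, 25, 8080}):  # probe a few extras too
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    state = "OPEN" if s.connect_ex((PUBLIC_IP, port)) == 0 else "closed"
    flag = "" if (state == "OPEN") == (port in EXPECTED_OPEN) else "  <-- unexpected!"
    print(f"{PUBLIC_IP}:{port} {state}{flag}")
    s.close()
```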

 

Coming up in Part 2 – deployment of a load-balanced multi-tier application in a public cloud.


As a virtualisation professional, there seems an almost limitless choice of 3rd-party software you can bolt into your environment. While VMware covers many of the bases with its own product lines in capacity planning, lifecycle management & reporting, some of them are missing a feature or two, or are just too complex for your environment. Many vendors seek to address this problem with a multi-product offering, but so far I’ve only come across a single vendor who aims to address issues like these with a single product.

I spoke with Jason Cowie & Colin Jack from Embotics a few months ago, but was only able to secure a product demo last week. In some ways I wish I’d waited until the next release, as it sounds like it’s going to be packed with some interesting features. I don’t really like blogging about what is “coming up in the next version”, so I will be concentrating on what you can get today (or, in a couple of cases, in the minor release due any time). This isn’t something specifically levelled at the Embotics guys, who are most likely internally immersed in the “vNext” code, so to them it is the current product. As an architect, I’m just as guilty of evangelising about features of a product that is several months away from deployment. Many vendors do the same to whip up interest around a product (Hyper-V R2 is a great example of this), but it doesn’t really make for a level playing field to compare a roadmap item with an item that’s on the shelves today. When the 4.0 version of V-Commander is released, I look forward to seeing all of the mentioned features for myself!

 

So what is it?

The website really does define the V-Commander product as being all things to all men – that is to say, if those men are into virtualisation management! They show how the product can be used to help with: Capacity Management, Change Management, Chargeback and IT Costing, Configuration Management, Lifecycle Management, Performance Management and Self Service.

That’s a lot of strings to its bow – and certainly enough to make you wonder if it’s a jack of all trades, master of none type of product. After a good look at the offering, I can safely say that’s not the case, but it’s definitely stronger in some of those fields than others.

The “secret sauce” of the V-Commander product is its policy engine. Policies drive almost every facet of the product and they are what allows it to be as flexible as it is. Once connected to one or more vCenters, it will start gathering information right away. This is what they refer to as “0-Day Analysis”. For a large environment, the information-gathering cycle for some capacity management products can take quite some time (I’ve seen up to 36 hours) as the appliance tries to pull some pretty granular information from vCenter. I wasn’t able to run the Embotics product against a large environment to see if this is the case. However, I have it from the Embotics guys that, as an example, pulling the information for 30 months of operation for a vCenter with 1200 machines took a couple of hours; to me this is more than acceptable. The headline report that Embotics shows off as being a fast one to generate is one showing the number of deployed VMs over time, which is a handy way of illustrating potential sprawl.


The next key thing that V-Commander does is provide some more flexible metadata about a virtual machine. Entry of this data can be enforced by policy; for example, you might want to say that all machines must have an end-of-use or set review date before they can be deployed. This really enforces the mantra of a cradle-to-grave lifecycle management application. The VM is tracked from its provisioning, through its working life and finally during the decommission phase. Virtual machine templates can be tracked in the same way as machines themselves – this sounds like an appealing way of ensuring you are not trying to deploy a machine from an old template. What is interesting is that the metadata for an object can come in from other 3rd parties, so there is potential to track patching / antivirus, should the appropriate integration be available.


Policy enforcement is real-time, so for example even if I attempted to power on a VM via an RCLI command that V-Commander policies would not allow to be powered on, the product is fast enough to power it back off again before it has left the BIOS. In addition to this, an alert would be generated about the rogue activity.
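I don’t know how Embotics have built the enforcement under the hood (presumably it hooks vCenter events rather than polling), but to make the pattern concrete, here’s a toy version in Python using the pyVmomi vSphere bindings: anything powered on that isn’t on an approved list gets powered straight back off. The whitelist and connection details are invented, and this is a conceptual sketch, not the product’s mechanism.

```python
# Toy policy-enforcement loop: any VM not on the approved list gets powered
# straight back off. Conceptual only - the real product is event-driven and
# would also raise an alert about the rogue activity.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

APPROVED = {"web01", "sql01"}  # hypothetical policy-driven whitelist

ctx = ssl._create_unverified_context()  # lab only; don't skip cert checks in prod
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)

try:
    while True:
        for vm in view.view:
            if (vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn
                    and vm.name not in APPROVED):
                print(f"Policy violation: powering off {vm.name}")
                vm.PowerOffVM_Task()
        time.sleep(5)
finally:
    view.Destroy()
    Disconnect(si)
```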

The web GUI of the product splits into 2 main views – in addition to the administrator’s view there is also a “self-service portal”. I put this in quotes for the very good reason that other self-service portals that have recently hit the market are more about self-provisioning. At this point in time the product does not provide self-provisioning, but it is thought to be a high priority for the 4.0 release. What the portal does allow is very fine-grained control that could be passed directly to VM owners without requiring any underlying access to vCenter, which is a feature that has some legs. They can currently request a machine, complete metadata and manage specific groups of machines within an easy-to-use interface.


It is also possible to pull the data from V-Commander into the VI Client via a plugin – this is definitely aimed at the administrator rather than the VM owner.


 

Automation is the key here and there are many areas where the product highlights that very well. While there is a degree of automation currently within the product, I think the next version will sink or swim on how well that ability is provisioned. For example, when it comes to rightsizing a virtual machine, identifying those machines that may need a CPU adding or removing is great; being able to update the hardware on those machines automatically is what would actually get used, particularly in a large environment. Smaller shops may have a better “gut feeling” about their VMs, hence will quite possibly manually tune the workloads more often. The product doesn’t have a whole lot in terms of analytics of virtual machine performance – the capacity management policies are pretty simple metrics at the moment; it’s certainly another area of potential growth to put that policy-based automation engine to use.

V-Commander is slated to support Hyper-V in the 3.7 release, which is out any time now. I shall be interested to see how it will interact with the Self-Service Portal in the upcoming versions of SC:VMM. From what I’ve seen of the product, it could sit quite neatly behind the scenes of your <insert self-service portal product here> and provide some of the policy-based lifecycle management – all it would need is a hook in from that front end so that those policies can be selected accordingly.

You get a lot of product for your money – which, depending on how you want to spend it, could cost you a fixed fee + maintenance, or an annual “rental” fee. I’ve been weighing up the pros and cons of each licensing model and it looks like the subscription-based model is the easier one to justify. It also means that should there be a significant change in the way you run your infrastructure, you won’t be left holding licences that you’ve paid for but can’t really use.
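To put some entirely made-up numbers around that, here’s the shape of the break-even sum I was doing:

```python
# Entirely hypothetical figures - just to show the shape of the comparison.
perpetual_fee = 30000      # one-off licence cost
maintenance_rate = 0.20    # annual maintenance as a fraction of the licence
subscription_fee = 12000   # annual "rental" fee

for year in range(1, 6):
    perpetual_total = perpetual_fee + perpetual_fee * maintenance_rate * year
    subscription_total = subscription_fee * year
    cheaper = "subscription" if subscription_total < perpetual_total else "perpetual"
    print(f"Year {year}: perpetual £{perpetual_total:,.0f} vs "
          f"subscription £{subscription_total:,.0f} -> {cheaper} ahead")
```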

 

So is this the only management software you’ll ever need? At the moment, no it isn’t. That said, it’s got some really strong features which, aligned with a good service management strategy, could help bring your virtual infrastructure in line with the rest of your business.

NB. I’ve just had some clarification on the release schedule for Hyper-V support.

“Given priorities and customer feedback (lower than expected adoption rates of Hyper-V), we decided to do only an internal release of Hyper-V (Alpha) with 3.7 (basic plumbing), with a GA version of Hyper-V coming in the first half of 2011. At the beginning of 2011 we will begin working with early adopters on beta testing.”

If you have a Hyper-V environment and would like to take advantage of the Embotics product, I’m sure they would be keen to hear from you.

Reading, wRighting and Recording – Measure how your applications hit your disks!

I’ve spent the last week thinking more about storage than I usually would, particularly in the light of some of the conversations I’ve been having around Tech Field Day with the other delegates & sponsors, who have varying levels of interest & expertise within the storage world. If, like me, you have a basic appreciation of storage but want to get in that little bit deeper, a good primer is Joe Onisick’s storage protocols guide at DefinetheCloud.net.

Admins working in smaller shops probably have a little closer control over the storage they buy, as they are likely to be the ones specifying it, configuring it and crying over it when it goes wrong. One of the cons of working for a large enterprise is that the storage team tends to be separate – they guard their skills and disk shelves quite closely, sometimes a little too closely – I do wonder if their school reports used to say “does not play well with others”. The SAN is seen as a bit of a black box by the rest of the department, and generally, as long as the required capacity is available to someone when they ask for it, be it a LUN or a VMware datastore, everyone is happy to let them get on with it.

As soon as there is a performance issue, however, that happy boat starts to rock. The storage team starts to get defensive, casting forth whitepapers & best practice guides as if they were a World of Warcraft character making a last stand. At some point you may well find that you hit the underlying best performance of the SAN, no matter how well tuned it is. You are then left in a bit of a quandary about what to do; in the worst case you have to bite that bullet and move the application which looked like the lowest of the low-hanging fruit back onto a physical server with direct attached storage, where it’ll smugly idle at 5% utilisation for the rest of its life, forever casting reproachful looks when you walk past it in the datacenter.

How do you avoid the sorry tale above? In a nutshell: “Know your workload!” When you start to map what your applications are actually using, you can start to size your environment accordingly. One of the bigger shocks that I’ve come across when doing such an exercise is a much heavier proportion of writes than the industry would have us expect. This causes a big problem for storage vendors who rely on flash-based cache to be able to hit their headline performance figures. When reading from a cache, of course, the performance will be great, but under a heavy write-intensive load the performance of the underlying disk starts to be exposed, and it seems to come down to the number and speed of spindles. Running a system that uses intelligent tiering to write hot blocks in the fastest way, then cascade them down the array as they get cooler, could help in this instance. Depending on your preference for file or block level storage, there are a number of vendors who could help you with this – for example Avere Systems, 3PAR or the next generation of EMC’s FAST technology.
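To make “know your workload” a bit more concrete, here’s roughly the sum I mean: take read/write counters sampled over a period (esxtop batch output, perfmon logs, your array’s own stats) and work out the ratio and peak yourself, rather than trusting the industry’s favourite read-heavy assumption. The samples below are invented for illustration.

```python
# Work out IOPS and read/write ratio from sampled counters. The samples are
# invented; in real life they'd come from esxtop batch mode, perfmon CSVs
# or your array's own statistics.
samples = [
    # (interval_seconds, reads_completed, writes_completed)
    (20, 4200, 9100),
    (20, 3900, 10400),
    (20, 5100, 8800),
]

total_secs = sum(t for t, _, _ in samples)
total_reads = sum(r for _, r, _ in samples)
total_writes = sum(w for _, _, w in samples)

read_iops = total_reads / total_secs
write_iops = total_writes / total_secs
write_pct = 100 * total_writes / (total_reads + total_writes)

print(f"Average: {read_iops:.0f} read IOPS, {write_iops:.0f} write IOPS "
      f"({write_pct:.0f}% writes)")

peak = max((r + w) / t for t, r, w in samples)
print(f"Peak interval: {peak:.0f} total IOPS - size for this, not the average")
```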

I’m currently sat in a lounge at Schiphol Airport trying in vain to get onto the wireless network, even offering to pay, but to no avail. Thankfully, due to the wonders of Windows Live Writer, I can rant now, upload later!

As you may have guessed, the reason that I’m sat here is that I took the beta exam for the VMware Certified Advanced Professional – Datacenter Design certification. When I was invited to take the exam, the list of dates was pretty short, and in order not to clash with my outbound trip to Tech Field Day tomorrow I had to sit it today – but alas, there were no seats available at the London test centre.

I’d almost given up hope of being able to sit the beta when I noticed that the Global Knowledge test centre in Amsterdam had a plethora of slots available, so I checked some prices with easyJet and realised it’s not much more expensive to travel from Milton Keynes to London at peak time than it is to travel from Milton Keynes to Amsterdam! A plan was rapidly forming, which led to having to get up at 4:30 am this morning to jump on a flight.

As I get older I get earlier – this time arriving 2 hours before my 4-hour exam was due to start; thankfully the nice lady at the desk let me start early. I’d been doing my last-minute revision on the flight and at the airport, so there really wasn’t any point delaying the inevitable!

Onto the exam itself. I’m restricted by NDA as to how much I can say beyond what has already been released, which is that the exam consists of a …. number of questions, split into 3 types – multiple choice, drag & drop and design/Visio(ish). In contrast to the DCA exam, this felt much more like an extended VCP test, and I suspect I got the whole question deck thrown at me. I’m going to take a wild guess and assume the live exam would consist of a subset of the questions posed.

@hany_micheal was the first tweep I noticed to have taken the beta, and his feedback was along the lines of having problems finishing the exam due to the amount of reading. I suspect that not having English as a first language didn’t help there. He is right, there is a lot to read, but I felt the skill was in working out which bits of the text were relevant. If you read the same exhibits over and over again from start to finish, I can see how time would be a problem. Of course, if you skim-read them too much then you may well miss a key item, which I think I may well have done on a number of occasions.

I completed the question set with about 35 minutes to spare, which I felt was plenty of time to go back and check my answers & add additional comments (as beta exam takers are often encouraged to do); however, when I got to the end, the only option was to end the exam. No review stage meant that a) I was not able to add additional feedback and b) a couple of questions that I had flipped through, in order to come back to if I’d had more time, went unanswered. I don’t know if this was just a beta “feature” or not.

In terms of “features”, I felt the exam was pretty good – it certainly didn’t have any of the technical challenges that the DCA exam had. The design interface was actually pretty good to use once you had got the hang of it, though it did highlight my lack of visio-diagram-making-pretty skills!

I recognised a few faces coming into the exam room as I left, notably Duncan Epping of Yellow Bricks fame & also Frank Denneman of VMware, so I look forward to seeing how their feedback compares.


 

After the successful release of the Capacity Management Suite product at VMworld, it’s all been pretty quiet on the VKernel front, which usually means they are up to something. In addition to coding away like the clever chaps they are, they’ve also been growing the company – always a handy thing to do if you’d like to put food on the table. It’s been a bumper year and a record quarter for them, with the key metric of their client sizes continuing to grow, showing that people are taking the problems of optimisation, planning & chargeback seriously. When I was invited onto a call with Bryan Semple, CMO of VKernel, last week, I was looking forward to something new. Little did I know that I’d actually seen a sneak peek of it back in July with the Chargeback 2.0 release.

 

One of the key features within the new version of the Chargeback product is that it supports chargeback for environments running on Microsoft’s Hyper-V platform, and specifically support for the Virtual Machine Manager Self-Service Portal Toolkit (MSVMMSSP). This allows the creation of self-service portals that not only provision machines according to a quota, but can also collect metrics for static or utilisation-based chargeback of those machines. This starts to become increasingly relevant as enterprises move towards a “cloud” model (presumably private, with Hyper-V, at the moment), for which VKernel has been selected as the primary chargeback vendor. Other partners providing support for the toolkit include IBM, Dell, EMC, NetApp and HP.
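As a rough illustration of the difference between the two charging styles (every rate below is invented):

```python
# Static vs utilisation-based chargeback for one VM over a month.
# All rates and usage figures are invented for illustration.
hours = 720
vcpus, ram_gb, disk_gb = 2, 4, 60

# Static: you pay for what you were allocated.
static_cost = hours * (vcpus * 0.03 + ram_gb * 0.01 + disk_gb * 0.001)

# Utilisation-based: you pay for what you actually used.
avg_cpu_used, avg_ram_used_gb = 0.4 * vcpus, 2.5
util_cost = hours * (avg_cpu_used * 0.03 + avg_ram_used_gb * 0.01 + disk_gb * 0.001)

print(f"Static allocation model:  £{static_cost:.2f}")
print(f"Utilisation-based model: £{util_cost:.2f}")
```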

 

OK, so I almost went two paragraphs without using the “C” word – it could have been a lot worse! When looking at the kind of product that VKernel offers from a cloud provider’s perspective, the importance of the 3 sub-products (Capacity Analysis, Optimisation & Chargeback) gets juggled around a bit. A service provider doesn’t really care as much about VM rightsizing, as the end users are going to pay for it. A public cloud is also going to be looking at capacity from a slightly different point of view, so while it’s important, I would imagine they may well use a different toolset.

 

VKernel has integrated with Microsoft’s “cloud” product, but what will it do with VMware beyond the existing integrations? I would suspect they are keeping a very careful eye on the vCloud Director API and how they can best plug into that – for example, to track the costs of a vApp in a hybrid cloud situation as it moves from the private to the public datacenter.

And I still don’t own an iPad. I’m trying to work out if I *really* need a tablet right now or whether it’s just an exercise in e-manhood-waving at meetings – after all, it’s the content you create, not what you create it on, that counts!

 

As Stephen from Gestalt IT recently blogged, there isn’t much out there that can take on the iPad when it comes to functionality. I was passing a local electronics outlet today and noticed they actually had a couple of non-iPads out on demo, so I thought I’d try and get a little hands-on with them.

 

The first was the Toshiba Folio 100 tablet – it’s got a 10” capacitive screen and runs Android 2.2. I have to admit to being an Android fan; I’d tried one of the early Chinese-made tablets running 1.6 on a 7” resistive screen and was hugely underwhelmed. The Toshiba wasn’t that much better. The screen still required a touch heavier than a mason’s chisel to actually get anything to respond, and the “touch sensitive” buttons appeared to operate by committee… of 1970s British Leyland workers (i.e. infrequently). When a member of staff noticed I was having a play around with it, he came over to demo a few features, but the device crashed when he put a USB key into it. Not the best demo in the world.

Next to the Toshiba was a Samsung Galaxy Tab – from a construction point of view this looked really well made, other than the fact that the designers seemed to have got the plans scaled down a bit. It really isn’t that much bigger than my HD2 phone. The interface looked slightly different to the regular Android screen I’m used to, and while I could have operated the keyboard with my thumbs, it really wouldn’t have been worth the £500 investment.

I guess the smart money is just going to have to wait until Android 3.0 for a capable tablet experience. I will instead concentrate on producing some great content from the meetings and presentations I’ll get to take part in at San Jose with the rest of the #TechFieldDay team.

 

And finally… because it’s Friday, it’s time for a lolcat for that brief giggle before you go home for the weekend. The model for this belongs to Mike Laverick of http://www.rtfm-ed.co.uk fame. I present to you, Molly.

 


My feet have hardly touched the ground since Copenhagen; after a hectic couple of days in the office I’m back on the road again, this time to IPExpo, a 2-day event held at London’s Earls Court.

 

Prior to hitting the show, myself and a few other bloggers (including Chris Evans from http://storagearchitect.blogspot.com/ and Barry Coombs from http://virtualisedreality.com/) were able to meet with some of the Microsoft Cloud team, including their General Manager, Zane Adams. As you may have realised from most of my posts, I’d say my flag is pretty firmly in the VMware camp, but I’ve been working with Microsoft products since I was in short trousers, so I’m still very keen to hear what they have to say, especially when it comes to their particular take on “what is cloud”.

This was the first time I’ve ever done such a roundtable, especially one that was recorded – thankfully I was a little bit prepared by my chinwag with Mike Laverick, but somehow this felt a touch more intimidating than a Skype chat.

 

The session was pretty well organised, with a bit of a talk from Zane about Microsoft’s vision for the Azure platform and associated technologies, followed by some select questions, with a more open Q&A afterwards. In a nutshell, the Azure plan is platform as a service – if it’s something you want to take advantage of, then now is the time to start looking at your applications and seeing how they might scale to fit that type of model. This has become a lot more than a pipedream for board-level PowerPoint decks. With 70% of the development effort at Microsoft devoted to something related to cloud technology, it’s something Microsoft are taking seriously – past history has shown that when they do this, they tend to get results. I enjoyed the opportunity to put some pretty tough questions to Zane and felt that I got some pretty good answers.

 

A short walk later and I arrived at Earls Court 2 for IPExpo. It was a very different show from VMworld, which of course was still quite fresh in my mind, but it was interesting to see more than just raw vendors there. VARs and system integrators were also present, making it a little more rounded. Virtualisation was also only a small part of the show, with sections dedicated to physical security, networking & storage amongst others, all with badge scanners at the ready for your attention. Conversations ranged from a badge scan in exchange for freebies to a much more in-depth chat with Magirus about their vBundle product – essentially a “my first vBlock” based on Cisco/EMC hardware with a vSphere virtualisation layer on top. I know this wasn’t anything especially groundbreaking, but it was a neat demo of the integrated EMC management within the VI client and a good chat all the same!

 

A show wouldn’t be a show without a Veeam stand – today the intrepid men and women in green had something to celebrate, with the official launch of the v5 Backup & Replication product. v5 brings with it a host of new technologies based around being able to run a VM directly from the compressed and deduplicated backup – allowing verification, item-level restore and on-demand test beds amongst others. What better way to celebrate than with vodka & cupcakes (green, of course!!)

 


 

 

Right at the back of the show was a 40ft ICE Cube modular datacentre unit from SGI, which was pretty impressive, not having seen one up close before. It’s units like these that will form the backbone of much of the public cloud datacentres – just add water, power and network for a maximum of 40,000 cores of processing!

While I had a valuable day, I’m not quite sure IPExpo would be something I’d want to attend on both days as a delegate – it seemed to be more of a lead-generating event than anything else (though it is great to catch up with current vendors and make sure I’m up on anything interesting they might be doing).

No sooner had I had a little bit of a moan about having to redeploy one of my lab hosts on ESX 4.0 (due to my trial of Kaviza 3.0 not supporting 4.1) than I got notified of a new version release.

 

In addition to the 4.1 support, the following features are added:

– Support for 64-bit Windows 7 virtual desktops

– Support for linked clones with Citrix XenServer 5.6

– Support for CAC (Common Access Card) smartcards

 

Sadly I’m not actually able to test any of these apart from the first, as I use nested ESXi – which does not support 64-bit guests 🙁 I’m also out of smartcard readers. What I did go through was the upgrade process, which, while well documented by Kaviza, would be a little bit fiddly in a large environment for those of you who shy away from a command line. I think they could take a leaf from VKernel’s book – they too started off with appliance-based updates that required you to break out PuTTY, but now, as long as the appliance has some form of internet access, the updater is built into the appliance’s web GUI. This is something I feel is quite important: when you are using an “appliance”, having to dive under the lid is a little bit undignified – especially if you are in the SMB space where time can be limited.

As well as the update to the appliance, the Kaviza agent that sits in each desktop also requires an update. This took me a couple of goes to get working; I suspect the reboot after agent removal wasn’t as clean as I’d have hoped, so the HDX install kept failing. An extra reboot set that back up. I wonder if there is a neater way for this to be done? I hope this isn’t the last you’ll see of the product from me, especially if I happen to win a full licence as part of Greg Stewart’s giveaway on vDestination.com

 

http://vdestination.com/2010/09/21/win-your-own-personal-vdi-solution-from-kaviza/

 

And finally – I thought I’d see what I could get to talk to my Kaviza VDI-in-a-box solution, so I installed Citrix Receiver on my aPad Android-based tablet. It’s not quite as swish as an iPad – but it was a lot cheaper and certainly gives me ideas for the future!

In what seems to have become a bit of a theme on JFVI, I’ve been taking a peek at a recently released product: listening to what the marketing / sales ladies & gents have to say, then having a poke around with the product to see if they’ve been truthful (allegedly, sales & marketing people have sometimes been a little economical with the truth over the ages – I’m sure it happens much less now, but it’s always good to check, don’t you think?)

I have only recently become aware of the Kaviza solution, since VMworld, where a number of people seemed to rate the offering pretty highly – notably it won the Best of VMworld 2010 Desktop Virtualisation award, which isn’t to be sneezed at. It’s also won awards from Gartner and CRN, and at the Citrix Synergy show it won the Business Efficiency award.

That seems a fair amount of “silverware” for a company that launched its first product in January 2009, but being a new player to the market does not seem to have put Citrix off, who made a strategic investment in Kaviza in April of this year.

I spoke with Nigel Simpson from Kaviza to find out a little bit more. The key selling point of the VDI-in-a-box solution is cost. All too often you hear that switching to VDI does not save on CapEx – it’s only in the OpEx savings that you can realise the ROI of virtualising client desktops. If you are looking at a desktop refresh then you can get that ROI, but that’s not the case for every client. Kaviza aims to provide a complete VDI solution for under £350 ($500 US) per desktop. That cost includes all hardware & software at the client and server end. The low cost of the software, and the fact that it’s designed to sit on standalone, low-cost hypervisors using local storage, means that particularly for smaller-scale or SMB solutions you are not getting hit by the cost of additional brokers or management servers. It’s also claimed to be scalable without a cluster of hypervisors, due to the grid architecture used by the Kaviza appliance itself.


The v3.0 release of the product adds some extra functionality to improve the end user experience. Part of the investment from Citrix has allowed Kaviza to use the Citrix HDX technology for connecting to the client desktops. This allows what Citrix define as a “high definition” end user experience, including improved audio-visual capabilities & WAN optimisation. This is supported in addition to the conventional RDP protocol to the client VMs.

I will freely admit that I’m a bit of a VDI virgin. While I knew a bit about the technology, my current employer hasn’t until very recently seen a need for it within our environment, so I’ve tended to wander off for a coffee whenever someone mentioned it. At a recent London VMware User Group meeting, Stuart McHugh presented on his journey into VDI and I was so impressed I thought I’d take a closer look. I’ve not had a chance to play around with View much, so I can’t comment on how HDX compares to PCoIP; however, from reading other people’s opinions of it, it seems that HDX is as good (source: http://searchvirtualdesktop.techtarget.com/news/article/0,289142,sid194_gci1374225,00.html).

The kMGR appliance central to VDI-in-a-box will install on either ESX or Xen, on 32- or 64-bit hardware. I’m told that Hyper-V support is due pretty soon – having the appliance sit on the free Hyper-V Server would definitely be good. It’ll also run on the free version of XenServer, but sadly for VMware fans such as myself it will not currently run on the free versions of ESXi – according to Kaviza, this only bumps up the projected costs by around £30 per concurrent desktop.

The proof of the pudding will always be in the eating, so rather than talk about the live demo I got from Nigel, I’ll dive right into my own evaluation of the product. Kaviza claim that the product is so easy to use you can deploy an entire environment in a couple of hours. I would agree with this: even with the little snags I introduced by a minimal reading of the documentation (and a quick trip to the shops), I managed to get my first VM deploying surprisingly quickly.

A quick background on my test lab: I don’t have the space, cash or enough of a forgiving partner to be able to run much in the way of a full-scale setup from home, so my lab is anything I can run under VMware Workstation. Thankfully I have a pretty quick PC with an i7 quad-core CPU & 8 GB of memory – enough for a couple of ESXi hosts.

I downloaded a shiny new ESXi 4.1 ISO from VMware after a quick update to Workstation, and as ever within a few minutes I had a black box to deploy the Kaviza appliance to. After a pretty hefty download and unpack (to just over 1GB), the product deployed via an included OVF file. While I was waiting for the appliance to import, I started the build of what was to be my golden VM with a fresh Windows XP ISO. The kMGR appliance booted up to a pretty blank-looking Linux prompt.

As the next step in the configuration involves hitting a web management interface, I think a quick reminder – “to manage this appliance browse to https://xxx.xxx.xxx.xxx” – wouldn’t have gone amiss.

I was able to grab the IP of the appliance from the VI client, so I hit the web management page to start building the Kaviza grid.


At this stage I hit the first gotcha, with a wonderful little popup that very politely explained that ESXi 4.1 was not supported, and would I like to redeploy the appliance. After the aforementioned trip to the shops to calm down, I trashed the ESXi 4.1 VM and started again with an older 4.0 ISO I had handy.

This time I was able to build the grid, providing details of the server, whether I was going to use an external database, and whether I was using vCenter. (In a production deployment, even though you would not require the advanced functionality of vCenter, I think there is a chance it would be used if you had an existing one, so that you could monitor hardware alerts etc. Kaviza best practice states that you should put your VDI hosts into a separate datacenter to avoid any naming conflicts.)

With a working server, I needed to define some desktop images, so I took the little XP desktop VM I’d built in the background (please note I did pretty much nothing to this VM other than install Windows from an ISO that had been slipstreamed with SP3) and started the process of turning it into a prepared image for desktop deployment.

The first image is built from a running VM that you could have deployed or recently P2V’d to the host server. I was hoping that the process would have been a little more automated than it was, and as a non-manual-reader it was not immediately obvious; I can confirm that creation of subsequent images is a much more straightforward process. At the image creation stage I became aware of the second little feature that caused a delay. The golden VM requires the installation of the Kaviza agent (this isn’t automated, but it is pretty straightforward) – this agent requires version 3.5 of the .NET Framework, which took a little bit of time to download and deploy. I’m sure those of you with a more mature desktop image will most likely not hit this little snag. After testing a sysprep of the image, I was finally able to save it so that it would become an official image.

From the image, you can create templates. Templates represent a set of policies wrapped around a given machine, enabling a lot of the customisation (for instance the OU that the machine will be joined to, the amount of memory it has, and which devices can map back to the end user).

This is also where you specify the size of the pool for this particular desktop – the total number of machines in the pool and the number to keep ready, pre-deployed. The refresh cycle of the desktops can also be set up: if you have a good level of user and application abstraction then you can have a desktop refresh as soon as a user logs out. I gave this a test, and even with the very small-scale setup and tiny XP VMs I was using, I was able to keep the system pretty busy with a few test users logging in to see how quickly desktops were spawned and reclaimed. With large-scale deployments I can see that possibly causing some issues with Active Directory if you had a particularly high turnover of machines and a long TTL on AD records.

To test the user experience, I deployed a smaller number of slightly larger XP machines and installed the optional Citrix client to see what HDX was all about. I have to admit to being pretty surprised that a remote connection to an XP session, inside a nested ESX host under Workstation, was able to play a TV show recorded on my Windows Home Server at full screen with the audio completely in sync. I would seriously consider it for the extra $30 per concurrent user licence. I understand the HDX protocol does need a proper VPN or Citrix Access Gateway to be fully available over the internet, and that the supplied Kaviza Gateway software – which publishes the Kaviza desktop over an SSL-encrypted link without the use of a VPN – is for RDP only. It’s not the end of the world, but it’s something to think about.

I was very impressed with the ease at which I was able to start deploying desktops – and with the simplicity of the environment needed to do so. While the product would scale up on its own, I believe there is likely to be a sweet spot beyond which a traditional VDI solution would work out cheaper. For SMB / SME / branch office / small-scale deployments, this really is an ideal solution from a cost point of view. This was of course only at the pre-proof-of-concept stage, but going to a production solution wouldn’t necessarily be much harder at the infrastructure level. The same level of work would need to be done to produce the golden desktop image regardless of the choice of VDI technology. If you’d like to try the product yourself, head over to http://www.kaviza.com and grab a trial.

DISCLOSURE: I have received no compensation, and used trial software freely available on the Kaviza website to conduct the testing in this blog post.