After a short break, the vSoup podcast is back with a vengeance. Christian, Ed and I were joined by the Boy Wonder of Scripting, Jonathan Medd – in addition to plugging his book, we take our usual wander through all things virtual. Listen to the podcast for a chance to win a copy of the book from http://www.powerclibook.com/
Details have just been released for the next meeting of the London VMware User Group. In the few years since I started attending, this event has continually got better and better, with May’s meeting blossoming into a full day’s event with a couple of different tracks and two labs hosted by COLT, based around consumption of resources with vCloud Director and administering vCloud Director. If these labs are up to the quality of those offered at VMworld Europe, then it’ll be worth taking the day off just for those!
Did I mention this was all free? After the glow of the projector bulb has died down, the event concludes with a social reception at a nearby pub to give you a chance to meet, greet and drink with like-minded individuals who actually want to listen to you talk about virtualisation all night (my usual drinking buddies tend to fall asleep after the first hour or so!)
As the tweet above proves, I’m about to out-scoop Eric “Scoop” Sloof of ntpro.nl fame and would like to be the first to break the news on the innovative Pork Product Delivery System (PPDS) from your favourite real-time monitoring provider, Xangati.
In a recent briefing on the new VDI/VI dashboards, I was able to grab a screenshot as the presentation flicked to a preview screen that proves this to be the case.
Not only is Xangati able to provide role-based dashboards of real-time data about your VI environment that reflect the real health issues within a system, but they are able to monitor the Saltiness Levels for Admins (SLAs) and trigger an Automated Bacon Delivery Service (ABDS), provided via a network of bacon resellers (ButcherNet). This has already been successfully beta tested at Tech Field Day. Turkey-Based Bacon Substitute (TBBS) is available for environments that don’t dig on swine.
Yesterday saw the final day of the Virtualisation Jumpstart program hosted by Corey Hynes and Symon Perriman – I can finally try and get my body clock back to normal! The main topic for the day was VDI and its associated technologies. When looking at use cases for a Windows-based hypervisor stack, I’ve always felt that VDI would be one of the stronger ones, as it’s in the management of VDI with Server 2008 R2 and Windows 7 that wins will be made, rather than in out-and-out consolidation ratios and features.
I don’t know if Corey and Symon had been reading my previous posts on the jumpstart, but it was made very clear by Corey that the purpose of the sessions was not to disparage VMware View in the slightest, but to highlight where a Microsoft-based solution would be strong and what features and benefits it can bring to a solution. I wasn’t able to take part in the entire session due to some “real life” issues, but I’m happy to say that they kept to their word for the portion of the session I was present at, and I applaud them for it! If the solution is good enough, you don’t need to put your competition down.
Before we jumped into any demo sessions there was quite a long talk on “what is VDI?” – this was some of the clearest VDI messaging / evangelism I’ve heard for a long time, and I found myself agreeing with a lot of it. Corey explained that sometimes, thanks to a “golf course strategy” session, a client will decide he “wants VDI” without really understanding what is actually required. For a lot of solutions, simple session-based virtualisation will be just fine (and of course those session-based virtualisation hosts – Terminal Servers – don’t have to be physical servers!)
A lot was also said about the v-Alliance. This is a close working relationship between Microsoft and Citrix (who have been like housemates who occasionally sleep together for as long as I’ve been working in IT), allowing very close integration between Xen and the Microsoft suite.
So was the jumpstart worth working 3 very long days for? I really enjoyed the SCVMM preview, and it’s certainly given me some ideas around how a multi-hypervisor environment might be able to provide a right-sized solution for a number of business needs without too much additional management overhead. If you’d like to review the slide decks from the jumpstart, they are available here. I’m told that the full recordings of the sessions will be available on TechNet and the Microsoft Virtual Academy.
The Jumpstart series of webinars held by Microsoft entered day 2 with some high spirits and light-heartedness as Corey and Symon took the stand to talk about management, with a heavy emphasis on the beta version of System Center Virtual Machine Manager 2012. I’d heard a lot of good things about it, not only from some former colleagues who’d had some exposure during TAP programs, but from the community in general, so I was really keen to see if it was as good as promised.
After some general introductions to the kinds of things you can do with the whole System Center suite of products (including, but not limited to, Configuration Manager, Operations Manager, Service Manager and Data Protection Manager), the topics moved on to automation and scripting. I felt this whole section went down really positively: no matter what flavour of technology we run, we are all driven by the desire to make our day-to-day lives easier and automate repetitive tasks. It amazes me how many times recently I’ve seen “human task schedulers” used, and it worries me that it’s a recipe for something to go seriously wrong. Computers are *really* good at repetitive tasks; humans tend to get bored with them (if you don’t believe me, go work on a Taiwanese production line!), so let the scripts do the work.

Corey and Symon covered scripting with PowerShell, process automation with Service Manager and, for the hard-core nerds, some WMI scripting. I’m the last person in the world to start scripting, but I’m trying to wean myself onto PowerShell, as there is no doubt that it’s a skill with a lot of value. They also covered a product called Opalis (which I understand is to be known as SCORCH – System Center Orchestrator. VMware take note: Microsoft are a LOT better at coming up with acronyms; find the person who thought VAAI was a good marketing term and slap them. Hard). Opalis was a recent acquisition in the Microsoft portfolio, but it is a really powerful and, from what I’ve seen (on the jumpstart and first hand), easy-to-use orchestration tool. I will have to revisit VMware’s Orchestrator again, but it just didn’t seem as user friendly.
Once again the format of slides and demos worked well – there was plenty of unscripted interaction around the demos, which was great and I really enjoyed it… BUT there were periodic nuggets of pure competitive marketing which really set my teeth on edge. It’s not going to win you friends with an audience to imply that practices they use regularly are not really a good idea and are in fact pretty stupid. Hyper-V and vSphere are not at full feature parity – we know this, so get over it. Hyper-V and the System Center suite have plenty of things that they do well in their own right without having to try and convince people that they’re just as good as VMware all the time. I know this training is free, and it reminds me of a church youth club I used to go to as a teenager. Sure, you got to hang out and play ping pong, but you did have to listen to a sermon once a week.
OK, enough of a rant and back onto something that is most clearly a positive, and that’s the new version of SCVMM. At first glance it really looks like an out-of-the-box solution to run your private cloud – with facilities to provision new hosts and to create a hosted application with a drag and drop (which in itself isn’t clever), but being able to design the network, including load balancers and storage (subject to an SMI-S compliant storage solution), and deploy application packages into templates, along with the self-service portal, really made it look like a much more polished solution. I’m still not quite convinced about using SCVMM to manage my hosts, but I could be quite tempted to let it manage my VMs. I am aiming to build out a test lab and see how far I can get with a hybrid vSphere/SCVMM solution, which I believe could offer the best of both worlds at this time (until I can get hold of newScale).
I’m looking forward to the final day of the Jumpstart today – featuring the Microsoft approach to VDI. If you’d like to attend the jumpstart, you can sign up here or follow my live tweets using the #msjumpstart hashtag on Twitter. Previous slide decks are on the borntolearn.mslearn.net blog.
For 3 days this week I’m adjusting my body clock to be able to attend the Microsoft Jumpstart for VMware professionals, a FREE 3-day series of webinars put on by Microsoft to explain the Redmond way of virtualisation and how it might fit in better than we think with the VMware way of doing things. From what I’ve heard, over 1300 IT professionals attended the LiveMeeting-based session yesterday, with similar numbers forecast for today (Management) and tomorrow (VDI).
Apart from the odd time of day, I’m really impressed with the logistics of the course – sign-up and connection were very straightforward, with a good level of interaction via polling screens and question and answer.
What they hadn’t done was encourage conversation via social media, so, being a fan of “talking at the back of class”, I started using the #msjumpstart hashtag to help report to those who might not have had time to join the whole session, and hopefully to provoke some discussion around it.
The presenters, Corey Hynes and Symon Perriman, did a pretty good job of presenting objectively without too many “we’re better than VMware” comparisons, though I felt there were a few weaknesses of Hyper-V that were somewhat played down.
A good mix of slide decks, free conversation and live demos made up the majority of the first day, which was based around an introduction to the platform, showing off some features and attempting to translate common vSphere terminology into “Hyper-V speak” and vice versa. What I noticed were a number of things that, as a VMware administrator, I take for granted as being an intrinsic part of the solution, but that seemed to be a little bit more of a big deal on the Microsoft side. I think it’s because, for an average Windows admin, large-scale clustered installations were much less common in a pre-hypervisor world.
Storage seemed to be something “best left to the storage guys” – I’m not quite convinced that’s the best message to use in order to break down technology silos. Again, I think many Windows admins (myself included) had very little exposure to storage design and technologies pre-virtualisation; thankfully, with the new breed of storage technology that’s hitting the market, it is a lot easier for Average Joe Admin to provision some storage for his infrastructure, be it VMware or Microsoft based.
If you are at a virtualisation conference talking about backup technologies with someone in a green shirt, there is a 50% chance you are having a conversation with the PHD backup team. The product, PHD Virtual Backup, which grew out of the innovative esXpress backup product and has recently hit version 5.1, promises to deliver a reliable, self-contained backup solution that makes exceptionally good use of available disk space.
The approach that PHDVB takes differs from its competitors in that it deploys as a ready-to-run virtual appliance rather than an application. This means that you don’t have to invest in any additional hardware to maintain your backup platform, though you should plan for a little additional overhead in your virtual infrastructure to allow for the PHD virtual appliances to run.
I was able to get hold of some trial keys for the software and have managed to give it a bit of an extended tyre-kicking session. I’d have liked to run it against a larger environment than my PoC lab, but didn’t have anything available at the time; though I’m told they can scale up to larger environments, the majority of their client base is SME (note: I suspect this is the US definition of SME, so up to 1000 users!)
I’m not going to cover the step-by-step, screen-by-screen install/setup procedure, as it’s been done by some of my fellow bloggers, but I will link to them at the end of the post.
I deployed the PHDVB appliance, which was very straightforward, then linked it to the VI client plugin. Setup was very easy and I was able to configure a backup within a few minutes.
As a backup target I configured a CIFS file share on my lab storage, a NetApp FAS2020. I’d set a reasonably low retention policy given the small size of the file share I’d allocated (100GB). The backup job was set to back up a total of 8 VMs and a couple of templates, with a mix of Windows Server 2003/2008 and Windows 7 operating systems, plus a virtual ESXi host and a Linux appliance too.
I’m impressed to say that I have on average 5 backups of each VM in the catalogue, which if restored would need 1.3TB; via the mechanisms of Changed Block Tracking, compression and deduplication, this is brought down to 45GB of data, which then gets an additional 5% space saving thanks to the NetApp dedupe. This gives a dedupe ratio of about 30:1, which isn’t to be sneezed at. I’m hoping to run the trial for a little bit longer to see if that ratio changes when I start to deploy a larger workload in the lab. With the number of VMs I’m currently backing up I’m not worried about the backup window, but I have seen this start to become a problem as the VM count goes up. The PHDVB appliance can scale up as well as out, simply by allocating more resources to the VA and eventually deploying multiple appliances.
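As a quick sanity check on those figures, here’s the back-of-the-envelope maths (numbers are rounded from my trial above):

```shell
# Rough dedupe ratio: ~1.3 TB of restorable data held in ~45 GB on disk
full_gb=1331      # approx 1.3 TB expressed in GB
stored_gb=45      # on-disk size after CBT, compression and dedupe
echo "$((full_gb / stored_gb)):1"   # integer maths gives 29:1, i.e. roughly the 30:1 quoted
```

The extra 5% NetApp saving on top nudges it a touch past 30:1, which is why I’ve quoted it as “about” 30:1.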
The VI plugin does a respectable job of managing multiple appliances (the current rule of thumb is that you should probably deploy 1 VBA per cluster, but of course, YMMV), but I’d like to see a better way of extracting reports from the system, be it scheduled email or PowerShell integration. I spoke with some of the guys at PHD about this – currently you are limited to email reports, but I’m told this is due to be addressed shortly. I look forward to being able to tell you more about it as and when it’s available.
The other big card PHD has up its sleeve is that it can also bring the same backup technology benefits to those running XenServer – many shops run a multi-hypervisor environment, and I can see it being a huge benefit to be able to use the same software to back both of them up. I also liked the ability to “present” the backups via a share on the appliance – this makes exporting to another product quite straightforward. You can also replicate the backup store to another location and attach a VBA to it for a cold-standby disaster recovery solution.
In conclusion I have to say I liked the product, and I think it is a great fit for the SME – the per-host licence strategy keeps it simple, and for a 2-socket host it comes in at a similar price to its competitors (who may also wear green shirts). If you don’t have the space or licencing available to either run a physical backup host or host a backup application on a Windows-based server, then PHD may well be the product for you.
I’ve recently come across a great use case for VMware thin provisioning which I felt was worthy of sharing – not so much as a “how to” guide, as I’m sure thin provisioning has been covered before, but more as a proven use case.
The latest and greatest version of Cisco Secure ACS can be shipped as an appliance, where it will colour-coordinate with all of the other “cornflower blue” devices in your datacenter (what is the official name of Cisco blue, anyhow?)
And a lovely little box it is too, but underneath the covers it’s just an x86 server, and ripe for hosting on your virtual infrastructure. Being the forward-thinking chaps that Cisco are, they make ACS available as a VM, which is fantastic, but they mandate that the VM has the same hardware spec as the appliance – so much so that if you attempt to install the software onto a VM that does not meet those requirements, it will go into evaluation mode.
I have no doubt that there will be situations and environments that require all 500GB of drive space that the ACS appliance demands, along with its 4GB of memory and dual vCPUs. However, being a fan of the concept that one size does not necessarily fit all, I was asked how I could deploy the appliance into an environment that had enough drive space to hold the required logs, but not the full 500GB. Concerns were also raised about allocating that much RAM to the VM.
One thin-provisioned volume later, and I have a fully functioning ACS VM that meets the client’s requirements without having to purchase additional storage.
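If you want to try the same trick and already have the appliance deployed thick, one route (a sketch only – the datastore paths and file names here are made up for illustration) is to clone the disk to a thin-provisioned copy with vmkfstools from the ESXi console, then re-point the VM at the new VMDK:

```
# Clone the 500GB thick ACS disk to a thin-provisioned copy (hypothetical paths)
vmkfstools -i /vmfs/volumes/datastore1/acs/acs.vmdk \
           -d thin /vmfs/volumes/datastore1/acs/acs-thin.vmdk
```

The guest still sees the full 500GB it insists on, while the datastore only holds the blocks that have actually been written.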
I’ve been lucky enough in the last couple of days to get hands-on with some Cisco UCS kit. Coming from a 99% HP environment, it’s been a very new experience. I’ll try not to get too bogged down in technical details, but wanted to note down what I liked and what I didn’t like about my initial ventures into UCS.
As ever with things like this, I didn’t spend weeks reading the manual – if I did that, I’d do nothing but read manuals with no time to do any actual work! I did go through a few blog posts and guides by fellow bloggers who have covered UCS in much more detail than I will (at this stage at least).
It seems that the unique selling point of the UCS system is “service profiles”: rather than setting up a given blade in a given slot, a profile is created and then either assigned to a specific server or allocated from a pool of servers. The profile contains a number of configuration items, such as the number and configuration of NICs and HBAs a blade will have, and the order in which the server will try devices at boot.
The last item seems the most critical, because in order to turn our UCS blades into stateless bits of tin, I am building the service profiles to boot from SAN. Specifically, they will be booting into ESXi, stored on a LUN on a NetApp FAS2020 storage unit. The NetApp kit was also a little on the new side to me, so I’m looking forward to documenting my journey with that too!
Before heading deep into deploying multiple service profiles from a template, I thought I would start with some (relative) baby steps and create a single service profile, apply that profile to a blade and install ESXi onto an attached LUN, which I would then boot from. A colleague had predefined some MAC and WWN pools for me, so I didn’t have to worry about what was going to happen with those.
Creating the service profile from scratch using expert mode ran me through a fairly lengthy wizard that allowed me to deploy a pair of vNICs and a pair of vHBAs on the appropriate fabrics. A boot policy was also defined to enable boot from a virtual CD-ROM, followed by the SAN boot. At this point I found my first gotcha: it was a lot easier to give the vHBAs a generic name such as fc0 and fc1 rather than a device-specific one, e.g. SRV01-HBA-A. Using the generic name would later allow me to use the same boot policy for all servers at a template level. You also have to specify the WWPN for the SAN target, and as the lab only had a single SAN at the time of writing, a single set of WWPNs could be put in. If you had requirements for different target WWPNs, you would need a number of boot policies.
Working our way back down the stack to the storage, the next task was to create the zone on the Nexus 5000 fabric switches. For Cisco “old hands”, here is a great video on how to do this via an SSH session.
I had just spent a bit of time getting a local install of Fabric Manager to run, due to the local PostgreSQL db service account losing rights to run as a service (which was nice), so I was determined to use Fabric Manager to define the zones. As with zoning on any system, you need to persuade the HBA to log into the fabric. As a boot target had already been defined, the blade will attempt to log into the fabric on startup, but it did mean powering it on and waiting for the SAN boot to fail. Once this was done, the HBAs could be assigned an alias, then dropped into a zone along with the WWPN of the storage, and finally rolled up into a zone set. Given that UCS is supposed to be a unified system, this particular step seems a little bit clunky and would take me quite some time if I had 100 blades to configure. I will be interested to see if I can find a more elegant solution in the coming weeks.
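For reference, whether you drive it from Fabric Manager or an SSH session, the zoning boils down to NX-OS config along these lines (a sketch only – the VSAN number, zone names and WWPNs are placeholders, with the first pwwn standing in for the blade vHBA and the second for the NetApp target port):

```
! On each Nexus 5000 fabric switch (values are placeholders)
zone name SRV01_BOOT vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01
  member pwwn 50:0a:09:81:00:00:00:01
zoneset name FABRIC_A vsan 10
  member SRV01_BOOT
zoneset activate name FABRIC_A vsan 10
```

Remember you’d do the equivalent on both fabric switches, one per vHBA.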
Last but not least, I had to configure a disk. For this I used NetApp System Manager to create a LUN and associated volume. I then added an initiator group containing the two HBA WWPNs and presented the LUN to that group. Again, this seems like quite a lot of steps when provisioning a large number of hosts. Any orchestration system to make this more scalable would have to be able to talk to UCS or the fabric to pull the WWPNs from, provision the storage and present it accordingly.
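Those System Manager clicks map to a handful of Data ONTAP 7-mode commands, roughly as follows (a sketch – the aggregate, volume, LUN and igroup names, sizes and WWPNs are all placeholders):

```
# Create a volume and boot LUN, then present it to the blade's two vHBAs
vol create bootvol aggr0 20g
lun create -s 10g -t vmware /vol/bootvol/esx01_boot
igroup create -f -t vmware esx01_grp 20:00:00:25:b5:aa:00:01 20:00:00:25:b5:bb:00:01
lun map /vol/bootvol/esx01_boot esx01_grp 0
```

Mapping the boot LUN at LUN ID 0 keeps things simple for the SAN boot policy.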
The last step was to mount an ISO to the blade and install ESXi. This is the only step where I’m still pondering how I would do the install if it were not 1 but 100 hosts I had to deploy – I’d certainly look to PXE boot the servers and deploy ESXi with something like the EDA. By this stage I figured it was time to sit back with a cup of tea and ponder further about how to scale this out a bit. However, when I rebooted the server post-ESXi install, instead of ESXi starting, I was dumped back to the “no boot device found: hit any key” message.
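To give an idea of what that PXE approach might look like, here’s a minimal ESXi 4.1-era kickstart file of the sort a deployment appliance would serve up (a sketch – the URL and password are obviously placeholders, and you’d want to sanity-check the directives against your build):

```
# ks.cfg served to the blades over PXE (values are placeholders)
accepteula
rootpw ChangeMe123
autopart --firstdisk --overwritevmfs
install url http://deploy.example.local/esxi41/
network --bootproto=dhcp --device=vmnic0
reboot
```

With boot-from-SAN, “firstdisk” would land on the zoned LUN, which is exactly what we want here.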
This was a bit of a setback, as you can imagine, so I started to troubleshoot from the ground up. Had I zoned it correctly? Had I presented it correctly? Had I got the boot policy correct? I worked my way through every blog post and guide I could find, but to no avail. I even attempted to recreate the service profile on the same blade, but again no joy – it would see the LUN to install from, but not to boot from. As Charlie Sheen has shown, “when the going gets tough, the tough get tweeting”, so I reached out to the hive mind that is Twitter. I had some great replies from @ChrisFendya and @Mike_Laverick, who both suggested a hardware reset (although Mike suggested it in a non-UCS way). The best way for me to achieve this was to “migrate” the service profile to another blade. This was really easy to do, and one reboot later I was very relieved to see it had worked. It seems that sometimes UCS just doesn’t set the boot policy on the HBA, which is resolved by reassociating the profile.
I look forward to being able to deploy a few more hosts and making my UCS setup as agile as the marketing materials would suggest!
And this time it’s not because I’ve passed a certification – in fact, I recently sat the beta for the new VMware Certified Associate, Desktop exam, covering many of the things a VMware View 4.5 admin would do on a day-to-day basis. Not being a View admin, I found it a little harder than that, but time will tell 🙂
The reason for new cards is a change of role – I might have let it slip on Twitter / LinkedIn during my notice period that I was moving on from my current position with Deloitte to pastures new. I won’t be eating my words over one of my former blog posts, which may or may not have gently poked fun at the “bloggers in bowling shirts” from EMC, either, as I’m still not a vSpecialist! I’ve had a fantastic 4-and-a-bit years in my current role, taking me from “just another Windows engineer” to really finding my niche in virtualisation, and a bit of a segue into social media!
I shall be starting tomorrow with ONI, one of the UK’s fastest-growing Cisco partners, as a project engineer within the datacenter team. I have some pretty big boots to fill, as the rest of the team all hold the Cisco CCIE certification, so, as the token non-networky guy, I’d be lying if I said I wasn’t a little apprehensive about working with quite such a smart team! With the two VCAP exams under my belt, I’d like to think that a VCDX will show that I’m capable of running at the same speed.
I will still be continuing to blog and record the podcast, hopefully with some new experiences as I get to grips with the UCS range of kit and some equally shiny storage to hook it up to. While I’ve had a great time as an infrastructure architect, I have missed getting my hands dirty with systems, so the change is very much welcome.