Archive for February, 2010


My copy of “Foundation for Cloud Computing with VMware vSphere 4” arrived yesterday as a result of the mini Twitter competition I won a few weeks back.

It’s one of the smaller volumes (just over 100 pages) on my Virtualisation Bookshelf, but size most definitely isn’t everything, especially when you look at the calibre of its authors. John Arrasjid (@vcdx001), Duncan Epping (@DuncanYB) and Steve Kaplan (@roidude) are all very well known in the VMware community for the quality of their whitepapers, blogs and even comics!

The book isn’t designed to give you a how-to from start to finish of building your own cloud solution, but it is a fantastic overview, starting off with the virtualisation basics, essentially the “what”, “why” and “how” of virtualisation at a reasonably high level. Common use cases for virtualisation are covered, from branch offices to server consolidation and test labs.

The book then focuses a little on how various products from VMware can fit into the environment, what benefits they bring and a very brief description of how they are deployed.

A virtual infrastructure is not much without workloads to put on it, so a whole chapter is devoted to methodologies and approaches for moving your existing workloads from physical or legacy virtual environments into your VMware vSphere 4 environment. Once those workloads are in place, the book covers some of the things to look for when optimising that environment, be it via resource pools or additional vSphere 4 features such as DRS. High availability, disaster recovery and security are also covered. The final chapter covers VMware View, VMware’s virtual desktop offering.

That’s a lot to cover in 100 pages, but I felt the book achieves what it sets out to do, and that’s to give someone a good foundation in vSphere 4. It would be an excellent aid when explaining to colleagues and managers what we can and can’t do with our environment. It’s going to stay within arm’s reach of my keyboard, ready to pass to anyone interested in virtualisation, for a good time to come!

If you’d like to obtain a copy for yourself (though yours won’t be signed like mine is!) then you can do so from here:

http://www.sage.org/pubs/21_vSphere4/

I’ve been trying to get some work done towards my VCDX qualification for some time. With the first part of the qualification, the Enterprise Admin exam, booked for April, I felt it was a good time to put pen to paper and make a few study notes along the way.

If you are considering your VCDX qualification in the not too distant future, then check out the “Brown Bag” sessions – an idea conceived by Tom Howarth of planetvm.net and Cody Bunch of professionalvmware.com. Tom and Cody’s posts have inspired me to blog some of my study notes. Hopefully this will make it stick for me, and if it’s a point I need to improve on, it might just stick with you too…

Working in a 100% Fibre Channel shop at the moment, I don’t get much day-to-day exposure to iSCSI storage technology, so I have always felt it was a slight weak point in my skillset. As a result, it seemed the natural choice to start my command line revision with.

Scenario/Aim

I have a spare host that I’m planning to use for this ‘mini-lab’ – it currently has plenty of space on a local datastore. It’s also running ESX 3.5 Update 5.

The aim is to install a VM to provide some virtual storage, which I’m going to present back to the host in two forms: an iSCSI datastore and also a regular share via NFS (which could be handy for hosting build ISO files on at some point!).

As I don’t really want anything else connecting to my test environment, I shall be hosting the storage on an internal vSwitch with no uplinks; of course, in a ‘real’ environment you would use a regular vSwitch.

In order for the Service Console and VMkernel to be aware of the storage VM, I will need to connect a Service Console port and a VMkernel port to the internal vSwitch, as well as a NIC from an administrative VM so that I can set the storage up.

I’ve chosen to use Openfiler for the storage VM – in previous lives I have used virtualised FreeNAS boxes, but Openfiler has a good level of community support and well documented setup guides for use with ESX.

I won’t reproduce the guides for the Openfiler config word for word here, but instead link you to Simon Seagrave’s excellent guide for Openfiler setup and config at http://www.techhead.co.uk/how-to-configure-openfiler-v23-iscsi-storage-for-use-with-vmware-esx

As well as presenting a 5GB LUN for iSCSI, I have also enabled the NFS service on the Openfiler appliance and created a 4GB shared volume for use with NFS. I’m also aiming to complete the tasks using just an SSH session to the host, though I’ll log into the VI client to verify things.

Network Setup.

Create test internal switch:

esxcfg-vswitch -a Test
Create test internal port group:

esxcfg-vswitch -A TestPG Test
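
The VMkernel NIC added below goes into its own port group, TestVMK, rather than TestPG, so assuming that port group doesn’t already exist on the host, create it on the same vSwitch first:

esxcfg-vswitch -A TestVMK Test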
Add VMkernel Port:

esxcfg-vmknic -a -i 192.168.0.1 -n 255.255.255.0 -m 9000 TestVMK

Add Service Console Port:

esxcfg-vswif -a vswif1 -p "TestPG" -i 192.168.0.4 -n 255.255.255.0

After connecting my admin VM to the vSwitch, I end up with a switch looking like this:

Test networking setup
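
If you’d rather sanity-check the layout from the console instead of the VI client, the list switches of the same tools will show what has been built:

esxcfg-vswitch -l    # list vSwitches and their port groups
esxcfg-vmknic -l     # list VMkernel NICs
esxcfg-vswif -l      # list Service Console interfaces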

The easy step is to mount the NFS store. This is done with the esxcfg-nas command as follows:

esxcfg-nas -a -o 192.168.0.2 -s /mnt/nfs/testnfs/TEST/ TEST
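
To confirm the mount without leaving the SSH session, the same tool will list what is currently mounted:

esxcfg-nas -l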

A quick check back in the VI client shows the NFS datastore (small as it is) ready for use.

NFS store mounted successfully!

(Yes, I know one of those datastores is dangerously full 🙂 )

Enabling the iSCSI connection is going to take a few more commands. Not only do we have to enable the software iSCSI initiator, but we also have to partition the LUN and then format the VMFS datastore.

First things first – enable the software initiator.

esxcfg-swiscsi -e

This will add an extra vmhba to the host. As I’m running ESX 3.5, that is always going to be called vmhba32; if I was running ESX 3.0 it would have been vmhba40.

iSCSI vmhba
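
If you want to double-check that the initiator really is enabled before carrying on, esxcfg-swiscsi has a query switch (at least on my build) that reports its state:

esxcfg-swiscsi -q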

I then need to connect the vmhba to the target device. Note that the -D switch sets the Dynamic Discovery mode.

vmkiscsi-tool -D -a 192.168.0.2 vmhba32

Once this is done, scan the HBA for targets. This can take a few moments to complete.

esxcfg-swiscsi -s
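
Once the rescan completes, a quick way to see whether the new LUN and its path have turned up, without switching to the VI client, is to list the paths the host can see:

esxcfg-mpath -l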

Where does this leave us? We have a target LUN presented to the host, but no partitions on it. From the VI client it would be quite easy to create a datastore on it, but this isn’t about doing things the easy way 🙂

You’ll need to create a partition on the disk, but which disk?

Issuing the following command gives me this:

esxcfg-vmhbadevs

The Service Console has assigned /dev/sdg to the disk, which we can then partition with fdisk:

 fdisk /dev/sdg

Hit ‘n’ to create a new partition, then accept the defaults.

However, we’re not out of the woods yet – the partition type needs to be changed to VMFS.

Hit ‘t’ to change the partition ID, then enter ‘fb’ for VMware VMFS. Finally, hit ‘w’ to write the changes to disk.
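
If you’d rather not answer the prompts by hand, the same keystrokes can be piped into fdisk in one go – a sketch only, and it assumes a blank disk where accepting the default cylinder values (and letting ‘t’ auto-select the single partition) is safe:

printf 'n\np\n1\n\n\nt\nfb\nw\n' | fdisk /dev/sdg    # new primary partition, whole disk, type fb, write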

Now that we have a partition of the correct type, we can format a VMFS datastore on it:

vmkfstools -C vmfs3 -S test-iscsi /vmfs/devices/disks/vmhba32:0:0:1
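
Before heading back to the VI client, you can query the freshly created volume from the console (test-iscsi being the label passed with -S above):

vmkfstools -P /vmfs/volumes/test-iscsi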

Refreshing the storage on the VI client shows the datastore sitting happily there.

All datastores in place…

I hope this has been helpful – it’s certainly been a good refresher for myself, not having played with iSCSI since my VCP 3.5 revision. All comments are welcome here or on Twitter (@chrisdearden).

Clusters that is?

I’ve spent a good amount of time playing about with ideas on how we should arrange a number of new hosts due into our data centre.

They are currently split according to their stage in the lifecycle (Prod / Stage / QA / Integration / Development / Disaster Recovery), however the new data centre will only be housing non-production (QA and “below”) and Disaster Recovery VMs. In order to get the best out of the hosts, going for large clusters of 8-12 hosts seems to make sense, but should the next cluster we purchase / migrate also be a mixed non-prod one?

The problem I can see with building up a number of smaller clusters per lifecycle stage is that when it comes to that time of year when new hardware is purchased, will it be CPU compatible with the existing hardware, or will we be forced to start a new cluster for those hosts?

I’d be interested to hear how other VM admins arrange their clusters – do you push for maximum efficiency, or stick to a more logical layout?

Mike has finished the video editing on the Skype-based “chinwag” I had yesterday – sadly we couldn’t get it up on YouTube, but there are some links here.

We chatted about large VM migrations, snapshots and some of the other day-to-day challenges I face.

part 1
part 2

You can also get it as part of Mike’s podcast series, here. If you’d rather listen to the MP3, it’s here.

for my Chinwag with Mike Laverick – I *think* I should have plenty to chat about. If I don’t, then I’m sure Mike won’t post 10 minutes of awkward silence…