This is a response to Gabrie van Zanten’s question – Design question: Why vCenter Server Datacenter?
To summarize, Gabe argues that the datacenter object in vCenter creates artificial limits on his operational abilities and asks why he shouldn’t just use folder objects across sites. Gabe postulates that perhaps the default for our designs should be to avoid multiple datacenter objects.
There are several reasons to use the datacenter object in your design. Primarily it’s there as a logical container for items that you want to create limits around. Yes, that’s right, you want to artificially limit mobility in the design because you recognize that those items have characteristics which should be contained. A folder object can contain those same sub-objects, but it’s precisely because you recognize that the physical equipment represented in the containers is best grouped together that you use datacenters.
To recap, the vCenter hierarchy is important for many reasons. Quoting from our VMware vSphere Design book:
The inventory structure creates a delineation that serves a number of purposes. It helps you organize all the elements into more manageable chunks, making them easier to find and work with. Monitoring can be arranged around the levels with associated alarms; events trigger different responses, depending on their place in the structure. You can set security permissions on hierarchical objects, meaning you can split up permissions as required for different areas and also nest and group permissions as needed. Perhaps most important, the inventory structure permits certain functionality in groups of objects, so they can work together.
It is not only VM mobility that is contained within a datacenter object. If you switch views in the vSphere client to Storage (called Datastores in the Windows client), you see that datastores and datastore clusters are contained within a datacenter. You can create folders in there, but they are specific to that view. Datacenter objects span every view. If you switch to the Networking view, datacenters are the containers for vDS and Port Groups. There’s a reason for this. The crux of datacenter objects, and what they give you over and above folders, is that they logically delineate where the network and storage boundaries are (and by association, host boundaries as well). Your design identifies these physical confines and implements them in vSphere using these logical objects.
- Where will you stretch your layer 2 networks?
- How far are you going to stretch your VMs from their storage (IP and/or FC)?
When you design your vCenter hierarchy, these choices are affected by things such as bandwidth, latency, storage fabric topology, etc.
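To make the boundary idea concrete, here is a small illustrative sketch – plain Python, not a vSphere API, and all names are hypothetical – that models why a datacenter object constrains mobility while a folder is purely cosmetic:

```python
from dataclasses import dataclass

# Toy model of the vCenter inventory: a datacenter groups hosts that
# share network and storage boundaries; a folder is organisational only.

@dataclass
class Host:
    name: str
    datacenter: str   # logical boundary: shared vDS, datastores, L2 reach
    folder: str       # cosmetic grouping, no functional effect

def vmotion_allowed(src: Host, dst: Host) -> bool:
    """A VM can only move where its network and storage still exist,
    i.e. within the same datacenter object. Folders play no part."""
    return src.datacenter == dst.datacenter

esx01 = Host("esx01", datacenter="London", folder="Prod")
esx02 = Host("esx02", datacenter="London", folder="Test")
esx03 = Host("esx03", datacenter="Paris",  folder="Prod")

print(vmotion_allowed(esx01, esx02))  # True  - same datacenter, folders differ
print(vmotion_allowed(esx01, esx03))  # False - folders match, boundary doesn't
```

The point of the sketch: the folder attribute never appears in the mobility check, which is exactly why folders alone can’t express your physical confines.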
If you recognize a datacenter as a separate physical location, then in the majority of cases you’ll split the location’s components into a separate datacenter object. There are certainly cases where this decision becomes less clear – for example, a campus-style design where server rooms are only a couple of kilometers apart. Such rooms may or may not be commonly recognized as two distinct sites, but it’s feasible that with the right dark fiber links you could logically treat them as one datacenter. To quote from the vSphere Design book again:
Remember that despite the moniker, a datacenter doesn’t necessarily have to align with a physical datacenter or server-room location. However, network and storage connections do tend to be determined by geographical location, so it’s common to see this parallel used.
You can stretch layer 2 networks and storage across much larger distances, and this can provide very interesting highly available solutions, but this requires a substantial amount of planning. For example, during regular operations you don’t want your VMs’ disks on the remote storage array.
The vSphere datacenter object should be used in your design precisely because of the limitations it creates on your operational mobility. Without it, if you elect to use only folders, you’ll need to create extremely complex operational processes to prevent problems. So I say, “by default, use datacenter objects to represent your datacenters” – they’re essential constructs in your design.
I’m delighted to reveal that the new version of the VMware vSphere Design book is now available from leading book stores.
The electronic versions are widely available now, and hard copies can be pre-ordered and should ship within the next couple of weeks.
This revised, updated, and largely rewritten second edition of VMware vSphere Design has been thoroughly overhauled to encompass all the great new changes that have been introduced in vSphere up to and including version 5.1. We’ve been blown away by the sheer volume of improvements and additions to this product. Every area of vSphere design has been affected deeply, and the revamped book reflects this.
In addition to the changing landscape of vSphere in the datacenter, the book now incorporates another key tenet of VMware’s datacenter portfolio: vCloud Director, its private/public cloud integration piece. This emerging technology is now deeply intertwined in the future of vSphere and becoming an essential skill for anyone currently involved or interested in vSphere design.
I’ve been asked several times if a book like this, which is so focused on design concepts, has really changed that much since the 4.x original edition. Well, for starters the book has grown by around 150 pages – so right off the bat you can see there is a lot more goodness squeezed in. While I appreciate many of the fundamental principles remain, we pored over every section to ensure that it reflected the vSphere 5.x toolset. For example, chapter 2 in the first edition was based on the design choice of ESX or ESXi as the hypervisor. For this edition I largely rewrote the chapter. It now dives under the covers of ESXi to explore how the image ticks, looks at how to deploy it across different environments, compares the design impacts of stateless versus installable ESXi, and explains how to configure and then manage the image. So yes, even if you have already read the first edition I highly recommend upgrading to this new release.
You can download the book’s introduction section here, which describes in detail what the book covers and why you’ll find it an essential addition to your technical library:
As with the first edition, we were very fortunate to have Jason Boche (blog/twitter) as our Technical Editor. I’d also like to personally extend my thanks to Maish Saidel-Keesing (blog/twitter); as an author on the first edition his involvement naturally bleeds through to this edition.
We’re all immensely proud of this book and truly believe that it’s a great resource for learning about vSphere design. We hope you snag yourself a copy and enjoy reading it as much as we enjoyed writing it.
This week I sat VMware’s VCAP5-DCD exam (and I’m proud to say I passed). As many have commented before me, time is a real constraint (or is that a risk?). I won’t list out the resources I used to prepare for it; suffice it to say that Gregg Robertson has an excellent post that covers this. Although I have heard about this fantastic design book out there…
Obviously I can’t reveal anything about the content of the exam, but I did want to highlight a small but important change to the exam format that’s happened recently. The exam consists of a mixture of multiple choice, drag ‘n drop and a handful of Visio-style questions. According to Jon Hall, VMware’s certification developer, these diagrammatic questions can account for around half the available points. The most common DCD advice I hear revolves around how to apportion your time to these few but critical questions. The recommendation is that you skim through the exam once, answering all the multiple choice questions and flagging the rest. Then you can allocate the remaining time to the big and arguably more important questions. Personally I always read that advice and thought it was topsy-turvy; I was planning to tackle the diagrams first.
However the latest version of the official exam blueprint, dated October 26th, has changed its wording and now states:
Once you have provided a complete answer or design for a given exam item and advanced to the next item, you will NOT be allowed to return to that item and the item cannot be flagged for later review. Please ensure when taking the exam that you have completed each answer and/or design before continuing to the next item. Drag-and-drop items and Design items will prompt you for confirmation that the item is complete before advancing to the next item.
Fortunately I noticed this the day before I sat the exam, but because I hadn’t heard anything about it in the community, I wasn’t sure how it would affect things.
Here’s what I found when I sat the exam. Right at the beginning, just before I hit the start button, the instructions page told me clearly that I was going to get 94 multiple choice and drag ‘n drop questions, and 6 diagram questions. I believe this ratio can vary, and I’ve heard of folk getting only 4 or 5 diagram questions. As I progressed through the questions, each page only had a Next button. There was no option to go back, and no option to flag any questions. As the blueprint paragraph states, you get a confirmation dialogue box for the bigger questions to make sure you haven’t accidentally clicked Next. On question 100, I picked my answer, hit Next, and that was it. Straight to a congratulations/commiserations score page. So there is only one direction you can take, and that is forward.
In retrospect I think it’s a good move on VMware’s part. I don’t like seeing these strategies float around that can give an advantage if you happen to know the special handshake. Now everyone has to address the questions in the same way. It certainly resets the dynamic, and you really have to concentrate on how you want to spend your precious 225 minutes. I’ll be honest, there were plenty of multiple choice questions where I didn’t even read the scenario – I just scanned the actual question and selected what I thought the most likely answer was. There just wasn’t the time to analyse everything properly, and on these low-value questions I had to take a chance. I literally finished with less than 2 minutes on the clock. I appreciate the need to test candidates’ ability to resolve problems quickly, assess their time management skills and keep the pressure on; but to me that ability is more suited to break-fix scenarios. That is what the DCA is for. For me, design analysis should be more considered.
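As a rough back-of-envelope exercise (my own assumptions, not official VMware guidance): if the handful of diagram questions really do carry around half the points, then splitting the clock in proportion to points looks something like this:

```python
# Hypothetical time budget for a 225-minute sitting with the 94/6 split
# I was given; your question counts and point weightings may differ.
total_minutes = 225
diagram_qs, other_qs = 6, 94

# Assumption: the ~6 diagram questions are worth ~half the available
# points, so they get ~half the clock.
diagram_budget = total_minutes / 2    # 112.5 minutes
other_budget = total_minutes / 2      # 112.5 minutes

per_diagram = diagram_budget / diagram_qs   # minutes per diagram question
per_other = other_budget / other_qs         # minutes per remaining question

print(f"{per_diagram:.2f} min per diagram question")   # 18.75
print(f"{per_other:.2f} min per other question")       # 1.20
```

Barely over a minute per multiple choice question, which matches my experience of scanning rather than reading the scenarios.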
Anyway, forewarned is forearmed, as they say. Hopefully some of you will read this and won’t be shocked when you discover the change partway through the exam.
Now that vSphere 5.1 has been announced, it’s time to update the reference card. Due to another project I’m working on, I won’t have the spare cycles to devote time to this for the next couple of months. Those of you who attended VMworld and caught the session that Scott Lowe and I presented will know what I’m talking about. If you don’t know, don’t worry, I’ll write about the project here soon.
I expect that I’ll be able to start updating the card to 5.1 in November. What would be great in the meantime is this: if you spot anything which needs updating, let me know in the comments below. If you’re feeling particularly focused, then grab a section and try to list all the updates required and any new features which you think need to be added. I collate these cards and distribute them under a liberal license for everyone in the community to use as freely as possible. It would be great if we can get some feedback from the community itself this time around. Doing so will ensure that it’s released as quickly and accurately as possible.
P.S. I don’t plan on updating my Documentation Notes. I only update those on major releases, i.e. vSphere 4.0, 5.0, etc. I suggest you grab the 5.0 notes and supplement them with the latest What’s New in vSphere 5.1 whitepapers.
Following some great feedback from Brandt, Bjorn and Jakk recently in the comments, I’ve created a small update to the reference cards. Thanks guys! As ever, if you spot anything you think needs correcting, or new changes that need to be applied, then let us all know.
As usual, you can grab the latest copy over here.
I’ve made a couple of minor corrections to the networking section of the vReference Card:
Port groups per vSS = 256
Hosts per vDS = 350
Thanks to FerFables, Daniel M, Rene Reitinger, and Alex for pointing out these two typos!
As usual, you can grab the latest copy over here.
At long last, the vSphere 5.0 vReference Card is ready. Go and grab it over here. This time I’ve split it up into both an A4 version and a separate Letter size one. This should hopefully make the printing experience more consistent regardless of which side of the pond you’re on.
New with this release is a full page version. This contains the same information as the card, but I’ve increased the text to a less eyeball-screamingly small size. This should make it much more conducive to reading as a study guide, or if you want to bone up on a particular area.
As ever, the card is a work-in-progress, so let me know if you spot any additions or updates you think are needed and we can improve this resource for everyone.
If you are linking to the card, please give out the page address not the card itself. This helps direct everyone to the latest version.
I think I might have stumbled across an interesting design conflict.
UCS Boot from iSCSI SAN support
Cisco UCS manager 2.0 now offers the ability for their blade servers to boot from iSCSI SAN. In the release notes it states:
iSCSI Boot - iSCSI boot enables a server to boot its operating system from an iSCSI target machine located remotely over a network.
Sounds good to me. I know a lot of blade aficionados were looking forward to this addition, as Boot from SAN and blades are a popular combination. Digging a little deeper, it appears that during the install of non-Windows OSes the NICs offer an iBFT setup, which to me indicates they are considered “Dependent HW NICs” in VMware parlance. The adapters are configured with iSCSI settings in the card’s firmware and handle some offload, but they are more similar to SW initiators than outright iSCSI HBAs in that the VMkernel is still responsible for most of the day-to-day storage traffic. From the latest UCS Manager 2.0 Configuration Guide, page 392 states:
The iBFT works at the OS installation software level and might not work with HBA mode (also known as TCP offload). Whether iBFT works with HBA mode depends on the OS capabilities during installation.
followed on page 393 by:
only Windows OS supports HBA mode during installation
VMware ESXi 5.0 boot from iSCSI SAN support
Now we flick over to the VMware vSphere 5.0 Storage Guide, on page 100:
With independent hardware iSCSI only, you can place the diagnostic partition on the boot LUN. If you configure the diagnostic partition in the boot LUN, this LUN cannot be shared across multiple hosts. If a separate LUN is used for the diagnostic partition, it can be shared by multiple hosts. If you boot from SAN using iBFT, you cannot set up a diagnostic partition on a SAN LUN.
VMware vSphere 5.0 Dump Collector
Lastly we need to refer to VMware KB article 2000781 regarding support of its Dump Collector tool which states:
The vSphere ESXi 5.0 Network Dump Collector feature is supported only with Standard vSwitches and cannot be used on a VMkernel network interface connected to a vSphere Distributed Switch or Cisco Nexus 1000 Switch.
Do the hokey cokey and turn around
So still following along? I’ll join the dots now so you can see where I’m going with all this… Once you’ve installed your shiny new UCS chassis and blades, you see the freshly-released boot from iSCSI SAN support and decide to install the latest and greatest ESXi 5.0 as your hypervisor of choice. Unfortunately VMware doesn’t create a diagnostic partition during the install, because ESXi sees the iSCSI adapter using iBFT. No problem, you think; you can set up the new centralized Dump Collector to make sure those diagnostic dumps don’t get lost during a kernel panic. Bang, but you’re using a Distributed Switch – uh oh spaghettios. Let’s face it, the sort of datacenter that uses Cisco UCS blades, and the sort of environment that would consider Boot from SAN ESXi installs, is very likely to be using vDS or 1000v switches in its configuration.
Now I realize that not having a diagnostic partition is not the end of the world. You can still install and run ESXi fine without it. However, if you are using UCS with ESXi and were thinking about Boot from iSCSI as an option, then you should realize that you’re likely not capturing kernel dumps. I’m sure that is not what most folk expect. Just a curious design quirk that might be useful to highlight.
There is a design workaround for this. You could create a separate 110MB partition on each blade’s local disk and redirect the dumps there. But that kinda defeats the point, doesn’t it? Or you could use a shared SAN LUN and point all your hosts there. Just remember to be quick and grab the dump immediately after a crash, or the next host crash will overwrite it. Not great options, I agree, but they’re there if you *really* want to go this way…
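Joining the dots from the three quoted documents, the support matrix can be sketched as a quick decision helper – a toy Python encoding of the statements above, not an official VMware tool, and the string labels are my own:

```python
# Toy decision helper encoding the quoted support statements:
#  - independent HW iSCSI boot: diagnostic partition on the boot LUN is OK
#  - iBFT boot: no diagnostic partition on a SAN LUN, so the host depends
#    on the network Dump Collector
#  - Dump Collector: supported only on standard vSwitches (not vDS/1000v)

def dump_capture_ok(boot_method: str, vmkernel_switch: str) -> bool:
    """Can this host capture a kernel dump somewhere?

    boot_method: "local", "independent_hba", or "ibft"
    vmkernel_switch: "vss" (standard) or "vds" (distributed / 1000v)
    """
    if boot_method in ("local", "independent_hba"):
        # A diagnostic partition can live on local disk or the boot LUN.
        return True
    # iBFT boot: only the network Dump Collector remains, and it
    # requires a VMkernel interface on a standard vSwitch.
    return vmkernel_switch == "vss"

print(dump_capture_ok("independent_hba", "vds"))  # True
print(dump_capture_ok("ibft", "vss"))             # True
print(dump_capture_ok("ibft", "vds"))             # False - the UCS quirk
```

The last line is exactly the combination described in this post: iBFT boot plus a distributed switch leaves no supported home for the dump.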
Here’s a preview of the Storage section from the upcoming vSphere 5 vReference Card.
This is the last section I plan to include, so I hope to get a full beta of the card out in the next week or so. Formatting can be onerous and usually takes longer than I expect, but it’s not too far off. I laid it all out last week, but it takes up about 50% more real estate than 2 sides of A4/Letter paper provides. It calls for some clever typographical ingenuity to squeeze it in while still making it vaguely legible without a sub-atomic microscope.
I realize there are lots of VCP4s out there that only have until the end of February to upgrade, so I know folk are keen that I’m finished soon. Until then you can still grab each section individually: Networking, Resources, Availability, VM, vCenter, Install, Hosts, and the Storage bit below.
To help expedite the process, make sure you let me know if you spot anything which needs correcting in any of these sections (or anything you think I should add or remove). Anything still in grey is an area I’ve not yet been able to confirm is still valid with vSphere 5.
Just drop your comments below or catch me on twitter (@forbesguthrie).
Click on the images below to see it full size or you can view/print it as a PDF here.
VMware have recommended for quite some time that we stick to multicast when configuring NLB (MS’s Network Load Balancing) where possible:
VMware recommends that you use multicast mode, because unicast mode forces the physical switches on the LAN to broadcast all Network Load Balancing traffic to every machine on the LAN.
If you need to use unicast, then to prevent port flooding you should change the Port Group’s “Notify Switches” policy to No - the default being Yes.
Windows 2008 R2 Failover Clustering
According to this white paper from Microsoft,
Multicast functionality has been discontinued in Windows Server 2008 failover clustering, and cluster communications now use User Datagram Protocol (UDP) unicast.
So Microsoft clustering gurus, does this mean that for Windows 2008 R2 Failover Clusters we should also change the “Notify Switches” policy to No? Is the recommended setting for MS NLB clustering now applicable to MS’s latest version of MSCS?