It’s a popular thing to do at this time of year so I thought I’d grab your attention for a few minutes and tell you why I’m super excited for VMworld this year.
This year will be my sixth VMworld US conference. Every year I get a different perspective. Obviously the technologies and products change and evolve, and as the conference matures and grows, it broadens its scope. It’s easy to say things like “the talks just aren’t as technical as they used to be”, but I think that misses the real value of such a conference. For me the conference’s greatest reward has always been about the opportunity to meet and chat with my peers. The chance to discuss upcoming cool technologies, the design challenges we’re facing, what solutions we’re putting together, how the products and wider markets are changing, growing, diverging and consolidating. As ever I’m looking forward to talking to as many people as possible.
This year Scott Lowe and I will be back with a vSphere Design focused session. We’re looking at how the new software-defined approaches will impact, and can improve, your vSphere design. As ever, Scott and I will do our best to get the discussion flowing in the room. This is always a fun session and I’m really looking forward to being there, talking about the design process and how things like SDN and SDS technologies are changing the landscape.
There are still some seats left, so add it to your session builder line-up, come over, and join us. I was pretty humbled that Duncan Epping put this session on his must-see list: http://www.yellow-bricks.com/2014/07/25/must-attend-vmworld-sessions-2014/. Thanks, Duncan!
I’m also looking forward to Andy Warfield’s session (Andy is Coho’s CTO), a panel with luminaries from Tintri, Pure, Tegile and DataGravity. I know most of the guys on the panel, and it’ll be great to watch them discuss where the storage industry is at, and where it’s going. Submit your questions beforehand, head along, and see whose crystal ball provides the most clarity.
Solutions Exchange hall
Another thing I’m excited about this year is the experience of working in the Solutions Exchange “on booth duty”. This will be my first year attending VMworld while working for a vendor. One of the great privileges of my job as a Technical Product Manager is the chance I get to chat with users about their infrastructure needs. There isn’t a single place on the planet that’s as densely packed with fellow nerds that I can chat to about the cool stuff we do at Coho. VMworld to me has always been about geeking out over awesomesauce technology. The difference this year is that I’m expected, nay employed, to gas with others who share the same passion for smoking-hot tech as me.
Come over to the Coho Data booth 835 and tell them Forbes sent you. Chances are I’ll be there and I can show you some of the new stuff we’ll be revealing at the show.
BTW, if you’re a vExpert, sign up for some extra goodies here: http://info.cohodata.com/VMworld-2014_vExperts.html
New Coho Stuff
Which leads me on to the next thing I’m excited about at the show. Andy Warfield has already alluded to a few areas that Coho will be showcasing. We’ll be presenting some sweet previews of things like our OpenStack support and our vCenter Plugin that we’re in the process of building at Coho.
I think the really interesting bits are the early signs of where we’re taking Coho Data in the future. We already have a fascinating story around our ability to scale out both performance and capacity, but as you look at some of the new hardware options in the booth you start to understand how different our offering is from other products in the vSphere marketplace. We’ll be able to scale performance and capacity independently of each other. We’re looking at providing all-flash storage devices that use our granular auto-tiering where SAS flash is the lower performance tier! Outrageous. Imagine being able to add 4U capacity behemoths chock-full of SATA disks totalling close to ½ PB of raw disk. All managed as one device. Like Lego bricks, you add the bits you need to fit your own environment. And when your needs change, you add only the bricks required. That’s part of the vision.
The second bit to this is something we’re calling Cascade. Having a tremendously flexible storage system is cool-an’-all, but we know how complex it can be to figure out what storage is really needed, particularly when you’re trying to predict a capacity management strategy for the next 5 years. Capacity itself is usually the easy bit, but understanding your performance needs is like milking unicorn tears. Unless you’re the bravest of technical architects, you’ll add in a fudge factor of muchos dolares. Cascade aims to figure this out for you. No need to wait until your users are hammering the service desk, complaining that their mega_corp_mission_critical_app is loading like molasses again, or that their VDI desktop now takes longer to boot than it takes them to grab the first cup of morning joe – “whose idea was it to replace my computer with this virtual thing?”. Yep, we’ve all heard it before. Cascade understands your actual workload and shows you what type of bricks you need to add. It can tell you how much better your life would have been last week, during that monthly billing cycle run that fell over just as you were about to tuck into that delicious double-double animal-style, if you had added another shelf of this or that. That’s where we’re going, and you’ll see previews of the hardware and the software at VMworld.
Come say hi
When I’m not in the booth, I expect I’ll be flying around trying to meet as many folk as possible. There’s a reasonable chance I’ll be in the Hang Space by the bloggers table and the vBrownBag setup cranking out a blog post or catching up on twitter. Get in touch if you want to meet up.
When the conference doors are closed, I still hope to be soaking up as much as possible and talking technology whenever I can. Here’s my plan at the moment:
Community kickoff at Johnny Foley’s: http://twtup.com/6878fiv3e9fjrqz
Hopefully I’ll be at the vBrownBags/VMunderground opening events in the afternoon: http://blog.vmunderground.com/2014/08/05/opening-acts-2014-panelists-moderators/
Then it’s v0dgeball Charity Tournament where I’ll gallantly display my distinct lack of hand-eye coordination for the Coho team.
Last, but certainly not least, in the evening I’ll be hanging out at the VMunderground event (tickets still available!):
I’m planning to start off socializing with my brethren from Tegile and SolidFire at their respective Monday night events (looks like I was too slow to grab a Nutanix ticket). I might compete with these guys during the day, but this is one big community where I know several friends that work for these companies. Fortunately VMworld is one of those special occasions where we can leave our nametags at the door and be respectful of each other.
Then off to the main event for the evening, the vFlipCup tournament and tweetup – should be fun.
It’s always a crazy busy evening on the Tuesday. First, the vExpert/VCDX party to meet some community rockstars.
Then on to VMware’s Office of the CTO party in the vaults of the Old Mint Building. This event was hosted there last year too, and what an amazing place for a soiree. If you get invited, make sure you go.
After that I’ll pop over to meet my Veeam buddies (a new venue this year), and finally to the vBacon session down at the Ferry Building for some hard-earned bacon refreshment.
I dunno. Is there a party on Wednesday night?
Although I won’t be there, I want to give a shout out to Crystal Lowe’s Spousetivities event. If your spouse, significant other, or buddy that’s dossing in your conference hotel room while you’re geeking it up during the day is in town, then maybe they’d like to hang out with the other hangers-on. It looks like another great line-up:
I’ve been reading about, poking, prodding and playing with CoreOS recently, and thought it would be good to document how to build a very basic clustered CoreOS setup in your lab. In this first post I’ll describe the CoreOS building blocks and show how to deploy an instance into your vSphere environment.
What the heck is this newfangled CoreOS thingy?
CoreOS is a relatively new Linux distribution. I know, there are a gazillion Linux distros out there. We need another Linux distro like a hole in the head. So what’s so special about CoreOS? Well, for lots of good reasons it’s becoming the darling of cloud distributions at the moment. Just like the much-buzzed-about container platform Docker, CoreOS is making waves as the basis of many cloudy infrastructures (due in no small part to the fact that CoreOS runs Docker apps exceptionally well). It’s oh so de rigueur, and I know the VMware community can’t get enough of playing with the latest-and-greatest lab tools.
This isn’t your father’s oldsmobile
This new Linux platform is a stripped-down distribution based on Google’s ChromeOS toolchain. It can run as a VM (or on bare metal) on your own hardware, and is also available via most of the popular cloud providers. Its initial disk footprint is around 400MB, it uses less than 200MB of RAM, and it boots extremely quickly. It’s basically a Linux kernel using systemd (the same init system already adopted by Fedora and openSUSE, committed for Red Hat’s and Debian’s next releases, and begrudgingly Ubuntu’s replacement for their existing Upstart system).
CoreOS uses dual active/passive boot partitions and updates by downloading a complete image into the passive partition, then activating the new version on the next reboot (yes, just like we’re used to with ESXi! – except this downloads itself). That means there’s no concept of a package manager doing dependency resolution, just a full image being dumped down. This makes patching easy and quick, and provides a get-out-of-jail-free rollback option if there’s a problem.
Applications run within Docker containers, so all the good stuff you’ve heard about Docker is already included, ready to go. And the coolest bit of all this is the inherent clustering architecture. Services and applications can seamlessly and dynamically balance themselves across multiple CoreOS nodes.
The Secret Sauce
Well, not so secret, as it’s all open source. Before I launch into the lab setup, here’s a very quick rundown of the most interesting components that make CoreOS special.
etcd
etcd is a daemon that runs on every CoreOS instance, providing a distributed key-value store. It keeps configuration data replicated across a CoreOS cluster, handles things like leader election among members, allows new nodes, services and applications to register themselves in the cluster, and deals with failures appropriately. If you’re familiar with ZooKeeper or doozerd then you’ll know where this sits. In a vSphere world it’s analogous to the HA service that runs on each ESXi host.
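To give you a feel for how simple etcd is to talk to, here’s a quick hypothetical session (the key name and value are made up for illustration, and it assumes etcd is already running on the node):

```shell
# Write a key into the replicated store; it becomes visible cluster-wide
etcdctl set /services/web/frontend1 '10.20.30.40:8080'

# Read it back from this node (or any other member of the cluster)
etcdctl get /services/web/frontend1

# The same data is exposed over etcd's simple HTTP API
curl -L http://127.0.0.1:4001/v2/keys/services/web/frontend1
```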
Docker
Okay, Docker isn’t unique to CoreOS, but it’s so fundamental to the way CoreOS is designed that I thought a 40,000ft description was important. Docker containers are based on LXC (Linux Containers), using the kernel’s cgroups to isolate resources. Docker’s purpose is to isolate applications, so in that respect you could compare Docker applications to VMware ThinApps: they are completely self-contained apps in isolated sandboxes, allowing you to run multiple versions alongside each other. But LXC is more akin to a very thin, super-efficient type 2 hypervisor, as it can provide an entire Linux userspace to each app with namespace isolation. So it’s more appropriate to think of Docker containers as very lightweight VMs.
Docker’s virtualized containers can start almost instantaneously – no waiting for another OS to boot. Cluster several CoreOS instances together, where the Docker apps can run on any node, store configuration information in a distributed service like etcd, and you start to see where this becomes interesting. Imagine an environment where your applications automatically load balance and fail over to redundant, scalable nodes, without application-specific awareness – it’s like what we’ve been doing with VMs across vSphere clusters, but practically removing the guest OS layer and giving the applications the same mobility as VMs.
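If you haven’t played with Docker yet, the basic workflow really is just a couple of commands. A minimal sketch, assuming a working Docker install (the busybox and nginx images come from the public registry):

```shell
# Pull a tiny image from the public registry and run a throwaway container
docker run busybox /bin/echo 'Hello from inside a container'

# Start a longer-lived container in the background, publishing port 80
docker run -d --name web -p 80:80 nginx

# See what's running
docker ps
```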
Fleet
Fleet is a daemon that takes the individual systemd services running on each CoreOS machine and clusters them across all the nodes. It basically provides a distributed init process manager, and it’s what gives Docker applications their coordinated redundancy and failover services.
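To give you a flavour, fleet consumes standard systemd unit files with some optional scheduling hints in an [X-Fleet] section. Here’s a hypothetical unit (the myapp name and nginx image are just for illustration) that fleet could place on any node in the cluster:

```ini
# myapp.service - a fleet-schedulable systemd unit
[Unit]
Description=A containerized web app
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --name myapp -p 80:80 nginx
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Hint to fleet: never co-locate two copies of this unit on one machine
X-Conflicts=myapp*.service
```

You’d submit it with something like `fleetctl start myapp.service`, and fleet decides which machine actually runs it.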
It’s a fast-moving project with new ways of doing things, so it’s taken a bit of trial-and-error testing to get things working. Certain components aren’t as well documented as they could be. I expect that much of what I write here will be superseded quickly, so if you’re reading this a few months after I publish it then you might want to check around for easier ways to do things.
Time to get started
Download the latest CoreOS release with the following link: http://beta.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip
At the time of writing it was only a tiny 176MB download. It’s worth noting in the URL that I’m downloading this from the beta channel. In a future post I’ll explain the update mechanism and how to switch to a different release channel.
I’m going to be installing this on an ESXi host. The instructions here: https://coreos.com/docs/running-coreos/platforms/vmware/ explain how you can download and use the OVF Tool from VMware to convert their Fusion/Workstation VM for ESXi use (I’m sure you could also use VMware Converter if you had a copy to hand). But I’m just going to do this by hand using the VMDK file included in the download.
I created a new VM as you would normally do in your vSphere Web Client and selected the following options through the wizard:
VM name: coreos1
In a future post I’ll explain how to cluster multiple instances together, so it makes sense to append this first VM with a number.
Guest OS: Linux, Other 2.6.x Linux (64-bit)
Compatibility: I selected vSphere 5.0 and above (HW version 8), but you should be able to pick whatever is appropriate for your environment.
Hardware: Here I removed the existing hard disk and floppy drive and set the memory to 512MB (you could drop the RAM lower, but I wanted some headroom for additional apps I’ll be running).
Note: For this tutorial, you’ll need to initially put the VM on a subnet that has DHCP services enabled.
Next, I copied the “coreos_production_vmware_insecure_image.vmdk” file from the zipped download up to my ESXi host’s datastore.
Now we have to manually convert the VMDK file. SSH into the ESXi host and drop into the newly created coreos1 VM’s directory and run the following command:
vmkfstools -i coreos_production_vmware_insecure_image.vmdk coreos1.vmdk -d thin -a lsilogic
Go back to the newly created VM’s settings, attach the converted disk as an “Existing Hard Disk”, and BOOM, hit the power button.
When the boot process first gets to the login screen it might not have initialized the IPv4 address yet. Wait a few seconds and hit Enter in the VM’s console until an IP address is displayed.
Where’s my password?
To log into your newly birthed CoreOS VM, you need to go back to the downloaded zip file and retrieve the “insecure_ssh_key” file. Use the following command to log in remotely with the key:
ssh -i <path_to_key>/insecure_ssh_key core@<your_coreos1_ip>
As this is just my home lab environment I won’t replace the default insecure key, but obviously this is something you’ll want to fix if you’re using this in a production setting.
The End of the Beginning
So we’ve finished the first step by deploying a single CoreOS VM. By itself it doesn’t do anything particularly exciting, but in the forthcoming articles you’ll start to see where things become more useful, and get an insight into why CoreOS is becoming a tour de force all of a sudden.
A handful of the areas I’d like to cover next are:
- digging into the update mechanism and changing release channels
- using the power of Docker to install applications
- systemd and the mysterious (and largely undocumented) cloud-config
- etcd’s distributed cluster management service
- clustering Docker applications using Fleet
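As a small teaser for the cloud-config item above, here’s roughly what a minimal cloud-config looks like. Treat this as a sketch rather than a tested config – the discovery token URL is a placeholder you’d generate yourself, and I’ll cover the details properly in that post:

```yaml
#cloud-config

coreos:
  etcd:
    # Each cluster needs its own token, generated at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<your_token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    # Start the clustering services on boot
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```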
A reader of vReference was kind enough to translate my recent Zentyal articles, entitled A Linux-based Domain Controller for a vSphere lab – Parts 1 / 2 / 3 / 4, into Portuguese. First I’d like to introduce everyone to him.
My name is Fernando Pimenta and I’m a consultant and technical expert in Linux, datacenter and cloud technologies. I do consulting work in Brazil in heterogeneous environments, seeking the best solutions for customers.
I have a success story with Red Hat, having developed a Linux distribution for the retail market in Brazil: http://www.redhat.com/
I like to work with a range of Linux distributions, including Red Hat, CentOS, Zentyal, ClearOS and Ubuntu.
So without further ado, here are his translated guides:
Um controlador de domínio baseado em Linux para um laboratório vSphere – parte 1
Um controlador de domínio baseado em Linux para um laboratório vSphere – parte 2
Um controlador de domínio baseado em Linux para um laboratório vSphere – parte 3
Um controlador de domínio baseado em Linux para um laboratório vSphere – parte 4
I just realized I hadn’t thrown up a post about this. No time like the present.
Voting for VMworld 2014 sessions closes this Sunday (18th) at midnight. Be sure to get your voice heard and tell VMware which sessions you’d like to see this year.
I’ve submitted 4 distinct breakout sessions this year, each of which looks at storage and vSphere from a very different angle. Let me know which you think would be most valuable to the community by logging in and voting. Here’s the line-up:
You can read the abstracts in full when you log in, but here’s the pithy, less rambunctious description for each so you can cut to the chase:
2366 How Scale-Out Storage approaches can improve your vSphere Architecture
Josh and I each work for a leading Scale-Out Storage company (SolidFire and Coho Data respectively), but we promise this session will be a vendor-agnostic, marketing-free techfest – no product overviews or Gartner quotes, just architecture.
2492 How the new Software-defined Paradigms will impact your vSphere Design
This year we’re going to explain our thoughts on how these new software-defined approaches and technologies impact your real-world design challenges. We’ll try to remove ourselves from the acronym game and discuss the reality of where the rubber meets the road for vSphere.
2749 Is your Storage Appropriate for Your vSphere Environment?
Matt Liebowitz (author and general BCA guru) and I are going to dive into how to look honestly at your existing workloads and figure out what kind of storage you really need. We’ve all known for years that VM performance is massively reliant on storage performance, but there still isn’t a one-size-fits-all approach to storage planning.
2639 3 Things you should know about vSphere Storage
Okay, hands up, you caught me. This session is a solo effort, and one where I may slip off the vendor-neutral stance to explain certain storage realities. But I promise that it won’t be a marketecture slideathon. I’ll be looking at 3 realizations about the storage industry and how it could change the way you think about storage in your vSphere world.
So, without further ado get yourself over here: