Micro infrastructure server with OpenWRT – part 3

This is the third part in a series of three articles describing how I created a basic DNS/DHCP/NTP server for my lab that only uses 24MB RAM and 12MB disk space.

Micro infrastructure server with OpenWRT – part 1
Micro infrastructure server with OpenWRT – part 2

Setting up the services

If you’ve followed parts 1 and 2 correctly, you should now be able to reach the web-based GUI. By default it is accessible on any of the configured interfaces. Log in with the root account and the password you just set.

Interface login

Clicking on the Network tab, followed by the Interfaces tab, shows the three interfaces that were manually configured in the /etc/config/network file. From here you can add new interfaces and edit existing ones.

Interface network

NTP

Configuring the OpenWRT instance to act as an NTP server is very straightforward. On the System > System tab, you can set the hostname and timezone. The Provide NTP server checkbox turns the OpenWRT VM into a local NTP server, and the Enable NTP client checkbox keeps its local time synced with external time servers. You can set pool servers here as well, but my lab is completely isolated from the outside world so I don’t set any. If the time drifts, I use the Sync with browser option to update it, which then corrects all my downstream devices. In the real world having accurate time is important for things like log files, but in my lab the important thing is that all the devices have the same time – this NTP server does that.
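If you prefer the console, these same settings are stored in /etc/config/system. Here’s a sketch of what the relevant stanzas might look like, assuming the 12.09 UCI layout (the hostname, timezone and pool server values are just examples – the pool entries are only useful if your lab can reach the internet):

```
config system
	option hostname 'openwrt'
	option timezone 'UTC'

config timeserver 'ntp'
	option enable_server '1'
	list server '0.openwrt.pool.ntp.org'
```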

Once any changes have been made in the Web GUI, click Save & Apply in the bottom right corner.

ntp 2

General DHCP and DNS settings

On the Network > DHCP and DNS tab there are some general settings which apply to all the interfaces. If you can tick the This is the only DHCP on the local network checkbox, then do so. Making the server authoritative speeds up how quickly clients get their leases. The Local Server setting controls how non-FQDN hostnames get resolved by DNS, and the Local Domain setting is the default domain handed out to DHCP clients.
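On disk these general settings live in the dnsmasq stanza of /etc/config/dhcp. A sketch, assuming the 12.09 option names (‘lan’ here is an example local domain – adjust to taste):

```
config dnsmasq
	option authoritative '1'
	option local '/lan/'
	option domain 'lan'
```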

The next section shows the active DHCP leases. The last section is where you add your static DHCP reservations.

General DHCP DNS

DHCP pools

Each interface can be associated with a DHCP pool. On the Network > Interfaces page, click the Edit button next to the VLAN that you want DHCP leases provided on. Below the Common Configuration section is the DHCP Server section. On the General Setup tab, the first checkbox disables DHCP if ticked; assuming this is a subnet where you want a DHCP scope, leave it unticked. You then set the starting address (for example, 100 to start at x.x.x.100 on a /24 subnet), the number of leases (for example, 99, which makes the pool x.x.x.100 to x.x.x.199), and the duration of the leases.

DHCP_general
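In /etc/config/dhcp this GUI section maps onto a per-interface dhcp stanza. A sketch using the example numbers above for one interface (the stanza name ‘mgt’ and the lease time are illustrative):

```
config dhcp 'mgt'
	option interface 'mgt'
	option start '100'
	option limit '99'
	option leasetime '12h'
```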

On the Advanced Settings tab you can set the subnet mask and the scope options given out with each lease. In the screenshot below I’ve set option 3 to explicitly state the default gateway, and option 6 for the DNS servers.

DHCP_advanced 2
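Those scope options become dhcp_option list entries in the same per-interface stanza of /etc/config/dhcp. The addresses below are made up for illustration:

```
	# option 3: default gateway for the subnet
	list dhcp_option '3,10.0.0.1'
	# option 6: DNS server (this appliance itself)
	list dhcp_option '6,10.0.0.99'
```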

Remember to hit Save & Apply at the bottom of the page.

DNS hostnames

To set the DNS A records, select the Network > Hostnames tab and add in each record.

Hostnames
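Each A record you add here becomes a domain stanza in /etc/config/dhcp. The hostname and address below are hypothetical:

```
config domain
	option name 'esx01'
	option ip '10.0.0.11'
```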

Review

To review the entire DNS and DHCP configuration, log into the console and take a look at the configuration file:

cat /etc/config/dhcp

Backup

One last thing to do before finishing up. Click on the System > Backup/Flash Firmware tab. From here you can export a configuration backup as a compressed tarball. This is not only useful if you need to rebuild the server, it’s also an easy way to review all the important config files (it’s just a dump of those files in an archive).

backup
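You can grab the same backup from the console with sysupgrade -b, which writes the tarball to whatever path you give it. To see that the result really is just an ordinary archive of the config files, you can reproduce the idea with plain tar – the snippet below is a local illustration with made-up content, not the OpenWRT command itself:

```shell
# On the OpenWRT console the equivalent of the GUI backup is:
#   sysupgrade -b /tmp/backup.tar.gz
# The result is a gzipped tarball of the config files. Illustration:
mkdir -p demo/etc/config
printf "config interface 'lan'\n" > demo/etc/config/network
tar -C demo -czf backup.tar.gz etc/config
tar -tzf backup.tar.gz   # lists the archived config files
```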

When I get my SimpliVity-sponsored Raspberry Pi delivered, I’ll try to follow up with another post to explain how to install OpenWRT on it.

Micro infrastructure server with OpenWRT – part 2

This is the second part in a series of three articles describing how I created a basic DNS/DHCP/NTP server for my lab that only uses 24MB RAM and 12MB disk space.

Micro infrastructure server with OpenWRT – part 1
Micro infrastructure server with OpenWRT – part 3

Installation

To install OpenWRT as a VM, start by downloading the latest version. At the time of writing that is the 12.09 release from April 2013. A pre-built virtual disk image is available from here:

http://downloads.openwrt.org/attitude_adjustment/12.09/x86/generic/openwrt-x86-generic-combined-ext4.vmdk

In your vSphere Web Client (or Windows Client) create a new VM. I based it on Ubuntu 32bit.

Install - Ubuntu 32bit

Before powering on the VM, upload the openwrt-x86-generic-combined-ext4.vmdk image to the VM’s datastore folder. Then edit the VM’s settings to reduce the vRAM (I run mine with 24MB, but you can probably go lower), make sure that only 1 vCPU is configured, delete the VMDK that was originally attached during the creation process, and attach the OpenWRT disk.

Install - Edit settings

Now that the install is complete, on to the configuration.

Configuration

Power on the VM and you’ll be faced with some console output:

Power on

Just hit enter and the command prompt is displayed:

Power on -enter

Set the password

The first thing you’ll probably want to do is set a password. By default the console logs you in as root and no password is required (it’s blank). So, on the console:

passwd root

This ensures that the web interface, once it’s reachable via an IP interface, has some protection. It doesn’t by itself force a login at the console. This is a lab so I’m not that concerned, but if you want to set this up there is a script here. (I think the reason is that the OpenWRT image is primarily aimed at home routers, where you’d only see the console if you were attached via a serial cable. Telnet and web access force you to log in.)

Network setup

Please note: I’m only going to discuss the configuration of the VM and the host it sits on. How your hosts are connected to their switch, how the switch is configured and what it’s capable of (layer 3 switching?) is up to you.

Ordinarily, at least a couple of interfaces are created (not including the loopback interface): lan and wan, bridged together. But because we built a standard VM with only a single vNIC, only the lan interface is created. This is exactly what we want, because we’re not planning on using this appliance for routing or firewalling traffic (although you could if you wanted to).

Initial network config

By default the lan interface is set to 192.168.1.1/24, so if the VM is on a subnet that you can reach via this IP, you should be able to connect with a web browser and configure everything in the GUI.

However, I want to set up DHCP for several trunked subnets and I’ve found it much quicker just to enter this straight into the config file from the outset. Here’s how I set it up.

vi /etc/config/network

I changed the lan (eth0) interface to remove the bridging and set the IP address appropriately.

I also added two virtual trunked interfaces (mgt and vms). The syntax for this is eth0.x, where x is the VLAN ID. Give each virtual interface a name and appropriate IP settings for that VLAN’s subnet. My lan interface doesn’t need VLAN tagging as it sits on the switch port’s default VLAN (PVID).

Here’s how I configured mine:

config interface 'loopback'
 option ifname 'lo'
 option proto 'static'
 option ipaddr '127.0.0.1'
 option netmask '255.0.0.0'

config interface 'lan'
 option ifname 'eth0'
 option proto 'static'
 option ipaddr '192.168.1.99'
 option netmask '255.255.255.0'
 option gateway '192.168.1.254'

config interface 'mgt'
 option ifname 'eth0.1000'
 option proto 'static'
 option ipaddr '10.0.0.99'
 option netmask '255.255.255.0'
 option gateway '10.0.0.1'

config interface 'vms'
 option ifname 'eth0.1003'
 option proto 'static'
 option ipaddr '10.0.3.99'
 option netmask '255.255.255.0'
 option gateway '10.0.3.1'

Top tip: in vi you can use yy to copy (yank) a line, and p to paste it.

Once you make any changes to the /etc/config/network file, you need to execute:

/etc/init.d/network reload

to stop and restart the network interfaces.

VM’s trunked connection

In most cases these days, a VM is connected to a port group in ESXi using Virtual Switch Tagging (VST) – remember the contents of this classic white paper. But here we’re getting the guest OS in the VM to tag the traffic. We don’t want the port group to act as an access port; we want it to act like a trunk port, sending and receiving traffic on multiple VLANs. To do this, create a new port group and set it to VLAN ID 4095:

VGT web

and set the port group as promiscuous:

Promiscuous web

Now, if everything is set correctly you should be able to ping each interface from something in each subnet (or from anywhere if you have layer 3 switching in your lab).

In the next post I describe how to configure NTP, DHCP and DNS services in OpenWRT.

Micro infrastructure server with OpenWRT – part 3

Micro infrastructure server with OpenWRT – part 1

This is the first part in a series of three articles describing how I created a basic DNS/DHCP/NTP server for my lab that only uses 24MB RAM and 12MB disk space.

Micro infrastructure server with OpenWRT – part 2
Micro infrastructure server with OpenWRT – part 3

OpenWRT

I’ve been building a new lab environment recently and wanted a small infrastructure server that could sit permanently in the management cluster and provide basic services such as DNS, DHCP and NTP. The lab will host multiple different testing environments, often for short periods of time. I use the ever awesome Autolab to rapidly provision new setups. Autolabs are self-contained, so they provide their own Active Directory (AD) domain controller (DC), which in turn includes DNS/DHCP/NTP, along with their own vCenter and nested hosts. But to save a considerable amount of rework after I’m finished with each test environment, a management cluster will provide the base services I need and can remain in place through each cycle. Chris Wahl has a lengthier post espousing the benefits of such a strategy.

My management cluster revolves around two physical hosts – both Intel NUC i5 with AMT (providing out-of-band access) – joined to a vCenter Server Appliance (vCSA).

Why vCSA instead of a Windows-based vCenter?

  • They both use a similar amount of resources – you can happily run it on 3GB vRAM for a small lab
  • Autolabs use a Windows-based vCenter, so I still get lots of face-time with them
  • vCSA is more stable in my opinion
  • vCSA only takes a few minutes to deploy and is simple to update/patch
  • vCSA doesn’t need a Windows license. This is a deal breaker for me. I don’t want to rebuild my management vCenter every 180 days (TechNet is going away)

So what else do I need in the management cluster? AD? I thought carefully about this because of the 180-day license timeout, and honestly, for the small number of servers it simply isn’t worth it. Nothing in my management cluster needs AD, so why bother?

So without a DC I also lose other basic services that we take for granted, such as DNS, DHCP and NTP. In reality I could do without those as well. I don’t need DNS if I’m happy to use IP addresses everywhere in the management cluster. I expect most long-term components will have static IPs, so there’s no strict need for DHCP. And it’s a lab, so is time that important? (Actually it can be, and that becomes particularly obvious in a more volatile physical/virtual lab environment.) So perhaps I could do it all with just one vCSA and the physical hosts in my management cluster.

But I thought it would be handy to provide these simple services, as I plan to spin up a number of associated management tools such as backup, monitoring, logging, etc. I know before long I’ll have a non-trivial number of long-term appliances that would benefit from these services. We live in the world of bespoke virtual appliances, I thought – it shouldn’t be that hard to find a good solution designed just for this purpose. Surprisingly, there wasn’t an obvious contender.

Which tool to use?

I reviewed a number of options:

721px-Pfs-logo-vector.svg

pfsense came highly recommended:

 pfsense tweet

pfsense is a small firewall appliance that comes bundled with lots of additional features. It only uses 128MB of vRAM, so it’s sufficiently small for a lab, and it has a nice web interface. Unfortunately I wrangled with it for hours and never got it to work properly. I kept putting it back on the shelf, looking at alternatives, and picking it back up again; I’d get frustrated with it and go back out to find something else. I’m sure it’s a great package, but it evidently wasn’t for me.


FreeSCO logo-640a

I was somewhat familiar with FreeSCO as it’s the “router on a floppy” VM that Autolab uses as its gateway device. It ticks a lot of boxes and runs comfortably in only 16MB of vRAM. It’s not the most handsome web interface, but configuring it is relatively straightforward. However, the functionality of the server services I was looking to host is more limited on FreeSCO than on the competition.


openwrt_logo

OpenWRT is primarily aimed at replacing the embedded firmware of home routers. It’s very popular in the hardware hacker community, and has spun off many similar projects such as DD-WRT.

OpenWRT includes the dnsmasq package to provide basic DNS and DHCP functionality. This is a common toolkit and would satisfy my lab needs. (You can replace it with a BIND configuration should you wish a richer DNS option. OpenWRT packaging is handled by opkg, which is very similar to dpkg, so anyone familiar with Debian/Ubuntu should feel right at home.)

What really made up my mind was an unrelated bit of news from SimpliVity last week. Simplivity are offering a free Raspberry Pi to all 2013 vExpert recipients. Kudos to SimpliVity!

SimpliVity

I’ve often thought about getting one of these Raspberry Pi devices, but could never come up with a good use-case (Xtravirt’s vPi looks interesting but I’m not convinced it would be anything more than a toy for me). I then thought about the pfsense, FreeSCO and OpenWRT appliances I’d been testing. After some digging it seems there is some preliminary support for OpenWRT on the Raspberry Pi. How useful would it be to have a virtual copy on the first lab host that could travel with the lab (the Intel NUC hosts are very portable, so they make an excellent mobile lab), and a small hardware appliance with a similar configuration (and no vSphere dependencies) when everything is static and ticking along? Very small footprint, low power, and plenty of geek cred!

Raspberry Pi

The next post will explain how I installed OpenWRT into a VM and configured the networking for multiple VLANs.

Micro infrastructure server with OpenWRT – part 2

VMworld 2013 session – Examining vSphere Design Through a Design Scenario

VMworld 2013 starts in less than a week! Mr Scott Lowe and I will be presenting another design-focused session this year and we hope that you can all make it. Unfortunately I doubt we’ll be able to fit 20,000+ folk into the allocated room, so I’d suggest reserving a spot while there are still places available. The session should be useful for anyone interested in vSphere design choices, those pursuing design-style certifications, or budding architects. This year we’re taking a case-study scenario and looking at how the design process helps us to examine the options hands-on. It should be a lot of fun.

VSVC4995 – Examining vSphere Design Through a Design Scenario
Led by authors Forbes Guthrie and Scott Lowe (co-authors of VMware vSphere Design and VMware vSphere Design 2nd Edition), this workshop-style session will provide attendees an insight into vSphere design by cooperatively working through a design scenario. The session will start with a brief review of key design concepts and the design process, then quickly move into a design scenario that will allow the audience to interactively participate and identify design requirements, explore various design decisions, and evaluate the impact of those decisions on the overall design.

  • Session time: Tuesday 4 – 5 pm

Sign up without delay.


VMworld 2013