Category: Virtualization

Getting My Real VM Server Back Online Part III: Storage, iSCSI, and Live Migrations

After some dubious network configurations (that I should never have configured incorrectly in the first place), I finally got multipath working to the main storage server. Every multipath.conf example I found resulted in non-functional iSCSI MPIO, while having no multipath.conf at all left me with failover-only MPIO instead of interleaved/round-robin across both paths.

A large part of the trouble getting MPIO configured was that all the examples I found were either old (scsi_id works slightly differently in Ubuntu 14.04) or just poor. Yes, I wound up using Ubuntu. Usually I use Slackware for EVERYTHING, but lately I've been trying to branch out. Most of the VMs run Fedora; "Pegasus" (a.k.a. VMSrv1) runs Fedora, while "Titan" runs Ubuntu.
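
For reference, the iSCSI sessions themselves come up with open-iscsi on the Ubuntu side, one login per portal address. Something along these lines is the general shape of it; the target name comes from the by-path entries further down, while the 172.17.1.2 portal on the second VLAN is a placeholder, so adjust to taste:

# discover the target on each portal, then log in over both paths
iscsiadm -m discovery -t sendtargets -p 172.17.1.2
iscsiadm -m discovery -t sendtargets -p 172.17.2.2
iscsiadm -m node -T iqn.2014-12.lab.frankd:htpc1 -p 172.17.1.2 --login
iscsiadm -m node -T iqn.2014-12.lab.frankd:htpc1 -p 172.17.2.2 --login

With both sessions up, the LUN shows up twice (sdd and sde below) and device-mapper multipath ties them together.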

Before I did anything with multipath.conf (it's empty by default on Ubuntu 14.04), I got this:

root@titan:/home/frankd# multipath -ll
1FREEBSD HTPC1-D1 dm-2 FREEBSD,CTLDISK
size=256G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 13:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 12:0:0:0 sdd 8:48 active ready running

Note that the two disks end up in separate round-robin path groups, each with only one member! That works for failover, but it did nothing for performance. The only multipath.conf that wound up working was this:

defaults {
 user_friendly_names yes
 polling_interval 3
 path_grouping_policy multibus
 path_checker readsector0
 path_selector "round-robin 0"
 features "0"
 no_path_retry 1
 rr_min_io 100
}

multipaths {
 multipath {
  wwid 1FREEBSD_HTPC1-D1
  alias testLun
 }
}
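
After editing the file, the existing maps have to be rebuilt before the new path_grouping_policy takes effect. Roughly the dance I go through (multipath -F will refuse to flush a map that's mounted or otherwise in use):

multipath -F
service multipath-tools restart
multipath -r
multipath -ll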

The wwid/alias doesn't work, however; all of the MPIO behavior is coming from the defaults stanza. I attempted many things with no luck, unfortunately, and I'm going to have to delve into this more, especially if I want live migrations to work properly with MPIO. As it stands the disk devices point at a single IP (e.g. /dev/disk/by-path/ip-172.17.2.2:3260-iscsi-iqn.2014-12.lab.frankd:htpc1-lun-0); I'll need to point at the multipath aliases to get the VMs working with multipath.
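
One thing I still need to try on the wwid/alias front: multipath -ll reports the wwid as 1FREEBSD HTPC1-D1 with a space in it, while my multipaths stanza has an underscore. Quoting the exact string might be all it takes, though I haven't verified that yet:

multipaths {
 multipath {
  wwid "1FREEBSD HTPC1-D1"
  alias testLun
 }
}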

The multipath tests themselves were promising, though: dd was able to give me a whopping 230MB/s to the mapper device over a pair of GigE connections.
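
The test was nothing scientific, just dd straight against the mapper device, roughly like this (the block size and count here are arbitrary, and the write direction clobbers whatever is on the LUN, which is fine for a test LUN):

dd if=/dev/zero of=/dev/mapper/mpath1 bs=1M count=4096 oflag=direct
dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=4096 iflag=direct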

The output from ‘multipath -ll’ now looked more reasonable:

root@titan:/home/frankd# multipath -ll
mpath1 (1FREEBSD HTPC1-D1) dm-2 FREEBSD,CTLDISK
size=256G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 39:0:0:0 sde 8:64 active ready running
  `- 40:0:0:0 sdg 8:96 active ready running

You can see the drives are both under the same round-robin policy instead of two separate ones.

The storage server also saw some slight changes, including upgrading from a single 40GB Intel X25-V for L2ARC to two X25-Vs for a total of 80GB. I also added a 60GB Vertex 2 as a separate ZIL (SLOG) device. I really need to build a machine with more RAM and partition out the SLOG. Once I get a pair of 10GbE cards to my workstation I'll likely use my 256GB 840 Pro for L2ARC and leave the old X25-Vs out of the main array, hopefully getting near-native 840 Pro speeds (perhaps better with a large amount of ARC).
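
For the record, tacking the new devices onto the pool is a one-liner each; something like the following, with the pool name and FreeBSD device nodes as placeholders:

zpool add tank cache ada5   # second X25-V, brings L2ARC to 80GB
zpool add tank log ada6     # 60GB Vertex 2 as the separate ZIL
zpool status tank

That leaves two cache vdevs (the pair of X25-Vs) and a single log vdev (the Vertex 2).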

So we're at a point where everything appears to be working, although in need of some upgrades. Great! I'm looking at a KCMA-D8 dual Opteron C32 motherboard since I have a pair of Opteron 4184s (6-core Lisbon, very similar to a Phenom II X6 1055T) laying around, so I could put together a 32GB, 12-core machine for under $400. But as always, budgetary constraints for a hobby squash that idea quickly.

Getting My Real VM Server Back Online Part II: Storage Server!

Anticipating the arrival of RAM for my VM server tomorrow, I decided I needed some kind of real storage server, so I started working on one. I haven't touched BSD since I was a kid, so I'm not used to it in general. I wasn't sure how OpenSolaris would work on my hardware (I hear it's better on Intel than AMD), so I opted for FreeBSD. Unfortunately I just found out FreeBSD doesn't have direct iSCSI integration with ZFS, but that's okay! We can always change OSes later, especially since the storage array leaves a lot to be desired (RAID-Z1 with 4x1TB 2.5″ 5200RPM drives + a 40GB Intel X25-V for L2ARC, no separate ZIL).
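
For the curious, the pool itself is nothing exotic; sketched from memory it's roughly the following, with the pool/zvol names and device nodes as placeholders:

zpool create tank raidz1 ada0 ada1 ada2 ada3
zpool add tank cache ada4        # the 40GB X25-V as L2ARC
zfs create -V 256G tank/htpc1    # zvol that will get exported over iSCSI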

I'm getting used to the new OS and am about to configure iSCSI, which will be handed out via multipath over an Intel 82571EB NIC, across two separate VLANs into a dedicated 3550-12T switch. We'll see how it works, and if it's fine I'm going to get my HTPC booting over it.
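
FreeBSD's native target is ctld, configured through /etc/ctl.conf, so the plan is something along these lines; the portal addresses for the two VLANs and the zvol path are placeholders until I actually wire it up:

portal-group pg0 {
 discovery-auth-group no-authentication
 listen 172.17.1.2
 listen 172.17.2.2
}

target iqn.2014-12.lab.frankd:htpc1 {
 auth-group no-authentication
 portal-group pg0
 lun 0 {
  path /dev/zvol/tank/htpc1
 }
}

Then ctld_enable="YES" in /etc/rc.conf and service ctld start should do it.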

I'm going to look around for a motherboard with more RAM slots; for now I'm stuck with a mATX motherboard, a SAS card that won't let the system boot, and two RAM slots (8GB) with an FX-8320.

Performance tests to come... after I encounter a dozen issues and hopefully deal with them!

Getting My Real VM Server Back Online

My server has been off hiding somewhere far away from me for a while, so I've been running virtual machines on an AMD FX-8320 990FX-based box. Unfortunately it only had 16GB of RAM, and I had gutted the real server's RAM for use in my workstations.

I've decided to order some used ECC Registered 4GB sticks off of eBay; 32GB ought to do for now. I won't have to worry about whether I can launch a new VM due to RAM constraints (I was using a lot of swap before!), so titan.frankd.lab will soon be back online, with the FX-8320 machine for failover. I'm going to need shared storage, so I'll have to set up a real iSCSI storage box soon.

End short random thought.

Another VM Host Upgrade

And yet another not-so-exciting blog entry. My VM host with an FX-8320 was on an AMD 760G board, so it lacked IOMMU support, which I'd love to have for SR-IOV among other things. I have a spare machine laying around that was formerly a gaming machine. Needing more RAM (the 760G board only had two slots) and IOMMU, I decided to repurpose the gaming machine as the VM host. The 990FX-based board already had an FX-8120 in it, so I took a single step back in CPU generation, but it's fairly close.

I only had 8GB of RAM in the old setup, so I combined that with 2x2GB sticks of ECC DDR3 I had hiding in a box. With 12GB of total RAM I have a bit of headroom now and can launch a few more VMs. While that's not impressive as far as virtualization host hardware goes, it does let me run a bunch of local services for testing/learning/re-learning.

Not having onboard graphics with the new board necessitated another video card; luckily I had some GTX 750 Tis laying around (I seem to say 'laying around' about hardware pretty often), so one went in the bottom PCI-E x4 slot so as to not block any other slot for future upgrades. The Intel I350-T2 card went in the next x4 slot for iSCSI.

VM storage is going to be split off from the hardware, so it will all be through iSCSI with MPIO. That pretty much just leaves me with a ton of PCI-E slots for NICs.

I was able to reduce the CPU's reported power draw by offlining the "odd" cores (1/3/5/7) while load is low (it's better to offline those since cores 0/1, 2/3, 4/5, and 6/7 share a module in AMD's CMT architecture), locking the CPU at its idle clock, and reducing the power state 6 (idle) voltage from 0.9375v to 0.825v, which has been stable so far (sensors reports 0.85v). Power tends to stay close to 30w and never breaks 50w. If the box were more heavily utilized I'd let it clock up, but nothing is CPU-limited at the moment. I'll have to try monitoring power usage at forced idle versus the 'ondemand' governor with various load transition points. I wouldn't call anything sluggish, but I don't have hundreds of devices on my network.
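
The core offlining and clock locking are just sysfs writes (the idle voltage tweak happens elsewhere and isn't shown here); a rough sketch, run as root:

# take the second core of each CMT module offline
for c in 1 3 5 7; do echo 0 > /sys/devices/system/cpu/cpu$c/online; done

# pin whatever is still online to its lowest clock
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo powersave > $g; done

Writing 1 back to the online files brings the cores back when I need them.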

As for a power supply, the case already had a SeaSonic 660XP2 80+ Platinum unit, so even if I do have to run the CPU at full tilt there should be little waste in the PSU department. It's completely overkill, both for being Platinum at this power level (likely sub-100w at all times) and for its 660w rating. If I were buying something new I probably would have gotten a SeaSonic Gold unit, which would still leave plenty of headroom even with the box full of NICs and RAM. It does feel a little safer than running an FX-8320 and a drive array off a 180w power supply, though.

There are plenty of local services running here, and eventually I'm going to make some (counter)intuitive web GUIs for configuring things (e.g. IP address management that then configures DHCP/DNS), so it was good to brush up on configuring these things from scratch.