Pre-requisites and the geo-replication configuration are the same as identified here, except that geo-replication was configured for the vmstore volume instead of snapstore. I also copied the virtual machine XML over to the geo2 hypervisor.
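For reference, a hedged sketch of what the vmstore geo-replication setup can look like; the exact syntax varies by GlusterFS version (this follows the push-pem style workflow), and the slave host name is taken from the status output later in this post:

```shell
# On the master (geo1) side: set up and start geo-replication for vmstore.
# Syntax is version-dependent; adjust hostnames to your environment.
gluster volume geo-replication vmstore gladiator.lab.eng.blr.redhat.com::vmstore create push-pem
gluster volume geo-replication vmstore gladiator.lab.eng.blr.redhat.com::vmstore start
```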
Note: This was tested on a local lab network; network bandwidth should be taken into account if geo-replicating over the internet.
Validation
1. [root@geo1 ~]# virt-install --connect=qemu:///system --network=bridge:br0 --initrd-inject=rhel.ks --extra-args="ks=file:/rhel.ks console=tty0 console=ttyS0,115200" --name=shanks-rhel1 --disk path=/var/lib/libvirt/images/shanks-rhel1.qcow2,device=disk,format=qcow2,bus=virtio,cache=writeback,io=threads --ram 1024 --vcpus=1 --check-cpu --accelerate --hvm --location=http://download.shanks.com/RHEL-6/6.4/Server/x86_64/os/ --nographics
Here, /var/lib/libvirt/images is a gluster mount which is geo-replicated.
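For context, the mount on the geo1 hypervisor would look something like this; the server hostname here is an assumption based on the status output below:

```shell
# Mount the geo-replicated gluster volume where libvirt expects its images
mount -t glusterfs ninja.lab.eng.blr.redhat.com:/vmstore /var/lib/libvirt/images
```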
2. After the installation is complete, I log in to the virtual machine and create a 600MB file, then generate its md5sum so it can later be validated against the geo-replicated image.
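A minimal sketch of this step inside the guest; the file name /root/bigfile is my own placeholder:

```shell
# Create a 600MB test file and record its checksum for later comparison
dd if=/dev/urandom of=/root/bigfile bs=1M count=600
md5sum /root/bigfile | tee /root/bigfile.md5
```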
3. scp the virtual machine XML to the geo2 hypervisor.
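Sketching this step: the XML path below is libvirt's default location for the guest defined above, and "geo2" as a hostname is shorthand for that hypervisor:

```shell
# On geo1: copy the guest definition to the geo2 hypervisor
scp /etc/libvirt/qemu/shanks-rhel1.xml root@geo2:/root/

# On geo2: register the guest with libvirt so virsh can start it later
virsh define /root/shanks-rhel1.xml
```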
4. Power off the virtual machine on geo1.
5. Ensure that the sync is complete:
[root@ninja ~]# gluster volume geo-replication vmstore gladiator.lab.eng.blr.redhat.com::vmstore status detail
MASTER: vmstore SLAVE: gladiator.lab.eng.blr.redhat.com::vmstore
NODE                               HEALTH    UPTIME            FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING
-------------------------------------------------------------------------------------------------------------------------------
ninja.lab.eng.blr.redhat.com       Stable    1 day 17:22:49    56             0                0Bytes           0
vertigo.lab.eng.blr.redhat.com     Stable    1 day 17:22:34    0              0                0Bytes           0
[root@ninja ~]#
6. virsh start the virtual machine on geo2.
7. Log in and run md5sum on the 600MB file.
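One way to script the final check, assuming the checksum was saved to /root/bigfile.md5 when the file was created (that file name is my placeholder):

```shell
# Inside the guest on geo2: recompute and compare against the recorded checksum
md5sum -c /root/bigfile.md5 && echo "geo-replicated image is consistent"
```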
Result: The virtual machine boots up fine and the md5sum matches.
Got to admit, it was a great exercise!
And this opens a whole bunch of new scenarios that can be tried and tested with virtualization integrated with glusterfs geo-replication.
-shanks