This experiment looks at how virtual machines behave when a GlusterFS quota is set on a volume serving qcow2 VM images and the quota limit is exceeded.
First, create a GlusterFS volume and apply the virt group options.
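Roughly, the setup looks like this (hostnames and brick paths taken from the volume info below; `group virt` applies the virt tuning profile shipped with Gluster -- treat this as a sketch, not the exact commands used):

```shell
# create a 2-way replicated volume, one brick per node
gluster volume create vmstore replica 2 \
    xx.yy.zz.68:/rhs1/vmstore xx.yy.zz.56:/rhs1/vmstore
# apply the virt profile (eager-lock, remote-dio, perf xlators off, ...)
gluster volume set vmstore group virt
# let qemu (uid/gid 107 on this host) own the volume root
gluster volume set vmstore storage.owner-uid 107
gluster volume set vmstore storage.owner-gid 107
gluster volume start vmstore
```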
[root@ninja ~]# gluster vol info
Volume Name: vmstore
Type: Replicate
Volume ID: c96de15d-024e-416d-a1c5-ff5fef44b25b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xx.yy.zz.68:/rhs1/vmstore
Brick2: xx.yy.zz.56:/rhs1/vmstore
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
storage.owner-gid: 107
storage.owner-uid: 107
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
Enable quota and deem-statfs on the above volume. features.quota-deem-statfs is required if you want df on the client to report sizes based on the quota limit.
# gluster volume quota vmstore enable
# gluster vol set vmstore features.quota-deem-statfs on
[root@ninja ~]# gluster vol info vmstore
Volume Name: vmstore
Type: Replicate
Volume ID: c96de15d-024e-416d-a1c5-ff5fef44b25b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xx.yy.zz.68:/rhs1/vmstore
Brick2: xx.yy.zz.56:/rhs1/vmstore
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
storage.owner-gid: 107
storage.owner-uid: 107
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
[root@ninja ~]#
Ensure that the volume is mounted on the client:
[root@client37 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01  3.6T  2.1G  3.4T   1% /
tmpfs                          7.8G     0  7.8G   0% /dev/shm
/dev/sda1                      485M   40M  421M   9% /boot
ninja.shanks.com:vmstore       195G  5.2G  190G   3% /var/lib/libvirt/images
[root@client37 ~]#
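If it is not mounted, a FUSE mount along these lines brings it up (mount point as in the df output above):

```shell
mount -t glusterfs ninja.shanks.com:/vmstore /var/lib/libvirt/images
```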
Now set a quota on the volume to limit usage to 20 GB.
[root@ninja ~]# gluster volume quota vmstore limit-usage / 20GB
volume quota : success
[root@ninja ~]#
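The configured limit can also be checked from the server side; this lists each quota path with its hard limit and current usage:

```shell
gluster volume quota vmstore list
```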
Check on the client:
[root@client37 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01  3.6T  2.1G  3.4T   1% /
tmpfs                          7.8G     0  7.8G   0% /dev/shm
/dev/sda1                      485M   40M  421M   9% /boot
ninja.shanks.com:vmstore        20G   19G  1.8G  92% /var/lib/libvirt/images
[root@client37 ~]#
Now keep adding data until the quota limit is reached. Once writes start failing, I/O errors appear inside the instance and the guest filesystem is remounted read-only:
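To exercise this, something as blunt as writing zeroes inside the guest works (hypothetical file name; any sustained write that pushes the backing volume past 20 GB will do):

```shell
# inside the VM: keep writing until the qcow2 image can no longer grow
dd if=/dev/zero of=/home/filler.bin bs=1M count=20480
```

Once the quota is hit, the guest kernel starts logging errors like these: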
lost page write due to I/O error on dm-2
Buffer I/O error on device dm-2, logical block 1624644
lost page write due to I/O error on dm-2
Buffer I/O error on device dm-2, logical block 1624645
lost page write due to I/O error on dm-2
Buffer I/O error on device dm-2, logical block 1624646
lost page write due to I/O error on dm-2
Buffer I/O error on device dm-2, logical block 1624647
...
Aborting journal on device dm-2-8.
end_request: I/O error, dev vda, sector 118888896
end_request: I/O error, dev vda, sector 118898976
end_request: I/O error, dev vda, sector 118899984
EXT4-fs error (device dm-2) in ext4_reserve_inode_write: Journal has aborted
...
EXT4-fs (dm-2): Remounting filesystem read-only
EXT4-fs (dm-2): ext4_da_writepages: jbd2_start: 3244 pages, ino 13; err -30
...
JBD2: Detected IO errors while flushing file data on dm-2-8
[root@localhost home]# __ratelimit: 155 callbacks suppressed
__ratelimit: 19894 callbacks suppressed
Buffer I/O error on device dm-0, logical block 1081871
lost page write due to I/O error on dm-0
JBD2: Detected IO errors while flushing file data on dm-0-8
Aborting journal on device dm-0-8.
EXT4-fs error (device dm-0): ext4_journal_start_sb: Detected aborted journal
EXT4-fs (dm-0): Remounting filesystem read-only
Buffer I/O error on device dm-0, logical block 6324224
lost page write due to I/O error on dm-0
JBD2: I/O error detected when updating journal superblock for dm-0-8.
8<
After the guest is rebooted, it panics during boot:
dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
Kernel panic - not syncing: Attempted to kill init!
Pid: 1, comm: init Not tainted 2.6.32-358.el6.x86_64 #1
Call Trace:
[... call trace addresses lost in the capture ...]
8<
Resolution
On the hypervisor, stop the affected guest:
[root@client37 ~]# virsh destroy domain
On the gluster node, increase the limit-usage:
[root@ninja ~]# gluster volume quota vmstore limit-usage / 40GB
volume quota : success
[root@ninja ~]#
On the client, verify that the new value is reflected.
[root@client37 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01  3.6T  2.1G  3.4T   1% /
tmpfs                          7.8G     0  7.8G   0% /dev/shm
/dev/sda1                      485M   40M  421M   9% /boot
ninja.shanks.com:vmstore        40G   21G   20G  51% /var/lib/libvirt/images
[root@client37 ~]#
[root@client37 ~]# virsh start domain
And you are good to GO!!!