Setting up a GlusterFS server and integrating it with openstack-cinder and openstack-glance (Grizzly):
GlusterFS setup:
Creating a distributed-replicate gluster volume for cinder
# gluster vol create cinder-vol replica 2 xx.yy.zz.183:/rhs/brick1/cinder-vol \
xx.yy.zz.223:/rhs/brick1/cinder-vol \
xx.yy.zz.183:/rhs/brick2/cinder-vol \
xx.yy.zz.223:/rhs/brick2/cinder-vol
volume create: cinder-vol: success: please start the volume to access data
Creating a distributed-replicate gluster volume for glance
# gluster vol create glance-vol replica 2 xx.yy.zz.183:/rhs/brick3/glance-vol \
xx.yy.zz.223:/rhs/brick3/glance-vol \
xx.yy.zz.183:/rhs/brick4/glance-vol \
xx.yy.zz.223:/rhs/brick4/glance-vol
volume create: glance-vol: success: please start the volume to access data
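One prerequisite the transcript above skips: both storage nodes must already form a trusted pool before "gluster vol create" will accept bricks from the peer. A minimal sketch, assuming the commands are run from xx.yy.zz.183 and guarded so it degrades gracefully on a host without the gluster CLI:

```shell
# Form the trusted storage pool before creating any volumes.
# Run on xx.yy.zz.183; the guard makes this a no-op elsewhere.
if command -v gluster >/dev/null 2>&1; then
    gluster peer probe xx.yy.zz.223   # add the second node to the pool
    gluster peer status               # expect "State: Peer in Cluster (Connected)"
else
    echo "gluster CLI not found; run these commands on a storage node"
fi
```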
Setting appropriate volume options for the cinder and glance volumes
cinder-vol:
# gluster vol set cinder-vol group virt
volume set: success
# gluster vol set cinder-vol storage.owner-uid 165
volume set: success
# gluster vol set cinder-vol storage.owner-gid 165
volume set: success
glance-vol:
# gluster vol set glance-vol group virt
volume set: success
# gluster vol set glance-vol storage.owner-uid 161
volume set: success
# gluster vol set glance-vol storage.owner-gid 161
volume set: success
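The owner-uid/gid values above (165 for cinder, 161 for glance) are the uids the RHEL OpenStack packages reserve for those service accounts; they are worth confirming on the OpenStack node rather than hard-coding blindly. A small check:

```shell
# Verify the gluster owner-uid/gid values match the cinder and glance
# service accounts on the OpenStack node (165/161 assumed from the
# RHEL packages; confirm rather than assume).
for user in cinder glance; do
    if id "$user" >/dev/null 2>&1; then
        printf '%s uid=%s gid=%s\n' "$user" "$(id -u "$user")" "$(id -g "$user")"
    else
        echo "$user: account not present on this host"
    fi
done
```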
Verifying the volume before starting it
# gluster vol info cinder-vol
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 2f4edaef-678b-492a-b972-bd95c1c490a3
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: xx.yy.zz.183:/rhs/brick1/cinder-vol
Brick2: xx.yy.zz.223:/rhs/brick1/cinder-vol
Brick3: xx.yy.zz.183:/rhs/brick2/cinder-vol
Brick4: xx.yy.zz.223:/rhs/brick2/cinder-vol
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
# gluster vol info glance-vol
Volume Name: glance-vol
Type: Distributed-Replicate
Volume ID: eedd5254-e0ca-4173-98eb-45eaef738010
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: xx.yy.zz.183:/rhs/brick3/glance-vol
Brick2: xx.yy.zz.223:/rhs/brick3/glance-vol
Brick3: xx.yy.zz.183:/rhs/brick4/glance-vol
Brick4: xx.yy.zz.223:/rhs/brick4/glance-vol
Options Reconfigured:
storage.owner-gid: 161
storage.owner-uid: 161
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
Starting both volumes
# gluster vol start cinder-vol
# gluster vol start glance-vol
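Once started, the Status field in "gluster vol info" should flip from Created to Started. A quick check, guarded so the snippet is harmless on a host without gluster:

```shell
# Confirm both volumes report "Status: Started" after the start commands.
for vol in cinder-vol glance-vol; do
    if command -v gluster >/dev/null 2>&1; then
        gluster vol info "$vol" | grep '^Status'
    else
        echo "$vol: gluster CLI not found on this host"
    fi
done
```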
Verifying the cinder volume status
# gluster vol status cinder-vol
Status of volume: cinder-vol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick xx.yy.zz.183:/rhs/brick1/cinder-vol 49152 Y 3824
Brick xx.yy.zz.223:/rhs/brick1/cinder-vol 49152 Y 3689
Brick xx.yy.zz.183:/rhs/brick2/cinder-vol 49153 Y 3835
Brick xx.yy.zz.223:/rhs/brick2/cinder-vol 49153 Y 3700
NFS Server on localhost 2049 Y 3916
Self-heal Daemon on localhost N/A Y 3924
NFS Server on xx.yy.zz.223 2049 Y 3779
Self-heal Daemon on xx.yy.zz.223 N/A Y 3786
There are no active volume tasks
Verifying the glance volume status
# gluster vol status glance-vol
Status of volume: glance-vol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick xx.yy.zz.183:/rhs/brick3/glance-vol 49154 Y 3893
Brick xx.yy.zz.223:/rhs/brick3/glance-vol 49154 Y 3756
Brick xx.yy.zz.183:/rhs/brick4/glance-vol 49155 Y 3904
Brick xx.yy.zz.223:/rhs/brick4/glance-vol 49155 Y 3767
NFS Server on localhost 2049 Y 3916
Self-heal Daemon on localhost N/A Y 3924
NFS Server on xx.yy.zz.223 2049 Y 3779
Self-heal Daemon on xx.yy.zz.223 N/A Y 3786
There are no active volume tasks
Configuring OpenStack Cinder to use the GlusterFS volume:
Installing required packages
# yum install -y openstack-utils openstack-cinder openstack-glance
# yum install -y glusterfs-fuse glusterfs
Setting the glusterfs parameters in the DEFAULT section of cinder.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# cat > /etc/cinder/shares.conf << EOF
xx.yy.zz.183:cinder-vol
EOF
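After editing cinder.conf and shares.conf, the cinder services need a restart before cinder-volume mounts the share. A sketch using the SysV service names from the RHEL Grizzly packages (an assumption; adjust to your distribution and init system):

```shell
# Restart the cinder services so cinder-volume picks up the share
# listed in /etc/cinder/shares.conf. Service names assumed from the
# RHEL Grizzly packages; the guard reports instead of failing elsewhere.
for svc in openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume; do
    if command -v service >/dev/null 2>&1 && service "$svc" status >/dev/null 2>&1; then
        service "$svc" restart
    else
        echo "$svc: not installed or not running on this host"
    fi
done
```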
Configuring OpenStack Glance to use the GlusterFS volume:
Ensure filesystem_store_datadir is specified in /etc/glance/glance-api.conf as:
<snip>
# ============ Filesystem Store Options ========================
# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/
</snip>
Create the images directory under /var/lib/glance:
# mkdir /var/lib/glance/images
Mount the gluster volume on filesystem_store_datadir:
# mount -t glusterfs xx.yy.zz.183:/glance-vol /var/lib/glance/images
Update /etc/fstab for a persistent mount of the glance volume across reboots.
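An illustrative fstab entry for the glance mount (the mount options here are a common choice, not taken from the original setup; _netdev delays the mount until networking is up):

```
xx.yy.zz.183:/glance-vol  /var/lib/glance/images  glusterfs  defaults,_netdev  0 0
```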
Installing OpenStack using packstack with the --allinone option
# packstack --allinone --debug
Welcome to Installer setup utility
Installing:
Clean Up... [ DONE ]
Setting up ssh keys... [ DONE ]
OS support check... [ DONE ]
Adding pre install manifest entries... [ DONE ]
Adding MySQL manifest entries... [ DONE ]
Adding QPID manifest entries... [ DONE ]
Adding Keystone manifest entries... [ DONE ]
Adding Glance Keystone manifest entries... [ DONE ]
Adding Glance manifest entries... [ DONE ]
Adding Cinder Keystone manifest entries... [ DONE ]
Installing dependencies for Cinder... [ DONE ]
Checking if the Cinder server has a cinder-volumes vg...[ DONE ]
Adding Cinder manifest entries... [ DONE ]
Adding Nova API manifest entries... [ DONE ]
Adding Nova Keystone manifest entries... [ DONE ]
Adding Nova Cert manifest entries... [ DONE ]
Adding Nova Conductor manifest entries... [ DONE ]
Adding Nova Compute manifest entries... [ DONE ]
Adding Nova Scheduler manifest entries... [ DONE ]
Adding Nova VNC Proxy manifest entries... [ DONE ]
Adding Nova Common manifest entries... [ DONE ]
Adding Openstack Network-related Nova manifest entries...[ DONE ]
Adding Quantum API manifest entries... [ DONE ]
Adding Quantum Keystone manifest entries... [ DONE ]
Adding Quantum L3 manifest entries... [ DONE ]
Adding Quantum L2 Agent manifest entries... [ DONE ]
Adding Quantum DHCP Agent manifest entries... [ DONE ]
Adding Quantum Metadata Agent manifest entries... [ DONE ]
Adding OpenStack Client manifest entries... [ DONE ]
Adding Horizon manifest entries... [ DONE ]
Adding Swift Keystone manifest entries... [ DONE ]
Adding Swift builder manifest entries... [ DONE ]
Adding Swift proxy manifest entries... [ DONE ]
Adding Swift storage manifest entries... [ DONE ]
Adding Swift common manifest entries... [ DONE ]
Preparing servers... [ DONE ]
Adding Nagios server manifest entries... [ DONE ]
Adding Nagios host manifest entries... [ DONE ]
Adding post install manifest entries... [ DONE ]
Installing Dependencies... [ DONE ]
Copying Puppet modules and manifests... [ DONE ]
Applying Puppet manifests...
Applying xx.yy.zz.146_prescript.pp
xx.yy.zz.146_prescript.pp : [ DONE ]
Applying xx.yy.zz.146_mysql.pp
Applying xx.yy.zz.146_qpid.pp
xx.yy.zz.146_mysql.pp : [ DONE ]
xx.yy.zz.146_qpid.pp : [ DONE ]
Applying xx.yy.zz.146_keystone.pp
Applying xx.yy.zz.146_glance.pp
Applying xx.yy.zz.146_cinder.pp
xx.yy.zz.146_keystone.pp : [ DONE ]
xx.yy.zz.146_glance.pp : [ DONE ]
xx.yy.zz.146_cinder.pp : [ DONE ]
Applying xx.yy.zz.146_api_nova.pp
xx.yy.zz.146_api_nova.pp : [ DONE ]
Applying xx.yy.zz.146_nova.pp
xx.yy.zz.146_nova.pp : [ DONE ]
Applying xx.yy.zz.146_quantum.pp
xx.yy.zz.146_quantum.pp : [ DONE ]
Applying xx.yy.zz.146_osclient.pp
Applying xx.yy.zz.146_horizon.pp
xx.yy.zz.146_osclient.pp : [ DONE ]
xx.yy.zz.146_horizon.pp : [ DONE ]
Applying xx.yy.zz.146_ring_swift.pp
xx.yy.zz.146_ring_swift.pp : [ DONE ]
Applying xx.yy.zz.146_swift.pp
Applying xx.yy.zz.146_nagios.pp
Applying xx.yy.zz.146_nagios_nrpe.pp
xx.yy.zz.146_swift.pp : [ DONE ]
xx.yy.zz.146_nagios.pp : [ DONE ]
xx.yy.zz.146_nagios_nrpe.pp : [ DONE ]
Applying xx.yy.zz.146_postscript.pp
xx.yy.zz.146_postscript.pp : [ DONE ]
[ DONE ]
**** Installation completed successfully ******
Verifying that the glusterfs mounts were successful
# mount | grep gluster
xx.yy.zz.183:/glance-vol on /var/lib/glance/images type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
xx.yy.zz.183:cinder-vol on /var/lib/cinder/volumes/586c24173ac3ab5d1d43aed1f113d9f6 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
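With the mounts in place, an end-to-end check is to create a small cinder volume and confirm its backing file lands under the gluster mount. A sketch using the Grizzly-era cinderclient --display-name flag, guarded so it stays inert away from the controller:

```shell
# Create a 1 GB test volume and look for its backing file under the
# glusterfs mount point (Grizzly cinderclient syntax assumed).
if command -v cinder >/dev/null 2>&1; then
    cinder create --display-name gluster-test 1
    cinder list
    ls -l /var/lib/cinder/volumes/*/volume-*
else
    echo "cinder client not found; run this on the OpenStack controller"
fi
```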
Verifying that the cinder mount point is backed by the glusterfs volume
~(keystone_admin)]# getfattr -d -etext -m. -n trusted.glusterfs.pathinfo /var/lib/cinder/volumes/586c24173ac3ab5d1d43aed1f113d9f6
getfattr: Removing leading '/' from absolute path names
# file: var/lib/cinder/volumes/586c24173ac3ab5d1d43aed1f113d9f6
trusted.glusterfs.pathinfo="((<DISTRIBUTE:cinder-vol-dht> (<REPLICATE:cinder-vol-replicate-0> <POSIX(/rhs/brick1/cinder-vol):dhcpzz-183.shanks.com:/rhs/brick1/cinder-vol/> <POSIX(/rhs/brick1/cinder-
vol):dhcpzz-223.shanks.com:/rhs/brick1/cinder-vol/>) (<REPLICATE:cinder-vol-replicate-1> <POSIX(/rhs/brick2/cinder-vol):dhcpzz-223.shanks.com:/rhs/brick2/cinder-vol/> <POSIX(/rhs/brick2/
cinder-vol):dhcpzz-183.shanks.com:/rhs/brick2/cinder-vol/>)) (cinder-vol-dht-layout (cinder-vol-replicate-0 0 2147483646) (cinder-vol-replicate-1 2147483647 4294967295)))"
Verifying that a glance image is created on the glusterfs volume
~(keystone_admin)]# glance image-create --name="test" --is-public=true --container-format=ovf --disk-format=qcow2 < f17-x86_64-openstack-sda.qcow2
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 1f104b5667768964d5df8c4ad1d7cd27 |
| container_format | ovf |
| created_at | 2013-07-30T12:56:05 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | a66213ee-1a76-4d4a-959d-5df3f8f320ac |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | test |
| owner | 84e2f0fac93d402287a8eb97b6ba9711 |
| protected | False |
| size | 251985920 |
| status | active |
| updated_at | 2013-07-30T12:56:49 |
+------------------+--------------------------------------+
~(keystone_admin)]# ls -l /var/lib/glance/images/
total 246080
-rw-r-----. 1 glance glance 251985920 Jul 30 18:26 a66213ee-1a76-4d4a-959d-5df3f8f320ac
~(keystone_admin)]# getfattr -d -etext -m. -n trusted.glusterfs.pathinfo /var/lib/glance/images/a66213ee-1a76-4d4a-959d-5df3f8f320ac
getfattr: Removing leading '/' from absolute path names
# file: var/lib/glance/images/a66213ee-1a76-4d4a-959d-5df3f8f320ac
trusted.glusterfs.pathinfo="(<DISTRIBUTE:glance-vol-dht> (<REPLICATE:glance-vol-replicate-1> <POSIX(/rhs/brick4/glance-vol):dhcpzz-223.shanks.com:/rhs/brick4/glance-vol/a66213ee-1a76-4d4a-959d-5df3f
8f320ac> <POSIX(/rhs/brick4/glance-vol):dhcpzz-183.shanks.com:/rhs/brick4/glance-vol/a66213ee-1a76-4d4a-959d-5df3f8f320ac>))"