
Add more GlusterFS-integrated OpenStack Cinder nodes


Add more GlusterFS-integrated OpenStack Cinder nodes to an existing OpenStack setup.

GlusterFS setup

Continuing from http://gsr-linux.blogspot.in/2013/07/glusterfs-integration-with-openstack.html, I wanted to scale out the OpenStack Cinder nodes, backing the new node with a new GlusterFS volume.
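Before creating the volume, both storage nodes must already be in the same trusted pool and the brick directories must exist. A minimal sketch of the prerequisites, assuming the bricks sit on dedicated XFS filesystems (the LV names here are placeholders):

~]# gluster peer probe xx.yy.zz.223                 # run once, from xx.yy.zz.183
~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick1      # 512-byte inodes leave room for GlusterFS xattrs
~]# mkdir -p /rhs/brick1
~]# mount /dev/vg_bricks/brick1 /rhs/brick1
~]# mkdir -p /rhs/brick1/cinder-vol2                # repeat for brick2, and on xx.yy.zz.223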

Create a new distributed-replicate volume to hold the Cinder volume files. With "replica 2", bricks are paired in the order listed, so each pair spans both hosts:

~]# gluster vol create cinder-vol2 replica 2 \
    xx.yy.zz.183:/rhs/brick1/cinder-vol2 \
    xx.yy.zz.223:/rhs/brick1/cinder-vol2 \
    xx.yy.zz.183:/rhs/brick2/cinder-vol2 \
    xx.yy.zz.223:/rhs/brick2/cinder-vol2
volume create: cinder-vol2: success: please start the volume to access data

~]# gluster vol set cinder-vol2 group virt
volume set: success
~]# gluster vol set cinder-vol2 storage.owner-uid 165
volume set: success
~]# gluster vol set cinder-vol2 storage.owner-gid 165
volume set: success
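
The "group virt" setting applies the recommended virtualization tunables (visible under "Options Reconfigured" in the vol info output below), and 165 is the reserved UID/GID of the cinder user on RDO/RHEL installs. If in doubt, confirm it on the Cinder node:

~]# id cinder
uid=165(cinder) gid=165(cinder) groups=165(cinder)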

Verify the newly created Cinder volume (cinder-vol2):

~]# gluster vol info cinder-vol2
Volume Name: cinder-vol2
Type: Distributed-Replicate
Volume ID: a3d4831a-d754-4072-9a3a-c6641759ec09
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: xx.yy.zz.183:/rhs/brick1/cinder-vol2
Brick2: xx.yy.zz.223:/rhs/brick1/cinder-vol2
Brick3: xx.yy.zz.183:/rhs/brick2/cinder-vol2
Brick4: xx.yy.zz.223:/rhs/brick2/cinder-vol2
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

~]# gluster vol start cinder-vol2
volume start: cinder-vol2: success

~]# gluster vol status cinder-vol2
Status of volume: cinder-vol2
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick xx.yy.zz.183:/rhs/brick1/cinder-vol2              49156   Y       16793
Brick xx.yy.zz.223:/rhs/brick1/cinder-vol2              49156   Y       10807
Brick xx.yy.zz.183:/rhs/brick2/cinder-vol2              49157   Y       16804
Brick xx.yy.zz.223:/rhs/brick2/cinder-vol2              49157   Y       10818
NFS Server on localhost                                 2049    Y       16816
Self-heal Daemon on localhost                           N/A     Y       16824
NFS Server on xx.yy.zz.223                              2049    Y       10830
Self-heal Daemon on xx.yy.zz.223                        N/A     Y       10837
There are no active volume tasks

Configuring OpenStack Cinder to use the new GlusterFS volume

On the new Cinder node, install the required packages:

~]# yum install openstack-utils openstack-cinder openstack-selinux 
~]# yum install glusterfs glusterfs-fuse

Then point Cinder at the GlusterFS driver and a shares file:

~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

List the new GlusterFS volume in the shares file:

~]# cat > /etc/cinder/shares.conf << EOF
> xx.yy.zz.183:cinder-vol2
> EOF
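
Before starting the service, it is worth checking that the node can mount the new volume at all; a quick manual test against any scratch mountpoint:

~]# mount -t glusterfs xx.yy.zz.183:cinder-vol2 /mnt
~]# umount /mnt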

Configuring the connection to the existing Cinder database


Configure this new node to connect to the existing Cinder database and the existing AMQP broker.

~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname xx.yy.zz.146

Note: Here xx.yy.zz.146 is my existing OpenStack node, installed with the "packstack --allinone" option.

~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_username guest
~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_password notused
~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_connection mysql://cinder:794153107f0a44d7@xx.yy.zz.146/cinder

Note: The sql_connection value can be found in /etc/cinder/cinder.conf on the default Cinder node.
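
Rather than copying it by hand, you can read the value off the default node with the same tool; assuming openstack-utils is installed there as well:

~]# openstack-config --get /etc/cinder/cinder.conf DEFAULT sql_connection
mysql://cinder:794153107f0a44d7@xx.yy.zz.146/cinder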

All set! Start the volume service:

~]# service openstack-cinder-volume start
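
On a RHEL 6 style init system such as this one, you will likely also want the service enabled at boot:

~]# chkconfig openstack-cinder-volume on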


Verifying the new GlusterFS-integrated OpenStack Cinder node

Cinder mounts the GlusterFS share automatically when the service starts, as the mount output on the new node shows:

[root@dhcpzz-211 ~(keystone_admin)]# mount
/dev/mapper/vg_dhcpzz211-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
xx.yy.zz.183:cinder-vol2 on /var/lib/cinder/volumes/c1c93c8c420380b0c5bdea9f2eacd8b8 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@dhcpzz-211 ~(keystone_admin)]# 


Provided verbose=true is set in cinder.conf, the logs should show the node connecting to the AMQP server:
==> /var/log/cinder/volume.log <==
2013-07-31 17:15:31     INFO [cinder.service] Starting 1 workers
2013-07-31 17:15:31     INFO [cinder.service] Started child 3338
2013-07-31 17:15:31    AUDIT [cinder.service] Starting cinder-volume node (version 2013.1.2)
2013-07-31 17:15:32     INFO [cinder.volume.manager] Updating volume status
2013-07-31 17:15:32     INFO [cinder.openstack.common.rpc.impl_qpid] Connected to AMQP server on xx.yy.zz.146:5672
2013-07-31 17:15:32     INFO [cinder.openstack.common.rpc.impl_qpid] Connected to AMQP server on xx.yy.zz.146:5672
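
As an optional cross-check from the broker side, and assuming the qpid-tools package is installed on xx.yy.zz.146, the new client connection should show up in:

~]# qpid-stat -c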


Now cinder-manage should list both nodes:
~(keystone_admin)]# cinder-manage host list
host                   zone           
dhcpzz-146.shanks.com nova           
dhcpzz-211.shanks.com nova      



Let's test by creating Cinder volumes, first from the original node:
[root@dhcpzz-146 ~(keystone_admin)]# cinder create --display-name vol1 20
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-07-31T11:43:30.979520      |
| display_description |                 None                 |
|     display_name    |                 vol1                 |
|          id         | 0a3bf28b-0313-4f8f-b49d-d1337bcddac8 |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+


In /var/log/cinder/volume.log on your first/default Cinder node, you should see (provided verbose=true in cinder.conf):
2013-07-31 17:23:03     INFO [cinder.volume.manager] volume volume-35e1f918-9c82-4e13-86c4-affe1d9f7091: creating
2013-07-31 17:23:03     INFO [cinder.volume.drivers.glusterfs] casted to xx.yy.zz.183:cinder-vol
2013-07-31 17:23:03     INFO [cinder.volume.manager] volume volume-35e1f918-9c82-4e13-86c4-affe1d9f7091: created successfully

The pathinfo xattr shows which bricks actually hold the new file:

[root@dhcpzz-146 ~(keystone_admin)]# getfattr -d -etext -m. -n trusted.glusterfs.pathinfo /var/lib/cinder/volumes/586c24173ac3ab5d1d43aed1f113d9f6/volume-35e1f918-9c82-4e13-86c4-affe1d9f7091 
getfattr: Removing leading '/' from absolute path names
# file: var/lib/cinder/volumes/586c24173ac3ab5d1d43aed1f113d9f6/volume-35e1f918-9c82-4e13-86c4-affe1d9f7091
trusted.glusterfs.pathinfo="(<DISTRIBUTE:cinder-vol-dht> (<REPLICATE:cinder-vol-replicate-1> <POSIX(/rhs/brick2/cinder-vol):dhcpzz-183.shanks.com:/rhs/brick2/cinder-vol/volume-35e1f918-9c82-4e13-86c4-affe1d9f7091> <POSIX(/rhs/brick2/cinder-vol):dhcpzz-223.shanks.com:/rhs/brick2/cinder-vol/volume-35e1f918-9c82-4e13-86c4-affe1d9f7091>))"




[root@dhcpzz-211 ~(keystone_admin)]# cinder create --display-name vol2 20
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-07-31T11:54:00.245068      |
| display_description |                 None                 |
|     display_name    |                 vol2                 |
|          id         | c384cc7b-f888-43e5-979c-915898fc70e8 |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

In /var/log/cinder/volume.log on your newly added Cinder node, you should see (provided verbose=true in cinder.conf):
2013-07-31 17:24:03     INFO [cinder.volume.manager] volume volume-c384cc7b-f888-43e5-979c-915898fc70e8: creating
2013-07-31 17:24:03     INFO [cinder.volume.drivers.glusterfs] casted to xx.yy.zz.183:cinder-vol2
2013-07-31 17:24:04     INFO [cinder.volume.manager] volume volume-c384cc7b-f888-43e5-979c-915898fc70e8: created successfully

Again, the pathinfo xattr confirms this file lives on the new cinder-vol2 bricks:

[root@dhcpzz-211 ~(keystone_admin)]# getfattr -d -etext -m. -n trusted.glusterfs.pathinfo /var/lib/cinder/volumes/c1c93c8c420380b0c5bdea9f2eacd8b8/volume-c384cc7b-f888-43e5-979c-915898fc70e8 
getfattr: Removing leading '/' from absolute path names
# file: var/lib/cinder/volumes/c1c93c8c420380b0c5bdea9f2eacd8b8/volume-c384cc7b-f888-43e5-979c-915898fc70e8
trusted.glusterfs.pathinfo="(<DISTRIBUTE:cinder-vol2-dht> (<REPLICATE:cinder-vol2-replicate-0> <POSIX(/rhs/brick1/cinder-vol2):dhcpzz-223.shanks.com:/rhs/brick1/cinder-vol2/volume-c384cc7b-f888-43e5-979c-915898fc70e8> <POSIX(/rhs/brick1/cinder-vol2):dhcpzz-183.shanks.com:/rhs/brick1/cinder-vol2/volume-c384cc7b-f888-43e5-979c-915898fc70e8>))"
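
Finally, both volumes should move from "creating" to "available" shortly after creation; a quick check from either node (the IDs match the create output above):

~(keystone_admin)]# cinder list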



