Extending the management node pool¶
There are several situations that require an extension of the original management node pool (top & worker nodes):
GlusterFS filling up
Hardware resources being depleted
…
The overall procedure is quite similar to the original installation: deploy the node, integrate it into clustered systems (GlusterFS, Fleet, Etcd, …).
Only sets of 3 nodes can be added at a time. In this section, we will add the worker[4-6] nodes. See Gluster for details about the GlusterFS architecture.
Node installation¶
First, deploy and configure the node like any other hypervisor. See Worker nodes deployment for the complete procedure.
The result should be nodes that are reachable through SSH, with local storage (for GlusterFS bricks) configured and mounted and, if present, multi-gigabit Ethernet cards configured and ready to be used.
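For example, a quick check from an existing node (assuming the brick layout used later in this section) verifies both SSH reachability and the mounted bricks at once:
# ssh worker4.$(facter domain) 'df -h | grep glusterfs'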
Node configuration¶
As before, use the original procedure to configure the node itself, up to the GlusterFS setup. See Puppet agents bootstrap for details.
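For instance, a manual agent run (standard Puppet CLI; your bootstrap procedure may wrap it differently) confirms that the node converges cleanly:
worker4# puppet agent --test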
GlusterFS integration¶
To begin, add the newly deployed nodes into the GlusterFS cluster by executing this on an existing node:
top1# gluster peer probe worker4-data.$(facter domain)
top1# gluster peer probe worker5-data.$(facter domain)
top1# gluster peer probe worker6-data.$(facter domain)
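Verify that the probes succeeded and that all peers are connected before going further:
top1# gluster peer status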
Then, extend the GlusterFS volumes with the new bricks by executing this on any GlusterFS cluster member:
# gluster volume add-brick volspoms1 worker4-data.$(facter domain):/glusterfs/brick1/data \
    worker5-data.$(facter domain):/glusterfs/brick1/data \
    worker6-data.$(facter domain):/glusterfs/brick1/data \
    worker6-data.$(facter domain):/glusterfs/brick2/data \
    worker4-data.$(facter domain):/glusterfs/brick2/data \
    worker5-data.$(facter domain):/glusterfs/brick2/data \
    worker5-data.$(facter domain):/glusterfs/brick3/data \
    worker6-data.$(facter domain):/glusterfs/brick3/data \
    worker4-data.$(facter domain):/glusterfs/brick3/data
# gluster volume add-brick volspoms2 worker4-data.$(facter domain):/glusterfs/brick4/data \
    worker5-data.$(facter domain):/glusterfs/brick4/data \
    worker6-data.$(facter domain):/glusterfs/brick4/data
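You can confirm that the new bricks were accepted before rebalancing:
# gluster volume info volspoms1
# gluster volume info volspoms2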
Finally, rebalance the volumes:
# gluster volume rebalance volspoms1 start
# gluster volume rebalance volspoms2 start
You can monitor the rebalancing process by executing the status commands:
# gluster volume rebalance volspoms1 status
# gluster volume rebalance volspoms2 status
Note
If you have set quotas on the volumes, don’t forget to update them; otherwise, the volumes’ available size won’t reflect the new capacity.
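For example, to raise the limit on a volume’s root directory (the 12TB value is purely illustrative; use the actual new capacity):
# gluster volume quota volspoms1 limit-usage / 12TB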
ETCD integration¶
To integrate etcd into the existing cluster, execute the following sequence for each node:
Add the member into the cluster:
# etcdctl -u root -C https://top1.$(facter domain):2379 member add $NODE https://worker4.$(facter domain):2380
Password:
Added member named worker4 with ID ... to cluster
ETCD_NAME="worker4"
ETCD_INITIAL_CLUSTER="..."
ETCD_INITIAL_CLUSTER_STATE="existing"
Configure etcd on the node with the configuration given in the preceding output (/etc/etcd/etcd.conf). Disable the default proxy mode (set ETCD_PROXY=off in etcd.conf). This configuration is most likely managed by Puppet.
Stop etcd and wipe out its data on the node:
# systemctl stop etcd
# rm -Rf /var/lib/etcd/*
Start etcd:
# systemctl start etcd
Check the cluster’s health:
# etcdctl -C https://top1.$(facter domain):2379 cluster-health
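On a healthy cluster, the output lists every member, including the new ones (member IDs and URLs will differ), and ends with:
member ... is healthy: got healthy result from https://worker4.$(facter domain):2379
cluster is healthy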
Finalize the configuration¶
To complete the integration, add the new cluster nodes into the configuration.
For example, GlusterFS mount points should also use these nodes, as should etcd clients; other settings may need updating as well.
This configuration is done entirely within Puppet.
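As an illustration only (the mount point and hostnames are examples, not taken from the actual Puppet code), a GlusterFS fstab entry managed by Puppet could list the new nodes as fallback volfile servers:
top1-data.example.org:/volspoms1 /srv/volspoms1 glusterfs defaults,backup-volfile-servers=worker4-data.example.org:worker5-data.example.org:worker6-data.example.org 0 0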