Add a new service VM¶
Overview¶
Adding a new service VM is a fairly simple procedure that can easily be done while in production. The operation proceeds as follows:
IP and MAC address assignment and configuration generation.
Infrastructure configuration with the new VM (DHCP, DNS, Fleet, Pcocc).
Puppet role assignment and configuration.
Virtual machine bootstrap.
Procedure¶
IP and MAC assignment¶
Using the work_on_git tool, assign a new IP and MAC address in the hiera/addresses.yaml and hiera/hwaddrs.yaml files of the confiture repo.
The MAC address assignment should follow these conventions (a minimal helper sketch follows the list):
- The MAC is within a locally administered range: typically the first 3 bytes are 52:54:00.
- The physical network is encoded in the MAC: here, the fourth byte is used as a network ID (1 is management, 3 is backbone).
- The machine name is encoded in the MAC: here, the last 2 bytes are used as a machine ID (3 is batch1, 4 is batch2, 40 is batch3).
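As an illustration, a minimal shell helper (hypothetical, not part of the confiture tooling) that composes a MAC address following this convention could look like this:
#!/bin/bash
# mkmac.sh -- illustrative helper for the MAC convention above (assumed name).
# $1: network ID (01 = management/adm, 03 = backbone/bone)
# $2: machine ID (2 digits, e.g. 40 for batch3)
printf '52:54:00:%s:00:%s\n' "$1" "$2"
For example, ./mkmac.sh 01 40 prints 52:54:00:01:00:40, batch3's address on the management network.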
Example Confiture configuration files could be:
# hiera/addresses.yaml
addresses:
  batch[1-3]:
    default: [ adm ]
    adm: 10.1.1.[3-4,58]
    bone: 192.168.0.[3-4,58]
  [...]
# hiera/hwaddrs.yaml
hwaddrs:
  batch1-adm: "52:54:00:01:00:03"
  batch1-bone: "52:54:00:03:00:03"
  batch2-adm: "52:54:00:01:00:04"
  batch2-bone: "52:54:00:03:00:04"
  batch3-adm: "52:54:00:01:00:40"
  batch3-bone: "52:54:00:03:00:40"
Check with git diff that you've achieved the intended changes, then add them to git's index with git add:
diff --git a/hiera/addresses.yaml b/hiera/addresses.yaml
index 46059be..84796a0 100644
--- a/hiera/addresses.yaml
+++ b/hiera/addresses.yaml
@@ -39,10 +39,10 @@ addresses:
     eq: 10.0.1.[100-101] #/23
     bone: 172.30.134.[23-24] #/24
-  batch[1-2]:
+  batch[1-3]:
     default: [ adm ]
-    adm: 10.1.1.[3-4]
-    bone: 192.168.0.[3-4]
+    adm: 10.1.1.[3-4,58]
+    bone: 192.168.0.[3-4,58]
   db1:
     default: [ adm ]
diff --git a/hiera/hwaddrs.yaml b/hiera/hwaddrs.yaml
index c728b44..e881960 100644
--- a/hiera/hwaddrs.yaml
+++ b/hiera/hwaddrs.yaml
@@ -141,3 +141,5 @@ hwaddrs:
   webrelay2-adm: "52:54:00:01:00:38"
   webrelay2-bone: "52:54:00:03:00:38"
   auto1-adm: "52:54:00:01:00:39"
+  batch3-adm: "52:54:00:01:00:40"
+  batch3-bone: "52:54:00:03:00:40"
\ No newline at end of file
Generate the DNS configuration file and zones:
$ confiture dns --uniq-ptr
INFO:generating configuration output/named.conf
INFO:generating configuration output/zones/my.domain.name.com
INFO:generating configuration output/zones/1.1.10.in-addr.arpa
INFO:generating configuration output/zones/0.168.192.in-addr.arpa
Add the relevant files to git's index and discard the changes to the other ones:
$ git add output/zones/my.domain.name.com output/zones/1.1.10.in-addr.arpa output/zones/0.168.192.in-addr.arpa
$ git checkout output/zones/*
Review the DNS changes with git diff --cached.
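As a quick sanity check, you can also grep the generated zone files for the new host:
$ grep -rn batch3 output/zones/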
Now, generate the DHCP configuration file.
$ confiture dhcp
INFO:generating configuration output/dhcpd.conf
Review the changes with git diff. Note that parts of the file generation involve some randomization, so you may see IP changes unrelated to yours; still, your newly assigned IP should now be allocated to the MAC address you assigned.
diff --git a/output/dhcpd.conf b/output/dhcpd.conf
index 27dcc33..254ca5d 100644
--- a/output/dhcpd.conf
+++ b/output/dhcpd.conf
@@ -65170,6 +65170,12 @@ group {
     option host-name "batch2";
     option domain-name "my.domain.name.com";
   }
+  host batch3-adm.my.domain.name.com {
+    hardware ethernet 52:54:00:01:00:40;
+    fixed-address 10.1.1.58;
+    option host-name "batch3";
+    option domain-name "my.domain.name.com";
+  }
   host db1-adm.my.domain.name.com {
     hardware ethernet 52:54:00:01:00:05;
     fixed-address 10.1.1.5;
@@ -65510,6 +65516,12 @@ group {
     option host-name "batch2";
     option domain-name "my.domain.name.com";
   }
+  host batch3-bone.my.domain.name.com {
+    hardware ethernet 52:54:00:03:00:40;
+    fixed-address 192.168.0.58;
+    option host-name "batch3";
+    option domain-name "my.domain.name.com";
+  }
   host db1-bone.my.domain.name.com {
     hardware ethernet 52:54:00:03:00:05;
     fixed-address 192.168.0.27;
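Optionally, if the ISC dhcpd binary is available where you run confiture (an assumption about your environment), you can syntax-check the generated file before committing it:
$ dhcpd -t -cf output/dhcpd.conf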
Add the generated file to git’s index, commit your changes and push them:
$ git add output/dhcpd.conf
$ git commit -m "Added the 'batch3' VM"
$ git push origin HEAD:master
Configuration file deployment¶
Deploy the previously generated files into your infrastructure:
$ cp confiture/output/zones/* domain/all-nodes/var/named/primary
$ cp confiture/output/dhcpd.conf domain/all-nodes/etc/dhcp/dhcpd.conf
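If bind-utils is available (an assumption about your environment), you can validate the deployed zone files before they are served, for instance:
$ named-checkzone my.domain.name.com domain/all-nodes/var/named/primary/my.domain.name.com
$ named-checkzone 1.1.10.in-addr.arpa domain/all-nodes/var/named/primary/1.1.10.in-addr.arpa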
VM Allocation¶
Add a new pcocc::standalone::vm resource to the hypervisors' profiles; these profiles should live in domain-specific hiera data sources.
For clarity, other profile specifications have been removed from the following snippet. Use it as an example and merge it with your configuration.
# hieradata/91_pcocc_mngt.yaml
resources:
  pcocc::standalone::vm:
    batch3:
      reference_image_name: 'volspoms:cloud-ocean2.7'
      fleet: true
      cpu_count: 4
      mem_per_cpu: 2000
      ethernet_nics:
        adm: "52:54:00:01:00:40"
        bone: "52:54:00:03:00:40"
      ssh_authorized_keys: "%{alias('vm_authorized_keys')}"
      resource_set: 'adm_bone'
      persistent_drives:
        - "/volspoms1/pcocc/persistent_drives/batch3.qcow2":
            mmp: "yes"
            cache: "writethrough"
      persistent_drive_dir: "/volspoms1/pcocc/persistent_drives"
      yum_repos: "%{alias('vm_yum_repos')}"
      constraints:
        - "MachineMetadata=role=top"
      puppet_server_name: "%{hiera('puppet_server_hostname')}"
Any locally specific configuration should be made here.
Note
Please note that this snippet contains hiera lookups like %{alias('key_to_lookup')}. These keys are resolved recursively by hiera and can be declared like any other hiera key. Interpolation tokens are documented on the Puppet documentation website.
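For reference, the aliased keys can be declared in any hiera data source. A minimal sketch, with a hypothetical file name and placeholder values to adapt to your site:
# hieradata/00_common.yaml (hypothetical location)
vm_authorized_keys:
  - "ssh-ed25519 AAAAC3... root@admin1"
puppet_server_hostname: "i0conf1"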
VM profile assignment and configuration¶
Configure the VM's roles in puppet's ENC configuration file. The following is a simplified example; adapt it to your local needs.
diff --git a/all-nodes/etc/puppet/puppet-groups.yaml b/all-nodes/etc/puppet/puppet-groups.yaml
index 52676aee..9f1843b3 100644
--- a/all-nodes/etc/puppet/puppet-groups.yaml
+++ b/all-nodes/etc/puppet/puppet-groups.yaml
@@ -10 +10 @@ roles:
-  90_vm_adm_bone: 'batch[1,2],db1,webrelay[1-2]'
+  90_vm_adm_bone: 'batch[1-3],db1,webrelay[1-2]'
@@ -15 +15 @@ roles:
-  99-common: "top[1-3],worker[1-3],admin[1-2],batch[1-2],db1,infra[1-2],lb[1-2],monitor1,ns[1-3],nsrelay1,webrelay[1-2],auto1"
+  99-common: "top[1-3],worker[1-3],admin[1-2],batch[1-3],db1,infra[1-2],lb[1-2],monitor1,ns[1-3],nsrelay1,webrelay[1-2],auto1"
@@ -24 +24 @@ roles:
-  gluster_client: "admin[1-2],auto1,batch[1-2],db1,infra[1-2],lb[1-2],monitor1,ns[1-3],nsrelay1,webrelay[1-2],irene[105,106,245-271]"
+  gluster_client: "admin[1-2],auto1,batch[1-3],db1,infra[1-2],lb[1-2],monitor1,ns[1-3],nsrelay1,webrelay[1-2],irene[105,106,245-271]"
@@ -30,2 +30,2 @@ roles:
-  monitored_server: 'top[1-3],worker[1-3],admin[1-2],batch[1-2],db1,infra[1-2],lb[1-2],monitor1,ns[1-3],nsrelay1,webrelay[1-2],irene[105,106,245-271],irene[105,106,245-271]a,irene[105,106,245-271]b,irene[130-131,140-141,150-153,170-173,190-195]'
-  batch_server: 'batch[1-2]'
+  monitored_server: 'top[1-3],worker[1-3],admin[1-2],batch[1-3],db1,infra[1-2],lb[1-2],monitor1,ns[1-3],nsrelay1,webrelay[1-2],irene[105,106,245-271],irene[105,106,245-271]a,irene[105,106,245-271]b,irene[130-131,140-141,150-153,170-173,190-195]'
+  batch_server: 'batch[1-3]'
Any locally specific configuration of the applied profiles should be made here.
Generate puppet certificates for your newly created VM:
(i0conf1) $ puppet cert generate batch3.$(facter domain)
(admin1) $ scp i0conf1:/etc/puppetlabs/puppet/ssl/certs/batch3.mg1.hpc.domain.fr.pem domain/nodes/batch3/etc/puppetlabs/puppet/ssl/certs/
(admin1) $ scp i0conf1:/etc/puppetlabs/puppet/ssl/private_keys/batch3.mg1.hpc.domain.fr.pem domain/nodes/batch3/etc/puppetlabs/puppet/ssl/private_keys/
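You can verify the copied certificate with standard openssl tooling:
(admin1) $ openssl x509 -noout -subject -dates -in domain/nodes/batch3/etc/puppetlabs/puppet/ssl/certs/batch3.mg1.hpc.domain.fr.pem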
Generate SSH host keys for your newly created VM:
$ ssh-keygen -t rsa -f nodes/batch3/etc/ssh/ssh_host_rsa_key -N ""
$ ssh-keygen -t ecdsa -f nodes/batch3/etc/ssh/ssh_host_ecdsa_key -N ""
$ ssh-keygen -t ed25519 -f nodes/batch3/etc/ssh/ssh_host_ed25519_key -N ""
$ ssh-keygen -t dsa -f nodes/batch3/etc/ssh/ssh_host_dsa_key -N ""
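Equivalently, the four key types can be generated in a single loop:
$ for t in rsa ecdsa ed25519 dsa; do ssh-keygen -t "$t" -f "nodes/batch3/etc/ssh/ssh_host_${t}_key" -N ""; done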
Commit and push your changes.
$ git add -A
$ git commit -m "Allocate and configure the 'batch3' VM"
$ git push origin HEAD:production
Configuration deployment¶
Deploy the DNS and DHCP configurations with puppet:
$ clush -bw ns[1-3],infra[1-2] puppet-changes --tags dns,dhcp
---------------
ns[1-3] (3)
---------------
File:
noop: /var/named/primary/0.3.10.in-addr.arpa
noop: /var/named/primary/1.1.10.in-addr.arpa
noop: /var/named/primary/0.168.192.in-addr.arpa
noop: /var/named/primary/my.domain.name.com
---------------
infra[1-2] (2)
---------------
File:
noop: /etc/dhcp/dhcpd.conf
$ clush -bw ns[1-3],infra[1-2] puppet-apply-changes --tags dns,dhcp
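Once the changes are applied, you can check that the new records resolve (assuming ns1 serves the zone and the adm address is the default); the forward lookup should return the newly assigned 10.1.1.58:
$ dig +short batch3.my.domain.name.com @ns1
$ dig +short -x 10.1.1.58 @ns1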
Deploy the Pcocc and Fleet configurations on the hypervisors and the admin VMs:
$ clush -bw top[1-3],worker[1-3],admin[1-2] puppet-changes --tags pcocc
-----------------------------------
admin[1-2],top[1-3],worker[1-3] (6)
-----------------------------------
File:
noop: /etc/fleet/units/pcocc-vm-batch3.service
noop: /etc/pcocc/cloudinit/batch3.yaml
noop: /etc/sysconfig/pcocc-vm-batch3
Yaml_settings:
noop: pcocc_batch3_template
$ clush -bw top[1-3],worker[1-3],admin[1-2] puppet-apply-changes --tags pcocc
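You can confirm that the unit file has landed on every hypervisor and admin VM:
$ clush -bw top[1-3],worker[1-3],admin[1-2] ls /etc/fleet/units/pcocc-vm-batch3.service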
Bootstrap the VM¶
In order to be scheduled on the cluster, the VM has to be registered as a service in Fleet. To do so, load the puppet-managed unit file on a hypervisor.
(top1) $ fleetctl load /etc/fleet/units/pcocc-vm-batch3.service
Unit pcocc-vm-batch3.service inactive
Unit pcocc-vm-batch3.service loaded on a1ff44e6.../worker1
You can now launch the VM and monitor its bootstrap process:
$ fleetctl start --no-block pcocc-vm-batch3
Triggered unit pcocc-vm-batch3.service start
$ pcocc console -J batch3 vm0
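You can also follow the unit's state from any fleet member:
$ fleetctl list-units | grep pcocc-vm-batch3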
If you want to rebuild the VM from scratch, stop the VM, delete its qcow2 image, and restart it:
$ fleetctl stop --no-block pcocc-vm-batch3
Triggered unit pcocc-vm-batch3.service stop
Successfully stopped units [pcocc-vm-batch3.service].
$ rm /volspoms1/pcocc/persistent_drives/batch3.qcow2
$ fleetctl start --no-block pcocc-vm-batch3