Puppet
""""""

Basic schema
''''''''''''

.. graphviz::

   digraph "arch" {
      nodesep=0.6;
      ranksep=1.7;

      subgraph cluster_client1 { label="Client 1"; "http_client1"[label="puppet_agent"]; }
      subgraph cluster_client2 { label="Client 2"; "http_client2"[label="puppet_agent"]; }

      subgraph cluster_proxy1 { label="Proxy 1"; "haproxy_1"[label="haproxy"]; }
      subgraph cluster_proxy2 { label="Proxy 2"; "haproxy_2"[label="haproxy"]; }

      subgraph cluster_httpd1 { label="Islet Puppet Server 1"; "server1"[label="puppet"]; }
      subgraph cluster_httpd2 { label="Islet Puppet Server 2"; "server2"[label="puppet"]; }
      subgraph cluster_httpd3 { label="Worker Puppet Server 1"; "server3"[label="puppet"]; }
      subgraph cluster_httpd4 { label="Worker Puppet Server 2"; "server4"[label="puppet"]; }
      subgraph cluster_httpd5 { label="Worker Puppet Server 3"; "server5"[label="puppet"]; }

      "http_client1" -> "haproxy_1"[color=blue,weight=10];
      "http_client1" -> "haproxy_2"[color=blue,weight=10,style=dotted];
      "http_client2" -> "haproxy_2"[color=blue,weight=10];
      "http_client2" -> "haproxy_1"[color=blue,weight=10,style=dotted];

      "http_client1" -> "haproxy_1"[color=blue,weight=10,dir=back];
      "http_client1" -> "haproxy_2"[color=blue,weight=10,style=dotted,dir=back];
      "http_client2" -> "haproxy_2"[color=blue,weight=10,dir=back];
      "http_client2" -> "haproxy_1"[color=blue,weight=10,style=dotted,dir=back];

      "haproxy_1" -> server1 [color="green:blue"];
      "haproxy_1" -> server2 [color="green:blue"];
      "haproxy_1" -> server3 [color="green:blue"];
      "haproxy_1" -> server4 [color="green:blue"];
      "haproxy_1" -> server5 [color="green:blue"];

      "haproxy_2" -> server1 [color="green:blue"];
      "haproxy_2" -> server2 [color="green:blue"];
      "haproxy_2" -> server3 [color="green:blue"];
      "haproxy_2" -> server4 [color="green:blue"];
      "haproxy_2" -> server5 [color="green:blue"];

      "haproxy_1" -> "haproxy_2" [label=VRRP,constraint=false,dir=both];

      { rank = sink;
        Legend [shape=none, margin=0, label=<
          <table border="0" cellborder="1" cellspacing="0">
            <tr><td><b>Legend</b></td></tr>
            <tr><td><font color="blue">HTTP requests</font></td></tr>
            <tr><td><font color="green">Checks</font></td></tr>
          </table>
        >];
      }
   }

Description
'''''''''''

We suggest using two `HAProxy` servers to load balance the requests to
the Puppet servers. At the TCP level, `HAProxy` will be configured as a
transparent `TCP proxy`. Should `HAProxy` itself run into scalability
issues, the machines can instead be configured to send `HTTP redirects`,
provided the `puppet` client is able to follow them.

Load balancing can be achieved by dispatching requests according to the
number of open connections on the `puppet` servers running on the
WORKERs. For safety reasons, the number of connections per server should
be limited (see the configuration sketch below). The two `HAProxy`
servers are accessed via a DNS server in round-robin mode, which is
common to all clients.

We suggest not relying on Puppet's default certificate management
scheme, which is based on a CSR (certificate signing request) per
client, manually signed by an admin. Our approach is to prepare the
certificates in advance. Certificate generation is supported by an
additional script from the `puppet-addon` package, which also generates
the correct `dns-alt-names` so that the servers are configured correctly
in case you do not use the `HAProxy` based solution. This approach
requires deploying the certificates before launching puppet. The CA and
the certificates will be deployed locally (and not on the gluster file
system); only the CRLs are shared amongst all puppet servers.

Via `git hooks` we can launch an `r10k` deployment; a sample hook is
sketched below. `r10k` will use a git repository shared on `gluster`
(and not shared via `ssh`).
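To make the setup concrete, the following is a minimal sketch of such an
`HAProxy` configuration; host names, the backend layout and the
`maxconn` limit are placeholders rather than values from an actual
deployment (8140 is Puppet's default port):

.. code-block:: none

   # Sketch only -- server names and connection limits are placeholders.
   frontend puppet_in
       bind *:8140
       mode tcp
       default_backend puppet_servers

   backend puppet_servers
       mode tcp
       balance leastconn              # dispatch by number of connections
       server islet1  islet-puppet1.example.com:8140  check maxconn 50
       server islet2  islet-puppet2.example.com:8140  check maxconn 50
       server worker1 worker-puppet1.example.com:8140 check maxconn 50
       server worker2 worker-puppet2.example.com:8140 check maxconn 50
       server worker3 worker-puppet3.example.com:8140 check maxconn 50

The `check` keyword enables the health checks shown in green in the
schema above; `maxconn` caps the connections per backend server.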
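The round-robin DNS entry amounts to two A records for the same name,
for example in a BIND-style zone file (name and addresses are
placeholders):

.. code-block:: none

   ; Two A records for one name: resolvers rotate the answer order,
   ; spreading the clients across both proxies.
   puppet   IN   A   192.0.2.10   ; haproxy_1
   puppet   IN   A   192.0.2.11   ; haproxy_2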
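The certificates themselves are produced by the `puppet-addon` script
mentioned above. Purely as an illustration of the underlying idea, a
stock Puppet 6+ CA can pre-generate a signed certificate with the
required alternative names along these lines (all names are
placeholders):

.. code-block:: sh

   # Hypothetical example: pre-generate a signed certificate for one
   # server, including the DNS alt names the clients connect through.
   puppetserver ca generate \
       --certname server1.example.com \
       --subject-alt-names puppet.example.com,server1.example.com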
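A minimal sketch of such a deployment hook, assuming a `post-receive`
hook on the shared repository and the usual branch-per-environment
layout:

.. code-block:: sh

   #!/bin/sh
   # post-receive: deploy every pushed branch as a puppet environment.
   while read oldrev newrev ref; do
       branch=${ref#refs/heads/}
       r10k deploy environment "$branch" -p   # -p: also deploy modules
   done

.. raw:: latex

   \clearpage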