DHCP
Basic diagram
![digraph "arch" {
subgraph cluster_client {
label="Client";
"client_dhcp"[label="Client DHCP"];
}
subgraph cluster_client2 {
label="Client 2";
"client2_dhcp"[label="Client DHCP"];
}
subgraph cluster_client3 {
label="Client 3";
"client3_dhcp"[label="Client DHCP"];
}
subgraph cluster_client4 {
label="Client 4";
"client4_dhcp"[label="Client DHCP"];
}
subgraph cluster_chassis {
label="Chassis Switch";
"switch_relay"[label="Opt82 Relay to first server"];
}
subgraph cluster_chassis2 {
label="Chassis Switch 2";
"switch_relay2"[label="Opt82 Relay to second server"];
}
subgraph cluster_islet_switch {
label="Islet Switch"
"islet_router"[label="Router"];
}
subgraph cluster_islet_switch2 {
label="Islet 2 Switch"
"islet_router2"[label="Router"];
}
subgraph cluster_top_switch {
label="Top Switch"
"top_switch"[label="Switch"];
}
subgraph cluster_dhcp_server1 {
label="Server 1"
"server1_dhcp"[label="dhcpd"];
}
subgraph cluster_dhcp_server2 {
label="Server 2"
"server2_dhcp"[label="dhcpd"];
}
"client_dhcp" -> "switch_relay"[label="brodcast",color=green];
"client2_dhcp" -> "switch_relay"[label="brodcast",color=green];
"client3_dhcp" -> "switch_relay2"[label="brodcast",color=red];
"client4_dhcp" -> "switch_relay2"[label="brodcast",color=red];
"switch_relay" -> "islet_router"[label="unicast",color=green];
"switch_relay2" -> "islet_router2"[label="unicast",color=red];
"islet_router" -> "top_switch"[label="unicast",color=green];
"islet_router2" -> "top_switch"[label="unicast",color=red];
"top_switch" -> "server1_dhcp"[label="unicast",color=green];
"top_switch" -> "server2_dhcp"[label="unicast",color=red];
}](../../_images/graphviz-752eb41133af01264436652ef1a7f47e54105706.png)
Description
We suggest using N DHCP servers, with the load balancing done by DHCP relays in a static, predefined manner. Indeed, the current (and probably also the future) generation of supercomputers requires option 82 (circuit-id + remote-id), which means that every switch in each chassis must be configured with a DHCP relay.
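As an illustration of how option 82 is consumed on the server side, the excerpt below matches the circuit-id and remote-id inserted by a chassis switch relay to tie a lease to a specific port. It is a minimal sketch only: the class name, sub-option values, and addresses are assumptions and are not taken from the actual site configuration.

```
# Hypothetical dhcpd.conf excerpt: select clients by the option 82
# sub-options inserted by a chassis switch relay (values are made up).
class "chassis1-port3" {
  # agent.remote-id identifies the relaying switch,
  # agent.circuit-id identifies the port the client is plugged into
  match if option agent.remote-id = "chassis-sw-1"
      and option agent.circuit-id = "port-3";
}

subnet 10.1.0.0 netmask 255.255.0.0 {
  option routers 10.1.0.254;
  pool {
    # only requests relayed from that specific port get this address
    allow members of "chassis1-port3";
    range 10.1.0.103 10.1.0.103;
  }
}
```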
The relays are configured with confiture, and different switches can point at different DHCP servers. The N servers are thus all deployed with the same configuration (generated by confiture), but each one has a unique service IP address, which allows static load balancing.
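To make the static load balancing concrete, the sketch below spreads two relays over two servers. It uses ISC dhcrelay on a Linux host purely for illustration; on the real system the relays run on the chassis switches and their configuration is generated by confiture, and the addresses and interface names shown here are assumptions.

```
# Hypothetical example: each relay appends option 82 (-a) and forwards
# every request it receives to exactly one of the N identical servers.
# Relay for chassis switch 1 targets server 1:
dhcrelay -a -i eth0 10.0.255.1
# Relay for chassis switch 2 targets server 2:
dhcrelay -a -i eth0 10.0.255.2
# Both servers run dhcpd with the same generated dhcpd.conf; only their
# service IPs (10.0.255.1 and 10.0.255.2) differ, so the way chassis are
# split across servers is the static load balancing.
```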