The Node will transition to an unready state, which would be detected by a MachineHealthCheck,
though there may be some delay depending on the configuration of the MachineHealthCheck.
In the future, a termination handler could trigger the Machine to be deleted sooner.

### 'Interruptible' label

In order to deploy the termination handler, we'll need to create a DaemonSet that runs it on each spot instance node.

### Termination handler

To enable graceful termination of workloads running on non-guaranteed instances,
a DaemonSet will need to be deployed to watch for termination notices and gracefully move workloads.

Since this is not essential for running on non-guaranteed instances, and existing solutions exist for each provider,
users can deploy these existing solutions until CAPI has capacity to implement a solution.

#### Termination handler design

To enable graceful termination of workloads running on non-guaranteed instances,
a termination handler pod will need to be deployed on each spot instance.

The termination pod will poll the metadata service for the Node
that it is running on.
We will implement request/response handlers for each infrastructure provider that supports non-guaranteed instances,
enabling the handler to determine whether the cloud provider instance is due
for termination.

The code for all providers will be mostly similar; the only difference is the logic for polling the termination endpoint and processing the response.
This means that the cloud provider type can be passed as an argument, and the termination handler can be common to all providers and live in a separate repository.
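
For illustration, here is a minimal sketch of what that provider abstraction might look like. Everything in it is an assumption rather than part of this proposal: the `Poller` interface, the `newPoller` factory, and the polling interval are invented names, and only the AWS case is shown (EC2's spot `instance-action` metadata endpoint returns 200 once a termination notice has been issued).

```go
// Hypothetical sketch of a provider-agnostic termination handler.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// Poller checks a provider's metadata service and reports whether the
// instance it runs on has been marked for termination.
type Poller interface {
	TerminationRequested(ctx context.Context) (bool, error)
}

// awsPoller queries the EC2 spot instance-action metadata endpoint;
// a 200 response indicates a termination notice has been issued.
type awsPoller struct{ client *http.Client }

func (p *awsPoller) TerminationRequested(ctx context.Context) (bool, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://169.254.169.254/latest/meta-data/spot/instance-action", nil)
	if err != nil {
		return false, err
	}
	resp, err := p.client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

// newPoller selects the provider-specific logic; the rest of the handler
// is identical across providers, so the provider is just an argument.
func newPoller(provider string) (Poller, error) {
	switch provider {
	case "aws":
		return &awsPoller{client: http.DefaultClient}, nil
	default:
		return nil, fmt.Errorf("unsupported provider %q", provider)
	}
}

func main() {
	// In a real handler the provider would come from a CLI flag.
	poller, err := newPoller("aws")
	if err != nil {
		panic(err)
	}
	for range time.Tick(5 * time.Second) {
		terminating, err := poller.TerminationRequested(context.Background())
		if err != nil || !terminating {
			continue // keep polling on errors or while the instance is healthy
		}
		fmt.Println("termination notice received; marking Node as Terminating")
		return
	}
}
```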

Should the instance be marked for termination, the handler will add a condition
to the Node object that it is running on to signal that the instance will be
terminating imminently.

```yaml
- type: Terminating
  status: "True"
  reason: TerminationRequested
  message: The cloud provider has marked this instance for termination
```
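
As a sketch of how the handler might apply this condition with client-go, consider the following; the names and credential loading are assumptions (a real handler would load the Kubelet's kubeconfig mounted from the host, and `NODE_NAME` would be injected via the downward API):

```go
// Hypothetical sketch: mark the local Node with the Terminating condition.
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func markNodeTerminating(ctx context.Context) error {
	// Assumption: in-cluster config stands in for the Kubelet credentials,
	// which would normally be mounted from the host and loaded here.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	// NODE_NAME is assumed to be injected via the downward API, so that
	// the NodeRestriction admission controller permits this update.
	node, err := client.CoreV1().Nodes().Get(ctx, os.Getenv("NODE_NAME"), metav1.GetOptions{})
	if err != nil {
		return err
	}

	node.Status.Conditions = append(node.Status.Conditions, corev1.NodeCondition{
		Type:               "Terminating",
		Status:             corev1.ConditionTrue,
		Reason:             "TerminationRequested",
		Message:            "The cloud provider has marked this instance for termination",
		LastTransitionTime: metav1.Now(),
	})
	_, err = client.CoreV1().Nodes().UpdateStatus(ctx, node, metav1.UpdateOptions{})
	return err
}
```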

Once the Node has been marked with the `Terminating` condition, it will be
the MachineHealthCheck controller's responsibility to ensure that the Machine
is deleted, triggering it to be drained and removed from the cluster.

To enable this, the following MachineHealthCheck should be deployed:

```yaml
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: cluster-api-termination-handler
  labels:
    api: clusterapi
    k8s-app: termination-handler
spec:
  selector:
    matchLabels:
      cluster.x-k8s.io/interruptible-instance: "" # This label should be automatically applied to all spot instances
  maxUnhealthy: 100% # No reason to block the interruption, it's inevitable
  unhealthyConditions:
  - type: Terminating
    status: "True"
    timeout: 0s # Immediately terminate the instance
```

#### Running the termination handler

The Termination Pod will be part of a DaemonSet that can be deployed using [ClusterResourceSet](https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20200220-cluster-resource-set.md). The DaemonSet will select Nodes which are labelled as spot instances, ensuring the Termination Pod only runs on instances that require termination handlers.

The spot label will be added to the Node by the machine controller as described [here](#interruptible-label), provided the infrastructure provider supports spot instances and the instance is a spot instance.

#### Termination handler security

The metadata services that are hosted by the cloud providers are only accessible from the hosts themselves, so the pod will need to run within the host network.

To restrict the possible effects of the termination handler, it should re-use the Kubelet credentials, which pass through the [NodeRestriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller. This limits the termination handler to only being able to modify the Node on which it is running. For example, it would not be able to set conditions on a different Node.

To be able to mount and read the Kubelet credentials, the pod must be able to mount host paths, and, since the credentials are mounted read-only and accessible only to the root account, the termination handler must be able to run as root.
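
Putting these constraints together, a sketch of what the DaemonSet might look like follows. The image, credential path, `--provider` flag, and Node label are placeholders, not decided by this proposal (the label mirrors the one used in the MachineHealthCheck example above):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cluster-api-termination-handler
spec:
  selector:
    matchLabels:
      k8s-app: termination-handler
  template:
    metadata:
      labels:
        k8s-app: termination-handler
    spec:
      hostNetwork: true # The metadata services are only reachable from the host network
      nodeSelector:
        cluster.x-k8s.io/interruptible-instance: "" # Only run on spot instance Nodes
      containers:
      - name: termination-handler
        image: example.com/termination-handler:latest # Placeholder image
        args: ["--provider=aws"] # Hypothetical flag selecting the provider-specific poller
        securityContext:
          runAsUser: 0 # Kubelet credentials on the host are only readable by root
        volumeMounts:
        - name: kubelet-credentials
          mountPath: /etc/kubernetes/kubelet.conf # Placeholder path to the Kubelet credentials
          readOnly: true
      volumes:
      - name: kubelet-credentials
        hostPath:
          path: /etc/kubernetes/kubelet.conf
          type: File
```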

### Future Work

#### Support for MachinePools

While MachinePools are being implemented across the three cloud providers that this project covers,