content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
+30 −1 (30 additions & 1 deletion)
@@ -68,6 +68,8 @@ spec:
      topologyKey: <string>
      whenUnsatisfiable: <string>
      labelSelector: <object>
+     nodeAffinityPolicy: <string> # optional; alpha since v1.25
+     nodeTaintsPolicy: <string> # optional; alpha since v1.25
```

You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
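
Before the individual fields are described, here is a minimal sketch of a complete Pod manifest that sets the two new fields added above; the Pod name, labels, `zone` topology key, and container image are illustrative assumptions rather than part of this change:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # hypothetical name
  labels:
    app: example                # hypothetical label used by the selector below
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone                 # assumes nodes are labeled with a "zone" key
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: example
      nodeAffinityPolicy: Honor         # optional; alpha since v1.25
      nodeTaintsPolicy: Ignore          # optional; alpha since v1.25
  containers:
    - name: app
      image: registry.k8s.io/pause:3.8  # placeholder image
```

The values shown for `nodeAffinityPolicy` and `nodeTaintsPolicy` match their defaults, so omitting them would behave the same way.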
@@ -93,7 +95,7 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
Pod topology spread treats "global minimum" as 0, and then the calculation of `skew` is performed.
The "global minimum" is the minimum number of matching Pods in an eligible domain,
or zero if the number of eligible domains is less than `minDomains`.
- - When the number of eligible domains with matching topology keys equals or is greater than
+ - When the number of eligible domains with matching topology keys equals or is greater than
  `minDomains`, this value has no effect on scheduling.
- When `minDomains` is nil, the constraint behaves as if `minDomains` is 1.
- When `minDomains` is not nil, the value of `whenUnsatisfiable` must be "`DoNotSchedule`".
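
As an illustrative sketch (the zone key and `app: example` selector are assumptions), a constraint that asks for matching Pods to spread over at least three zones must also use `DoNotSchedule`:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3                      # with fewer than 3 matching domains, the global minimum is treated as 0
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # required whenever minDomains is set
    labelSelector:
      matchLabels:
        app: example                   # hypothetical label
```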
@@ -112,6 +114,33 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
+
+ - **nodeAffinityPolicy** indicates how we will treat Pod's nodeAffinity/nodeSelector
+   when calculating pod topology spread skew. Options are:
+   - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
+   - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
+
+   If this value is nil, the behavior is equivalent to the Honor policy.
+
+   {{< note >}}
+   The `nodeAffinityPolicy` is an alpha-level field added in 1.25. You have to enable the
+   `NodeInclusionPolicyInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+   in order to use it.
+   {{< /note >}}
+
+ - **nodeTaintsPolicy** indicates how we will treat node taints when calculating
+   pod topology spread skew. Options are:
+   - Honor: nodes without taints, along with tainted nodes for which the incoming Pod has a toleration, are included in the calculations.
+   - Ignore: node taints are ignored. All nodes are included in the calculations.
+
+   If this value is nil, the behavior is equivalent to the Ignore policy.
+
+   {{< note >}}
+   The `nodeTaintsPolicy` is an alpha-level field added in 1.25. You have to enable the
+   `NodeInclusionPolicyInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+   in order to use it.
+   {{< /note >}}
+
When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
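
For example, the following sketch (assuming a hypothetical `app: example` label) asks the scheduler to satisfy both a per-zone and a per-node spread at the same time:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # spread evenly across zones (hard requirement)
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: example
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname        # ...and across individual nodes (soft preference)
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: example
```

Because the constraints are ANDed, a candidate node must satisfy the first, hard constraint, while the second only influences scoring via `ScheduleAnyway`.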
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.