@@ -130,7 +130,7 @@ no container experiences resource starvation, as kubernetes currently lacks a
 mechanism for sharing resources between containers easily.
 
 Introducing pod-level resource requests and limits offers a more simplified
-resource management for multi-container pods as it is easier to gauge theß
+resource management for multi-container pods as it is easier to gauge the
 collective resource usage of all containers in a pod rather than predicting each
 container’s individual needs. This proposal seeks to support pod-level resource
 management, enabling Kubernetes to control the total resource consumption of the
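
To make the proposed model concrete, here is a minimal sketch of a pod spec that sets requests and limits at the pod level while leaving the individual containers unconstrained. The pod-level `resources` field and the specific values shown are illustrative assumptions based on this proposal, not a finalized API.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources-demo
spec:
  # Pod-level budget shared by all containers (field placement assumed per this proposal).
  resources:
    requests:
      cpu: "1"
      memory: 512Mi
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
  - name: app
    image: nginx
    # No per-container requests or limits: both containers share the pod-level budget.
  - name: sidecar
    image: busybox
    command: ["sleep", "infinity"]
```

With a spec like this, the scheduler and admission components would only need to account for the aggregate 1 CPU / 512Mi request rather than predicting each container's individual needs.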
@@ -1519,17 +1519,28 @@ Yes. Any new pods created after disabling the feature will not utilize the
 pod-level resources feature. Disabling the feature will not affect existing pods
 not using pod-level resources feature -- they will continue to function.
 
-For pods that were created with pod-level resources, disabling the feature can
+* For pods that were created with pod-level resources, disabling the feature can
 result in a discrepancy between how different components calculate resource usage
-and the actual resource consumption of those workloads. To resolve this, users
-can delete the affected pods and recreate them.
-
-For example, if a ResourceQuota object exists in a namespace with pods that were
-using pod-level resources, and then the feature is disabled, those existing pods
-will continue to run. However, the ResourceQuota controller will revert to using
-container-level resources instead of pod-level resources when calculating
-resource usage for these pods. This may lead to a temporary mismatch until the
-pods are recreated or the resource settings are updated.
+and the actual resource consumption of those workloads. For example, if a
+ResourceQuota object exists in a namespace with pods that were using pod-level
+resources, and then the feature is disabled, those existing pods will continue to
+run. However, the ResourceQuota controller will revert to using container-level
+resources instead of pod-level resources when calculating resource usage for
+these pods. This may lead to a temporary mismatch until the pods are recreated or
+the resource settings are updated.
+
+* Another scenario arises if the feature is disabled after deployments have been
+admitted with only pod-level resources. In this case, the scheduler will
+interpret these pods as having the minimum implementation-defined resource
+requirements, so they may be classified as BestEffort QoS pods and given the
+lowest priority for resource allocation. As a result, they may be scheduled onto
+nodes that lack sufficient resources to run them effectively (see the sketch
+below). This highlights the risk of toggling the feature gate after such
+workloads are already in place, as it can affect pod scheduling and resource
+availability in the cluster.
+
+In both cases, users can resolve the discrepancy by deleting the affected pods
+and recreating them.
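
As a rough illustration of both scenarios above, the sketch below shows a pod that was admitted with only pod-level resources (field placement assumed per this proposal) alongside a ResourceQuota in the same namespace. Once the feature gate is disabled, components that only inspect container-level resources see no requests for this pod, so quota accounting and QoS classification may no longer reflect what it actually consumes. Names and values are hypothetical.

```yaml
# Hypothetical pod admitted while the pod-level resources feature gate was enabled.
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-only
  namespace: team-a
spec:
  resources:          # pod-level values only; the container sets none
    requests:
      cpu: 500m
      memory: 256Mi
  containers:
  - name: app
    image: nginx
---
# ResourceQuota in the same namespace. With the feature disabled, the quota
# controller falls back to summing container-level requests, which are empty
# for the pod above, so its usage may be undercounted until it is recreated.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
```

Recreating the pod (or adding container-level requests) brings the accounting back in line, which is why deleting and recreating affected pods is the suggested remediation.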
 
 ###### What happens if we reenable the feature if it was previously rolled back?
 