This is a preview. The full article is published at kubernetes.io.

Kubernetes 1.35: In-Place Pod Resize Graduates to Stable

By Natasha Sarkar, Kubernetes Blog

This release marks a major step: more than 6 years after its initial conception, the In-Place Pod Resize feature (also known as In-Place Pod Vertical Scaling), first introduced as alpha in Kubernetes v1.27 and graduated to beta in Kubernetes v1.33, is now stable (GA) in Kubernetes 1.35! This graduation is a major milestone for improving resource efficiency and flexibility for workloads running on Kubernetes.

What is in-place Pod Resize?

In the past, the CPU and memory resources allocated to a container in a Pod were immutable: changing them required deleting and recreating the entire Pod. For stateful services, batch jobs, or latency-sensitive workloads, this was an incredibly disruptive operation. In-Place Pod Resize makes CPU and memory requests and limits mutable, allowing you to adjust these resources within a running Pod, often without requiring a container restart.

Key concepts:

- Desired resources: A container's spec.containers[*].resources field now represents the desired resources. For CPU and memory, these fields are now mutable.
- Actual resources: The status.containerStatuses[*].resources field reflects the resources currently configured for a running container.
- Triggering a resize: You can request a resize by updating the desired requests and limits in the Pod's specification via the new resize subresource (see the sketch at the end of this preview).

How can I start using in-place Pod Resize?

Detailed usage instructions and examples are provided in the official documentation: Resize CPU and Memory Resources assigned to Containers.

How does this help me?

In-place Pod Resize is a foundational building block that unlocks seamless vertical autoscaling and improvements to workload efficiency.

Resources adjusted without disruption

Workloads sensitive to latency or restarts can have their resources modified in place without downtime or loss of state.

More powerful autoscaling

Autoscalers can now adjust resources with less impact. For example, the Vertical Pod Autoscaler (VPA)'s InPlaceOrRecreate update mode, which leverages this feature, has graduated to beta. This allows resources to be adjusted automatically and seamlessly based on usage, with minimal disruption. See AEP-4016 for more details.

Address transient resource needs

Workloads that temporarily need more resources can be adjusted quickly. This enables features like CPU Startup Boost (AEP-7862), where applications can request more CPU during startup and then automatically scale back down.

Here are a few example use cases:

- A game server that adjusts its size as player count shifts.
- A pre-warmed worker that is shrunk while unused and inflated on the first request.
- Dynamic scaling with load for efficient bin-packing.
- Increased resources for JIT compilation on startup.

Changes between beta (1.33) and stable (1.35)

Since the initial beta in v1.33, development effort has focused primarily on stabilizing the feature and improving its usability based on community feedback. Here are the primary changes for the stable release:

Memory limit decrease

Decreasing memory limits was previously prohibited. This restriction has been lifted, and memory limit decreases are now permitted. The Kubelet attempts to prevent OOM-kills by allowing the resize only if the current memory usage is below...
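
To make the resize subresource described above concrete, here is a minimal kubectl sketch, not taken from the article. It assumes a cluster and kubectl recent enough to support in-place resize; the Pod name (resize-demo) and container name (app) are placeholders.

# Patch the desired CPU request/limit through the resize subresource
# (resize-demo and app are hypothetical names).
kubectl patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'

# The desired values change in spec.containers[*].resources immediately;
# status.containerStatuses[*].resources shows what the Kubelet has actually applied.
kubectl get pod resize-demo -o jsonpath='{.status.containerStatuses[0].resources}'

See the official task page linked above for full, authoritative examples.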


