
feat(recommender): add OOMMinBumpUp&OOMBumpUpRatio to CRD #8012


Open · omerap12 wants to merge 8 commits into master from oom-feat

Conversation

@omerap12 (Member) commented Apr 5, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR adds two new settings to the Vertical Pod Autoscaler (VPA) configuration to better handle Out of Memory (OOM) events:

  • OOMBumpUpRatio: the factor by which memory is multiplied after an OOM event
  • OOMMinBumpUp: the minimum amount, in bytes, by which memory is increased after an OOM event

These settings can be set per container within a VPA's resource policy.
If they are not set for a specific container, the recommender's global defaults are used.

Example:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: oom-test-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: oom-test
  updatePolicy:
    updateMode: Auto
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      oomBumpUpRatio: 1.5        # multiply memory by 1.5x after an OOM event
      oomMinBumpUp: 104857600    # minimum bump of 100Mi (value is in bytes)
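
For context, here is a rough sketch of how the two settings interact when an OOM event is observed. This is only an illustration of the intended semantics; the helper below is hypothetical, not the recommender's actual code. The memory observed at the time of the OOM is bumped to whichever is larger: the used memory multiplied by oomBumpUpRatio, or the used memory plus oomMinBumpUp.

// Sketch only: illustrative helper, not the recommender's implementation.
// usedBytes is the memory in use when the container was OOM-killed.
func bumpedMemoryAfterOOM(usedBytes int64, bumpUpRatio float64, minBumpUpBytes int64) int64 {
    byRatio := int64(float64(usedBytes) * bumpUpRatio)
    byMinBump := usedBytes + minBumpUpBytes
    if byRatio > byMinBump {
        return byRatio
    }
    return byMinBump
}

With the example policy above (ratio 1.5, minimum bump of 104857600 bytes = 100Mi), a container OOM-killed at 50Mi would be bumped to 150Mi (the minimum bump dominates), while one OOM-killed at 1Gi would be bumped to 1.5Gi (the ratio dominates).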

Which issue(s) this PR fixes:

part of #7650

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Added OOMBumpUpRatio and OOMMinBumpUp options to VPA for customizing memory increase after OOM events.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

VPA now supports OOMBumpUpRatio and OOMMinBumpUp for fine-tuning memory recommendations after OOM events, configurable globally or per-VPA.

@k8s-ci-robot added the kind/feature and cncf-cla: yes labels Apr 5, 2025
@k8s-ci-robot added the approved, area/vertical-pod-autoscaler, and size/L labels Apr 5, 2025
@omerap12 (Member, Author) commented Apr 6, 2025

We might want to create a proper AEP for this, but this is the general direction I'm thinking. I can open additional issues to track the specific flags we’d like to support for this type of configuration.
What do you think?
cc @voelzmo @raywainman
(Wanted to loop in Adrian as well, but he's currently traveling :) )
/hold
/kind api-change

@k8s-ci-robot added the do-not-merge/hold and kind/api-change labels Apr 6, 2025
@voelzmo (Contributor) left a comment

Hey @omerap12 thanks for the PR!

I agree it makes sense to be able to configure the OOM bump behavior at the VPA level. There are a few questions about how to implement this, though:

  • I'm not sure if we want this to be a configuration on Container level or on Pod level, i.e. should this apply to all Containers controlled by a certain VPA or should this rather be something that's controlled per individual Container? I think so far we've been mostly offering configurations on Container level, probably this would also apply here. Or do we have some indication that people who want to configure custom OOM bumps want to do this for all Containers of a Pod in the same way?
  • I don't think we should introduce a new configuration type recommenderConfig. Technically, all of these properties are configuration options of the recommender (histogram decay options, maxAllowed, minAllowed, which resources to include in the recommendations, etc), so this doesn't seem like a reasonable way to group things. If we agree to make this configuration Container specific, I'd rather add this to the ContainerResourcePolicy
  • Currently, OOM bump configuration is part of the AggregationsConfig, as it is assumed to be globally configured, like all the other options in there. This config is only initialized once, in the main.go:
    model.InitializeAggregationsConfig(model.NewAggregationsConfig(*memoryAggregationInterval, *memoryAggregationIntervalCount, *memoryHistogramDecayHalfLife, *cpuHistogramDecayHalfLife, *oomBumpUpRatio, *oomMinBumpUp))
    • If we, however, want to make this configurable per VPA, I'd rather opt for pushing this configuration down, rather than adding some if-else to the cluster_feeder resulting in having to find the correct VPA for a Pod every time we add an OOM sample
    • IMHO, a possible place to put these configuration options would be the aggregate_container_state, where we already have the necessary methods to re-load the ContainerResourcePolicy options on VPA updates, and then read this in cluster.go, right before we add the OOM sample to the ContainerAggregation:
      err := containerState.RecordOOM(timestamp, requestedMemory)

WDYT?
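
As a rough illustration of the "push the configuration down" suggestion above (names and types below are assumptions for the sketch, not the actual autoscaler code): the per-container values could be cached alongside the aggregate container state when the ContainerResourcePolicy is (re)loaded, and resolved against the global AggregationsConfig defaults right before the OOM sample is recorded in cluster.go.

// Sketch only: hypothetical per-container overrides cached on the aggregate
// container state; nil means "fall back to the global default".
type oomBumpUpOverrides struct {
    Ratio     *float64
    MinBumpUp *int64 // bytes
}

// Sketch only: resolve the effective values right before calling RecordOOM.
func effectiveOOMBump(o oomBumpUpOverrides, defaultRatio float64, defaultMinBumpUp int64) (float64, int64) {
    ratio, minBump := defaultRatio, defaultMinBumpUp
    if o.Ratio != nil {
        ratio = *o.Ratio
    }
    if o.MinBumpUp != nil {
        minBump = *o.MinBumpUp
    }
    return ratio, minBump
}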

@omerap12 (Member, Author) commented Apr 7, 2025

(Quoting @voelzmo's review comment above.)

Thanks for the input!

  1. You're right, it makes sense to keep this as a per-container configuration, in line with most of our existing settings.
  2. The recommenderConfig was just part of my initial POC, so with (1) in mind, we definitely don’t need it.
  3. Agreed. Thanks for pointing out the relevant spot in the code!

So yep, I agree with all of your suggestions :)
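
For illustration, the per-container API addition agreed on above might look roughly like the sketch below. Field names follow the example manifest in the PR description; the exact types, validation, and placement are whatever the PR actually defines.

// Sketch only: hypothetical additions to ContainerResourcePolicy; both fields
// are optional, and the recommender's global defaults apply when they are unset.
type ContainerResourcePolicy struct {
    // ... existing fields (ContainerName, Mode, MinAllowed, MaxAllowed, ...)

    // OOMBumpUpRatio multiplies memory after an OOM event.
    // +optional
    OOMBumpUpRatio *float64 `json:"oomBumpUpRatio,omitempty"`

    // OOMMinBumpUp is the minimum memory increase, in bytes, after an OOM event.
    // +optional
    OOMMinBumpUp *int64 `json:"oomMinBumpUp,omitempty"`
}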

@omerap12 force-pushed the oom-feat branch 2 times, most recently from 5e23c1d to b7b84de on April 7, 2025 16:54
@k8s-ci-robot added the area/cluster-autoscaler and area/provider/cluster-api labels and removed the approved label Apr 7, 2025
@k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: omerap12

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label Apr 7, 2025
@omerap12 (Member, Author) commented Apr 7, 2025

/remove area provider/cluster-api
/remove area/cluster-autoscaler

@omerap12 (Member, Author) commented Apr 7, 2025

/remove-area provider/cluster-api
/remove-area cluster-autoscaler

@k8s-ci-robot removed the area/provider/cluster-api and area/cluster-autoscaler labels Apr 7, 2025
omerap12 added 5 commits April 7, 2025 17:14
@omerap12 requested a review from voelzmo on April 8, 2025 06:33
@voelzmo (Contributor) left a comment

Thanks for the adjustments!
Some comments and nits inline. Additionally, I'd really like to see a test verifying that the values from the VPA are used instead of the global defaults.

@omerap12 (Member, Author) commented Apr 9, 2025

(Quoting @voelzmo: "I'd really like to see a test verifying that the values from the VPA are used instead of the global defaults.")

Thanks for the review. I plan to add tests soon :)
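
As a sketch of what such a test could check (using the hypothetical effectiveOOMBump helper and oomBumpUpOverrides type from the earlier sketch in a _test.go file, not the PR's actual test code): with no per-container override the global defaults are returned, and with an override set the VPA-provided values win.

// Sketch only: illustrates the intent of the requested test, not the PR's tests.
func TestEffectiveOOMBumpPrefersVPAValues(t *testing.T) {
    // No overrides: expect the global defaults back.
    ratio, minBump := effectiveOOMBump(oomBumpUpOverrides{}, 1.2, 100*1024*1024)
    if ratio != 1.2 || minBump != 100*1024*1024 {
        t.Errorf("expected global defaults, got ratio=%v minBump=%v", ratio, minBump)
    }

    // Per-VPA overrides: expect them to take precedence over the defaults.
    overrideRatio, overrideMin := 1.5, int64(200*1024*1024)
    ratio, minBump = effectiveOOMBump(oomBumpUpOverrides{Ratio: &overrideRatio, MinBumpUp: &overrideMin}, 1.2, 100*1024*1024)
    if ratio != 1.5 || minBump != overrideMin {
        t.Errorf("expected per-VPA overrides, got ratio=%v minBump=%v", ratio, minBump)
    }
}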

@omerap12 requested a review from voelzmo on April 9, 2025 19:09
@omerap12 (Member, Author) commented

/unhold

@k8s-ci-robot removed the do-not-merge/hold label Apr 11, 2025
@k8s-triage-robot commented

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@raywainman (Contributor) commented

Thanks for putting this together Omer!

Since this does add an API field, what do you think about putting together a quick AEP? Doesn't need to be complicated or super elaborate, just somewhere we can capture some of the rationale for adding this new field.

Something really simple like https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/enhancements/4566-min-replicas could work here.

WDYT?

@omerap12 (Member, Author) commented

(Quoting @raywainman's comment above.)

Sounds reasonable. I’ll put together a quick AEP :)
