Errors when scaling down mongodb #1635
This issue is being marked stale because it has been open for 60 days with no activity. Please comment if this issue is still affecting you. If there is no change, this issue will be closed in 30 days.
This is still relevant.
This issue was closed because it became stale and did not receive further updates. If the issue is still affecting you, please re-open it, or file a fresh issue with updated information.
I have been able to scale my MongoDB instance down by patching the number of members to 0 in the MongoDBCommunity resource before scaling the StatefulSet to 0 replicas.
To restore service, I patched the MongoDBCommunity resource back to the usual number of members and waited for the StatefulSet to restart.
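The workaround above can be sketched with `kubectl`. This is a hedged example, not a documented procedure: the resource name `my-mongodb`, the namespace `mongodb`, and the original member count of 3 are placeholders, and it assumes the MongoDBCommunity resource exposes `spec.members` and that the operator-managed StatefulSet shares the resource's name.

```shell
# Scale down: first tell the operator the replica set should have 0 members,
# then scale the underlying StatefulSet to 0 replicas.
kubectl patch mongodbcommunity my-mongodb -n mongodb \
  --type merge -p '{"spec": {"members": 0}}'
kubectl scale statefulset my-mongodb -n mongodb --replicas=0

# Restore: patch the member count back and let the operator
# scale the StatefulSet up again.
kubectl patch mongodbcommunity my-mongodb -n mongodb \
  --type merge -p '{"spec": {"members": 3}}'
```

Patching the custom resource first matters: the operator reconciles the StatefulSet from the MongoDBCommunity spec, so scaling the StatefulSet alone would just be reverted on the next reconcile.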
What did you do to encounter the bug?
Steps to reproduce the behavior:
Set the replica counts of the MongoDB members and arbiters to 0 in the MongoDBCommunity resource.
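A minimal sketch of the reproduction step, assuming the MongoDBCommunity resource uses the `spec.members` and `spec.arbiters` fields and is named `my-mongodb` in the `mongodb` namespace (both placeholders):

```shell
# Attempt to set both members and arbiters to 0; per this report,
# the operator rejects the resulting spec as invalid.
kubectl patch mongodbcommunity my-mongodb -n mongodb \
  --type merge -p '{"spec": {"members": 0, "arbiters": 0}}'
```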
What did you expect?
The MongoDB operator should accept this configuration. Having no pods for the MongoDB deployment should be a valid state, for example to scale the cluster down overnight or on weekends.
What happened instead?
The new spec is rejected as invalid, and we get the following error:
Operator Information
Kubernetes Cluster Information
Additional context
Operator logs: