Replies: 4 comments 1 reply
-
I will convert this issue to a GitHub discussion. Currently GitHub will automatically close and lock the issue even though your question will be transferred and responded to elsewhere. This is to let you know that we do not intend to ignore this, but this is how the current GitHub conversion mechanism makes it seem to users :(
-
According to the log, you have two concurrent operations on the same exchange:
Deletion of an exchange involves deletion of all of its bindings, and potentially some auto-delete queues, which can be a non-trivial tree of objects even if you ignore the fact that every binding uses more than one row in as many tables. So, using a long-lived exchange would be the easiest-to-reason-about way out.
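To make that concrete, here is a minimal sketch of the long-lived exchange pattern (Python with pika; the exchange name and connection parameters are placeholders, not taken from this thread): declare the exchange once as durable, never auto-delete it, and let every service instance repeat the same idempotent declaration at startup.

```python
import pika

# Placeholder connection details; adjust for your cluster.
params = pika.ConnectionParameters(host="localhost")
connection = pika.BlockingConnection(params)
try:
    channel = connection.channel()
    # Durable and never auto-deleted: the exchange outlives connections and
    # restarts. Re-declaring with identical arguments is a no-op, so every
    # service instance can safely run this at startup instead of ever
    # deleting and recreating the exchange.
    channel.exchange_declare(
        exchange="orders",        # placeholder name
        exchange_type="topic",
        durable=True,
        auto_delete=False,
    )
finally:
    connection.close()
```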
-
Oh, I forgot to mention that this exchange is actually already long-lived. We had network connectivity problems some time ago and it is possible that one of the cluster nodes was broken. I don't know why this exchange is in the "deleting" state now, but that is how it is.
-
More info from my team: they recreated the vhost (with the same name) after the connectivity problem occurred.
-
Hi. We recently faced incomprehensible behavior in one of our services in production. At startup, the service began to throw an exception:
It is strange because the error appeared where it had never occurred before. Moreover, we have other services that work in a similar way (i.e. correctly, without any error). We ran an experiment: we deleted the exchange and started the application, both on a development machine and on one of the test stands. In both cases the exchange was created and the application worked fine. So the problem affects production only.
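Roughly, the startup path in such a service looks like the following simplified sketch (Python with pika; all names are placeholders and this is not our actual code). The `exchange_declare` call is the one that fails at application start.

```python
import pika

params = pika.ConnectionParameters(host="rabbitmq.example.com")  # placeholder
connection = pika.BlockingConnection(params)
try:
    channel = connection.channel()
    # If the broker still holds a half-deleted record for the exchange,
    # this declaration is where the startup exception surfaces.
    channel.exchange_declare(exchange="events", exchange_type="fanout", durable=True)
    channel.queue_declare(queue="service-queue", durable=True)
    channel.queue_bind(queue="service-queue", exchange="events")
except pika.exceptions.ChannelClosedByBroker as exc:
    # The broker closes the channel and reports why the declaration was refused.
    print(f"exchange/queue setup rejected by broker: {exc}")
    raise
finally:
    connection.close()
```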
Then we looked into the production server logs:

As I understand it, the exchange is in an ambiguous state: on the one hand it still exists, but on the other hand it does not. For some reason, the exchange's removal operation has never completed. So my question is: how can we understand why this happened, and how can we fix the situation?
Our RabbitMQ server is clustered and consists of 3 nodes. I can provide more information if necessary.
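If it helps, this is how I would probe what the broker currently thinks about the exchange (a sketch, assuming Python with pika and placeholder names): a passive declare succeeds only if the exchange is fully present, and is rejected by the broker otherwise.

```python
import pika

params = pika.ConnectionParameters(host="rabbitmq.example.com")  # placeholder
connection = pika.BlockingConnection(params)
try:
    channel = connection.channel()
    # passive=True never creates anything; it only asks the broker
    # whether the exchange is currently declared and usable.
    channel.exchange_declare(exchange="events", passive=True)
    print("broker reports the exchange as present")
except pika.exceptions.ChannelClosedByBroker as exc:
    # e.g. 404 NOT_FOUND if the broker no longer considers it declared.
    print(f"exchange is not usable: {exc}")
finally:
    connection.close()
```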