
use destruct method to flush the producer #341


Open · wants to merge 1 commit into master

Conversation

@sash (Contributor) commented Apr 11, 2025

I had issues with the new async producer. Because a reference to it is captured in the terminate callback, the rdkafka producer is never garbage collected, and so when running in the queue the connection to Kafka is never closed.

That causes issues when used in the queue: new connections are indeed created, but the old ones are never closed.
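For context, here is a hedged sketch of the pattern being replaced, reconstructed from the description above rather than taken from the package's actual code (the producer construction and flush timeout are assumptions). Capturing the producer in a terminating callback pins a strong reference to it for the lifetime of the worker, which is why it is never garbage collected:

```php
<?php

use RdKafka\Conf;
use RdKafka\Producer;

$producer = new Producer(new Conf());

// Hypothetical sketch: the `use ($producer)` capture keeps a strong
// reference alive inside the application's terminating callbacks, so the
// producer object is never destroyed and its Kafka connection stays open.
app()->terminating(function () use ($producer) {
    $producer->flush(10000); // flush pending messages on app terminate
});
```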

The solution is to use the destructor, as sketched below. That way, if you use the main facade method, the reference is kept around for a second publish; and if you use Kafka::fresh(), the messages are flushed when the producer goes out of scope, and the connection is closed when it is destroyed. That works great for the queue, because the facade cache is reset after each job execution! If someone wants a persistent publisher that works across jobs, they can create a singleton.
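A minimal sketch of the destructor approach, assuming an illustrative wrapper class; the name `AsyncProducerExample`, the `publish()` method, and the 10s flush timeout are mine for illustration, not the package's actual API:

```php
<?php

use RdKafka\Producer;

// Illustrative wrapper; the real producer class in the package differs.
class AsyncProducerExample
{
    public function __construct(private Producer $producer) {}

    public function publish(string $topic, string $payload): void
    {
        $this->producer->newTopic($topic)
            ->produce(RD_KAFKA_PARTITION_UA, 0, $payload);
        $this->producer->poll(0); // serve delivery report callbacks
    }

    // No terminating callback holds a reference to $this, so when the last
    // reference goes out of scope (e.g. after a Kafka::fresh() publish, or
    // when the facade cache is reset between queue jobs), PHP runs this
    // destructor: pending messages are flushed and the underlying rdkafka
    // handle is released, closing the connection.
    public function __destruct()
    {
        $this->producer->flush(10000); // wait up to 10s for in-flight messages
    }
}
```

With this in place, nothing outside the object keeps it alive: the facade cache reset between jobs drops the cached instance and triggers the flush-and-close. A cross-job persistent publisher would simply be a container singleton that holds one live reference until the worker itself shuts down.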

… binding the class to a callback in terminate
@mateusjunges (Owner) commented
@sash can you take a look at the test failures please?

@mateusjunges (Owner) commented

And sorry for the delay in checking this, but GitHub is not notifying me for some reason.
