Fix synchronous pool shutdowns being arbitrarily delayed #1041
Conversation
Previously, the connection pool thread did not drop its upgraded `Arc` pool reference while sleeping until the next idle duration check. As a result, dropping the `SmtpTransport` did not shut down any connections until that thread woke up again (since it still held a reference to the pool), which can take up to 60s with default settings. In practice, this means that connections will most likely not be properly closed before the program exits (since the `SmtpTransport` is most likely dropped when the program shuts down), which violates the SMTP specification, which states:

> The sender MUST NOT intentionally close the channel until it sends a QUIT command, and it SHOULD wait until it receives the reply (even if there was an error response to a command).

Ideally, lettre would offer an API which allows a user of the crate to properly shut down their transport, ensuring that all connections are closed before the program exits. This is especially relevant for `AsyncSmtpTransport`: it currently executes all cleanup work on a background task spawned using the executor, with no way to block on said task; as such, the only thing a user can do to (try to) ensure that all connections are closed when the program shuts down is to insert an arbitrary delay into the shutdown sequence, which is a hack at best. However, I've decided to limit the scope of this PR to just fixing the more "obvious" flaw with the synchronous pooling implementation, leaving such an API up for future implementation.
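For concreteness, here is a minimal sketch of the pattern the fix relies on: the maintenance thread upgrades its `Weak` pool reference only for the duration of each check and drops the resulting `Arc` before going back to sleep. The `Pool` type and its method below are hypothetical stand-ins, not lettre's actual internals.

```rust
use std::sync::Weak;
use std::time::Duration;

// Hypothetical stand-in for lettre's internal connection pool.
struct Pool;

impl Pool {
    fn sweep_idle_connections(&self) {
        // Close connections that have been idle for too long.
    }
}

// Holding only a `Weak` reference between checks means that dropping the
// last `SmtpTransport` (and with it the last strong reference to the pool)
// takes effect immediately instead of waiting out the sleep.
fn maintenance_thread(pool: Weak<Pool>, idle_check: Duration) {
    loop {
        match pool.upgrade() {
            // The upgraded `Arc` lives only for this arm and is dropped
            // again before the thread sleeps.
            Some(pool) => pool.sweep_idle_connections(),
            // Every strong reference is gone: stop the thread.
            None => return,
        }
        std::thread::sleep(idle_check);
    }
}
```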
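To make the `AsyncSmtpTransport` gap concrete, the "arbitrary delay" workaround mentioned above amounts to something like the following; the duration is arbitrary, since there is no task handle to await on.

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // ... send mail through an AsyncSmtpTransport, then drop it ...

    // Grace period in the hope that the detached cleanup task spawned by
    // the pool finishes QUITting its connections before the process exits.
    tokio::time::sleep(Duration::from_millis(500)).await;
}
```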
Good catch! Thanks for the PR.
👍 for having a proper method of shutting down the SMTP transport that waits for it to QUIT and disconnect
Is this something you would like to work on yourself, or would you accept a PR for this? I imagine that the cleanest way to implement this is to add a […]
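The method name referenced in the comment above was lost in extraction. As a purely illustrative sketch (not the API that was eventually adopted), a consuming shutdown method along the lines being discussed might look like this, with hypothetical types:

```rust
// Hypothetical stand-in for a pooled SMTP connection.
struct PooledConnection;

impl PooledConnection {
    // Send QUIT and wait for the server's reply, as the SMTP spec requires.
    fn quit(self) -> std::io::Result<()> {
        Ok(())
    }
}

struct SmtpTransport {
    pool: Vec<PooledConnection>,
}

impl SmtpTransport {
    /// Drains the pool, sending QUIT on every connection and waiting for
    /// each reply before returning, so callers can block on a clean shutdown.
    fn shutdown(mut self) -> std::io::Result<()> {
        for conn in self.pool.drain(..) {
            conn.quit()?;
        }
        Ok(())
    }
}
```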
I'd love to have a PR about this. I currently have limited time to give to the project, so everything is appreciated. As for the implementation, I agree with your proposal. Do we want to have […]
That shouldn't be an issue if the […]

As a brief side remark, thanks for taking the time to maintain this crate! I'm currently still on vacation, but I'll try to get a PR sent your way implementing this sometime next week.
This doesn't work because transports can be cloned, but all clones reference the same underlying connection pool, kind of like having an […]
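A small illustration of that point, with hypothetical types: cloning the transport clones an inner `Arc`, so every clone operates on one shared pool, and a shutdown consuming one clone would leave the others holding a half-torn-down pool.

```rust
use std::sync::Arc;

struct Pool;

// Hypothetical model: the real transport holds more state, but the
// sharing works the same way.
#[derive(Clone)]
struct SmtpTransport {
    pool: Arc<Pool>,
}

fn main() {
    let a = SmtpTransport { pool: Arc::new(Pool) };
    let b = a.clone();
    // Both handles point at the very same pool.
    assert!(Arc::ptr_eq(&a.pool, &b.pool));
}
```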
Ah, seems like I missed that impl (all my programs have their transports inside mutex-like primitives to prevent the connection pool from growing too big). In that case having […]
One more alternative implementation I just thought of: instead of having […]
I feel like this alternative approach may be more idiomatic in a way, but also I'm not fully convinced by it. It'd be nice to have to handle an […]

Also, now that I think of it, I've long had the idea of doing something in the style of […]
I've now opened PR #1045 using the design we discussed here ([…]).