
Interest in a PR that adds TCP keep-alive? #547

Open
saethlin opened this issue Oct 3, 2022 · 5 comments

Comments

saethlin commented Oct 3, 2022

I'm currently working through what to do about a codebase that uses ureq and is suffering from what looks like a very occasionally unresponsive server. I'd like to toggle on TCP keepalive to see if that fixes the problem, but currently there is no API for this. reqwest has this: https://docs.rs/reqwest/0.11.12/reqwest/blocking/struct.ClientBuilder.html#method.tcp_keepalive and the implementation, which is in the middle of being migrated to hyper-util, looks pretty simple: https://github.com/hyperium/hyper-util/blob/85aade41a080fc4b40a9a53c818680feed14f138/src/client/connect/http.rs#L593-L598
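
For reference, a minimal sketch of what the linked reqwest API looks like in use (assuming reqwest 0.11 with the `blocking` feature enabled; the 60-second value is arbitrary):

```rust
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Enable TCP keepalive probes on every socket this client opens,
    // including idle connections sitting in the pool.
    let client = reqwest::blocking::Client::builder()
        .tcp_keepalive(Duration::from_secs(60))
        .build()?;

    let body = client.get("https://example.com").send()?.text()?;
    println!("{} bytes", body.len());
    Ok(())
}
```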

Are you interested in accepting a PR which adds this feature?

algesten (Owner) commented Oct 4, 2022

I looked into this recently in another project, and found that Rust std TcpStream does not have any way of setting keepalive. reqwest (which is based on hyper) pulls in socket2 to do this, but I'm not keen on adding another dep to ureq just to support keepalive.
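
For context, the socket2 calls involved are roughly the following (a sketch assuming the socket2 crate; the 60-second idle time is arbitrary):

```rust
use std::time::Duration;
use socket2::{Socket, TcpKeepalive};

// `socket` wraps an already-connected TCP stream; std's TcpStream
// exposes no equivalent setter, which is why socket2 is needed.
fn set_keepalive(socket: &Socket) -> std::io::Result<()> {
    let keepalive = TcpKeepalive::new()
        .with_time(Duration::from_secs(60)); // idle time before the first probe
    socket.set_tcp_keepalive(&keepalive)
}
```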

I haven't studied hyper in enough detail to know whether they have reasons beyond keepalive to use socket2, but there's also a philosophical difference: ureq actively tries to keep the dependency tree small, while tokio/hyper have fewer such constraints.

algesten (Owner) commented Oct 4, 2022

Or in other words: this is the first time anyone has raised the issue of TCP_KEEPALIVE. If there is strong user demand for it, I could be swayed into increasing the number of deps. I suggest we leave this issue open to monitor demand and let people weigh in. Hopefully we can close this using just Rust std in the future.

jsha (Collaborator) commented Oct 4, 2022

By the way, my experience of this category of problem is that it is often caused by a firewall that silently drops idle TCP connections (that is, without sending a RST). For clients that keep a connection pool, the connection appears to be open and the next request is sent on it. From the firewall's perspective, it receives a TCP segment for a connection that does not exist, and drops it. The client winds up waiting for a response that will never come. Depending on the OS settings it might wait 30 minutes to an hour before it gets an error.

TCP keepalive is one way to work around this problem. Some others to try for the short term (of course, these have their drawbacks):

  • Set a smallish response timeout, and retry timed-out requests (assuming they are idempotent).
  • Turn off connection pooling.
  • Limit the connection pool to 1 connection per host, and make sure to send at least 1 request per N minutes (where N minutes is less than the observed time after which a connection drops). The request can be an idempotent one, e.g. a GET or HEAD.
  • Implement a TlsConnector that makes a connection, sets keepalive using socket2, turns that socket into a TcpStream, wraps it with the TLS library of your choice, and hands off the resulting connection to ureq (see the sketch after this list).
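
A sketch of the keepalive part of that last option, assuming the socket2 crate (the 60s/10s values are arbitrary, `with_interval` is not available on every platform, and the TLS wrapping and ureq hand-off are left out):

```rust
use std::net::TcpStream;
use std::time::Duration;
use socket2::{Socket, TcpKeepalive};

// Connect, enable keepalive via socket2, then convert back to a std
// TcpStream that a TLS library can wrap before handing it to ureq.
fn connect_with_keepalive(addr: &str) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect(addr)?;
    let socket = Socket::from(stream);
    let keepalive = TcpKeepalive::new()
        .with_time(Duration::from_secs(60))      // idle time before the first probe
        .with_interval(Duration::from_secs(10)); // gap between probes (platform-dependent)
    socket.set_tcp_keepalive(&keepalive)?;
    Ok(socket.into())
}
```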

HookedBehemoth commented

I'd be interested to see this too, though I'm unsure whether it helps with my use case.

sosthene-nitrokey commented Jan 18, 2024

Interested in this too, to handle a firewall that kills idle connections: Nitrokey/nethsm-pkcs11#164
