quic transport support #64
Conversation
Generally looks good, but your commit history is malformed; please rebase on the main branch.
```toml
server = []
client = []
tls = ["tokio-native-tls"]
noise = ["snowstorm", "base64"]
quic = ["quinn", "rustls", "rustls-pemfile", "futures-util"]
```
Can we use native-tls, which rathole already depends on? That way, we could drop rustls and the other new dependencies.
It looks like it should be possible, but I need time to hand-write the glue code between native-tls and quinn.
🤔 quinn chooses to only support rustls?
No, rustls is optional, but quinn does not provide bindings for any similar crate other than rustls. So using native-tls should be feasible, but it cannot be used out of the box.
emmm, unfortunately I don't know much about this. It seems that by implementing the relevant interface in https://github.com/quinn-rs/quinn/blob/main/quinn-proto/src/crypto.rs you could replace rustls, but I don't have that knowledge (:
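(Editor's note, for context: below is a minimal sketch of the out-of-the-box rustls path, which is why the feature list above pulls in rustls and rustls-pemfile alongside quinn. It assumes the quinn 0.8-era API; the function name, paths, and error handling are illustrative, not the PR's code.)

```rust
use std::{fs::File, io::BufReader};

use anyhow::Result;

// Build a quinn ServerConfig from a PEM certificate chain and private key.
fn make_server_config(cert_path: &str, key_path: &str) -> Result<quinn::ServerConfig> {
    // rustls-pemfile parses the raw DER blobs out of the PEM files.
    let certs = rustls_pemfile::certs(&mut BufReader::new(File::open(cert_path)?))?
        .into_iter()
        .map(rustls::Certificate)
        .collect();
    let mut keys =
        rustls_pemfile::pkcs8_private_keys(&mut BufReader::new(File::open(key_path)?))?;
    let key = rustls::PrivateKey(keys.remove(0));
    // quinn consumes rustls types directly; this tight coupling is what has
    // no native-tls equivalent out of the box.
    Ok(quinn::ServerConfig::with_single_cert(certs, key)?)
}
```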
```diff
@@ -144,6 +146,9 @@ async fn test(config_path: &'static str, t: Type) -> Result<()> {
     server_shutdown_tx.send(true)?;
     let _ = tokio::join!(server);
+
+    // Wait for the server connection to be closed (quic)
+    time::sleep(Duration::from_millis(2500)).await;
```
Did you figure out why this is needed? `server` has been joined, so the connections should have already been closed.
Because quinn actually wraps the UDP socket in an Arc and moves it into a tokio task to run. So after the endpoint is dropped, the UDP socket is not dropped immediately. If you start the server again at that point, you get a port-reuse error.
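(Editor's note, not from the thread: a self-contained demo of the failure mode described above. While any handle to the first socket is still alive, a second bind to the same address fails with `AddrInUse`. The port number is arbitrary.)

```rust
use std::io::ErrorKind;

use tokio::net::UdpSocket;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Stands in for the socket quinn keeps alive inside its driver task.
    let held = UdpSocket::bind("127.0.0.1:5000").await?;

    // A restarted server trying to bind the same port fails until `held`
    // is actually dropped, not merely when the endpoint handle is dropped.
    let err = UdpSocket::bind("127.0.0.1:5000").await.unwrap_err();
    assert_eq!(err.kind(), ErrorKind::AddrInUse);

    drop(held);
    // Now the rebind succeeds.
    let _rebound = UdpSocket::bind("127.0.0.1:5000").await?;
    Ok(())
}
```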
Then this is an issue for the hot-reloading feature. If a full restart is triggered, the new instance will try to bind on the same address and fail, which causes rathole to abort.
Is there any way to shut down the UDP socket immediately, along with the server? If quinn exposes such an API, maybe we can call it in the drop of QuicTransport.
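(Editor's note, a hypothetical sketch of that suggestion: the `QuicTransport` shape and its `endpoint` field are assumptions, not the PR's actual code. `quinn::Endpoint::close` aborts all connections immediately, but whether the socket itself is released right away still depends on quinn's driver task winding down.)

```rust
struct QuicTransport {
    // Hypothetical field; the real transport likely holds more state.
    endpoint: Option<quinn::Endpoint>,
}

impl Drop for QuicTransport {
    fn drop(&mut self) {
        if let Some(endpoint) = self.endpoint.take() {
            // Abort all connections with an application error code of 0
            // and a human-readable reason.
            endpoint.close(0u32.into(), b"transport dropped");
        }
    }
}
```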
I didn't find a relevant method in the source code or documentation, but I can open an issue to ask.
Some bug fixes have been merged into the main branch. I don't know if that's relevant, but maybe you want to do a rebase.
OK, I am taking an exam these few days; I will continue with this later.
I suspect there's a short interval needed for all tokio tasks to shut down, but normally that's not an issue, because every CI run so far has been successful, and 2.5s is too long for that. Anyway, I will find time to investigate this in the original code.
```diff
@@ -6,7 +6,7 @@ use crate::protocol::{
     self, read_ack, read_control_cmd, read_data_cmd, read_hello, Ack, Auth, ControlChannelCmd,
     DataChannelCmd, UdpTraffic, CURRENT_PROTO_VRESION, HASH_WIDTH_IN_BYTES,
 };
-use crate::transport::{TcpTransport, Transport};
+use crate::transport::{QuicTransport, TcpTransport, Transport};
```
This line will fail when the `quic` feature is not enabled. See how the `use` is handled below for TLS.
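(Editor's note, a minimal sketch of the gating being suggested, mirroring how the TLS import is presumably handled; the exact placement in rathole may differ.)

```rust
use crate::transport::{TcpTransport, Transport};

// Only pull in QuicTransport when the crate is built with `--features quic`,
// so this import cannot break default builds.
#[cfg(feature = "quic")]
use crate::transport::QuicTransport;
```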
Hello, I had the same idea as you a week ago :) Having just completed my own QUIC implementation, I noticed this PR before going to open my own. Should I open a "competing" PR? It may make it easier to compare our approaches.
I had a thought, by the way. A very nice feature of QUIC is that it can be used in a partially-reliable way.

Reliable delivery is nice (almost necessary) for the control channel, but it is actually a problem for the data channel. Retransmission of TCP packets makes it seem as if there is zero loss from the point of view of the original sender, which makes it impossible for the sender to detect congestion (see TCP meltdown). Having all packets be ordered also introduces head-of-line blocking in the transport itself. If we are carrying UDP, this is very unfortunate. If we are carrying TCP, then we have two layers of HOL blocking, where just one should suffice.

What we may be able to do is keep all the control channel communication on the bi-directional stream already implemented, but then add an additional unreliable channel for the data channel using pure datagrams, as sketched below. @rapiz1 1) should I move this thought/comment somewhere else (Issue??)
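(Editor's note, a sketch of that split, not from the PR: control traffic stays on the reliable bi-directional stream, while data channel payloads ride QUIC datagrams. `send_datagram`/`read_datagram` exist on `quinn::Connection` in recent quinn versions; the function names here are illustrative, and payloads must fit within the connection's datagram size limit.)

```rust
use anyhow::Result;
use bytes::Bytes;

// Data channel send side: fire-and-forget datagrams. No retransmission or
// ordering, so the inner TCP flow keeps sole responsibility for loss
// recovery and there is no second layer of head-of-line blocking.
fn forward_unreliable(conn: &quinn::Connection, payload: Bytes) -> Result<()> {
    conn.send_datagram(payload)?;
    Ok(())
}

// Receive side: datagrams arrive whole, but possibly reordered or dropped.
async fn recv_unreliable(conn: &quinn::Connection) -> Result<Bytes> {
    Ok(conn.read_datagram().await?)
}
```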
@emillynge I've already seen congestion causing problems with TCP over TCP and wondered if there's a good solution. Your proposal sounds great.
Yeah. I think you should do that.
I'm open to such a change, but you should keep #84 in mind when redesigning the trait. I've recently been planning a minor change to the trait but haven't found the time yet, and it may conflict with QUIC. I will post my preliminary thoughts in #84.
Note that
Hi, @emillynge is also working on this and seems to have some new ideas. Can I close this PR and let @emillynge continue the work?
@juzi5201314 I have now created a PR.
👍