tokio_util::sync::CancellationToken usage in local-tun #1923
Conversation
**How to test**

It requires at least 2 different IPs. Start a …, start a …, run …. Some previous test results: #756
I just ran a test in my local environment:

Tested with …. It couldn't run with a release build because it would crash: smoltcp-rs/smoltcp#1048. As we can see in the image, ….

Updates: Just looked deeper into ….
- ref #1923 Checksum verification on received packets is the most significant CPU cost in local-tun.
Everything is now working pretty well.
Doesn't work for me:

iperf3 server (56.78.9.10):

```
iperf3 -s
```

ssserver (192.168.24.100):

```
ssserver -c /home/me/ss-cfg.json
```

`/home/me/ss-cfg.json`:

```json
{
    "server": "0.0.0.0",
    "server_port": 8388,
    "local_port": 1080,
    "password": "barfoo!",
    "method": "chacha20-ietf-poly1305"
}
```

sslocal client (127.0.0.1):

```
sudo sslocal.exe -U --protocol tun -s "192.168.24.100:8388" -m "chacha20-ietf-poly1305" -k "barfoo!" --outbound-bind-interface "Wi-Fi" --tun-interface-name "shadowsocks"
```

iperf3 client (127.0.0.1):

```
iperf3 -c 56.78.9.10
```

iperf3 can receive data, but I can't see it going through the ….
Please test my PR yourself. If it passes your tests, please merge it.
I think the key change in your PR is that the smoltcp poll loop runs in a tokio task. Because the loop is going to be very busy, it may occupy a worker thread in tokio's runtime. On the other hand, the SpinMutex (spinlock) in the poll loop would be problematic: the other tasks would get no chance to run, because the loop has no yield point. I think it would be nicer to keep the current implementation, which runs the loop in a separate thread.
I want to change it to run a separate task per TCP session, so it would not be appropriate to have all sessions handled by a single thread. That is why I submitted this PR.
I tested it just now and it works, but the 100% CPU issue is not resolved. I think that problem exists whether or not my PR is applied.
```rust
        log::error!("TcpTun smoltcp-poll error: {:?}", e);
    }
    log::debug!("TcpTun::drop, waiting for manager thread to exit");
    std::thread::sleep(std::time::Duration::from_millis(100));
```
There must be another way to make it work gracefully.
Yes. I'm researching it.
It would be nice to have.


I think this time it works.