fix(proxy): stream requests #771
Conversation
Basically, this allows us to funnel the data into a buffer for later use.
The completion signal from `tee_body` then lets us process it in a separate task without compromising performance.
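This is roughly the shape such a wrapper can take. The sketch below is a minimal illustration, not the PR's actual code: the struct name comes from the PR, but the field names, the oneshot-based completion signal, and the pin-projection details are assumptions.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use bytes::{Bytes, BytesMut};
use http_body::{Body, Frame};
use tokio::sync::oneshot;

/// Sketch of a tee body: forwards frames downstream while copying the bytes
/// into a side buffer, then sends the buffer through a oneshot channel once
/// the inner body is exhausted.
pub struct TeeBodyWithCompletion<B> {
    inner: B,
    buffer: BytesMut,
    completion: Option<oneshot::Sender<Bytes>>,
}

impl<B> TeeBodyWithCompletion<B> {
    pub fn new(inner: B) -> (Self, oneshot::Receiver<Bytes>) {
        let (tx, rx) = oneshot::channel();
        let body = Self {
            inner,
            buffer: BytesMut::new(),
            completion: Some(tx),
        };
        (body, rx)
    }
}

impl<B> Body for TeeBodyWithCompletion<B>
where
    B: Body<Data = Bytes>,
{
    type Data = Bytes;
    type Error = B::Error;

    fn poll_frame(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Option<Result<Frame<Self::Data>, Self::Error>>> {
        // SAFETY: no field is moved out of `this`; `inner` is re-pinned
        // immediately below and the other fields are only mutated in place.
        let this = unsafe { self.get_unchecked_mut() };
        let inner = unsafe { Pin::new_unchecked(&mut this.inner) };

        match inner.poll_frame(cx) {
            Poll::Ready(Some(Ok(frame))) => {
                // Tee data frames into the side buffer before forwarding them.
                if let Some(data) = frame.data_ref() {
                    this.buffer.extend_from_slice(data);
                }
                Poll::Ready(Some(Ok(frame)))
            }
            Poll::Ready(None) => {
                // End of stream: signal completion with the captured body.
                if let Some(tx) = this.completion.take() {
                    let _ = tx.send(this.buffer.split().freeze());
                }
                Poll::Ready(None)
            }
            other => other,
        }
    }
}
```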
Commits c7d4d75 to 31c615a
Pull Request Overview
This PR improves the Lambda Runtime Proxy by implementing streaming requests instead of collecting entire request bodies before forwarding, addressing performance issues with Lambda's new 200MB payload limit.
- Introduces a TeeBodyWithCompletion struct that streams body data while capturing it for interceptor processing
- Refactors proxy handlers to use streaming requests instead of blocking body collection
- Updates dependencies to support the new streaming architecture
Reviewed Changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| bottlecap/src/proxy/tee_body.rs | New streaming body wrapper that tees data into a buffer while forwarding |
| bottlecap/src/proxy/mod.rs | Adds the tee_body module declaration |
| bottlecap/src/proxy/interceptor.rs | Refactors proxy handlers to use streaming with TeeBodyWithCompletion |
| bottlecap/LICENSE-3rdparty.csv | Removes duplicate dependency entries |
| bottlecap/Cargo.toml | Updates http-body, hyper-util, and tokio dependencies |
Also added safety docs on the unsafe pin projection.
What?
Improves the Lambda Runtime Proxy by streaming requests, as opposed to collecting the whole body, which blocks the runtime.
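A rough sketch of what the handler side of this can look like, reusing the `TeeBodyWithCompletion` wrapper sketched above. The names `proxy_handler`, `run_interceptors`, and `forward_to_runtime` are hypothetical placeholders, not the PR's actual code:

```rust
use bytes::Bytes;
use http::{Request, Response};
use hyper::body::Incoming;

// Hypothetical handler shape: wrap the incoming body so it streams through
// while being captured, then process the captured bytes off the hot path.
async fn proxy_handler(
    req: Request<Incoming>,
) -> Result<Response<Incoming>, Box<dyn std::error::Error + Send + Sync>> {
    let (parts, body) = req.into_parts();

    // Wrap the body so it streams to the runtime while being captured.
    let (tee_body, captured) = TeeBodyWithCompletion::new(body);
    let upstream_req = Request::from_parts(parts, tee_body);

    // Process the captured payload in a separate task once the stream ends.
    tokio::spawn(async move {
        if let Ok(bytes) = captured.await {
            run_interceptors(bytes).await;
        }
    });

    // Forward the request without waiting for the full body to be collected.
    forward_to_runtime(upstream_req).await
}

async fn run_interceptors(_payload: Bytes) {
    // interceptor processing would go here
}

async fn forward_to_runtime(
    _req: Request<TeeBodyWithCompletion<Incoming>>,
) -> Result<Response<Incoming>, Box<dyn std::error::Error + Send + Sync>> {
    unimplemented!("send the request upstream with a hyper client")
}
```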
Why?
Lambda just increased outgoing payloads to 200MB, and I wanted to make sure our proxy keeps up. I realized this could be improved because of #768.