
Conversation

@eduardomourar
Contributor

@eduardomourar eduardomourar commented Apr 22, 2022

Motivation and Context

Currently, it is very complicated to deploy a service generated by Smithy to AWS Lambda, because the lambda_http crate does not accept a Hyper service out of the box.

Description

With these changes, we will be able to deploy a Lambda function that uses the server SDK generated by the Rust Smithy plugin.

Testing

Unit tests

Also, we are able to run the service locally using cargo-lambda. Additionally, I have successfully deployed a REST JSON service through CloudFormation and it is working as expected.

Checklist

  • I have updated CHANGELOG.next.toml if I made changes to the smithy-rs codegen or runtime crates
  • I have updated CHANGELOG.next.toml if I made changes to the AWS SDK, generated SDK code, or SDK runtime crates

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@eduardomourar eduardomourar changed the title Feat/support lambda feat(http-server): support lambda service Apr 22, 2022
@crisidev
Contributor

Wow, thanks a lot for this contribution! I am excited to see where this is going! ❤️

@eduardomourar eduardomourar marked this pull request as ready for review April 26, 2022 10:12
@eduardomourar eduardomourar requested a review from a team as a code owner April 26, 2022 10:12
@eduardomourar
Contributor Author

eduardomourar commented Apr 26, 2022

We are missing unit tests, at least. Done

@eduardomourar eduardomourar changed the title feat(http-server): support lambda service feat(http-server): support service on lambda May 2, 2022
@eduardomourar eduardomourar changed the title feat(http-server): support service on lambda feat(http-server): support service on aws lambda May 2, 2022
@david-perez
Contributor

Thanks @eduardomourar for opening this PR. I'm unsure as to whether we should merge this.

Smithy-rs Services are supposed to be as agnostic as possible to the concrete body type of the HTTP requests and responses. To this end:

Service::Requests do not pose a problem (not since awslabs/aws-lambda-rust-runtime#398 was fixed). I've managed to get Router<lambda_http::Body> working, though I had to add convert_to_request_rejection!(Box<dyn std::error::Error + Send + Sync>, HttpBody); to aws-smithy-http-server/src/rejection.rs to work around #1364.

Service::Errors do not work as of now because lambda_http::run requires that the errors be lambda_http::Error, something that I don't understand but I think should be fixable by making run generic over the error type too.

Service::Responses do pose a problem. This is because lambda_http requires that the response be something that implements lambda_http::IntoResponse. Since Smithy-rs responses are just http::response::Response<BoxBody>, we can't implement the trait because of coherence. We also can't leverage the blanket implementation through body types that convert into lambda_http::Body (which is defined in aws_lambda_events), even if we contributed upstream, because converting to it would require awaiting, given that we can only instantiate it with a String or a Vec<u8>. I've nevertheless verified that everything type-checks, to ensure I wasn't missing anything, by patching aws_lambda_events with:

pub type BoxBody = http_body::combinators::UnsyncBoxBody<bytes::Bytes, axum_core::Error>;

impl From<BoxBody> for Body {
    fn from(_b: BoxBody) -> Self {
        let foo: Vec<u8> = todo!();
        Self::Binary(foo)
    }
}
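
For contrast, the real conversion has to collect the body first, which is an async operation and therefore cannot live inside a synchronous From impl. A minimal sketch (the helper below is hypothetical, but it takes the same approach as the handler code in this PR):

async fn collect_into_lambda_body<B>(body: B) -> Result<lambda_http::Body, B::Error>
where
    B: http_body::Body,
{
    // Poll the body to completion; this is the `.await` that a synchronous
    // `From<BoxBody> for Body` impl cannot perform.
    let bytes = hyper::body::to_bytes(body).await?;
    Ok(lambda_http::Body::Binary(bytes.to_vec()))
}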

I'm also not sure whether converting always to the Binary variant is correct (see here), although it seems like without information from the Smithy model, a "better" decision cannot be made.

So, to summarize: the code in this PR to convert Requests should, in theory, not be needed (provided we fix #1364 and we fix lambda_http::run's bounds on Error); for Responses, I'm not sure.

@eduardomourar
Contributor Author

@david-perez, thanks for such a nice explanation.

My understanding is that we want to keep most aspects of our server code under Smithy control so that we don't break any contract with our customers (e.g. errors being thrown). My assumption is that API Gateway (+ Lambda) will have an important role for AWS in the adoption of Smithy server-side. Based on the code from the TypeScript server SDK here, we see that the conversion from API Gateway events (provided to Lambda) to a default HTTP request is handled by a separate library provided by the Smithy team.

In the case of Rust, there are a lot of moving parts, and putting them together is somewhat more complicated (as you can see on line 86, we need to strip the stage out of the URL, otherwise Smithy path resolution fails). We have been able to move forward with the solution provided in this PR (by deploying to our staging environment) because it bridges the gap between each component you mentioned.
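
To make the stage handling concrete, here is a rough sketch of stripping the stage prefix from the request URI before routing (the helper name is hypothetical; it is not the code on line 86):

use http::Uri;

fn strip_stage(uri: Uri, stage: &str) -> Uri {
    let prefix = format!("/{}", stage);
    let original = uri.path_and_query().map(|pq| pq.as_str()).unwrap_or("/");
    let stripped = match original.strip_prefix(prefix.as_str()) {
        Some("") => "/",
        // Avoid stripping partial segment matches such as "/production".
        Some(rest) if rest.starts_with('/') => rest,
        _ => original,
    };
    let mut parts = uri.into_parts();
    parts.path_and_query = Some(stripped.parse().expect("stripped path is still valid"));
    Uri::from_parts(parts).expect("URI was valid before the stage was stripped")
}

With a stage named prod, a request for the /stats endpoint arrives as /prod/stats and would be rewritten to /stats before reaching the Smithy router.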

My proposal is to hide this functionality (and its dependencies) behind an optional Cargo feature. Making the feature optional will also signal that it is experimental in nature. Then, as soon as awslabs/aws-lambda-rust-runtime#466 lands in the lambda_http crate, we (with AWS support, of course) can slowly migrate the necessary logic to the relevant external libraries. @ballpointcarrot and @aesterline, what do you guys think?
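
As a sketch only (the feature name aws-lambda below is a placeholder, not something decided in this PR), the gating could look like this on the Rust side:

// Hypothetical module layout: the Lambda glue (and its lambda_http /
// aws_lambda_events dependencies) is only compiled when the optional
// "aws-lambda" Cargo feature is enabled.
#[cfg(feature = "aws-lambda")]
pub mod lambda {
    //! Conversions between lambda_http events and the hyper request/response
    //! types expected by the Smithy-generated Router.
}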

@david-perez
Contributor

Service::Errors do not work as of now because lambda_http::run requires that the errors be lambda_http::Error, something that I don't understand but I think should be fixable by making run generic over the error type too.

I opened awslabs/aws-lambda-rust-runtime#474.

@david-perez
Contributor

My assumption is that API Gateway (+ Lambda) will have an important role for AWS in the adoption of Smithy server-side.

Yep, we've really noticed the interest. Perhaps it'd be best if we had a separate binary for running the Pokémon Service on Lambda.

we need to strip the stage out of the URL, otherwise Smithy path resolution fails

How does smithy-typescript deal with this? cc @adamthom-amzn

My proposal is to hide this functionality (and its dependencies) behind an optional Cargo feature. Making the feature optional will also signal that it is experimental in nature. Then, as soon as awslabs/aws-lambda-rust-runtime#466 lands in the lambda_http crate, we (with AWS support, of course) can slowly migrate the necessary logic to the relevant external libraries.

I think there's no need to hide this behind features. We publish breaking 0.x versions semi-regularly, so we can remove this glue code later. I'd like to have an understanding of what the end situation looks like, though.


Do you have any thoughts regarding the quote below? It seems like always converting to the Binary variant might not be appropriate. I also don't understand why smithy-typescript always returns non-binary responses (?)

I'm also not sure whether converting always to the Binary variant is correct (see here), although it seems like without information from the Smithy model, a "better" decision cannot be made.

Comment on lines +124 to +125
let body = hyper::body::to_bytes(body).await?;
let res = Response::from_parts(parts, LambdaBody::from(body.as_ref()));
Contributor

I deployed a Lambda with the Pokémon service, always converting the Smithy response into a lambda_http::Body::Binary variant like here, and I got binary output in the payload, even when setting Accept: application/json.

For example, I got eyJjYWxsc19jb3VudCI6MH0= for the /stats endpoint, which base64 decodes to the otherwise expected response, {"calls_count":0}.


@eduardomourar I had presumed that you were working with this patch applied in your Lambdas. How are you working around this for smithy-rs operation outputs with text payloads? My thinking right now is that it's impossible to provide a Lambda response without model-specific knowledge that tells you whether what you're returning is a text or a binary payload.
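
For illustration only, a heuristic based on the Content-Type header could look like the sketch below (the helper is hypothetical and, unlike a model-driven decision, it can misclassify custom media types):

use http::{header::CONTENT_TYPE, HeaderMap};
use lambda_http::Body;

// Rough sketch: pick the lambda_http::Body variant from the Content-Type
// header instead of from the Smithy model.
fn to_lambda_body(headers: &HeaderMap, bytes: Vec<u8>) -> Body {
    let looks_textual = headers
        .get(CONTENT_TYPE)
        .and_then(|value| value.to_str().ok())
        .map(|ct| ct.starts_with("text/") || ct.contains("json") || ct.contains("xml"))
        .unwrap_or(false);
    if looks_textual {
        match String::from_utf8(bytes) {
            Ok(text) => Body::Text(text),
            // Fall back to Binary if the payload is not valid UTF-8 after all.
            Err(err) => Body::Binary(err.into_bytes()),
        }
    } else {
        Body::Binary(bytes)
    }
}

With a heuristic like this, the /stats response would be returned as Body::Text and a REST API would serve {"calls_count":0} instead of the base64 blob; but, as said above, only the Smithy model can make this decision reliably.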

Contributor Author

We are only working with JSON at the moment. I will confirm how/if we are handling other payload types in a bit.

Contributor

But JSON is text... I'm getting base64-encoded text e.g. eyJjYWxsc19jb3VudCI6MH0= in the body instead of {"calls_count":0}.

Contributor

@eduardomourar I created a reproducer repo and explained the problem in more detail here: awslabs/aws-lambda-rust-runtime#491 (comment).

There's a difference in how API Gateway returns the response depending on whether the API is an HTTP API or a REST API. HTTP APIs work fine, so maybe that is why you did not experience this erroneous behavior when using this patch.

Contributor Author

We (Hugo and I) investigated our configuration, and we are using the REST API. Our current deployment is still using a previous version of this branch (with the latest released version of the lambda_http crate), and it gives the expected response. Hugo is currently working on updating it to use both the changes from this PR and the ones from the other Lambda runtime PR. Most of the updates should happen in that other PR.

Contributor

Most of the updates should happen in that other PR.

Yeah, I believe that after that PR lands, the only thing we would need to do here is the "stripping the stage out of the request URI path" part, right?

Contributor Author

Yes, hopefully so. Some extra logic might be placed in this repo. We will see.

@david-perez
Contributor

Thanks for your work on this and in the AWS Lambda runtime.

Closing; superseded by #1551.
