"This blog originally appeared on sachasmart.com. Sacha is a Software Engineering team lead at Weavik and excels in managing product development and client interactions."
AWS Lambda is a serverless, event-driven compute service commonly used in microservice architectures, offering developers greater flexibility in application development. For instance, using AWS Lambda, a Node.js developer can integrate image processing into their application by leveraging a Python Lambda function and receive the results seamlessly within the platform.
Unlike Amazon Elastic Compute Cloud (EC2), which is priced by the hour but metered by the second, AWS Lambda is billed by rounding up to the nearest millisecond with no minimum execution time. This means developers only pay for the processing they use, and there are no charges outside the period of invocation.
AWS Lambda scales with load, rapidly creating more functions as demand increases and scaling down to zero when demand decreases. This flexibility allows developers to handle varying workloads efficiently and cost-effectively, without the need to provision or manage servers.
During Andy Jassy's AWS re:Invent 2020 keynote, it was announced that AWS Lambda would support containers. Developers can now package and deploy Lambda functions as container images built from a Dockerfile that uses an AWS-provided base image. An additional benefit of using containers is increased ephemeral storage, with the limit raised from 512 MB to 10 GB.
AWS Lambda offers an HTTP API that allows custom runtimes to receive invocation events from Lambda and send response data back while operating within the Lambda execution environment.
| Filename | Path | Function |
|---|---|---|
| bootstrap | /var/runtime/ | A script that runs at startup and mediates between the Lambda service and the function code |
| [anything] | /var/task/ | Function code to be executed |
The Runtime API is the orchestration layer that invokes the Lambda function's code, passing in the context and receiving the response of the invocation request. A custom runtime can be defined by replacing the bootstrap executable. In a standard Python Lambda image, the bootstrap executable looks like this:
#!/bin/bash
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
export AWS_EXECUTION_ENV=AWS_Lambda_python3.8
if [ -z "$AWS_LAMBDA_EXEC_WRAPPER" ]; then
  exec /var/lang/bin/python3.8 /var/runtime/bootstrap.py
else
  wrapper="$AWS_LAMBDA_EXEC_WRAPPER"
  if [ ! -f "$wrapper" ]; then
    echo "$wrapper: does not exist"
    exit 127
  fi
  if [ ! -x "$wrapper" ]; then
    echo "$wrapper: is not an executable"
    exit 126
  fi
  exec -- "$wrapper" /var/lang/bin/python3.8 /var/runtime/bootstrap.py
fi
This effectively just executes bootstrap.py, a Python script that instantiates a LambdaRuntimeClient with the HTTP address of the Lambda Runtime API (AWS_LAMBDA_RUNTIME_API, by default 127.0.0.1:9001):
lambda_runtime_api_addr = os.environ["AWS_LAMBDA_RUNTIME_API"]
lambda_runtime_client = LambdaRuntimeClient(lambda_runtime_api_addr)
With the invocation of the Lambda function, the rest is quite straightforward. The handler script is found, loaded as a module, and executed. Upon successful execution, the Runtime API receives a POST request with the invoke_id and result_data from the RAPID client (interestingly, written in C++ for AWS Python 3.8 container images). More information about the Runtime API's OpenAPI specification is available in the AWS documentation.
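As a rough illustration (not AWS's actual implementation), the request/response cycle above can be sketched as a minimal runtime loop against the Runtime API's HTTP endpoints; `invocation_urls` is a hypothetical helper introduced here for clarity:

```python
import json
import os
import urllib.request


def invocation_urls(api_addr):
    # Build the Runtime API endpoints from an address like "127.0.0.1:9001"
    base = f"http://{api_addr}/2018-06-01/runtime/invocation"
    return f"{base}/next", lambda invoke_id: f"{base}/{invoke_id}/response"


def runtime_loop(handler):
    # AWS_LAMBDA_RUNTIME_API is set by the Lambda service inside the container
    next_url, response_url = invocation_urls(os.environ["AWS_LAMBDA_RUNTIME_API"])
    while True:
        # Long-poll for the next invocation event
        with urllib.request.urlopen(next_url) as resp:
            invoke_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())
        # Execute the handler and POST the result back to the Runtime API
        result = json.dumps(handler(event, None)).encode("utf-8")
        req = urllib.request.Request(response_url(invoke_id), data=result)
        urllib.request.urlopen(req).close()
```

Error handling (the `/error` endpoints) is omitted here to keep the shape of the loop visible.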
Developing and integrating Lambda functions into existing projects can be challenging. One common pain point is the need to repeat the deployment steps above for each minor revision of the microservice. In addition, most companies have a separate team for managing and deploying cloud infrastructure.
As a developer, I wanted the ability to run my Lambda function code locally and invoke it from an existing service. Sure, there are alternatives like LocalStack that could facilitate rapid prototyping. In this context, however, it's overkill and (in my opinion) should be reserved for AWS services that cannot be dockerized easily (SNS, Firehose, STS, etc.; note that S3 can easily be swapped in with MinIO - stay tuned for its write-up).
Prototyping locally allows for quick iterations and refactoring without the need for frequent deployments to AWS Lambda. Once the microservice has been tested and is performing well, I could then go through a more formal deployment process and make it available to the platform in a production or staging environment.
Some of my requirements therefore include:
- A modified bootstrap.py script that is executed by the Lambda runtime. The modification involves reloading the handler after each invocation, ensuring that the latest function code is always executed; the script then requests the next invocation event from the Lambda Runtime API, keeping the function responsive.
- A Dockerfile:
:FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt -t ${LAMBDA_TASK_ROOT}/
# COPY ./bootstrap.sh /var/runtime/bootstrap # <-- optional if creating a custom runtime.
COPY ./bootstrap.py /var/runtime/bootstrap.py
RUN chmod +x /var/runtime/bootstrap*
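The reload behaviour described in the first requirement can be sketched with a helper like the following (a hypothetical `load_handler`, not the full bootstrap.py):

```python
import importlib
import sys

_modules = {}


def load_handler(app_root, handler_name):
    """Resolve a 'module.function' handler string, reloading the module on
    every call so edits to the mounted function code apply to the next
    invocation."""
    module_name, function_name = handler_name.rsplit(".", 1)
    if app_root not in sys.path:
        sys.path.insert(0, app_root)  # e.g. /var/task, where the code is mounted
    module = _modules.get(module_name)
    if module is None:
        module = importlib.import_module(module_name)
    else:
        module = importlib.reload(module)  # pick up changes since last invocation
    _modules[module_name] = module
    return getattr(module, function_name)
```

Calling `load_handler("/var/task", "lambda.handler")` before each invocation is what lets the mounted `lambda.py` be edited without rebuilding the image.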
A docker-compose.yml that mounts the function code and exposes the invocation port:
services:
lambda:
build:
context: .
dockerfile: Dockerfile
volumes:
- ./lambda.py:/var/task/lambda.py # <-- mount function code here
command: "lambda.handler"
ports:
- "3002:8080" # <-- expose port 3002 and map to container port 8080
networks:
- local_lambda_network
networks:
local_lambda_network:
name: local_lambda_network
driver: bridge
An example lambda.py that downloads a cat image and converts it to grayscale:
import base64
import datetime
import io
import requests
from PIL import Image
def handler(event, context):
    download_cat_image()  # writes the processed image to /tmp/output_image.png
    with open("/tmp/output_image.png", "rb") as f:
        image_data = f.read()
    encoded_image = base64.b64encode(image_data).decode("utf-8")
    body = {
        "message": f"Image Processed at {datetime.datetime.now()}",
        "input": event,
        "image": encoded_image,
    }
    response = {
        "statusCode": 200,
        "body": body,
        "headers": {"Content-Type": "application/json"},
    }
    return response

def download_cat_image():
    try:
        response = requests.get("https://cataas.com/cat")  # Underrated service
        response.raise_for_status()
        image = Image.open(io.BytesIO(response.content))
        grayscale_image = image.convert("L")
        output_buffer = io.BytesIO()
        grayscale_image.save(output_buffer, format="PNG")
        with open("/tmp/output_image.png", "wb") as f:
            f.write(output_buffer.getvalue())
        return output_buffer.getvalue()
    except requests.exceptions.HTTPError as err:
        raise Exception(f"HTTP error occurred: {err}")
    except requests.exceptions.RequestException as err:
        raise Exception(f"Request error occurred: {err}")
    except Exception as err:
        raise Exception(f"An unexpected error occurred: {err}")
Run docker compose up -d (include --build when needed), then invoke the function:
curl -XPOST "http://localhost:3002/2015-03-31/functions/function/invocations" -d '{"payload":"Give me a snug"}' | jq
For convenience, use jq and pbcopy to copy body.image to the clipboard:
curl -XPOST "http://localhost:3002/2015-03-31/functions/function/invocations" -d '{"payload":"Give me a snug"}' | tee /dev/tty | jq -r '.body.image' | pbcopy
Paste the body.image value into a base64 image decoder to see the grayscale image. The container logs show each invocation:
lambda-1 | START RequestId: edbf340b-bae2-4607-aea5-c7b399469afc Version: $LATEST
lambda-1 | 28 Apr 2024 18:44:37,442 [INFO] (rapid) INVOKE START(requestId: 9df1db0f-8074-453e-b70c-d5c992ca8ce0)
lambda-1 | 28 Apr 2024 18:44:38,602 [INFO] (rapid) INVOKE RTDONE(status: success, produced bytes: 0, duration: 1159.407000ms)
lambda-1 | END RequestId: 9df1db0f-8074-453e-b70c-d5c992ca8ce0
lambda-1 | REPORT RequestId: 9df1db0f-8074-453e-b70c-d5c992ca8ce0 Duration: 1160.77 ms Billed Duration: 1161 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
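To inspect the result without the clipboard round-trip, the base64 body.image field can be decoded back to a PNG with a small helper (hypothetical, for local verification only):

```python
import base64


def save_image(encoded_image, out_path="output_image.png"):
    # Decode the base64 `body.image` field returned by the function
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(encoded_image))
    return out_path
```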
From an existing Node.js/TypeScript service, the local function can be invoked with the AWS SDK:
import aws from "aws-sdk";
async invokeCat(): Promise<CatResponse> {
const lambda = new aws.Lambda({
apiVersion: "2015-03-31",
endpoint: "http://127.0.0.1:3002", // the port exposed for the container
sslEnabled: false,
region: "ca-central-1", // arbitrary for local development
accessKeyId: "any", // arbitrary
secretAccessKey: "any", // arbitrary
});
const params = {
FunctionName: "cat",
InvocationType: "RequestResponse",
Payload: JSON.stringify({
payload: "Give me a snug",
}),
};
return lambda.invoke(params).promise();
}
Developing Python Lambda functions locally offers developers a streamlined workflow, enabling rapid iteration and efficient testing. By leveraging containers and understanding the Lambda environment, developers can enhance their serverless development process. Embracing these practices not only improves productivity but also ensures smoother deployments and a more robust application architecture.