AWS Lambda Deployment with CloudFormation and GitHub Actions

Eduardo Cusihuaman

Hey there, if you've landed here, you're probably a developer interested in deploying an AWS Lambda as quickly and simply as possible. ⚡
You’ve come to the right place! Let me share some of my thoughts on Lambda and serverless. Many tutorials out there involve a lot of manual steps and can be hard to maintain or even set up if you just want to try out AWS services as quickly as possible.

Before Everything

This example uses CloudFormation because, while it isn't the best IaC tool, it's the quickest way to get a one-click POC deployed. Terraform isn't ideal for that kind of quick start either, and CDK or SAM may be easier to maintain in the long run.
The resources involved are minimal: a bucket where we'll store our code as a zip file, a helper Lambda that creates an empty zip as a placeholder and then empties the bucket when we delete the stack, and our actual Lambda with a public function URL. There is no API Gateway here, since that adds too much complexity for a simple POC. Of course, there are IAM roles for each Lambda.

Architecture Boring Stuff

User Interaction
A user sends an HTTP request to the Lambda function URL (CodeLambdaFunctionUrl).
The CodeLambdaFunction handles the HTTP request, assuming the LambdaExecutionRole IAM role for the necessary permissions.
Its code is loaded from the S3 bucket (codebucket), which stores the packaged .NET code.
S3 and Helper Lambda
The S3 bucket (codebucket) stores the .NET code.
A helper Lambda function (HelperS3ObjectsFunction) manages S3 objects within this bucket, such as creating and deleting objects.
The helper Lambda function assumes the HelperS3ObjectsWriteRole IAM role to manage these S3 operations.
IAM Roles
LambdaExecutionRole: Provides the necessary permissions for the primary Lambda function.
HelperS3ObjectsWriteRole: Provides permissions for the helper Lambda function to manage S3 objects.
Pipeline User for GitHub Actions
The GitHub Actions workflow uses a dedicated IAM user (created as codepipeline in Step-02) to deploy updates to the Lambda function and upload the packaged code to the S3 bucket. The workflow includes steps for:
Checking out the code
Setting up .NET
Installing dependencies
Building the project
Zipping the Lambda package
Configuring AWS credentials
Uploading the package to S3
Updating the Lambda function code
Flow
User -> Lambda Function URL -> Lambda Function -> S3 Bucket
Helper Lambda -> S3 Bucket (for management tasks)
Pipeline User -> GitHub Actions -> S3 Bucket and Lambda Function (for deployment)
cloudformation-template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Code Bucket with CodeLambda and Helper

Parameters:
  S3BucketName:
    Description: Name for the code bucket
    Type: String
    Default: blazing-lambda-code-bucket
    AllowedPattern: "^[a-z0-9-]{3,63}$"
    ConstraintDescription: "Bucket name must be between 3 and 63 characters long and contain only lowercase letters, numbers, and hyphens."
  LambdaExecutionRoleName:
    Description: "The name of the IAM role for the Lambda function"
    Type: String
    Default: "LambdaExecutionRole"

Resources:
  codebucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName

  HelperS3ObjectsWriteRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: S3Access
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Sid: AllowLogging
                Effect: Allow
                Action:
                  - "logs:CreateLogGroup"
                  - "logs:CreateLogStream"
                  - "logs:PutLogEvents"
                Resource: "*"
              - Sid: S3BucketAccess
                Effect: Allow
                Action:
                  - "s3:ListBucket"
                  - "s3:GetObject"
                  - "s3:DeleteObject"
                  - "s3:DeleteObjectVersion"
                  - "s3:PutObject"
                Resource:
                  - !Sub "arn:aws:s3:::${S3BucketName}"
                  - !Sub "arn:aws:s3:::${S3BucketName}/*"
  helpers3objectshook:
    Type: "Custom::S3Objects"
    DependsOn: codebucket
    Properties:
      ServiceToken: !GetAtt HelperS3ObjectsFunction.Arn
      Bucket: !Ref S3BucketName

  HelperS3ObjectsFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: Delete objects from bucket and create my-lambda.zip
      Handler: index.handler
      Runtime: python3.9
      Role: !GetAtt HelperS3ObjectsWriteRole.Arn
      Timeout: 120
      Code:
        ZipFile: |
          import os
          import json
          import logging
          from zipfile import ZipFile

          import boto3
          import cfnresponse
          from botocore.exceptions import ClientError

          client = boto3.client('s3')
          logger = logging.getLogger()
          logger.setLevel(logging.INFO)

          def handler(event, context):
              logger.info("Received event: %s" % json.dumps(event))
              bucket = event['ResourceProperties']['Bucket']

              result = cfnresponse.SUCCESS

              try:
                  if event['RequestType'] == 'Delete':
                      result = delete_objects(bucket)
                  elif event['RequestType'] == 'Create':
                      result = create_zip_and_upload(bucket)
              except ClientError as e:
                  logger.error('Error: %s', e)
                  result = cfnresponse.FAILED

              cfnresponse.send(event, context, result, {})

          def delete_objects(bucket):
              # Empty the bucket so CloudFormation can delete it.
              paginator = client.get_paginator('list_objects_v2')
              page_iterator = paginator.paginate(Bucket=bucket)
              objects = [{'Key': x['Key']} for page in page_iterator for x in page.get('Contents', [])]
              if objects:
                  client.delete_objects(Bucket=bucket, Delete={'Objects': objects})
              return cfnresponse.SUCCESS

          def create_zip_and_upload(bucket):
              # Upload a placeholder zip so the code Lambda can be created.
              zip_file_path = '/tmp/my-lambda.zip'
              dummy_file_path = '/tmp/dummy.txt'
              with open(dummy_file_path, 'w') as dummy_file:
                  dummy_file.write("This is a placeholder text.")
              with ZipFile(zip_file_path, 'w') as zipf:
                  zipf.write(dummy_file_path, arcname='dummy.txt')
              client.upload_file(zip_file_path, bucket, 'my-lambda.zip')
              return cfnresponse.SUCCESS

  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Ref LambdaExecutionRoleName
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: LambdaS3AccessPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:ListBucket
                Resource:
                  - !Sub "arn:aws:s3:::${S3BucketName}"
                  - !Sub "arn:aws:s3:::${S3BucketName}/*"
        - PolicyName: LambdaCloudWatchLogsPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource:
                  - arn:aws:logs:*:*:*
  CodeLambdaFunction:
    Type: 'AWS::Lambda::Function'
    DependsOn: helpers3objectshook
    Properties:
      FunctionName: 'aws-lambda-code'
      Handler: 'HelloWorld::HelloWorld.LambdaEntryPoint::FunctionHandlerAsync'
      Runtime: 'dotnet8'
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        S3Bucket: !Ref S3BucketName
        S3Key: 'my-lambda.zip'
      Environment:
        Variables:
          ASPNETCORE_ENVIRONMENT: 'Production'

  CodeLambdaFunctionUrl:
    Type: 'AWS::Lambda::Url'
    Properties:
      AuthType: NONE
      TargetFunctionArn: !GetAtt CodeLambdaFunction.Arn

  CodeLambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !Ref CodeLambdaFunction
      Action: 'lambda:InvokeFunctionUrl'
      Principal: '*'
      FunctionUrlAuthType: 'NONE'

Outputs:
  LambdaFunctionUrl:
    Description: "The URL endpoint of the Lambda function"
    Value: !GetAtt CodeLambdaFunctionUrl.FunctionUrl
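
Before deploying, you can sanity-check the template locally. A quick sketch with the AWS CLI, assuming you saved it as cloudformation-template.yaml:

aws cloudformation validate-template \
--template-body file://cloudformation-template.yaml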

LET’S START!

Let's get going, assuming you already have an AWS account and the AWS CLI installed (jq is helpful for the snippets below but not mandatory).

Step-01: 🚀 Deploy CloudFormation Stack for Code

export S3_BUCKET_NAME=blazing-lambda-code-bucket
export STACK_NAME=aws-serverless-stack

aws cloudformation deploy \
--stack-name $STACK_NAME \
--template-file cloudformation-template.yaml \
--parameter-overrides S3BucketName=$S3_BUCKET_NAME \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-1

Note: The Lambda has a predefined name for easy identification in the pipeline: aws-lambda-code. The zip file is called my-lambda.zip, which matters because the helper Lambda uploads a placeholder under that exact key so the stack can deploy an initially empty function.
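
Once the deploy command returns, a quick way to confirm the stack and the placeholder are in place (a sketch reusing the variables above):

aws cloudformation describe-stacks \
--stack-name $STACK_NAME \
--region us-east-1 \
--query 'Stacks[0].StackStatus' \
--output text

# The helper Lambda should have uploaded the placeholder zip:
aws s3 ls s3://$S3_BUCKET_NAME/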

Step-02: 🛠️ Create AWS IAM User for Pipeline

In this step, we'll create a Pipeline User that GitHub Actions will use. This user will have the necessary policies attached to copy files to our bucket and update our Lambda function.
# Set user name
export USER_NAME="codepipeline"

# Create IAM user
aws iam create-user --user-name $USER_NAME

# Attach necessary policies
# ⚠️ THIS IS NOT PRODUCTION READY - USE A LEAST PRIVILEGE ROLE INSTEAD
aws iam attach-user-policy --user-name $USER_NAME --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name $USER_NAME --policy-arn arn:aws:iam::aws:policy/AWSLambda_FullAccess

# Create access keys for the user
ACCESS_KEYS=$(aws iam create-access-key --user-name $USER_NAME)

# Example output (keys redacted):
{
"AccessKey": {
"UserName": "codepipeline",
"AccessKeyId": "AKI***NWJVT",
"Status": "Active",
"SecretAccessKey": "ZVAf261Bu***JEaTbiIkQckj",
"CreateDate": "2024-08-05T13:41:48+00:00"
}
}

# Extract AccessKeyId and SecretAccessKey
ACCESS_KEY_ID=$(echo $ACCESS_KEYS | jq -r '.AccessKey.AccessKeyId')
SECRET_ACCESS_KEY=$(echo $ACCESS_KEYS | jq -r '.AccessKey.SecretAccessKey')

# ⚠️ Output the Access Key details
echo "Access Key ID: $ACCESS_KEY_ID"
echo "Secret Access Key: $SECRET_ACCESS_KEY"

Step-03: 🔒 Add Secrets to GitHub Secrets

After creating the IAM user and generating the access keys, follow these steps to add these credentials to your GitHub repository secrets for use in GitHub Actions:
Navigate to Secrets in Your GitHub Repository:
Go to the main page of your repository on GitHub.
Click on the Settings tab at the top of the repository page.
In the left sidebar, click on Secrets and variables > Actions.
Add New Repository Secret:
Click the New repository secret button.
AWS_ACCESS_KEY_ID:
Name: AWS_ACCESS_KEY_ID
Value: Enter the AccessKeyId value you obtained.
Click Add secret.
AWS_SECRET_ACCESS_KEY:
Name: AWS_SECRET_ACCESS_KEY
Value: Enter the SecretAccessKey value you obtained.
Click Add secret.
S3_BUCKET_NAME:
Name: S3_BUCKET_NAME
Value: Enter your S3 bucket name (e.g., blazing-lambda-code-bucket).
Click Add secret.
Following these steps, you'll securely add the credentials to your GitHub repository for use in your GitHub Actions workflow.
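
If you prefer the terminal, the GitHub CLI can set the same secrets from the shell where you created the keys. A sketch, assuming gh is installed, authenticated, and run inside the repository clone:

gh secret set AWS_ACCESS_KEY_ID --body "$ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --body "$SECRET_ACCESS_KEY"
gh secret set S3_BUCKET_NAME --body "$S3_BUCKET_NAME"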

Step-04: 📦 Dependencies

Some compiled languages, like in this C# example, need a few Lambda-specific packages before the entry point below will build.
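I won't reproduce the full csproj here, but the packages the code below relies on can be added with the dotnet CLI; a sketch, with Swashbuckle.AspNetCore only needed for the AddSwaggerGen call:

dotnet add ./HelloWorld/HelloWorld.csproj package Amazon.Lambda.AspNetCoreServer
dotnet add ./HelloWorld/HelloWorld.csproj package Amazon.Lambda.Core
dotnet add ./HelloWorld/HelloWorld.csproj package Swashbuckle.AspNetCore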
And a small entry point:
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.AspNetCoreServer;
using Amazon.Lambda.Core;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace HelloWorld
{
    // Lambda Function URLs use the HTTP API v2 payload, so we derive from
    // APIGatewayHttpApiV2ProxyFunction; FunctionHandlerAsync on the base class
    // is the handler referenced in the CloudFormation template.
    public class LambdaEntryPoint : APIGatewayHttpApiV2ProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            builder.UseStartup<Startup>();
        }
    }

    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddEndpointsApiExplorer();
            services.AddSwaggerGen();
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseHttpsRedirection();
            app.UseRouting();
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapGet("/hello", () =>
                {
                    return Results.Ok("hi");
                });
            });
        }
    }
}

For Java, the setup is similar.

Step-05: 🤖 Pipeline Magic

Copy the pipeline into .github/workflows/deploy.yml:
name: Deploy to AWS Lambda

on:
  push:
    branches:
      - 'main'
      - 'master'

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'

      - name: Install dependencies
        run: dotnet restore ./HelloWorld/HelloWorld.csproj

      - name: Build project
        run: dotnet publish ./HelloWorld/HelloWorld.csproj -c Release -o ./publish

      - name: Zip Lambda package
        run: |
          cd publish
          zip -r ../my-lambda.zip .

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Upload to S3
        run: aws s3 cp my-lambda.zip s3://${{ secrets.S3_BUCKET_NAME }}/my-lambda.zip

      - name: Update Lambda function code
        run: |
          aws lambda update-function-code --function-name aws-lambda-code --s3-bucket ${{ secrets.S3_BUCKET_NAME }} --s3-key my-lambda.zip

Important: the package is named my-lambda.zip so that it overwrites the placeholder zip with our actual code, and aws-lambda-code is the function name predefined in the CloudFormation stack.
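
To double-check that the pipeline really replaced the placeholder, you can inspect the function's metadata after a run; a sketch with the AWS CLI:

aws lambda get-function \
--function-name aws-lambda-code \
--region us-east-1 \
--query '{LastModified: Configuration.LastModified, CodeSize: Configuration.CodeSize}'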

Cleaning Up

To tear everything down, delete the CloudFormation stack and the pipeline user. The IAM user can only be deleted once its access keys and attached policies are removed:
# Set the stack name and user name
export STACK_NAME="aws-serverless-stack"
export USER_NAME="codepipeline"

# Delete the CloudFormation stack (the helper Lambda empties the bucket first)
aws cloudformation delete-stack --stack-name $STACK_NAME

# Remove the user's access key and policies, then delete the IAM user
aws iam delete-access-key --user-name $USER_NAME --access-key-id $ACCESS_KEY_ID
aws iam detach-user-policy --user-name $USER_NAME --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam detach-user-policy --user-name $USER_NAME --policy-arn arn:aws:iam::aws:policy/AWSLambda_FullAccess
aws iam delete-user --user-name $USER_NAME
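
If you want the terminal to block until the stack is really gone, there's an optional wait command:

aws cloudformation wait stack-delete-complete --stack-name $STACK_NAME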

BLAZINGGG ENJOYY 🎉🔥

You can see your pipeline triggered, the code zipped, uploaded to S3, and the Lambda updated. This workflow will trigger on each commit to the main branch. 🚀
We take the Lambda URL from the stack output or directly from the Lambda interface to make an HTTP request. 🌐
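For example, a sketch of reading it from the stack outputs with the CLI:

aws cloudformation describe-stacks \
--stack-name $STACK_NAME \
--region us-east-1 \
--query "Stacks[0].Outputs[?OutputKey=='LambdaFunctionUrl'].OutputValue" \
--output text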
curl https://6u7hqooiek7rasomnytusyjp2i0lecee.lambda-url.us-east-1.on.aws/hello/
hi

The best part? The function's execution role already grants read access to the S3 bucket, and if you want to try out DynamoDB, you can simply add more permissions to LambdaExecutionRole, either directly in IAM or by updating and redeploying the CloudFormation stack if you're brave enough.
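For example, a quick (and deliberately broad) way to bolt on DynamoDB access from the CLI, assuming the default role name from the template:

# ⚠️ Broad managed policy - fine for experimenting, not for production
aws iam attach-role-policy \
--role-name LambdaExecutionRole \
--policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess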
And there you have it! A blazing fast, simple way to deploy AWS Lambdas. Enjoy!
Happy deploying :)

Posted Nov 17, 2025

Deployed AWS Lambda using CloudFormation and GitHub Actions for a quick POC.