How to Use Terraform with Serverless for Lambda, RDS, and Internet Access

Geoff Dutton · Published in Fake Weblog · Oct 30, 2020 · 7 min read


Photo by Taylor Vick on Unsplash

I’m currently working on a project that requires a Lambda function to connect to RDS and also connect to the outside world. It seems to be overly complex, but that’s probably because I don’t know much about networking. Here’s my attempt at setting it up.

Here is the final repo of what I go through below: https://github.com/geoffdutton/serverless-rds-aws-structure

There are a few ways of doing this, such as using one Lambda function (A) that isn’t on the RDS subnet and another (B) that is, with B receiving data from A. For my purposes, I don’t want to do it that way.

The expected output is a Lambda function that makes a HEAD request to google.com and runs a SELECT statement against a serverless RDS PostgreSQL database. Everything will be fully coded out so that it can be spun up and torn down consistently.

Terraform will be used to provision RDS, the VPC, and other networking. Serverless will be used for the application code. I want to keep them separate because I like the way Serverless manages the application code, and I may want to re-use the basic infrastructure for future projects.

First thing to do is initiate a new project, which will have a directory structure like:

src/ <-- Serverless app code in Node JS
terraform/ <-- infrastructure code
package.json

Prerequisites

First, in the src directory, run:

$ serverless create --template aws-nodejs

This will generate a basic “hello world” function and a serverless.yml file. In the provider section, add your AWS named profile.

Now deploy it and verify the easy part is set up. Since I’ll be deploying frequently, and I may end up adding arguments, I made an npm script called deploy that looks like cd src && serverless deploy --stage dev && cd .. . I also set the npm test script to cd src && serverless invoke -f hello --stage dev && cd .. , since I’ll be running that a lot.
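In the root package.json, those two scripts look like this:

```json
{
  "scripts": {
    "deploy": "cd src && serverless deploy --stage dev && cd ..",
    "test": "cd src && serverless invoke -f hello --stage dev && cd .."
  }
}
```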

$ npm run deploy
> serverless-rds-aws-structure@1.0.0 deploy /Users/geoff/Projects/serverless-rds-aws-structure
> cd src && serverless deploy && cd ..
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
........
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service serverless-rds-aws-structure.zip file to S3 (228 B)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
............
Serverless: Stack update finished...
Service Information
service: serverless-rds-aws-structure
stage: dev
region: us-east-2
stack: serverless-rds-aws-structure-dev
resources: 5
api keys:
None
endpoints:
None
functions:
hello: serverless-rds-aws-structure-dev-hello
layers:
None

And now test the function:

$ npm test
> serverless-rds-aws-structure@1.0.0 test /Users/geoff/Projects/private/serverless-rds-aws-structure
> cd src && serverless invoke -f hello
{
  "message": "Very much success!!",
  "event": {}
}

Now the hard part…

Setting up the Infrastructure with Terraform

In the terraform directory, first create provider.tf with the contents of terraform/provider.tf from the repo. Then run terraform init and you should see a lot of green. If you run terraform plan, it will just report that there is nothing to change and the infrastructure is up to date, naturally.
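For reference, a minimal provider.tf might look roughly like this, reusing the region and profile that appear elsewhere in this post (see the repo for the real file):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region  = "us-east-2"
  profile = "geoffpersonal"
}
```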

Now I’ll create a few more terraform files:

  • rds.tf
  • vpc.tf
  • security_groups.tf
  • ec2.tf
  • locals.tf
  • vars.tf
  • route_tables.tf

Here comes the “fun” part: making sure everything connects to everything appropriately. In general, the RDS instance will live on the private subnets in the VPC. The Lambda functions will use these private subnets when deployed. The EC2 instance that’s created will be used for querying the database via an SSH tunnel. See the repo for the full contents of these files. One thing I initially ran into was using Network ACLs; I found that NACLs can conflict with the Security Group rules. There is probably a smarter way to do this, but after removing them, everything reliably works.
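The piece that trips most people up is giving the private subnets outbound internet access: the Lambda functions sit on private subnets, so they need a route through a NAT gateway that lives on a public subnet. A rough sketch of what route_tables.tf has to express (resource names here are illustrative, not the repo’s actual names):

```hcl
# NAT gateway needs an Elastic IP and must sit in a PUBLIC subnet.
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_1.id
}

# Private subnets route all non-local traffic through the NAT gateway,
# which is how the Lambda functions reach google.com.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private_1" {
  subnet_id      = aws_subnet.private_1.id
  route_table_id = aws_route_table.private.id
}
```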

A few other important findings after stumbling through this exercise:

  1. The security group that will be attached to the lambda functions needs to allow all outbound traffic
  2. The public subnet(s) need to have map_public_ip_on_launch = true
  3. The private subnets will use the default value map_public_ip_on_launch = false
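Findings 1 and 2 translate to something like the following (names and CIDR blocks are illustrative; see security_groups.tf and vpc.tf in the repo for the real definitions):

```hcl
# Finding 1: the Lambda security group must allow ALL outbound
# traffic so the function can reach both RDS and the internet.
resource "aws_security_group" "lambda" {
  name   = "lambda-sg-${var.stage}"
  vpc_id = aws_vpc.main.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Finding 2: public subnets auto-assign public IPs, so the EC2
# bastion and NAT gateway are reachable.
resource "aws_subnet" "public_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}
```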

To deploy the infrastructure, change to the terraform/ directory and run:

$ terraform init
$ terraform apply -var="stage=dev"
... this will show all the changes and prompt you to type yes to continue...

Ultimately, if successful, the output should look something like this:

db_connection = postgres:randompassword@serverless-rds-internet-dev.cluster-randomawsid.us-east-2.rds.amazonaws.com:5432/awesome_project
lambda_sg_id = sg-0wnfio4hi3092jj390i3
ssh_cmd = ssh -i ~/.ssh/[key pair name] ec2-user@1.22.333.444 // auto assigned by AWS
subnet_private_1 = subnet-h392h38923eij034
subnet_private_2 = subnet-ui4hf489394h3434
subnet_private_3 = subnet-oj34893940j0934j

First, I’ll test database access via the EC2 bastion host by running the handy ssh_cmd output in a terminal. You’ll need to install the psql client before you can test the connection. If using the Amazon Linux 2 AMI, after SSHing into the instance, that can be accomplished by simply running:

$ sudo amazon-linux-extras install postgresql10 epel
... a bunch of output ...
$ psql -h serverless-rds-internet-dev.cluster-randomawsid.us-east-2.rds.amazonaws.com -p 5432 -U postgres
Password for user postgres:
psql (10.4, server 10.12)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=>

If you see the above output, success! You can type \q to exit the connection.

Note: I’ve since added a Terraform provisioner that does this when the EC2 instance is created; see the repo.

Deploying Serverless with Security Group and Subnet IDs

As part of deploying with Terraform, it will write a file called vpc.dev.js and a file called .env.dev in the src/ directory. The vpc.dev.js file is used by the serverless.yml provider.vpc block:

src/serverless.yml
...
custom:
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: ${file(./vpc.${self:custom.stage}.js):region}
  profile: geoffpersonal
  versionFunctions: false
  vpc:
    securityGroupIds: ${file(./vpc.${self:custom.stage}.js):securityGroupIds}
    subnetIds: ${file(./vpc.${self:custom.stage}.js):subnetIds}
...

First things first, let’s make sure we have access to the outside internet. In src/handler.js I’ll add some code to make a simple HTTP HEAD request:

const https = require('https')

module.exports.hello = async (event) => {
  const responseHeaders = await new Promise((resolve, reject) => {
    console.log('Testing outbound internet connection')
    const req = https.request(
      'https://www.google.com',
      { method: 'HEAD' },
      (res) => {
        console.log('Success!', res.headers)
        resolve(res.headers)
      }
    )
    req
      .on('timeout', () => {
        console.log('Timeout!!')
        req.abort()
      })
      .on('error', (err) => {
        console.log('Failed!')
        reject(err)
      })
      .end()
  })
  return {
    message: 'Very much success!!',
    responseHeaders,
    event
  }
}

Then run npm run deploy followed by npm test and we want to see something like:

$ npm test
> serverless-rds-aws-structure@1.0.0 test /Users/geoff/Projects/serverless-rds-aws-structure
> cd src && serverless invoke -f hello
{
  "message": "Very much success!!",
  "responseHeaders": {
    "content-type": "text/html; charset=ISO-8859-1",
    "p3p": "CP=\"This is not a P3P policy! See g.co/p3phelp for more info.\"",
    "date": "Thu, 29 Oct 2020 21:24:25 GMT",
    "server": "gws",
    "x-xss-protection": "0",
    "x-frame-options": "SAMEORIGIN",
    "transfer-encoding": "chunked",
    "expires": "Thu, 29 Oct 2020 21:24:25 GMT",
    "cache-control": "private",
    "set-cookie": [
      "1P_JAR=2020-10-29-21; expires=Sat, 28-Nov-2020 21:24:25 GMT; path=/; domain=.google.com; Secure",
      "NID=204=hGZ1FIA2P2a900QsSfHznQHG3tYyfXHY6Lt6Ob931nzNdYWuJGaBgLEedQ9QJFer5Pa_6aHeju--5cK18I2HwDmmX7lV-xVpUdJxHNDkHAhm19V8DS3h4wnOQyuYWHnjS6yYJCMP51_SUz8kPwsLv2fWpFmNtXQgULpYyExOnRg; expires=Fri, 30-Apr-2021 21:24:25 GMT; path=/; domain=.google.com; HttpOnly"
    ],
    "alt-svc": "h3-Q050=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-T051=\":443\"; ma=2592000,h3-T050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"",
    "connection": "close"
  },
  "event": {}
}

Now I’ll add some code to test the database connection to src/handler.js. First I’ll install pg in order to connect to the RDS instance. For simplicity, I’m going to use serverless-dotenv-plugin and store the expected pg environment variables in .env.dev, which I will not commit to the repo:

// .env.dev
PGHOST=serverless-rds-internet-dev.cluster-randomawsid.us-east-2.rds.amazonaws.com
PGUSER=postgres
PGPASSWORD=[random password generated by Terraform above]
PGPORT=5432

Note: I added a package.json in the src/ directory and did npm install pg there so that Serverless includes it. There is probably a better solution, but that’s for another day.

The pg client will look for these in order to connect. Now I’ll add some more code to the handler function:

const https = require('https')
+ const { Client } = require('pg')

module.exports.hello = async (event) => {
  const responseHeaders = await new Promise((resolve, reject) => {
    console.log('Testing outbound internet connection')
    const req = https.request(
      'https://www.google.com',
      { method: 'HEAD' },
      (res) => {
        console.log('Success!', res.headers)
        resolve(res.headers)
      }
    )
    req
      .on('timeout', () => {
        console.log('Timeout!!')
        req.abort()
      })
      .on('error', (err) => {
        console.log('Failed!')
        reject(err)
      })
      .end()
  })
+ const client = new Client()
+ await client.connect()
+ const res = await client.query('SELECT $1::text as message', [
+   'DB connection success!'
+ ])
+ const dbResponse = res.rows[0].message
+ await client.end()
  return {
    message: 'Very much success!!',
+   dbResponse,
    responseHeaders,
    event
  }
}

This runs a really simple query to verify the Lambda function has access to the RDS instance. Again, I’ll run npm run deploy followed by npm test. You may see a timeout error on the first attempt because the RDS Serverless cluster has to fire up after being paused. To avoid this, I just set the function timeout in serverless.yml to 30 seconds:

functions:
  hello:
    handler: handler.hello
+   timeout: 30

If everything works, the output should look like this now:

$ npm test
> serverless-rds-aws-structure@1.0.0 test /Users/geoff/Projects/serverless-rds-aws-structure
> cd src && serverless invoke -f hello --stage dev
Serverless: DOTENV: Loading environment variables from .env.dev:
Serverless: - PGHOST
Serverless: - PGUSER
Serverless: - PGPASSWORD
Serverless: - PGPORT
{
  "message": "Very much success!!",
  "dbResponse": "DB connection success!",
  "responseHeaders": {
    "content-type": "text/html; charset=ISO-8859-1",
    "p3p": "CP=\"This is not a P3P policy! See g.co/p3phelp for more info.\"",
    "date": "Thu, 29 Oct 2020 21:55:19 GMT",
    "server": "gws",
    "x-xss-protection": "0",
    "x-frame-options": "SAMEORIGIN",
    "transfer-encoding": "chunked",
    "expires": "Thu, 29 Oct 2020 21:55:19 GMT",
    "cache-control": "private",
    "set-cookie": [
      "1P_JAR=2020-10-29-21; expires=Sat, 28-Nov-2020 21:55:19 GMT; path=/; domain=.google.com; Secure",
      "NID=204=QDtHROYpMVmZYtoKhIf3MG6LVWaozHAdkpuYfMGZtSyOMeMYlAXv3aPDg20Av1QPLSk4Xfu8fMxN-Q_VQrHkqxuWjxXZuSL_X-6vn3nt1Zbrnkj5eXzFhoJOXoM3sdVEfFwo_b4hqNoDcJF5f_TRxKO2xY22mWCxb_7WOBmGPNA; expires=Fri, 30-Apr-2021 21:55:19 GMT; path=/; domain=.google.com; HttpOnly"
    ],
    "alt-svc": "h3-Q050=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-T051=\":443\"; ma=2592000,h3-T050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"",
    "connection": "close"
  },
  "event": {}
}

Hooray!

Lastly, don’t forget to tear everything down so that you aren’t charged.

Be sure to remove the Serverless stack first, and then the Terraform infrastructure.

  1. In the src/ directory, run: serverless remove --stage dev
  2. In the terraform/ directory, run: terraform destroy -var="stage=dev"

Note: I’ve since added an npm script called destroy, so in the root directory of the project you can simply run:

$ npm run destroy
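The destroy script is roughly the two teardown commands above chained together; in the root package.json it could look something like:

```json
{
  "scripts": {
    "destroy": "cd src && serverless remove --stage dev && cd ../terraform && terraform destroy -var=\"stage=dev\""
  }
}
```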

Let me know if you have any suggestions to improve the structure.
