How to handle and serve data from AWS S3's private bucket with AWS STS in Node.js?

When I first started using AWS S3 with STS, implementing all of this was quite confusing. Although the documentation covers everything, as a newcomer to AWS I still found it difficult to extract the information I needed: it is spread across too many pages, and there are very few examples.

So this blog is a complete guide to deploying a system that serves sensitive content from a private S3 bucket using AWS STS.

Why a private bucket?

A few months back, I started developing a product which was serving map tiles (images) with some sensitive information encoded in those images.

We had no choice but to encode our data in the images: serving raw JSON was not an option, because anyone could easily extract all the data from it.

But even after encoding the data, a new issue remained: if we made these images public, anyone could collect all the tiles, build their own system, and display them easily.

I know that if someone did so, we could take legal action against them. But as a startup we didn't want to take any risk, and legal action was the worst-case scenario we could imagine at the time. Making our content private and serving it only to restricted, trustworthy clients seemed like the best idea (this product was not aimed at ordinary users anyway). So after some research and comparison, we decided to go with AWS.


Technology Stack

AWS S3 (Simple Storage Service):

To store the data. We'll create a bucket and make our data private. I'm storing a single image for the demonstration.

AWS STS (Security Token Service):

It provides temporary credentials to users, so we don't have to create a new IAM user every time a new user registers.

AWS IAM (Identity and Access Management):

It is a good practice not to use the root user's credentials. Instead, we'll create a role that only has read-only access to a single bucket, plus permission to generate new temporary tokens for every client.

Node.js:

We'll add the aws-sdk package and generate temporary credentials. Then we'll use these credentials to make a request and get our data from the private S3 bucket.

AWS EC2:

Our Node.js application will run on an EC2 instance, and we'll assign the role we create to that instance.


Creating an S3 Bucket:

Simply go to the S3 console and create a new bucket. Bucket names are globally scoped, which means no two users can have the same bucket name. So pick a unique name, choose the region where you expect most of your users to be, select the configurations suitable for your bucket, and create it. Also note your bucket's ARN; it will help us restrict our role to generating credentials for this bucket only.

I'll upload an image to this bucket. You can upload any other document if you want. Just remember not to make your file public (private is the default). If you want to be sure, open the object URL: if it throws an error, we're good to go, because that means it is not publicly accessible.

Creating an IAM Role & Attaching policies

Select a service:

Go to your IAM console and create a new role. While creating a role, select EC2 as the service which will use this role.

Attach policies:

Hit the Next button to add permissions and attach AmazonS3ReadOnlyAccess. If we granted full access to S3, our clients would be able to manipulate the content of our bucket, unless we attached some other restricting policy while generating the credentials (STS takes the intersection of the role's permissions and the provided policy's permissions).
Though passing a custom policy is a nice feature, I generally avoid it: if I can make separate roles to perform other tasks, why take the risk?

Add tags:

Tags are optional. Add a key and its value if you want.

Review:

While reviewing your role, do check the policies attached to it. I’ve assigned s3-temp to the Role Name field.

Add inline policy:

After creating the role, open it in the console and you'll see an option saying Add inline policy. Click it, search for STS, and add it to your role. This will allow the role to create temporary credentials.

After adding this inline policy, give it a suitable name; I've named it STS. You can also specify which resources this role may generate temporary credentials for: just specify the ARN of your S3 bucket if you don't want the role to be able to generate credentials for other buckets.

Attach this role to an EC2 instance

At first, I was stuck for a while because I didn't know that I had to assign the role to a specific EC2 instance. It took me hours to figure out that I had to establish a trust relationship between an EC2 instance and my role. This step increases security, as you can't generate new credentials without assigning the role to your instance.

For a new EC2 instance

While creating a new EC2 instance, in the Configure Instance Details step, select the IAM role we created.

For an existing EC2 instance

Right-click the existing EC2 instance, select Attach/Replace IAM Role under Instance Settings, and assign the role.

Writing the code

I'm using Node.js to fetch the data with aws-sdk. At the time of writing, the current version of the SDK is 2.393.0.

First, we'll write the code to generate the credentials.

Using a Callback:

Using async/await:

Using .then():

We've successfully generated the credentials. It is good practice to store them in a variable, formatted for reuse:

const accessparams = {
  accessKeyId: data.Credentials.AccessKeyId,
  secretAccessKey: data.Credentials.SecretAccessKey,
  sessionToken: data.Credentials.SessionToken,
};

Now before fetching the data, we’ll discuss the different options we have in aws-sdk to get the data.

  1. getSignedUrl: It takes the temporary credentials and generates a new signed URL each time you call this method, because it uses a fresh timestamp for every signature.
  2. getObject: It also takes the temporary credentials, but it doesn’t return a signed URL. Instead, it returns the content of the file in the Body field of the response, as a Buffer.

What to choose

In my case, I had to generate new tile URLs every time, and I had no control over the URLs to reuse them, because the map requests new URLs on every interaction. Generally, a map application caches the tiles and reuses them across interactions, but that is not possible with getSignedUrl. However, if you do have control over the URLs and can reuse them, you should definitely go for getSignedUrl. These URLs are valid for the time period you provide while generating them.

But there are cases where URLs are of no use and you only need the object itself, or where you don't want to generate URLs frequently and would rather cache the images directly; there, getObject is a good option.

PS: I ended up using AWS CloudFront, because getObject was too slow for my application. In applications like mine, I just want to specify <img src="image/url"> and let HTML take care of the rest, rather than wait for the image content to load and then display it with src="data:image/png;base64,…. And with getSignedUrl, the URLs kept changing, so the map tried to fetch every image anew, which made my application flicker after interactions.

Get Signed URL

Synchronously

Using a Callback

With getSignedUrl, we don't have to wait for a promise to resolve; it hands us the URL directly.

Get Object

Using a Callback

Using .then()

Using async/await

Note: We didn't print file in any getObject example because the content of our file is stored in file.Body as a Buffer; you have to convert this Buffer into a String (for example with file.Body.toString()).

Expiration Time

As you can see in the code, we didn't specify any expiration time for our credentials, but that doesn't make these temporary credentials permanent. They expire after the default (and minimum: 1 hour) duration specified in your role. You can also increase it up to 12 hours.


Code Reference

Here are the official docs if you want to explore more about the operations of S3 (Simple Storage Service) or any other AWS services.

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html

Footnotes

Do let me know if you've found better ideas or if there is an error in my code or the description. If you have any doubts or are not able to understand or implement this, leave a comment below and I'll try to reply ASAP. Until next time!