Uploading data to Amazon S3 from Node.js

What could be simpler than uploading a file to a server and saving it to the hard drive, right? Open a read stream, read bytes, open a write stream, write bytes, done. Now let's explore the same question from the perspective of a high-load, high-availability service. You're designing the next Instagram: where will you store the photos? Remember, at a bare minimum you need to make sure that the failure of a single node in a data center will not interrupt your service (you can't afford to lose those precious cat pictures your users are uploading). That means you have to replicate your photos to multiple locations, think about disaster recovery patterns, monitoring, alerting... This way of thinking might be a great exercise in dev-ops and data center planning, but it goes way deeper than you expect when you plan your Instagram NextGen. Luckily, the folks at Amazon have thought that through for their own data centers, and with the help of their S3 service you can be sure that your photos are secure, replicated and protected from HDD failures. S3 stands for Simple Storage Service, the AWS service that solves this one small problem: storing files.

In this article I'll show how to connect to S3 and upload files from your Node.js app. Note: in order to follow the steps you will need an AWS account and an S3 bucket. A bucket works like a folder: just a unit of organization for files.

After you've created the bucket, prepare your Node.js workspace:

npm install --save aws-sdk

aws-sdk is the official Amazon AWS API client for Node.js. The module contains helper classes for most AWS services, S3 included. Now let's create a configuration for the API. Create a file called aws.config.json and put the following lines there:
{
  "accessKeyId": "AAAAAAA",
  "secretAccessKey": "uuudfufduufuf",
  "region": "us-east-1"
}
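(A quick aside: a JSON credentials file keeps this example self-contained, but the aws-sdk also reads credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, and you can set the region in code with AWS.config.update({ region: 'us-east-1' }). That way no secrets end up in your source tree.)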
Make sure that you put your own access key and secret instead of the ones shown in the article, otherwise you won't be able to authenticate! Now we can create an S3 object and upload the file:
const fs = require('fs');
const AWS = require('aws-sdk');

// Load the credentials and region we saved in aws.config.json
AWS.config.loadFromPath('./aws.config.json');

const s3 = new AWS.S3();

const params = {
  Bucket: 'your-bucket-name',               // the bucket you created earlier
  Key: 'uploaded-sample.jpg',               // the name the file will get in S3
  ACL: 'public-read',                       // make the object readable by everyone
  Body: fs.createReadStream('sample.jpg'),  // stream the local file instead of buffering it
  ContentType: 'image/jpeg'
};

s3.upload(params, function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log(data);
  }
});
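A small note on the data argument: with aws-sdk v2's managed uploader (which is what s3.upload uses under the hood), the success result carries Bucket, Key, ETag and Location fields, so instead of dumping the whole object you could log just the resulting URL:

s3.upload(params, function(err, data) {
  if (err) return console.error(err);
  // data.Location is the URL of the freshly uploaded object
  console.log('File available at ' + data.Location);
});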
This code will upload the file called sample.jpg to your Amazon S3 bucket and make it readable by everyone (ACL: 'public-read'). The file will be saved under the name uploaded-sample.jpg (Key: 'uploaded-sample.jpg'). Finally, we set the ContentType of the file to 'image/jpeg'. This is not strictly necessary, but it is good practice to set proper MIME types on your files.

You can now check your bucket: the file should be uploaded and ready for your viewers.

Following the same pattern you can remove the "uploads" folder from your server completely: instead of uploading existing files to S3, you can pipe the file stream from the request object straight to S3. The full treatment will be a subject for another article, but a minimal sketch of the idea follows below. Happy uploading!
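For the impatient, here is roughly what that could look like, assuming a bare Node.js http server and a client that sends the raw file bytes as the request body (the bucket name and the key naming scheme are placeholders):

const http = require('http');
const AWS = require('aws-sdk');

AWS.config.loadFromPath('./aws.config.json');
const s3 = new AWS.S3();

// Try it with: curl -T sample.jpg http://localhost:3000/
http.createServer(function(req, res) {
  // The incoming request is itself a readable stream,
  // so s3.upload() can consume it directly as the Body.
  const params = {
    Bucket: 'your-bucket-name',              // placeholder
    Key: 'uploaded-' + Date.now() + '.jpg',  // placeholder naming scheme
    ACL: 'public-read',
    Body: req,
    ContentType: req.headers['content-type'] || 'application/octet-stream'
  };

  s3.upload(params, function(err, data) {
    if (err) {
      res.statusCode = 500;
      res.end('Upload failed');
    } else {
      res.end(data.Location);
    }
  });
}).listen(3000);

This works because s3.upload() accepts any readable stream as the Body, even one of unknown length, so no temporary file ever touches your disk.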
