Files uploaded from a user’s machine to a server are traditionally stored on the server itself. Over time, though, data inevitably grows, and so do its attachments. This leads to challenges like browser restrictions, machine speed, network speed, and server storage capacity. These can be solved by moving to a cloud-centric architecture, which gives us a secure approach to uploading large files.
So, in this blog, let’s see how cloud services can bring us the qualities we’re after: speed, scale, and security. Yes, there are many services, but for the best, let’s dive into the world’s largest forest, AWS, the leading provider of cloud services.
Once you’ve signed in to AWS, create an S3 bucket in your region by following these directions.
Creating a bucket in AWS should feel like a warm breeze. Next, have a look at bucket policies here, and follow the basic example AWS provides demonstrating how to upload a file to S3 using their SDK.
And here comes the heart of the guide: the secure way of uploading files.
- Pre-signed URLs
- Two Bucket Security Approach
Pre-signed URLs:
A pre-signed URL enables your client/customer to upload an object to your bucket without holding any AWS credentials of their own. It also makes useful controls handy, such as granting write-only permission and limiting the lifespan of the URL. You can generate a pre-signed URL on your server.
This pre-signed URL is given to the client, and the client uploads their file to the bucket path you provided. Now we have to handle the uploaded object.
First, we have to get an acknowledgement that the file was uploaded successfully. If we save the storage details without confirming the upload actually reached the bucket, any failure during the upload would leave our records inconsistent. An acknowledgement is therefore essential, and it can be accomplished through SQS.
An Amazon SQS queue receives a notification whenever certain actions (which you choose based on your needs) happen on your Amazon S3 bucket: S3 pushes an event message to the queue for each matching action. You can set up an SQS queue as a notification destination for your S3 bucket.
Your server then fetches the notifications by polling the queue. Once a notification confirms the action on the bucket, we are safe to save the storage details, knowing they are consistent.
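Assuming the queue is already wired to the bucket’s ObjectCreated events, the polling side boils down to receiving messages and parsing the S3 event JSON out of each body. The parsing step is pure logic, so it can be sketched on its own (the sample bucket and key are illustrative):

```python
import json

def extract_uploaded_keys(message_body: str):
    """Return (bucket, key) pairs for ObjectCreated events found in one
    S3 event notification, as delivered in an SQS message body."""
    event = json.loads(message_body)
    uploads = []
    for record in event.get("Records", []):
        if record.get("eventName", "").startswith("ObjectCreated"):
            uploads.append((record["s3"]["bucket"]["name"],
                            record["s3"]["object"]["key"]))
    return uploads

# A trimmed-down sample body in the shape S3 documents for its events.
sample = json.dumps({
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {"bucket": {"name": "loading-dock-bucket"},
               "object": {"key": "uploads/report.pdf"}},
    }]
})
print(extract_uploaded_keys(sample))
```

In the real loop you would feed this function the `Body` of each message returned by `sqs.receive_message(QueueUrl=..., WaitTimeSeconds=20)`, then delete the message once the upload is recorded.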
Two Bucket Security Approach:
The two-bucket architecture establishes strong security. The first bucket is a temporary loading dock that is externally exposed, allowing clients to write files into it. The second bucket is more secure and is the files’ final burrow; external users don’t have access to it. OK, so what happens to the files left in the loading dock? They can be automatically deleted via a lifecycle policy after a specified time limit.
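A lifecycle rule for the loading-dock bucket might look like the sketch below. The one-day expiry and rule ID are assumptions; pick whatever window suits your flow. The rule itself is just configuration, and the commented-out call is what would apply it:

```python
# Expire every object in the loading-dock bucket after one day.
lifecycle_config = {
    "Rules": [{
        "ID": "expire-loading-dock-uploads",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},   # empty prefix = applies to all objects
        "Expiration": {"Days": 1},
    }]
}

# Applying it requires AWS credentials and network access:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="loading-dock-bucket",
#       LifecycleConfiguration=lifecycle_config,
#   )
print(lifecycle_config["Rules"][0]["Expiration"]["Days"])
```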
So now let’s move the files from the loading-dock bucket to the permanent bucket, and finally save the file’s permanent-bucket path on the server.
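The move itself is a server-side copy followed by a delete of the temporary object. A minimal sketch follows; the function name and the stub client are illustrative, and in real use you would pass `boto3.client("s3")` instead of the stub:

```python
def promote_upload(s3_client, temp_bucket, permanent_bucket, key):
    """Copy a verified upload into the permanent bucket, delete the
    temporary copy, and return the object's permanent path."""
    s3_client.copy_object(
        Bucket=permanent_bucket,
        Key=key,
        CopySource={"Bucket": temp_bucket, "Key": key},
    )
    s3_client.delete_object(Bucket=temp_bucket, Key=key)
    return f"s3://{permanent_bucket}/{key}"

# Offline demo with a stand-in client so the flow is visible end to end.
class _StubS3:
    def copy_object(self, **kwargs): print("copy ->", kwargs["Bucket"])
    def delete_object(self, **kwargs): print("delete <-", kwargs["Bucket"])

path = promote_upload(_StubS3(), "loading-dock-bucket",
                      "permanent-bucket", "uploads/report.pdf")
print(path)
```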
Store encrypted data:
For high security, store only encrypted data. This can be done by having the files encrypted with secret keys; you can look into KMS keys.
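With server-side encryption, S3 does the encrypting for you using a KMS key; your code only passes two extra parameters when writing the object. A sketch, where the key alias is a made-up example:

```python
# Extra parameters telling S3 to encrypt the object at rest with a
# customer-managed KMS key. The alias below is hypothetical.
encryption_params = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/secure-uploads-key",
}

# Used when copying into the permanent bucket (requires AWS access):
#   s3.copy_object(
#       Bucket="permanent-bucket", Key=key,
#       CopySource={"Bucket": "loading-dock-bucket", "Key": key},
#       **encryption_params,
#   )
print(encryption_params["ServerSideEncryption"])
```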
To recap, the whole flow looks like this:
- A pre-signed URL is generated and given to the client.
- The client uploads an object to the path provided.
- Meanwhile, SQS receives a notification if the upload succeeded.
- The server looks for the SQS data (i.e., messages in the queue) by polling.
- From that data, the server confirms the object was uploaded to the bucket.
- The object is moved to the permanent bucket.
- The permanent bucket URL is stored on the server.
Thus, we’ve accomplished our mission: a secure architecture for storing sensitive data, leaving no loopholes for any treasure hunts. Such a dark, mysterious Amazon!