Storage Options on AWS - S3, EBS, Instance Storage & EFS

In this post, I want to walk you through the different storage options on AWS. AWS gives us the following types of storage options. Let's first understand the quick differences between them.

  1. EBS: Think of it as the disk of your server. It is block-level storage and hence most suitable for very fast simultaneous read and write operations, e.g. the OS, databases etc. EBS volumes are connected to EC2 instances over the network (like a SAN). An EBS volume can be attached to only one EC2 instance at a given time. 

  2. Instance Storage: This is again a block-level disk for your EC2 instance, but it is local on the physical host. These are non-persistent disks, and hence any data on them gets lost if the instance is stopped and started. Data is not lost in case of an EC2 reboot. 

  3. EFS: This is available as an equivalent of NFS. You can mount an EFS file system on multiple EC2 instances and use it like a shared disk. This currently works only with Linux instances, not Windows.

  4. Amazon S3: This is highly available object-level storage. Objects (files) are accessed via their keys. It is suitable for storing files which an application accesses via AWS APIs. Do not use it as a disk or mount it to EC2 instances. In the following videos, you can learn about various features offered by S3 along with using S3 commands via the AWS CLI.

  5. Glacier: This is the archival service from AWS. Instead of archiving to tapes (on-premises), you may choose to archive to Glacier. It is the cheapest form of storage. A video on Glacier is coming soon. 
SUBSCRIBE to our blog by entering and verifying your email on the right side. Also, please join our LinkedIn group - 

SHARE the article with your friends if it helped you understand the different storage options on AWS. 

Understanding ELB in depth - Query from Viewers - 003

Hi Guys,
I have got some good questions on ELB for the given tutorial --

1. You had mentioned that ELB internally creates instances to manage traffic. Just to expand a little on this: when ELB creates the internal instances (I assume they are not visible to us), it consumes private IPs too, right? Does an internal instance get created per Availability Zone or per instance? Because I attach an ELB to instances, not to an AZ, right? I assumed it was something like an agent which gets installed per instance and polls the load balancer. Please correct my understanding. 
ANS: Yes, those instances won't be visible to users as ELB is a managed service. These instances do use ENIs (and hence private IPs) from the respective subnets. ELB creates one instance in every subnet you choose while creating the ELB. There is no agent running on your instances (e.g. web servers). Also, you register your instances with an ELB and thereby tell the ELB that it can send traffic to those instances.

2. Under Load Balancers / Instances, down the page under Availability Zones, ELB had created 2 AZs, whereas I had all 3 of my instances in the same AZ. Though I can edit it and remove the other AZ, why does it create another AZ reference when all my instances are in only one AZ? Please assist. 
ANS: While creating an ELB, you can clearly choose which subnets you want it to handle. It is recommended that you give 2 subnets in 2 different AZs (based on the principle of high availability). ELB will launch an internal instance in each of the subnets you choose while creating it.

3. Under health check, for the ping target I gave the path index.html. Let's assume I have 2 instances and I want index.html to reside in a different location on each of them: for the 1st one under /var/www/html and for the second one under /var/www. Does the ping target have the intelligence to check the file irrespective of its location?
ANS: Given the fact that all the instances behind an ELB are generally kept identical, the path of the file has to be the same. However, if you have a specific use-case like the above scenario, it could be achieved using an Application Load Balancer. In Classic ELB, the health-check path has to be the same for all instances. 

KEEP practicing, keep LEARNING !!!

Understanding AWS Free Tier - Query from Viewers - 002

Most of us are pretty excited to learn and work on AWS. Thankfully, AWS offers a Free Tier to practice on as well. But most learners end up getting a bill even after being cautious.

I have picked up the following query from my AWS YouTube channel. I have received many similar queries earlier as well.

I am following all your videos on the AWS SysOps tutorials from the beginning, and the way you explain is fantastic. I have a small doubt on the Free Tier account: 750 hours of EC2 instance... how is that calculated? Is that a limit of 750 hours for only one instance, or the number of instances multiplied by the number of hours used? Actually, I have used 2 instances while practicing, with 30 GB of EBS each, and I can see an amount in dollars month-to-date in the billing console. Is that an amount I need to pay? I mean, have I crossed the Free Tier limits? Kindly clarify this for me please. I have tried contacting AWS support, but that also needs a subscription it seems; everywhere they are trying to grab from us :( Thanks in advance.

ANS: In order to understand the above scenario, consider the following points --
1. In the case of EC2, there are multiple cost factors - EC2 instance charges, EBS charges, data-out charges, Elastic IP charges etc. 
2. Read the Free Tier FAQ as well.  

As explained in the FAQ, you can use 750 hours of Linux AND 750 hours of Windows EC2. You can also have 2 Linux EC2 instances running 375 hours each; AWS looks at the aggregate hours only. 
But in the above case, the cost would have been incurred because of EBS usage. For EBS, the Free Tier gives "30 GB of Amazon Elastic Block Storage in any combination of General Purpose (SSD) or Magnetic".

As the user has run 2 Windows instances, EBS usage would have exceeded the Free Tier limit (each Windows EC2 takes a minimum of 30 GB of EBS). Also, remember that an EBS volume continues to cost you after creation until it is deleted (because your data is consuming storage in the cloud, right!!). An EC2 instance costs you only while it is running; there is no instance charge when it is in the stopped state. 
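To make the aggregation concrete, here is a small Python sketch of the Free Tier math. It only illustrates how the 750-hour and 30 GB limits aggregate across instances and volumes; it is not an official AWS billing calculation.

```python
# Illustrative Free Tier math -- an approximation for learning,
# not an official AWS billing calculation.

FREE_EC2_HOURS_PER_MONTH = 750   # aggregated across all instances of one OS
FREE_EBS_GB = 30                 # total GB of EBS, in any combination

def within_free_tier(instance_hours, ebs_volume_sizes_gb):
    """instance_hours: hours run by each instance;
    ebs_volume_sizes_gb: size of each EBS volume in GB."""
    return {
        "ec2_ok": sum(instance_hours) <= FREE_EC2_HOURS_PER_MONTH,
        "ebs_ok": sum(ebs_volume_sizes_gb) <= FREE_EBS_GB,
    }

# The viewer's case: 2 instances, 375 hours each, 30 GB EBS each.
print(within_free_tier([375, 375], [30, 30]))
# {'ec2_ok': True, 'ebs_ok': False} -- the 60 GB of EBS is what caused the bill
```

As the output shows, the instance hours are fine in aggregate; it is the second 30 GB volume that crosses the EBS limit.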

Happy Learning AWS !!! Please SHARE with your friends on Facebook and LinkedIn.

Setting up your new AWS account - Important things to do

When we set up a new AWS account, we should do some of these important things:

1. Enable Billing for IAM users
By default, only the ROOT user (the one who signed up for the AWS account with their email and credit card) can access the Billing-related sections. Once this setting is enabled, you can give Billing access to IAM users as well. Remember, not all users will have access to the Billing section by default after this setting; only the IAM users whom you allow (via IAM Policies) would be able to access it. 

2. Activate MFA for Root user / other privileged IAM users
At the top, click on your account name and go to "My Security Credentials". Read here if you have doubts. You can use your smartphone and install the Google Authenticator app on it. 

3. Enable Detailed Billing
You can enable detailed billing reports (the 3rd option in the picture) and receive them in an S3 bucket. You should enable this as it helps you break down the cost according to the tags on various resources. These files are updated in your S3 bucket every 20-30 minutes and give you a very granular breakdown of your spending.

4. Enable Cost Allocation Tags
E.g. you create a tag called Environment which can have different values: DEV, QA, UAT, PROD. Once you convert it to a Cost Allocation tag, it starts appearing in the detailed billing file as an additional column (user:Environment), and you can easily filter the cost in Excel for its different values. Hence, no money leakage; you can easily attribute who has spent how much in a month. 
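As a quick illustration, here is a Python sketch that totals cost per Environment value from a detailed-billing-style CSV. The column names and sample rows are hypothetical; check the header row of your actual report before adapting this.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical detailed-billing rows; real reports have many more columns.
SAMPLE_REPORT = """ProductName,UnBlendedCost,user:Environment
Amazon EC2,10.50,DEV
Amazon EC2,42.00,PROD
Amazon S3,1.25,DEV
"""

def cost_by_tag(report_file, tag_column="user:Environment"):
    """Sum the cost column per value of a cost allocation tag."""
    totals = defaultdict(float)
    for row in csv.DictReader(report_file):
        totals[row[tag_column]] += float(row["UnBlendedCost"])
    return dict(totals)

print(cost_by_tag(StringIO(SAMPLE_REPORT)))
# {'DEV': 11.75, 'PROD': 42.0}
```

This is the same grouping you would do with a pivot table in Excel, just scripted.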

5. Free Tier Benefits on a new account
AWS gives some amount of free usage on every new account. Read the conditions in detail here and then start using it. Watch this video to learn a few more things quickly. 

Happy Learning AWS !! Like and SHARE this please. 

Query from KnowledgeIndia Viewers - 001

I got the following questions from one of our viewers and thought of answering them here --

Please SUBSCRIBE to this blog by entering and VERIFYING your email on right side.
1. Can we limit versioning? Meaning to ask: versioning, if enabled, keeps all the modified files; can I limit it to the last 3 modifications?
ANS: You can limit it by time, e.g. keep only the versions from the last 3 days. You can move older versions to different storage classes with the help of Lifecycle Policies. Alternatively, you can write a custom Lambda function which gets invoked every time an object is created; it could check and keep the last 3 versions of that object and delete anything older.
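The "keep last 3 versions" idea from the Lambda approach can be sketched in plain Python. This shows only the selection logic; in a real Lambda you would list the versions with boto3's `list_object_versions` and then delete the surplus ones.

```python
# Plain-Python sketch of the pruning logic only; the boto3 calls to
# list and delete versions are left out on purpose.

def versions_to_delete(versions, keep=3):
    """versions: list of (version_id, last_modified_timestamp) tuples.
    Returns the version ids that fall outside the newest `keep`."""
    newest_first = sorted(versions, key=lambda v: v[1], reverse=True)
    return [version_id for version_id, _ in newest_first[keep:]]

history = [("v1", 1), ("v2", 2), ("v3", 3), ("v4", 4), ("v5", 5)]
print(versions_to_delete(history))  # ['v2', 'v1'] -- the two oldest
```

The same function would be called once per object key inside the Lambda handler.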

2. When I choose S3 from the console, it doesn't allow me to choose a region, whereas after going into S3 (while creating a bucket), it allows me to choose a region. Why is that?
ANS: Many people have this confusion. In the case of S3, you see all your buckets (irrespective of their region) in one UI; that's why you cannot choose a region at the top. But an S3 bucket belongs to one region only, hence you need to choose the region when you are creating the bucket.
3. What is the need to mandate versioning in cross-region replication? Any theory behind it?
ANS: Some things come down to engineering choices made by the product development team; I guess this is one of them. A very good read is this page --

4. When I give a user named trinity full S3 admin access in IAM, and then I create a bucket named Skynet only for a few users, will trinity (the user) still have full access to Skynet (the bucket)?
ANS: How are you ensuring that Skynet is created only for a few users? When you create a resource, you don't create it for a user; rather, you use IAM to control which users can access that resource. In the above case, trinity has full S3 access and hence she would be able to access Skynet. But you can explore denying her access at the Bucket Policy level (defined on the Skynet bucket).

5. After enabling cross-region replication, if I delete content in the source or destination, that deletion is not getting replicated; I can still see the file intact in my destination bucket. In my case, though I replicated only a part of the files from the source (replication was successful), when I deleted the whole source bucket, the files were still intact in my destination.

ANS: This is the expected behavior of S3: deletions are not replicated to the destination, which protects the replica from accidental or malicious deletes at the source.

What is VPC? Learn from scratch

A VPC (Virtual Private Cloud) is an isolated area carved out on AWS for yourself. You control the size of this area and the private IP range used in it. You also control the behavior of the different sections of this VPC (which are called subnets). 

If you don't have time to read, just watch this compiled video playlist.

A subnet can be Public or Private in nature. You launch instances (EC2, RDS, Redshift etc.) by choosing a subnet, and this decides the private IP of the instance. In addition, the network-level behavior also gets decided by the subnet: in a Public subnet, the instance is reachable from the internet (e.g. web servers); in a Private subnet, the instance is not reachable from the internet (e.g. DB servers). All the instances in a VPC can talk to each other using private IPs.

With a new AWS account, you get a default VPC in every region. You should use this only for initial practice and quick instance launches. For any customer POC / implementation, do not use the default VPC; rather, create a new VPC based on the customer (or project) requirements. Watch this video to learn VPC creation from scratch --

Based on the above video, you can create a new VPC. While creating the VPC and subnets, take care of their sizing, as these cannot be modified once created (a VPC / subnet needs to be deleted and re-created; there is no modification/extension). Talk to your customer, understand how many resources need to be placed in the Public and Private subnets, and size them accordingly.
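Since subnets cannot be resized later, it helps to do the CIDR math up front. Python's `ipaddress` module makes this easy; the 10.0.0.0/16 range below is just an example, not a recommendation.

```python
import ipaddress

# Example VPC CIDR -- pick your own based on customer requirements.
vpc = ipaddress.ip_network("10.0.0.0/16")

# All possible /24 subnets inside this VPC.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))               # 256 -> plenty of room for growth
print(subnets[0])                 # 10.0.0.0/24
print(subnets[0].num_addresses)   # 256 addresses (note: AWS reserves 5 per subnet)
```

Running this kind of check before clicking "Create VPC" saves you from the delete-and-recreate cycle.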

The instances launched in a Private subnet are not reachable from the internet directly. In case you need to change something on these private instances, you will have to make use of a Bastion Host (or Jumpbox). A Bastion Host is just a small machine (Windows/Linux) which is launched in a Public subnet (and given a public IP). You first log in (via RDP/SSH) to this machine and then connect to the machines in the Private subnets via their private IPs. You can learn the same with a demo video here ---

A VPC exists at the region level; a subnet exists at the AZ level. The following entities exist within a VPC:

  • Security Group (VPC level across subnets in that VPC) (learn SG here)
  • NACL (VPC level across subnets in that VPC)
  • ENI (at a subnet level) (learn ENI here)
At a minimum, you should have your subnets in 2 different AZs to ensure high availability. Within a VPC:- 
  1. A Security Group can be attached to multiple instances 
  2. An instance can have multiple Security Groups attached to it
  3. A subnet can have only one NACL attached to it
  4. An NACL can be attached to multiple subnets
Every VPC is an isolated network and 2 VPCs cannot talk to each other (except via Public network). If you want to enable communication between 2 VPCs over Amazon Private network, use option of VPC Peering ---

Happy Learning AWS. Please do share this post with your friends :) 

Based on your request

Hello Friends, 

We had a wonderful Live Session last Sunday on the SysOps case study, and based on your requests, I have created the necessary pages on the blog. 

  1. This page allows you to share your AWS Certification experience. So, please go ahead and help others. 
  2. This page gives you a chance to talk about KnowledgeIndia and let others know the way it helps you.
I am sure you will take out some time and contribute here. Hoping to bring a lot more tutorials very soon. 

Also, the calendar is updated. Please see it on the right side and join us on the coming weekend. 

Happy learning AWS, practice more and more !!!

AWS Practical Exercises -- 001

The following exercises are given in order to test your skills and understanding. We will try to set up an environment and cover things along the way. 

  1. In Oregon, create a VPC with CIDR
  2. Divide this VPC into 6 subnets across 2 AZs (e.g. a, b)
  3. Make 2 subnets as Public (namely 1-a, 1-b)
  4. Make 2 subnets as Private with outbound internet (namely 2-a, 2-b) 
  5. Make 2 subnets as Private with no outbound internet (namely 3-a, 3-b)
  6. Create a Public Classic ELB in 1-a and 1-b. It should accept traffic on port 80 from ANYWHERE. Create health checks for the instances in 2-a and 2-b.
  7. Create 2 Linux/Windows instances in 2-a and 2-b with web-server installed (Apache/IIS). These instances should accept traffic on port 80 only from ELB. Register these instances with the ELB.
  8. Create a multi-AZ MySQL RDS in 3-a and 3-b. This DB should accept traffic only on port 3306 from instances in 2-a and 2-b.
  9. Create a Jumpbox / Bastion host in 1-a or 1-b and verify the above connectivity.
  10. Ensure that Security Groups and NACLs are created properly. 
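Steps 1-5 above can be planned out with a few lines of Python before touching the console. The 10.0.0.0/16 CIDR and the Oregon AZ names here are example choices, since the exercise leaves the exact range to you.

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # example CIDR for step 1
subnets = list(vpc.subnets(new_prefix=24))  # carve the VPC into /24 blocks

tiers = ["public", "private-with-internet", "private-isolated"]
azs = ["us-west-2a", "us-west-2b"]          # 2 AZs in Oregon

# Assign one /24 per (tier, AZ) pair -> 6 subnets total (steps 2-5).
plan = {}
index = 0
for tier in tiers:
    for az in azs:
        plan[f"{tier}/{az}"] = str(subnets[index])
        index += 1

for name, cidr in sorted(plan.items()):
    print(name, cidr)
```

With a plan like this written down, creating the subnets in the console (or via the CLI) becomes a mechanical step.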
RESOURCES for help:

Getting started with Amazon EC2 (AWS Compute Service)

EC2 (Elastic Compute Cloud) is the equivalent of on-premises servers. There are many attributes we need to provide while creating an EC2 instance, e.g. instance type, AMI id, subnet id, security group etc. This tutorial explains the above concepts in detail and shows you how to launch an EC2 instance in the default VPC. (Click here to learn how to create a custom VPC)

Tenancy is a critical attribute to understand in case of EC2 instances. There are 3 tenancy models available:

  1. Shared (or Default)
  2. Dedicated Instances
  3. Dedicated Host
Many learners remain confused about the tenancy models of EC2. Learn them quickly in this video and understand when to use which tenancy model. Please do read the description of the video. It is important to understand tenancy in order to satisfy any compliance standards your organisation might require.

An EC2 instance will always have a private IP address. In addition, it can be allocated a public IP address as well. But the public IP changes if the instance is stopped and started (it does not change on reboot, though). If you want a public IP which does not change on every stop & start, you can make use of an Elastic IP. An Elastic IP costs you only when it is not attached to an instance, or when it is attached to a stopped instance. To understand clearly the difference between the 3 types of IPs on AWS, please watch this video.

Along with an EC2 instance, an Elastic Network Interface (ENI) gets created automatically (this is called the primary ENI of the instance). An ENI is a virtualized form of a Network Interface Card (NIC). Every ENI has a private IP, which is called the primary IP of that ENI. You can add more secondary IPs to an ENI, and you can attach multiple ENIs to an EC2 instance as well. The primary ENI can never be detached from an EC2 instance. You can learn how to work with ENIs in the following video:

Finally, in terms of EC2, you should understand the different pricing models available. There are 6 pricing models for EC2 instances, and understanding them saves you from future bill-related surprises. All 6 models are explained here concisely.

When it comes to EC2 reservations, you need to learn the different options which are available. AWS offers multiple options to suit your needs and save money for you. The following video explains, with examples, which type of reservation you should take for a given scenario.


AWS announced a few new features for EC2 recently; you can understand those in a quick 10-15 minutes below.

Happy learning AWS !!! Please comment below with any doubts. Don't forget to SHARE this URL with your friends. 

SysOps on AWS - Case Study ASOP001 - April 09, 2017

We shall be meeting this Sunday (April 09, 2017) via a Live YouTube Session (link here) to discuss the SysOps on AWS case study. Please see the case study below and ensure you join the broadcast in time.

Your organization has got the final architecture for a new upcoming web solution. Read through the following and implement the same on AWS.
  1. The web solution makes use of and SQL Server. 
  2. It also has lot of static assets e.g. images, audio, video etc. 
  3. Region of choice is Oregon (US). 
  4. Implement High-availability with auto-scaling. 
  5. Instance types for Web-server to be m4.large with 200GB EBS SSD volumes. 
  6. Under normal conditions, 4 servers are enough. Implement auto-scaling to add more servers when CPU utilization is more than 80% for 15 mins. Maximum number of servers could be 8. 
  7. Make use of CDN (CloudFront) to distribute the static content across the world. 
  8. Create different IAM users and groups - 
    • Group1: Have access to start/stop EC2 instances and reboot RDS instances. 
    • Group2: Have full rights on S3. Enable detailed CloudWatch monitoring, create CloudWatch alarms on these instances, and subscribe them to the respective organization DLs. 
  9. Set up the VPC to achieve the above use-case. In the created VPC, keep EC2 and RDS private; the ELB would be public-facing. Set up Security Groups and NACLs as per best practices.
URL to join:

Architecting on AWS - Case Study ASA001 - April 08, 2017

We shall be meeting this Saturday (April 08, 2017) via a Live YouTube Session (link here) to discuss the Architecting on AWS case study. Please see the case study below and ensure you join the broadcast in time. 

A customer has given the following problem in an RFP. Create a solution for the customer with the applicable AWS services and explain its advantages. 
  • Company ABC is currently operating in USA for the past 10 years with all infrastructure in-house. 
  • They want to move their existing JAVA based website to Cloud now, because of increasing traffic. 
  • Their HQ is located at New Jersey. They receive traffic from all over the world. 
  • They would want to make use of existing Microsoft Active Directory (on-prem) to validate their employees on the website. 
  • Also, they would want the solution to be flexible enough to handle weekend spikes in traffic. 
  • In addition to this, they want to have an ETL server (Informatica) on cloud and the results of ETL should be shown on Tableau. These dashboards would be embedded on their website. 
  • Recommend a complete architecture for this RFP along with complete infrastructure details and pricing. 
  • Also, suggest to the customer different pricing strategies to handle the DEV, TEST and PROD environments for the above setup on AWS.
URL to join:

Welcome to Learning AWS

I have talked to so many IT professionals who want to get started and learn AWS. There are many reasons behind this:
  • Professional growth
  • Hike needed :) 
  • Craving to learn any new platform
  • Being in love with AWS (because of its feature maturity)
You may belong to any of the above, but if you are serious about learning AWS then it is very important to understand that it is vast, and you will have to spend time and energy to learn it systematically.

AWS is really good in terms of providing wonderful documentation and Free Tier benefits (on a new account, for 12 months) so that you can learn the platform without spending any money. I suggest watching this video to understand the same, and visiting the AWS FREE TIER website to read the finest details.

Once you have created your AWS account and are ready to learn further, please choose a track which you will follow (in case you also want to get certified). It is perfectly fine to learn the stuff and not do any certification, but most professionals do it as it gives them visibility during a job change. The following video helps you in choosing a track based on your skills/role:

After this, you should ideally understand the services which form the building blocks of AWS, and the additional services required for your AWS certification track. This video explains the same in 10 minutes.

SUBSCRIBE to this blog and get wonderful content on AWS. Happy learning !!!

Selected videos!