Upcoming LIVE AWS Trainings in 2017

Hello friends, 

Continuing with the LIVE training efforts (which help participants learn continuously with practicals and get their doubts clarified in a customized manner), I am happy to announce the upcoming courses for 2017.

September:

In September, we will be conducting our last ASSOCIATE-level training. So if you want to learn the AWS Solutions Architect or SysOps Administrator track, please join this session. It will be held on September 2, 3, 9, 10, 16, 17, 23 & 24: a complete 32-hour course with lots of practical labs, demos, industry examples and discussions. It follows the same pattern as the July and August batches. Read the complete details for this course here.

October:

October brings the much-awaited Amazon Redshift course. This is the right course for people with an ETL/BI background. It will be a 24-hour course running on October 14, 15, 21, 22, 28 & 29, covering in detail creating and maintaining Redshift clusters, designing Redshift schemas, encoding & distribution techniques, tuning, management, etc.

November:

November is going to be a very exciting month, as we will be running the AWS Solutions Architect - Professional track. This will be a 24-hour course held on weekends at convenient timings on November 4, 5, 11, 12, 18 & 19. I am sure many of you will have completed the AWS Associate level and achieved certification by then. This course is going to involve a lot of discussion-oriented teaching. I will probably do a small screening before admitting a participant to this class :) Don't WORRY !!

December:

In the last month of the year, we will target finishing the next PRO-level course: DevOps Engineer. I highly recommend that you complete SysOps (which is a requirement) & Solutions Architect before attending this. It is going to be a 32-hour course held on December 2, 3, 9, 10, 16, 17, 23 & 24.

This post is intended to give you a heads-up so you can plan your time if you are interested in learning any of the above tracks. There will be detailed posts about each upcoming course before its start date.

Please comment below with your email to proceed with the next steps of registration.

AWS Live Training: Solutions Architect + SysOps Administrator (32 hours) | Sep 2017

Hello friends, 

As mentioned in my previous post, this is the announcement for the last AWS Associate-level training. It will be conducted in September, for 32 hours, LIVE via WebEx.



It covers 2 Associate tracks - Solutions Architect & SysOps Administrator. I am organizing these classes so that our serious learners get more value for their investment and can move forward comparatively faster. It also gives them comparatively more access to ask questions during class.

Duration: 32 hours (via WebEx)
Timing: 7 to 11 PM (Indian Standard Time)
Dates: September 2, 3, 9, 10, 16, 17, 23 & 24 (4 hours each day), all weekends :)
Cost: INR 19,999/- (or equivalent in USD)

Please see the agenda below -- 


View our blog for free learning videos and articles -- https://aws-tutorials.blogspot.com/
 
You can comment below, send me a direct message on LinkedIn, or email me to block your seat and receive the next steps. SHARE this post with a friend who is looking to learn AWS the proper way :)
 

Learning AWS with KnowledgeIndia -- An IMPORTANT post

Hello friends, 

First of all, I want to thank all the viewers who have shared our videos and spread the word about the YouTube channel. We just crossed 4,000 subscribers. This number is not very big compared to the appreciation you all write in the comments.

Here are few important things for our serious learners:
  1. The blog is the best medium for combining different forms of content (video, audio, text, etc.) in a logical order. Going forward, you will see more activity on the blog, and I recommend everyone learn the topics from the posts there. So please SUBSCRIBE to the blog now (see the right side) and complete the verification once the email arrives in your mailbox. Check SPAM as well.
  2. If you are subscribed to the YouTube channel, that's good. But I would also request you to click the BELL icon so that you receive email alerts for all new videos. An example is shown here --
  3. Facebook, Twitter & LinkedIn are great platforms for sharing smaller things, like a good article or a really good AWS post (with lots of examples). I will be sharing those items on these social platforms. Please connect on these - links are given at the top.
  4. Many viewers post a lot of custom doubts in the YouTube comments. I spend a considerable amount of time answering them all. Sadly, once a query gets answered, I often don't even get an acknowledgement. May I request that you SHARE and LIKE if something is helpful to you.
  5. I am also thinking of setting up a FORUM where we could all discuss and comment. Suggestions welcome!!
  6. Please look at the AWS video playlists; the videos are arranged in a logical order.
  7. Before posting doubts/questions, kindly look at the relevant video/blog post. Many times, your question will already be answered in the comments (asked previously by someone else).
  8. I try to cover the concepts and explain with examples, but I request you NOT to stop there. Go ahead and read the AWS documentation for the specific topic and go deeper.
  9. And lastly, if you have benefited from KnowledgeIndia videos, I request you to write a small testimonial on LinkedIn/Facebook (both links are given at the top).
All the best, guys. Happy learning AWS.
 

AWS Live Training: Solutions Architect + SysOps Administrator (32 hours) | August 2017

Hello all,

As the successful execution of the July program continues, I would like to announce the schedule for August's AWS training.


It is going to be 32 hours and covers 2 Associate tracks - Solutions Architect & SysOps Administrator. I am organizing these classes so that our serious learners get more value for their investment and can move forward comparatively faster. It also gives them comparatively more access to ask questions during class.

I want to stress a point again here: I am not a supporter of gaining certification by preparing specifically for it. Rather, I recommend learning the platform well enough that you don't have to prepare for the certification - you should be able to clear it with your knowledge and experience alone. The upcoming training classes are going to be filled with demos and discussions. Please see the agenda below.



This time we are organising it at a time which suits the US, India, the Gulf & Europe.

Duration: 32 hours (via WebEx)
Timing: 7 to 11 PM (IST)
Dates: August 5, 6, 12, 13, 19, 20, 26 & 27 (4 hours each day), all weekends
Cost: INR 19,999/- (or equivalent in USD)

View our blog for free learning videos and articles -- https://aws-tutorials.blogspot.com/

You can send me a direct message on LinkedIn or email me to block your seat and receive the next steps. SHARE this post with a friend who is looking to learn AWS the proper way :)


AWS Solutions Architect - Associate | Online Training

Hi all,

I hope your AWS learning is going well. I am happy to read the good comments from you all. I recently created the much-requested Solutions Architect - Associate playlist.

Here is an update about the upcoming training classes. I am organizing these classes so that serious learners get more value and can move forward comparatively faster. It also gives them comparatively more access to ask questions.


We are organizing SysOps and Solutions Architect classes in July 2017. Earlier, the plan was to run the SysOps course online and Solutions Architect as classroom training in Bangalore (India). But I have received many requests (because of location constraints) to run Solutions Architect online as well.



I want to stress a point again here: I am not a supporter of gaining certification by preparing specifically for it. Rather, I recommend learning the platform well enough that you don't have to prepare for the certification - you should be able to clear it with your knowledge and experience alone.

Upcoming training classes are going to be filled with demos and discussions. Of course, you are free to go through the video tutorials which are available on our AWS YouTube channel before attending the training.

Please watch the following video to understand the overlap between the Solutions Architect & SysOps tracks --

Hence, SysOps will be 24 hours in total and the SA track will be 32 hours. You can see the complete details for SysOps here.

PLEASE SHARE this POST with your friends on LinkedIn/Facebook/Twitter.

For the Solutions Architect, here are the details:
Duration: 32 hours
Timing: 7 to 11 AM (IST)
Dates: July 8, 9, 15, 16, 22, 23, 29 & 30 (4 hours each day), all weekends
Cost: Rs. 19,999/-

Agenda is given below --  

AWS SysOps Administrator - Associate | Online Training in July 2017

Hi guys,

I hope you have been learning from the YouTube videos so far. I am organising an online training for the AWS SysOps Administrator - Associate level. Please let me know if you are interested.


Online Training starting in July 2017.
Duration: 24 hours
Timing: 7 to 11 AM (IST)
Dates: July 8, 9, 15, 16, 22 & 23 (4 hours each day), all weekends
Cost: Rs. 16,999/-

The SysOps certification is the most practical-oriented, and hence you will find lots of #labs and #demos.

The agenda is attached below.
Please let me know if you are interested to join via comment or direct message.


AWS Solutions Architect - Associate | Classroom Training at Bangalore


The details of this training have been updated. Please click here to read the current details.

Hi guys,

I hope you have been learning from the YouTube videos so far. I am organising a 4-day classroom training in Bangalore for the AWS Solutions Architect - Associate level. Please let me know if you are interested.


Duration - 4 days
Track - AWS Solutions Architect - Associate
Location - Bangalore (July 27th to 30th) #datesUpdated
Cost - Rs. 22,999/-
The course will have lots of #labs, case studies and industry examples.

Please let me know via comment or direct message if you are interested to join. The agenda is given below.



AWS ENI - Query from Viewers -- 005

Hi guys, 

I received the following doubts from one of our viewers on this video --


So, by adding multiple ENIs to AN INSTANCE:
1. It's still a single point of failure: if the instance fails, all the attached ENIs fail too, right?
ANS: Yes, if the instance fails, all the ENIs attached to that instance become useless. Multiple ENIs do not increase the availability or bandwidth of the machine; rather, they exist for isolation purposes.

2. Normally in ON-PREM INFRA, we used multiple NICs on a web server to avoid a single point of failure in case one NIC CARD fails. Whereas AWS is a virtual world and I assume it's a managed service, right? We don't have to worry about the primary NIC failing at all?
ANS: On AWS, we handle that scenario via an Auto Scaling Group, so that if a machine becomes unreachable, another instance takes its place.

3. Why should we disable the SOURCE/DESTINATION CHECK flag for an ENI? I think we disable it only when we create a NAT INSTANCE, right?
ANS: That's correct. Other than for NAT, you do not have to disable it.

4. The primary ENI can't be detached because that's the one which gives the instance its public and/or private IP, right?
ANS: Not so. It is more of a restriction from the AWS implementation perspective. In future, they might start allowing it (just as you can detach the ROOT EBS volume of a stopped machine).

5. So the best use case for multi-ENI would be: (a) a single large instance with multiple ENIs can be part of both a private and a public subnet; (b) having said that, I can have a web server facing the internet on the public ENI and the database on the same instance on the private ENI, and configure the security groups accordingly.
ANS: Sorry, incorrect! When you create an ENI, you can see that its scope is a subnet (the same way the scope of an EC2 instance is a subnet). Hence, an EC2 instance and its ENI live in only one subnet (not two).
You can use multiple ENIs to give 2 different IPs to 2 different user groups. E.g. on one EC2 instance, open port 8080 on IP1 and port 22 on IP2, and attach different security groups to the two ENIs. The users would never know they are accessing the same machine.

6. So with multiple NICs, can I have multiple websites on the same server, with each NIC attached to one website?
ANS: You can have multiple websites on one machine even without multiple NICs. Web servers (like IIS) support this natively.

How to build a Highly Available environment on AWS

There are a few important things to take care of regarding High Availability.
  • Always run your instances in 2 Availability Zones (that's the minimum).
  • If you want 99.99% availability, then 2 AZs within a region are enough.
  • For 99.999% availability, AWS recommends implementing your infrastructure in 2 regions.
We have talked about the usage of Route53 in detail in the following video.


Route53 supports 5 types of routing:
  1. Simple routing 
  2. Weighted round robin 
  3. Latency-based routing 
  4. Health check and DNS failover 
  5. Geo-location routing 
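To make weighted round robin concrete, here is a minimal sketch of how a weighted choice distributes traffic. The endpoint names and weights are hypothetical, and real Route53 weights live in DNS record sets rather than application code:

```python
import random

# Hypothetical weighted records: endpoint -> weight (Route53 allows 0-255)
RECORDS = {"server-a.example.com": 3, "server-b.example.com": 1}

def pick_endpoint(records, r=None):
    """Pick an endpoint with probability weight / total_weight."""
    total = sum(records.values())
    point = r if r is not None else random.uniform(0, total)
    upto = 0
    for endpoint, weight in records.items():
        upto += weight
        if point < upto:
            return endpoint
    return endpoint  # boundary fallback: return the last endpoint

# With weights 3:1, server-a should receive roughly 75% of lookups.
```

The optional `r` argument just makes the choice deterministic for testing; in real use the random draw decides which record answers the query.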

AWS RDS & Read Replica -- Query from Viewers - 004

Friends,
I got a few questions related to RDS on this video tutorial --


1. When we create a read replica, can it be a different flavor from the primary? If my primary is MySQL, can one of the read replicas be Aurora? Is it good practice to have heterogeneous DBs like that?
ANS: In general, it is not possible to use a different flavor. Only for RDS MySQL do they allow creating an Aurora read replica (this is because AWS wants Aurora adoption to be broader and easier). Also, Aurora is a well-engineered version of MySQL. Good practice? - NO.

2. The whole intention of creating a read replica is to divert or spread the read traffic evenly. After creating a read replica, does Amazon take care of balancing the load, or should we connect to the respective read replicas from our application? I created one read replica but didn't know where to connect it.
ANS: When you create a read replica, you get an endpoint for it. You will have to use this endpoint in your application to send all the READ traffic there. If you have multiple read replicas from the same RDS instance, you can place a custom load balancer in front of them as well.
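As a minimal sketch of using those endpoints, assuming hypothetical endpoint names (in practice you copy them from the RDS console), an application can route writes to the primary and rotate reads across replica endpoints:

```python
# Application-side read/write splitting across RDS endpoints (a sketch;
# the endpoint names below are hypothetical).
import itertools

PRIMARY = "mydb.abc123.us-east-1.rds.amazonaws.com"
REPLICAS = [
    "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.abc123.us-east-1.rds.amazonaws.com",
]
_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(query):
    """Send writes to the primary; round-robin reads across replicas."""
    is_read = query.lstrip().upper().startswith("SELECT")
    return next(_replica_cycle) if is_read else PRIMARY
```

A real application would do this inside its DB connection layer (or hand the job to a proxy/load balancer, as mentioned above), but the routing decision is exactly this: endpoint choice based on whether the statement reads or writes.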

3. Why is it mandatory to enable backup for read replicas?
ANS: No backup is required for read replicas. All read replica DB instances are created as Single-AZ deployments with backups disabled. All other DB instance attributes (including DB security groups and DB parameter groups) are inherited from the source DB instance.


Important - The source DB instance must have backup retention enabled.

4. While creating an RDS DB, there is an option which says "Publicly accessible" (yes or no). Normally we keep the DB in a private subnet, right? Can you help me with a scenario where we would expose the DB to the public?
ANS: The "Publicly Accessible" attribute decides whether RDS is reachable via a PUBLIC IP or not. There could be a case where you want to keep a DB in a public subnet and make it publicly accessible so that one of your remote office locations can upload data (suppose there is no VPN). In this case, to keep RDS secure, you would allow only one PUBLIC IP (that of your remote office) in the security group of the RDS instance.
_________________________________________________
If you have benefited from KnowledgeIndia, please help us by SHARING this blog post on Facebook, LinkedIn and Twitter. Thanks a lot. Happy learning AWS !!!

Storage Options on AWS - S3, EBS, Instance Storage & EFS

Friends,
In this post, I want to talk about the different storage options on AWS. AWS gives us the following types of storage. Let's first understand the quick differences between them.


  1. EBS: Think of it like the disk of your server. It is block-level storage and hence most suitable for very fast simultaneous read and write operations, e.g. OS, databases, etc. EBS volumes are connected to EC2 instances over the network (like a SAN). An EBS volume can be attached to only one EC2 instance at a given time.


  2. Instance Storage: This is again a block-level disk for your EC2 instance, but it is local to the physical host. These are non-persistent disks, so any data on them is lost if the instance is stopped and started. Data is not lost on an EC2 reboot.


  3. EFS: This is available as an equivalent of NFS. You can mount an EFS file system on multiple EC2 instances and use it like a shared disk. It currently works only with Linux instances, not Windows.

  4. Amazon S3: This is highly available object-level storage. Objects (files) are accessed via their keys. It is suitable for storing files which an application accesses via AWS APIs. Do not use it as a disk or mount it on EC2 instances. In the following videos, you can learn about the various features offered by S3, along with using S3 commands via the #AWS #CLI.


  5. Glacier: This is the archival service from AWS. Instead of archiving to tapes (on-premises), you may choose to archive to Glacier. It is the cheapest form of storage. A video on Glacier is coming soon.
SUBSCRIBE to our blog by entering and verifying your email on the right side. Also, please join our LinkedIn group - https://www.linkedin.com/groups/10389754/

SHARE the article with your friends if it helped you understand the different storage options on AWS.

Understanding ELB in depth - Query from Viewers - 003

Hi Guys,
I have received some good questions on ELB for the given tutorial --


1. You had mentioned that ELB internally creates instances to manage traffic. To expand a little on this: when ELB creates these internal instances (I assume they are not visible to us), they consume private IPs too, right? Does an internal instance get created per Availability Zone or per instance? Because I attach an ELB to instances, not to an AZ, right? I assume it's something like an agent which gets installed per instance and polls the load balancer? Please correct my understanding.
ANS: Yes, those instances are not visible to users, as ELB is a managed service. They do use ENIs (and hence private IPs) from the respective subnets. ELB creates one instance in every subnet you choose while creating the ELB. There is no agent running on your instances (e.g. your web server). Also, you register your instances with an ELB, which tells the ELB that it can send traffic to those instances.

2. Under Load Balancers / Instances, down the page under Availability Zones, the ELB had created 2 AZs, whereas I had all 3 of my instances in the same AZ. Though I can edit it and remove the other AZ, why does it create another AZ reference when all my instances are in only one AZ? Please assist.
ANS: While creating an ELB, you can clearly choose which subnets you want it to handle. It is recommended that you give 2 subnets in 2 different AZs (based on the principle of HA). The ELB launches an internal instance in each subnet you choose at creation time.

3. Under health check, for the ping target I gave the path /index.html. Let's assume I have 2 instances and I want index.html to reside in a different location on each: on the first it's under /var/www/html and on the second under /var/www. Does the ping target have the intelligence to check the file irrespective of its location?
ANS: Given that all the instances behind an ELB are generally kept identical, the path of the file has to be the same. If you have a specific use-case for the above scenario, it could be achieved using an Application Load Balancer, but with a Classic ELB it has to be the same path.

KEEP practicing, keep LEARNING !!!
 

Understanding AWS Free Tier - Query from Viewers - 002

Most of us are pretty excited to learn and work on AWS, and thankfully there is a Free Tier available to practice on. But many learners end up getting a bill even after being cautious.

I have picked the following query from my AWS YouTube channel. I have received many similar queries earlier as well.

I am following all your videos from the beginning of the AWS SysOps tutorials, and the way you explain is fantastic. I have a small doubt about the Free Tier account: 750 hours of EC2 instance... how is that calculated? Is the limit one instance for 750 hours, or many instances multiplied by the number of hours used? Actually, I used 2 instances while practicing, with 30 GB of EBS each, and I can see an amount in dollars (month-to-date) in the billing console. Is that an amount I need to pay? I mean, have I crossed the Free Tier limits? Kindly clarify this for me please... I tried contacting AWS support, but that also needs a subscription, it seems; everywhere they are trying to grab from us :( Thanks in advance.
___________________

ANS: In order to understand the above scenario, consider the following points --
1. In the case of EC2, there are multiple cost factors - EC2 instance charges, EBS charges, data-out charges, Elastic IP charges, etc.
2. Read the Free Tier FAQ as well.

As explained in the FAQ, you can use 750 hours of Linux AND 750 hours of Windows EC2. You can also have 2 Linux EC2 instances running 375 hours each; AWS looks at the aggregate hours only.
But in the above case, the cost would have been incurred because of EBS usage. For EBS, the Free Tier gives "30 GB of Amazon Elastic Block Storage in any combination of General Purpose (SSD) or Magnetic".

As the user has run 2 Windows instances, the EBS usage would have gone beyond the Free Tier limit (each Windows EC2 takes a minimum of 30 GB of EBS). Also, remember that an EBS volume continues to cost you from creation until it is deleted (because your data is consuming storage in the cloud, right!!). An EC2 instance costs you only while it is running; there is no instance charge when it is in the stopped state.
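The aggregate check described above can be sketched as simple arithmetic. The limits are the 2017 Free Tier numbers quoted in this post, and the function name is mine:

```python
# Free Tier check for the scenario above: 2 Windows instances, 30 GB EBS each.
FREE_EC2_HOURS = 750   # per month (Linux and Windows counted separately)
FREE_EBS_GB = 30       # total GB of General Purpose (SSD) or Magnetic

def over_free_tier(instance_hours, ebs_gb_total):
    """Return which dimensions exceeded the Free Tier, and by how much."""
    overages = {}
    if instance_hours > FREE_EC2_HOURS:
        overages["ec2_hours"] = instance_hours - FREE_EC2_HOURS
    if ebs_gb_total > FREE_EBS_GB:
        overages["ebs_gb"] = ebs_gb_total - FREE_EBS_GB
    return overages

# Two Windows instances at 300 hours each stay under the 750-hour cap,
# but 2 x 30 GB of EBS is double the 30 GB storage allowance.
print(over_free_tier(instance_hours=600, ebs_gb_total=60))  # {'ebs_gb': 30}
```

This is exactly why the viewer saw a charge: the hours were fine, the storage was not.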

Happy Learning AWS !!! Please SHARE with your friends on Facebook and LinkedIn.

Setting up your new AWS account - Important things to do

When we set up a new AWS account, we should do these important things:

1. Enable Billing for IAM users
By default, only the ROOT user (the one who signed up for the AWS account with their email and credit card) can access the Billing-related sections. Once this setting is enabled, you can give Billing access to IAM users as well. Remember, not all users get default access to the Billing section after this setting; only the IAM users whom you allow (via IAM policies) will be able to access it.


2. Activate MFA for Root user / other privileged IAM users
At the top, click on your account name and go to "My Security Credentials". Read here if you have doubts. You can install the Google Authenticator app on your smartphone.


3. Enable Detailed Billing
You can enable detailed billing reports (the 3rd option in the picture) and receive them in an S3 bucket. You should enable this, as it helps you break down cost according to the tags on various resources. These files are updated in your S3 bucket every 20-30 minutes and give you a very granular breakup of your spending.


4. Enable Cost Allocation Tags
E.g. you create a tag called Environment with values DEV, QA, UAT and PROD. Once you convert a tag to a Cost Allocation tag, it starts appearing in the detailed billing file as an additional column (user:Environment), and you can easily filter the cost in EXCEL for the different values. Hence, no money leakage; you can easily attribute who spent how much in a month.
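As an illustration of what that extra column enables, here is a sketch using hypothetical rows shaped like the detailed billing file with a user:Environment cost-allocation column; the per-environment breakup is a simple group-by:

```python
# Hypothetical rows mimicking the detailed billing report's tag column.
from collections import defaultdict

rows = [
    {"ResourceId": "i-0001", "user:Environment": "DEV",  "Cost": 12.50},
    {"ResourceId": "i-0002", "user:Environment": "PROD", "Cost": 88.00},
    {"ResourceId": "vol-01", "user:Environment": "DEV",  "Cost": 3.00},
]

def cost_by_tag(rows, tag="user:Environment"):
    """Sum cost per tag value, the same filter you would do in EXCEL."""
    totals = defaultdict(float)
    for row in rows:
        totals[row.get(tag, "untagged")] += row["Cost"]
    return dict(totals)

print(cost_by_tag(rows))  # {'DEV': 15.5, 'PROD': 88.0}
```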

5. Free Tier Benefits on a new account
AWS gives some amount of free usage on every new account. Read the conditions in detail here before you start using it. Watch this video to learn a few more things quickly.



Happy Learning AWS !! Like and SHARE this please. 

Query from KnowledgeIndia Viewers - 001



I got the following questions from one of our viewers and thought of answering them here --

Please SUBSCRIBE to this blog by entering and VERIFYING your email on right side.
1. Can we limit versioning? Meaning: versioning, if enabled, keeps all the modified versions of a file. Can I limit it to the last 3 modifications?
ANS: You can limit it by time, e.g. keep only the versions from the last 3 days, and move older versions to different storage classes with the help of Lifecycle Policies. Alternatively, you can write a custom Lambda function which is invoked every time an object is created; it can keep the last 3 versions of that object and delete anything older.
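A minimal sketch of the version-pruning logic such a Lambda function could run; the data and function name are illustrative, and the actual S3 list/delete API calls are omitted:

```python
# Each version is (version_id, last_modified). Deleting the returned ids
# would be a boto3 delete_object(..., VersionId=...) call, omitted here.

def versions_to_delete(versions, keep=3):
    """Return the version ids beyond the newest `keep` versions."""
    newest_first = sorted(versions, key=lambda v: v[1], reverse=True)
    return [vid for vid, _ in newest_first[keep:]]

history = [("v1", "2017-06-01"), ("v2", "2017-06-05"),
           ("v3", "2017-06-09"), ("v4", "2017-06-12"), ("v5", "2017-06-15")]
print(versions_to_delete(history))  # ['v2', 'v1']
```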




2. When I choose S3 from the console, it doesn't let me choose a region at the top, whereas after going into S3 it allows me to choose a region. Why is that?
ANS: Many people have this confusion. In the case of S3, you see all your buckets (irrespective of their region) in one UI; that's why you cannot choose a region at the top. But an S3 bucket belongs to one region only, which is why you choose the region when you are creating a bucket.
3. What is the need to mandate versioning for cross-region replication? Any theory behind it?
ANS: A few things are done based on engineering choices made by the product development team; I guess this is one of them. A very good read is this page -- http://docs.aws.amazon.com/AmazonS3/latest/dev/crr-what-is-isnot-replicated.html



4. If I give a user named trinity full S3 admin access in IAM, and then I create a bucket named Skynet intended only for a few users, will trinity still have full access over Skynet?
ANS: How are you ensuring that Skynet is created only for a few users? When you create a resource, you don't create it for a user; rather, you use IAM to control which users can access that resource. In the above case, trinity has full S3 access and hence she would be able to access Skynet. But you can explore denying her access at the Bucket Policy level (defined on the Skynet bucket).

5. After enabling cross-region replication, if I delete content in the source or destination, that deletion is not getting replicated; I can still see the file intact in my destination bucket. In my case, though I replicated only part of the files from the source (replication was successful), when I deleted the whole source bucket, the files were still intact in my destination.

ANS: Yes, that is the documented behavior of S3 cross-region replication: it copies new objects one way, and deletions of specific object versions (or deleting the source bucket itself) are not replicated to the destination.

What is VPC? Learn from scratch

VPC (Virtual Private Cloud) is an isolated area carved out on AWS for you. You control the size of this area and the private IP range used in it. You also control the behavior of the different sections of the VPC (which are called subnets).

If you don't have time to read, just watch this compiled video playlist.

A subnet can be public or private in nature. You launch instances (EC2, RDS, Redshift, etc.) by choosing a subnet, and this decides the private IP of the instance. In addition, the network-level behavior is also decided by the subnet: in a public subnet, the instance is reachable from the internet (e.g. web servers); in a private subnet, the instance is not reachable from the internet (e.g. DB servers). All the instances in a VPC can talk to each other using private IPs.

With a new AWS account, you get a default VPC in every region. Use this only for initial practice and quick instance launches. For any customer POC / implementation, do not use the default VPC; rather, create a new VPC based on customer (or project) requirements. Watch this video to learn VPC creation from scratch --


Based on the above video, you can create a new VPC. While creating the VPC and subnets, take care with their sizing, as these cannot be modified once created (a VPC / subnet has to be deleted and re-created; there is no modification or extension). Talk to your customer, understand how many resources need to be placed in public and private subnets, and size them accordingly.

Instances launched in a private subnet are not reachable from the internet directly. If you need to change something on private instances, you will have to use a Bastion Host (or Jumpbox). A Bastion Host is just a small machine (Windows/Linux) launched in a public subnet (and given a public IP). We first log in (via RDP/SSH) to this machine and then connect to the machines in private subnets via their private IPs. You can learn the same with a demo video here ---


A VPC exists at the region level; a subnet exists at the AZ level. The following entities exist within a VPC:

  • Security Group (VPC level, across subnets in that VPC) (learn SG here)
  • NACL (VPC level, across subnets in that VPC)
  • ENI (at a subnet level) (learn ENI here)
At a minimum, you should have your subnets in 2 different AZs to ensure high availability. Within a VPC:
  1. A Security Group can be attached to multiple instances 
  2. An instance can have multiple Security Groups attached to it
  3. A subnet can have only one NACL attached to it
  4. A NACL can be attached to multiple subnets
Every VPC is an isolated network, and 2 VPCs cannot talk to each other (except via the public network). If you want to enable communication between 2 VPCs over the Amazon private network, use VPC Peering ---


Happy Learning AWS. Please do share this post with your friends :) 

Based on your request

Hello Friends, 

We had a wonderful live session last Sunday on a SysOps case study, and based on your requests, I have created the necessary pages on the blog.

  1. This page allows you to share your AWS certification experience. So please go ahead and help others.
  2. This page gives you a chance to talk about KnowledgeIndia and let others know how it helps you.
I am sure you will take out time and contribute here. Hoping to bring a lot more tutorials very soon.

Also, the calendar is updated. Please see on the right side and join us on the coming weekend. 

Happy learning AWS, practice more and more !!!

AWS Practical Exercises -- 001

The following exercises are given to test your skills and understanding. We will try to set up an environment and cover things along the way.

  1. In Oregon, create a VPC with CIDR 10.0.0.0/24
  2. Divide this VPC into 6 subnets across 2 AZs (e.g. a, b)
  3. Make 2 subnets as Public (namely 1-a, 1-b)
  4. Make 2 subnets as Private with outbound internet (namely 2-a, 2-b) 
  5. Make 2 subnets as Private with no outbound internet (namely 3-a, 3-b)
  6. Create a Public Classic ELB in 1-a and 1-b. It should accept traffic on port 80 from ANYWHERE. Create health checks for the instances in 2-a and 2-b.
  7. Create 2 Linux/Windows instances in 2-a and 2-b with web-server installed (Apache/IIS). These instances should accept traffic on port 80 only from ELB. Register these instances with the ELB.
  8. Create a multi-AZ MySQL RDS in 3-a and 3-b. This DB should accept traffic only on port 3306 from instances in 2-a and 2-b.
  9. Create a Jumpbox / Bastion host in 1-a or 1-b and verify the above connectivity.
  10. Ensure that Security Groups and NACLs are created properly. 
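For steps 1 and 2, a subnet plan can be sketched with Python's standard ipaddress module. Splitting the /24 into /27 blocks yields 8 equal subnets, of which the first 6 are used; the tier names below follow the exercise's numbering and are otherwise my own labels:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/24")
blocks = list(vpc.subnets(new_prefix=27))   # 8 x /27 (32 IPs each)
tiers = ["public-1", "private-nat-2", "private-isolated-3"]

# Pair each tier with AZ a and b, in order, against the first 6 blocks.
plan = {f"{tier}-{az}": str(cidr)
        for (tier, az), cidr in zip(
            [(t, az) for t in tiers for az in ("a", "b")], blocks)}

for name, cidr in plan.items():
    print(name, cidr)   # public-1-a 10.0.0.0/27, public-1-b 10.0.0.32/27, ...
```

The two leftover /27 blocks stay unallocated, which is useful headroom since subnets cannot be resized after creation.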
RESOURCES for help:

Getting started with Amazon EC2 (AWS Compute Service)

EC2 (Elastic Compute Cloud) is the equivalent of servers (on-premises). There are many attributes we need to provide while creating an EC2 instance, e.g. instance type, AMI ID, subnet ID, security group, etc. This tutorial explains the above concepts in detail and shows you how to launch an EC2 instance in the default VPC. (Click here to learn how to create a custom VPC.)


Tenancy is a critical attribute to understand in case of EC2 instances. There are 3 tenancy models available:

  1. Shared (or Default)
  2. Dedicated Instances
  3. Dedicated Host
Many learners remain confused about the tenancy models of EC2. Learn them quickly in this video and understand when to use which tenancy model. Please do read the description of the video. Understanding tenancy is important for satisfying any compliance standards your organisation might require.


An EC2 instance will always have a Private IP address. In addition to this, it can be allocated a Public IP address as well. But the Public IP changes if the instance is stopped and started. It does not change on reboot, though. If you want a Public IP that does not change on every stop & start, you can make use of an Elastic IP. An Elastic IP costs you only when it is not attached to an instance, or when it is attached to a stopped instance. To clearly understand the difference between the 3 types of IP addresses on AWS, please watch this video.
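A quick way to build intuition for private vs public addresses is to check them against the RFC 1918 private ranges, which is where a VPC's private IPs come from. A small sketch with Python's standard ipaddress module (the sample addresses are arbitrary examples, not real instances):

```python
import ipaddress

# RFC 1918 private ranges (VPC private IPs always come from these):
#   10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
samples = [
    "10.0.1.5",     # a typical VPC private IP
    "172.31.20.7",  # a default-VPC style private IP
    "54.200.10.1",  # a public-style address
]

for addr in samples:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

Elastic IPs and auto-assigned Public IPs both fall outside these private ranges; the difference is only in whether the address survives a stop/start.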


Along with an EC2 instance, an Elastic Network Interface (ENI) gets created automatically (this is called the primary ENI of the instance). An ENI is a virtualized form of a Network Interface Card (NIC). Every ENI has a Private IP, which is called the primary IP of that ENI. You can also add secondary IPs to an ENI, and you can attach multiple ENIs to an EC2 instance. The primary ENI can never be detached from an EC2 instance. You can learn how to work with ENIs in the following video:


Finally, in terms of EC2, you should understand the different pricing models available. There are 6 pricing models for EC2 instances, and understanding them saves you from future bill-related surprises. All 6 models are explained concisely here.
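The core trade-off between the models comes down to utilization. The sketch below uses made-up rates (these are illustrative numbers, not actual AWS prices) to show why a Reserved Instance only pays off above a certain utilization, since it is billed whether or not the instance runs:

```python
# Illustrative only -- made-up rates, NOT actual AWS prices.
ON_DEMAND_PER_HOUR = 0.10           # hypothetical On-Demand rate ($/hr)
RESERVED_EFFECTIVE_PER_HOUR = 0.06  # hypothetical 1-yr Reserved effective rate

HOURS_PER_MONTH = 730

def monthly_on_demand(utilization):
    """On-Demand bills only for the hours the instance actually runs."""
    return ON_DEMAND_PER_HOUR * HOURS_PER_MONTH * utilization

def monthly_reserved():
    """A reservation is billed for every hour, running or not."""
    return RESERVED_EFFECTIVE_PER_HOUR * HOURS_PER_MONTH

for util in (1.0, 0.75, 0.5):
    od, ri = monthly_on_demand(util), monthly_reserved()
    winner = "Reserved" if ri < od else "On-Demand"
    print(f"{util:.0%} utilization: On-Demand ${od:.2f} vs Reserved ${ri:.2f} -> {winner}")
# At 100% and 75% utilization Reserved wins; at 50% On-Demand is cheaper.
```

With these example rates the break-even point is 60% utilization; the same comparison logic applies to Spot and the other models once you plug in real prices.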


When it comes to EC2 reservations, you need to learn the different options that are available. AWS offers multiple options to suit your needs and save you money. The following video explains, with examples, which type of reservation you should take for a given scenario.


RECENT UPDATES

AWS announced a few new features for EC2 recently; you can learn about them in the quick 10-15 minute video below.




Happy learning AWS !!! Please comment below with any doubts. Don't forget to SHARE this URL with your friends. 

SysOps on AWS - Case Study ASOP001 - April 09, 2017

Friends, 
We shall be joining this Sunday (April 09, 2017) via a Live YouTube Session (link here) to discuss the SysOps on AWS case study. Please see the case study below and ensure you join the broadcast on time.

Your organization has got the final architecture for a new upcoming web solution. Read through the following and implement the same on AWS.
  1. The web solution makes use of ASP.net and SQL Server. 
  2. It also has a lot of static assets, e.g. images, audio, video, etc. 
  3. Region of choice is Oregon (US). 
  4. Implement High-availability with auto-scaling. 
  5. Instance types for Web-server to be m4.large with 200GB EBS SSD volumes. 
  6. Under normal conditions, 4 servers are enough. Implement auto-scaling to add more servers when CPU utilization is more than 80% for 15 mins. Maximum number of servers could be 8. 
  7. Make use of CDN (CloudFront) to distribute the static content across the world. 
  8. Create different IAM users and groups - 
    • Group1: Have access to start/stop EC2 instances and reboot RDS instances. 
    • Group2: Have full rights on S3. Enable Detailed CloudWatch monitoring, create CloudWatch alarms on these instances, and subscribe them to the respective organization DLs. 
  9. Set up the VPC to achieve the above use case. In the created VPC, keep EC2 and RDS private. The ELB would be public-facing. Set up Security Groups and NACLs as per best practices.
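The IAM portion of point 8 can be expressed as a policy document. Below is a minimal sketch of the Group1 policy (allow start/stop of EC2 instances and reboot of RDS instances); the broad "Resource": "*" is a simplification for the exercise, and in practice you would scope it down to specific instance ARNs:

```python
import json

# Sketch of an IAM policy for Group1: start/stop EC2, reboot RDS.
# "Resource": "*" is deliberately broad here; restrict it to real
# instance ARNs in an actual deployment.
group1_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["rds:RebootDBInstance"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(group1_policy, indent=2))
```

You would attach this document to the Group1 IAM group (via the console or the CLI); Group2's S3 full-access policy follows the same shape with "s3:*" as the action.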
URL to join:


Architecting on AWS - Case Study ASA001 - April 08, 2017

Friends, 
We shall be joining this Saturday (April 08, 2017) via a Live YouTube Session (link here) to discuss the Architecting on AWS case study. Please see the case study below and ensure you join the broadcast on time. 

A customer has given the following problem in an RFP. Create a solution for the customer with applicable AWS services and explain the advantages. 
  • Company ABC has been operating in the USA for the past 10 years with all infrastructure in-house. 
  • They now want to move their existing Java-based website to the Cloud because of increasing traffic. 
  • Their HQ is located in New Jersey. They receive traffic from all over the world. 
  • They would want to make use of existing Microsoft Active Directory (on-prem) to validate their employees on the website. 
  • Also, they would want the solution to be flexible enough to handle weekend spikes in traffic. 
  • In addition to this, they want to have an ETL server (Informatica) on cloud and the results of ETL should be shown on Tableau. These dashboards would be embedded on their website. 
  • Recommend a complete architecture for this RFP along with complete infrastructure details and pricing. 
  • Also, suggest to the customer different pricing strategies to handle the DEV, TEST and PROD environments for the above setup on AWS.
URL to join:

Welcome to Learning AWS

I have talked to so many IT professionals who want to get started and learn AWS. There are many reasons behind this:
  • Professional growth
  • Hike needed :) 
  • Craving to learn any new platform
  • Being in love with AWS (because of its feature maturity)
You may belong to any of the above, but if you are serious about learning AWS, then it is very important to understand that it is vast and you will have to spend time and energy to learn it systematically.

AWS is really good in terms of providing wonderful documentation and Free-tier benefits (under a new account, for 12 months) so that you can learn the platform without spending any money. I suggest watching this video to understand this, and visiting the AWS FREE TIER website to read up on the finest details.

Once you have created your AWS account and you are ready to learn further, please choose a track which you would follow (in case you also want to get certified). It is perfectly fine to learn the material and not do any certification, but most professionals do, as it gives them visibility during a job change. The following video helps you choose the track based on your skills/role:

After this, you should ideally understand the services that form the building blocks of AWS and the additional services required for your AWS certification track. This video explains these in 10 minutes.

SUBSCRIBE to this blog and get wonderful content on AWS. Happy learning !!!

Selected videos!