
Cloud Computing Notes

Unix:
UNIX is an operating system that was first developed in the late 1960s and has been
under constant development ever since. By operating system, we mean the suite of
programs that makes the computer work. It is a stable, multi-user, multi-tasking
system for servers, desktops and laptops.

Flavours of Unix:
● HP-UX.
● SunOS and Solaris.
● IRIX.
● Digital UNIX (formerly OSF/1)
● AIX.
● NeXTSTEP and OpenStep.
● SCO Unix.
● Linux.
And many more….
Flavours of Linux:
There are more than 600 Linux distributions.
Some well-known distributions are:
● Android
● Arch Linux
● CentOS
● Debian
● Elementary OS
● Fedora
● Gentoo Linux
● Kali Linux
● Linux Mint
● Ubuntu
● SUSE
● Red Hat
Unix Architecture:
The main concept that unites all the versions of Unix is the following four basics −
● Kernel − The kernel is the heart of the operating system. It interacts with the hardware and
performs most of the tasks like memory management, task scheduling and file management.
● Shell − The shell is the utility that processes your requests. When you type in a command at
your terminal, the shell interprets the command and calls the program that you want. The shell
uses standard syntax for all commands. C Shell, Bourne Shell and Korn Shell are the most
famous shells which are available with most of the Unix variants.
● Commands and Utilities − There are various commands and utilities which you can make
use of in your day-to-day activities. cp, mv, cat and grep, etc. are a few examples of commands
and utilities. There are over 250 standard commands plus numerous others provided through
3rd-party software. All the commands come along with various options.
● Files and Directories − All the data of Unix is organized into files. All files are then
organized into directories. These directories are further organized into a tree-like structure
called the filesystem.

Commands:
● ls - command to list out all the files or directories available in a directory.
$ ls -l

option      description
ls -a       list all files including hidden files starting with '.'
ls --color  colored list [=always/never/auto]
ls -d       list directories themselves, marked with '*/'
ls -F       append one character of */=>@| to entries
ls -i       list file's inode index number
ls -l       list with long format - show permissions
ls -la      list long format including hidden files
ls -lh      list long format with human-readable file size
ls -ls      list with long format with file size
ls -r       list in reverse order
ls -R       list recursively down the directory tree
ls -s       list file size
ls -S       sort by file size
ls -t       sort by time & date
ls -X       sort by extension name

● whoami - prints the user name of the user currently logged in to the system.


● ‘ls -R’ - shows all the files, not only in the current directory but also in its subdirectories.
● ‘ls -al’ gives detailed information about the files in columnar format.

1st Column File type and access permissions

2nd Column # of HardLinks to the File

3rd Column Owner and the creator of the file

4th Column Group of the owner

5th Column File size in Bytes

6th Column Date and time of last modification

7th Column Directory or File name
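For instance, a single line of ‘ls -l’ output looks like the following (the file name, owner, group, size and date shown here are only illustrative):

-rw-r--r-- 1 user1 users 1024 Jan 10 09:30 notes.txt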

● cat-To create a new file, use the command


1. cat > filename
2. Add content
3. Press ‘ctrl + d’ to return to command prompt.
● The syntax to combine 2 files is –
cat file1 file2 > newfilename

option   description
cat -b   add line numbers to non-blank lines
cat -n   add line numbers to all lines
cat -s   squeeze multiple blank lines into one line
cat -E   show $ at the end of each line
cat -T   show ^I instead of tabs
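A quick illustration of the commands above (the file name is only an example):

$ cat > notes.txt (type a few lines, then press ‘ctrl + d’)
$ cat -n notes.txt (display the file with line numbers)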

● ‘rm’ command removes files from the system without confirmation.

rm filename
● mv-To move a file, use the command.
mv filename new_file_location
mv sample2 /home/guru99/Documents
For renaming file:
mv filename newfilename
The ‘mv’ (move) command (covered earlier) can also be used for renaming
directories.
mv directoryname newdirectoryname

● mkdir-Creating Directories
mkdir directoryname
If you want to create a directory in a different location other than ‘Home directory’,
you could use the following command –
mkdir /tmp/MUSIC

● To remove an empty directory, use the command – rmdir

rmdir directoryname
● To get help on any command that you do not understand, you can type
man <command name>
● For copying text from a source, you would use Ctrl + c, but for pasting it into the
Terminal, you need to use Ctrl + Shift + v. You can also try Shift + Insert or select
Edit > Paste on the menu
● cd command in Linux/Unix

cd is a Linux command to change the directory/folder of the terminal's shell.

You can press the tab button in order to auto complete the directory name.

cd syntax

$ cd [directory]

cd command examples

Change to home directory (determined by $HOME environment variable):

$ cd

Also change to home directory:

$ cd ~

Change to root directory:

$ cd /

Change to parent directory:

$ cd ..

Change to subdirectory Documents:

$ cd Documents

Change to subdirectory Documents/Books:

$ cd Documents/Books
Change to directory with absolute path /home/user/Desktop:

$ cd /home/user/Desktop

Change to directory name with white space - My Images:

$ cd My\ Images

Or

$ cd "My Images"

Or

$ cd 'My Images'

● pwd - print working directory, is a Linux command to get the current
working directory.
● cp is a Linux shell command to copy files and directories.

option description

cp -a archive files

cp -f force copy by removing the destination file if needed

cp -i interactive - ask before overwrite

cp -l link files instead of copy

cp -L follow symbolic links

cp -n no file overwrite

cp -R recursive copy (including hidden files)

cp -u update - copy only when the source is newer than the destination

cp -v verbose - print informative messages


● ZIP is a compression and file-packaging utility for Unix. The files are packed into a
single archive with the extension .zip.
$zip myfile.zip filename.txt
● Unzip will list, test, or extract files from a ZIP archive, commonly found on Unix
systems. The default behavior (with no options) is to extract into the current directory
(and sub-directories below it) all files from the specified ZIP archive.
Syntax :
$unzip myfile.zip
● The Linux ‘tar’ stands for tape archive, is used to create Archive and extract the
Archive files. tar command in Linux is one of the important command which provides
archiving functionality in Linux. We can use Linux tar command to create
compressed or uncompressed Archive files and also maintain and modify them.
Syntax:
tar [options] [archive-file] [file or directory to be
archived]
Options:
-c : Creates an archive
-x : Extracts the archive
-f : Creates the archive with the given filename
-t : Displays or lists the files in an archived file
-u : Archives and adds to an existing archive file
-v : Displays verbose information
-A : Concatenates archive files
-z : Compresses the tar file using gzip
-j : Filters the archive through bzip2
-W : Verifies an archive file
-r : Updates or adds a file or directory in an existing .tar file
Creating an uncompressed tar Archive using option -cvf :
$ tar cvf file.tar *.c
Extracting files from Archive using option -xvf : This command extracts files from
Archives.
$ tar xvf file.tar
gzip compression on the tar Archive, using option -z : This command creates a
tar file called file.tar.gz which is the Archive of .c files.
$ tar cvzf file.tar.gz *.c
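Extracting a gzip-compressed archive combines the same options with -x:
$ tar xvzf file.tar.gz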
● chmod command is used to change the access mode of a file.
Syntax :
chmod [reference][operator][mode] file..
r Permission to read the file.
w Permission to write (or delete) the file.
x Permission to execute the file, or, in
the case of a directory, search it.
● chmod +rwx filename to add permissions.
● chmod -rwx directoryname to remove permissions.
● chmod +x filename to allow executable permissions.
● chmod -wx filename to take out write and executable
permissions.
● The command for changing directory permissions for group
owners is similar, but add a “g” for group or “o” for others:
● chmod g+w filename
● chmod g-wx filename
● chmod o+w filename
● chmod o-rwx foldername

You may need to know how to change permissions in numeric code in
Linux; to do this you use numbers instead of “r”, “w”, or “x”.

● 0 = No Permission
● 1 = Execute
● 2 = Write
● 4 = Read

Basically, you add up the numbers depending on the level of
permission you want to give.

Permission numbers are:

● 0 = ---
● 1 = --x
● 2 = -w-
● 3 = -wx
● 4 = r--
● 5 = r-x
● 6 = rw-
● 7 = rwx
● chmod 777 foldername will give read, write, and execute
permissions for everyone.
● chmod 700 foldername will give read, write, and execute
permissions for the user only.
● chmod 327 foldername will give write and execute (3)
permission to the user, write (2) to the group, and read, write,
and execute (7) to others.
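As a quick worked example of the numeric codes: read + write = 4 + 2 = 6, so chmod 644 filename gives rw- to the owner and r-- to the group and to others, a common setting for ordinary files.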

Linux text Editors:

1. cat - cat filename
2. more - more filename
3. less - less filename
4. Mcas - mcas filename
5. vi - vi filename
6. vim - vim filename
vi is the most widely used editor.

Vi editor :

The VI editor is the most popular and classic text editor in the Linux family. Below, are some
reasons which make it a widely used editor –
1) It is available in almost all Linux Distributions
2) It works the same across different platforms and Distributions
3) It is user-friendly. Hence, millions of Linux users love it and use it for their editing needs

vi Command mode:

● The vi editor opens in this mode, and it only understands commands


● In this mode, you can move the cursor and cut, copy and paste text
● This mode also saves the changes you have made to the file
● Commands are case sensitive.
vi Editor Insert mode:
● This mode is for inserting text in the file.
● You can switch to the Insert mode from the command mode by pressing ‘i’ on the
keyboard
● Once you are in Insert mode, any key would be taken as an input for the file on which
you are currently working.
● To return to the command mode and save the changes you have made you need to
press the Esc key.

To launch the VI Editor -Open the Terminal (CLI) and type


vi <filename_NEW> or <filename_EXISTING>

VI Editing commands:

● i – Insert at cursor (goes into insert mode)


● a – Write after cursor (goes into insert mode)
● A – Write at the end of line (goes into insert mode)
● ESC – Terminate insert mode
● u – Undo last change
● U – Undo all changes to the entire line
● o – Open a new line (goes into insert mode)
● dd – Delete line
● 3dd – Delete 3 lines.
● D – Delete contents of line after the cursor
● C – Delete contents of a line after the cursor and insert new text. Press ESC key to
end insertion.
● dw – Delete word
● 4dw – Delete 4 words
● cw – Change word
● x – Delete character at the cursor
● r – Replace character
● R – Overwrite characters from cursor onward
● s – Substitute one character under the cursor and continue to insert
● S – Substitute entire line and begin to insert at the beginning of the line
● ~ – Change case of individual character

Moving within a file:

● k – Move cursor up
● j – Move cursor down
● h – Move cursor left
● l – Move cursor right

Saving and Closing the file:

● Shift+zz – Save the file and quit


● :w – Save the file but keep it open
● :q – Quit without saving
● :wq – Save the file and quit

grep command:
The grep filter searches a file for a particular pattern of characters, and displays all lines that
contain that pattern. The pattern that is searched in the file is referred to as the regular
expression (grep stands for globally search for regular expression and print out).

Syntax:
grep [options] pattern [files]

Options Description
-c : Prints only a count of the lines that match a pattern
-h : Displays the matched lines, but does not display the filenames
-i : Ignores case when matching
-l : Displays a list of filenames only
-n : Displays the matched lines and their line numbers
-v : Prints all the lines that do not match the pattern
-e exp : Specifies the expression with this option; can be used multiple times
-f file : Takes patterns from a file, one per line
-E : Treats the pattern as an extended regular expression (ERE)
-w : Matches whole words only
-o : Prints only the matched parts of a matching line,
with each such part on a separate output line
-A n : Prints the matched line and n lines after it
-B n : Prints the matched line and n lines before it
-C n : Prints the matched line and n lines before and after it
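For example (the file name is only illustrative):

$ grep -in "error" server.log (case-insensitive search, showing line numbers)
$ grep -c "error" server.log (count the matching lines)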

find command:
The find command in UNIX is a command line utility for walking a file hierarchy. It can be
used to find files and directories and perform subsequent operations on them. It supports
searching by file, folder, name, creation date, modification date, owner and permissions. By
using the ‘-exec’ other UNIX commands can be executed on files or folders found.
Syntax :
$ find [where to start searching from]
[expression determines what to find] [-options] [what to
find]

Options :
● -exec CMD : Executes CMD on every file found; a file is selected when CMD returns 0
as its exit status (i.e. runs successfully).
● -ok CMD : It works same as -exec except the user is prompted first.
● -inum N : Search for files with inode number ‘N’.
● -links N : Search for files with ‘N’ links.
● -name demo : Search for files that are specified by ‘demo’.
● -newer file : Search for files that were modified/created after ‘file’.
● -perm octal : Search for the file if permission is ‘octal’.
● -print : Display the path name of the files found by using the rest of the criteria.
● -empty : Search for empty files and directories.
● -size +N/-N : Search for files of ‘N’ blocks; ‘N’ followed by ‘c’ can be used to
measure size in characters; ‘+N’ means size > ‘N’ blocks and ‘-N’ means size <
‘N’ blocks.
● -user name : Search for files owned by user name or ID ‘name’.
● \(expr \) : True if ‘expr’ is true; used for grouping criteria combined with OR or
AND.
● ! expr : True if ‘expr’ is false.
Example-

Locate a regular file with 2 links, inode number 56728 and permissions 666:

find / -type f -links 2 -inum 56728 -perm 666
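Another common pattern combines find with -exec; the path and file pattern below are only illustrative:

$ find /var/log -name "*.log" -exec ls -lh {} \;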

ps command:

As we all know, Linux is a multitasking and multi-user system, so it allows multiple
processes to operate simultaneously without interfering with each other. A process is one of
the important fundamental concepts of the Linux OS: a process is an executing instance of a
program and carries out different tasks within the operating system.
Linux provides us a utility called ps for viewing information related to the processes on a
system; ps stands for “Process Status”. The ps command is used to list the
currently running processes and their PIDs along with some other information, depending on
the options used. It reads the process information from the virtual files in the /proc file-system.
/proc contains virtual files, which is why it is referred to as a virtual file system.
ps provides numerous options for manipulating the output according to our need.
Syntax –
ps [options]
Options for ps Command :
● Simple process selection : Shows the processes for the current shell –

[root@rhel7 ~]# ps
PID TTY TIME CMD
12330 pts/0 00:00:00 bash
21621 pts/0 00:00:00 ps

PID – the unique process ID


TTY – terminal type that the user is logged into
TIME – amount of CPU in minutes and seconds that the process has been running
CMD – name of the command that launched the process.
● View Processes : View all the running processes use either of the following option
with ps –
[root@rhel7 ~]# ps -A
[root@rhel7 ~]# ps -e
● View Processes not associated with a terminal : View all processes except both
session leaders and processes not associated with a terminal.
[root@rhel7 ~]# ps -a
PID TTY TIME CMD
27011 pts/0 00:00:00 man
27016 pts/0 00:00:00 less
27499 pts/1 00:00:00 ps

● View all the processes except session leaders :


[root@rhel7 ~]# ps -d
● View all processes associated with this terminal :
[root@rhel7 ~]# ps -T
● View all the running processes :
[root@rhel7 ~]# ps -r
● fg command in linux used to put a background job in foreground.

Syntax: fg [job_spec]

● bg command in linux is used to place foreground jobs in background.

Syntax: bg [job_spec ...]

● nice command is used to run a job with a modified scheduling priority.

● Syntax: nice -n <priority> command (priority values typically range from -20, highest priority, to 19, lowest priority)
● df command displays the amount of disk space available on the file system
containing each file name argument.
● By default, df shows the disk space in 1 K blocks.

df Syntax :
df [OPTION]...[FILE]...
OPTION : to the options compatible with df command
FILE : specific filename in case you want to know the disk
space usage of a particular file system only
Add user:
useradd -u UID -g GID -d /home/username -s /bin/shell -m username

userdel- Delete user


usermod- modifying user

Types of Shell:
● The C Shell –Denoted as csh
Command full-path name is /bin/csh,
Non-root user default prompt is hostname %,
Root user default prompt is hostname #.

● The Bourne Shell –Denoted as sh


Command full-path name is /bin/sh and /sbin/sh,
Non-root user default prompt is $,
Root user default prompt is #.

● The Korn Shell-It is denoted as ksh


Command full-path name is /bin/ksh,
Non-root user default prompt is $,
Root user default prompt is #.

● GNU Bourne-Again Shell –Denoted as bash


Command full-path name is /bin/bash,
Default prompt for a non-root user is bash-g.gg$
(g.gg indicates the shell version number, like bash-3.50$),
Root user default prompt is bash-g.gg#.

● Head command - The head command, as the name implies, prints the first N lines
of the given input. By default, it prints the first 10 lines of the specified files.

$ head -n 5 sample.txt (prints the first 5 lines)

Tail command - it prints the last N lines of the given input.

$ tail -n 5 sample.txt (prints the last 5 lines)
AWS CLOUD
What is cloud computing?

Cloud computing is the on-demand delivery of IT resources over the Internet with
pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data
centers and servers, you can access technology services, such as computing power,
storage, and databases, on an as-needed basis from a cloud provider like Amazon
Web Services (AWS).

What is AWS?

The full form of AWS is Amazon Web Services. It is a platform that offers flexible, reliable,
scalable, easy-to-use and, cost-effective cloud computing solutions.

Important AWS Services

Amazon Web Services offers a wide range of different business purpose global cloud-based
products. The products include storage, databases, analytics, networking, mobile,
development tools, enterprise applications, with a pay-as-you-go pricing model.

There are 4 main types of cloud computing: private clouds, public clouds, hybrid
clouds, and multiclouds.

Public clouds
Public clouds are cloud environments typically created from IT infrastructure not owned by
the end user. Some of the largest public cloud providers include
Alibaba Cloud, Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft
Azure.
Traditional public clouds always ran off-premises, but today's public cloud providers have
started offering cloud services on clients’ on-premise data centers. This has made location
and ownership distinctions obsolete.

Private clouds
Private clouds are loosely defined as cloud environments solely dedicated to a single end
user or group, where the environment usually runs behind that user or group's firewall. All
clouds become private clouds when the underlying IT infrastructure is dedicated to a single
customer with completely isolated access.
But private clouds no longer have to be sourced from on-prem IT infrastructure.
Organizations are now building private clouds on rented, vendor-owned data centers located
off-premises, which makes any location and ownership rules obsolete

Hybrid clouds
A hybrid cloud is a seemingly single IT environment created from multiple environments
connected through local area networks (LANs), wide area networks (WANs), virtual private
networks (VPNs), and/or APIs.
The characteristics of hybrid clouds are complex and the requirements can differ, depending
on whom you ask. For example, a hybrid cloud may need to include:
● At least 1 private cloud and at least 1 public cloud
● 2 or more private clouds
● 2 or more public clouds
● A bare-metal or virtual environment connected to at least 1 public cloud or private
cloud
Cloud services
IaaS
IaaS means a cloud service provider manages the infrastructure for you—the actual servers,
network, virtualization, and data storage—through an internet connection. The user has
access through an API or dashboard, and essentially rents the infrastructure.

PaaS
PaaS means the hardware and an application-software platform are provided and managed
by an outside cloud service provider, but the user handles the apps running on top of the
platform and the data the app relies on. Primarily for developers and programmers, PaaS
gives users a shared cloud platform for application development and management.

SaaS
SaaS is a service that delivers a software application—which the cloud service provider
manages—to its users. Typically, SaaS apps are web applications or mobile apps that users
can access via a web browser. Software updates, bug fixes, and other general software
maintenance are taken care of for the user, and they connect to the cloud applications via a
dashboard or API.

AWS Product Services-


1. Analytics
2. Application Integration
3. Business Productivity
4. Compute
5. Machine learning
6. AWS cost management
7. Containers
8. Devops tools
9. Storage
10. Networking and content management
11. Database
12. End user computing
13. Security, Identity & Compliance
14. Game tech
AWS Network Services-
1. VPC
2. CloudFront
3. Route 53
4. API Gateway
5. S3 storage service
6. EFS
7. EC2 Image Builder
8. Lambda
9. CodeCommit
10. CodeBuild
11. CodeDeploy
12. CodePipeline
13. Load Balancer
14. Auto Scaling
15. CloudWatch
16. CloudTrail
17. CloudFormation
18. RDS
19. Identity and Access Management (IAM)
20. Cognito, Secrets Manager
21. Database Migration Service
VPC-
1. Subnet
2. Route table
3. Internet gateway
4. Nat gateway
5. VPC peering
6. VPC flowlog
7. Security group
8. Network access control list (ACL)

Network :
An interconnection of multiple devices, also known as hosts, that are connected using
multiple paths for the purpose of sending/receiving data or media. Computer networks can
also include multiple devices/mediums which help in the communication between two
different devices; these are known as Network devices and include things such as routers,
switches, hubs, and bridges.
IP Address (Internet Protocol address):
Also known as the Logical Address, the IP Address is the network address of a system
across the network.
To identify each device on the world wide web, the Internet Assigned Numbers Authority
(IANA) assigns an IPv4 (Version 4) address as a unique identifier to each device on the
Internet.
The length of an IPv4 address is 32 bits; hence, we have 2^32 IP addresses available. The
length of an IPv6 address is 128 bits.
MAC Address (Media Access Control address):
Also known as physical address, the MAC Address is the unique identifier of each host and
is associated with its NIC (Network Interface Card).
Port:
A port can be referred to as a logical channel through which data can be sent/received to an
application. Any host may have multiple applications running, and each of these applications
is identified using the port number on which they are running.
A port number is a 16-bit integer; hence, we have 2^16 ports available, which are categorized
as shown below:
Number of ports: 65,536
Range: 0 – 65535
Most used in AWS-
SSH-22
FTP-21
TelNet-23
SMTP-25
HTTP-80
HTTPS-443
SFTP-22

IP address structure:
IP addresses are displayed as a set of four numbers - for example, 192.158.1.38.
Each number in the set may range from 0 to 255. Therefore, the total IP address range
goes from 0.0.0.0 to 255.255.255.255.

IP address is basically divided into two parts: X1. X2. X3. X4


1. [X1. X2. X3] is the Network ID
2. [X4] is the Host ID

Network ID –
It is the left-hand part of the IP address that identifies the specific network where the device
is located. In a normal home network, where a device has the IP address 192.168.1.32,
the 192.168.1 part of the address is the network ID. It is customary to fill in the missing final
part with a zero, so we can say that the device’s network ID is 192.168.1.0.
Host ID –
The host ID is the part of the IP address that is not taken by the network ID. It identifies a
specific device (in the TCP/IP world, we call devices “hosts”) on that network. Continuing with
our example of the IP address 192.168.1.32, the host ID is 32 - the unique host ID on
the 192.168.1.0 network.
IP Address Types:
There are 4 types of IP addresses - Public, Private, Fixed (static), and Dynamic. Among them,
a private address is meant to be used within the local network, while a public address is used
outside the network, on the Internet.
Public IP address –

A public IP address is an Internet Protocol address that is assigned to your router/devices by
your Internet service provider when you connect them to the Internet. A public Internet
Protocol address is an Internet Protocol address accessed over the Internet. Like the postal
address used to deliver mail to your home, the public Internet Protocol address is a globally
unique Internet Protocol address assigned to a computer.

Private IP address–

Everything that connects to your Internet network has a private IP address. This includes
computers, smartphones, and tablets but also any Bluetooth-enabled devices such as
speakers, printers, or smart TVs. Your router needs a way to identify these things
separately, and most things need a way to recognize each other. Therefore, your router
generates private IP addresses that are unique identifiers for each device on the
network.

unicast IP addresses – an address of a single interface. The IP addresses of this type are
used for one-to-one communication. Unicast IP addresses are used to direct packets to a
specific host. Here is an example: a host wants to communicate with a server, and it uses the
(unicast) IP address of the server (192.168.0.150) to do so.

multicast IP addresses – used for one-to-many communication. Multicast messages are


sent to IP multicast group addresses. Routers forward copies of the packet out to every
interface that has hosts subscribed to that group address. Only the hosts that need to
receive the message will process the packets. All other hosts on the LAN will discard them.
Here is an example:

R1 has sent a multicast packet destined for 224.0.0.9. This is an RIPv2 packet, and only
routers on the network should read it. R2 will receive the packet and read it. All other hosts
on the LAN will discard the packet.

broadcast IP addresses – used to send data to all possible destinations in the broadcast
domain (the one-to-everybody communication). The broadcast address for a network has all
host bits on. For example, for the network 192.168.30.0 255.255.255.0 the broadcast
address would be 192.168.30.255. Also, the IP address of all 1’s (255.255.255.255) can be
used for local broadcast. Here’s an example:
R1 wants to communicate with all hosts on the network and has sent a broadcast packet to
the broadcast IP address of 192.168.30.255. All hosts in the same broadcast domain will
receive and process the packet.

IPv4 Address Format


IPv4 addresses are expressed as a set of four numbers in decimal format, and each set is
separated by a dot. Thus, the term ‘dotted decimal format.’ Each set is called an ‘octet’
because a set is composed of 8 bits. The figure below shows the binary format of each octet
in the 192.168.10.100 IP address:

A number in an octet can range from 0 to 255. Therefore, the full IPv4 address space goes
from 0.0.0.0 to 255.255.255.255. The IPv4 address has two parts, the network part and the
host part. A subnet mask is used to identify these parts.
Network Part
The network part of the IPv4 address is on the left-hand side of the IP address. It specifies
the particular network to where the IPv4 address belongs. The network portion of the
address also identifies the IP address class of the IPv4 address.
For example, we have the IPv4 address 192.168.10.100 and a /24 subnet mask. /24 simply
means that the first 24 bits, starting from the left side, is the network portion of the IPv4
address. The 8 remaining bits of the 32 bits will be the host portion.
Host Part
The host portion of the IPv4 address uniquely identifies the device or the interface on your
network. Hosts that have the same network portion can communicate with one another
directly, without the need for the traffic to be routed.
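For example, with 192.168.10.100/24 the network part is 192.168.10 and the host part is 100. Another (illustrative) host such as 192.168.10.25 shares the same network part, so the two can exchange traffic directly without going through a router.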
Subnet mask

An IP address is divided into two parts: network and host parts. For example, an IP class A
address consists of 8 bits identifying the network and 24 bits identifying the host. This is
because the default subnet mask for a class A IP address is 8 bits long. (or, written in dotted
decimal notation, 255.0.0.0). What does it mean? Well, like an IP address, a subnet mask
also consists of 32 bits. Computers use it to determine the network part and the host part of
an address. The 1s in the subnet mask represent a network part, the 0s a host part.

Computers work only with bits. The math used to determine a network range is a binary AND.
Let’s say that we have the IP address 10.0.0.1 with the default subnet mask of 8 bits
(255.0.0.0).
First, we need to convert the IP address to binary:
IP address: 10.0.0.1 = 00001010.00000000.00000000.00000001
Subnet mask: 255.0.0.0 = 11111111.00000000.00000000.00000000
Computers then use the AND operation to determine the network number:
Network: 10.0.0.0 = 00001010.00000000.00000000.00000000
Slash Notation
Aside from the dotted decimal format, we can also write the subnet mask in slash notation. It
is a slash ‘/’ then followed by the subnet mask bits. To determine the slash notation of the
subnet mask, convert the dotted decimal format into binary, count the series of 1s, and add a
slash on the start.
For example, we have the dotted decimal subnet mask of 255.0.0.0. In binary, it is
11111111.00000000.00000000.00000000. The number of leading 1s is 8; therefore the
slash notation of 255.0.0.0 is /8.
Classes of IP addresses

TCP/IP defines five classes of IP addresses: class A, B, C, D, and E. Each class has a
range of valid IP addresses. The value of the first octet determines the class. IP addresses
from the first three classes (A, B and C) can be used for host addresses. The other two
classes are used for other purposes – class D for multicast and class E for experimental
purposes.

The system of IP address classes was developed for the purpose of Internet IP addresses
assignment. The classes created were based on the network size. For example, for the
small number of networks with a very large number of hosts, the Class A was created. The
Class C was created for numerous networks with small number of hosts.

Classes of IP addresses are:

For the IP addresses from Class A, the first 8 bits (the first decimal number) represent the
network part, while the remaining 24 bits represent the host part. For Class B, the first 16
bits (the first two numbers) represent the network part, while the remaining 16 bits represent
the host part. For Class C, the first 24 bits represent the network part, while the remaining 8
bits represent the host part.
Special IP address ranges that are used for special purposes are:

● 0.0.0.0/8 – addresses used to communicate with the local network


● 127.0.0.0/8 – loopback addresses
● 169.254.0.0/16 – link-local addresses (APIPA)

CIDR (Classless inter-domain routing):

CIDR (Classless inter-domain routing) is a method of public IP address assignment. It


was introduced in 1993 by Internet Engineering Task Force with the following goals:

● to deal with the IPv4 address exhaustion problem


● to slow down the growth of routing tables on Internet routers

Before CIDR, public IP addresses were assigned based on the class boundaries:

● Class A – the classful subnet mask is /8. The number of possible IP addresses is
16,777,216 (2 to the power of 24).
● Class B – the classful subnet mask is /16. The number of addresses is 65,536

● Class C – the classful subnet mask is /24. Only 256 addresses available.

Some organizations were known to have gotten an entire Class A public IP address (for
example, IBM got all the addresses in the 9.0.0.0/8 range). Since these addresses can’t be
assigned to other companies, there was a shortage of available IPv4 addresses. Also, since
IBM probably didn’t need more than 16 million IP addresses, a lot of addresses were
unused.

To combat this, the classful scheme of allocating IP addresses was abandoned.
The new system is classless – a classful network can be split into multiple smaller networks.
For example, if a company needs 12 public IP addresses, it would get something like this:
190.5.4.16/28.
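A /28 block leaves 32 - 28 = 4 host bits, i.e. 2^4 = 16 addresses (14 usable for hosts after excluding the network and broadcast addresses), which comfortably covers the 12 addresses needed.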

VPC(Virtual Private Cloud):


What is Amazon VPC?
Amazon Virtual Private Cloud is just a virtual network that is private to you within the AWS

environment. In simpler terms, it is a cloud within the Amazon cloud.

AWS VPC contains gateways, subnets, route tables, and security groups to help you get

added security for your data. You can even control the incoming traffic to the cloud.

What is the Use of Amazon VPC?


Any service in a public cloud is open for the entire world to access or see. This brings on the

risk of hacks and fraudulent attacks. In fact, 3,800 incidents of data security breaches were

reported in the first six months of 2019 alone!

Would you want to risk keeping your private data on a public cloud?

To prevent this unwanted breach of data, it is advisable that you store it in a private cloud,

ergo a VPC.

Components of Amazon VPC


1. Subnets
It is a subdivision of a VPC. Breaking the network down into smaller networks (subnets) is

called subnetting.

Using subnet means dealing with IP addresses, which are a unique set of strings given to

each computer. You can say that it is an identification number of a device on which you can

use the internet.

2. Internet Gateway
As mentioned above, you can use an internet gateway to make a subnet public. This is done

by giving a route of that subnet to the internet.

And in this way, a user can access the resources in that subnet via the defined gateway on

the internet.

3. NAT Gateway
When you want only a certain set of resources to be allowed publicly on the internet, you can

use a NAT gateway.

NAT is short for Network Address Translation, which means that it translates private IP

addresses to public IPs. This is done right before the traffic is routed to the internet.

4. Route Table
As you already know by now, an AWS VPC lets you control the incoming traffic.

For this, you need to use route tables. These contain a set of rules describing how and

where the traffic is to be directed in the network.

You can connect one route table to multiple subnets in a network.

● Create VPC
Step 1: Use the link – https://console.aws.amazon.com/vpc/ to open the Amazon VPC

console.

Step 2: Choose the option – Creating the VPC – on the right of the navigation bar.

Step 3: Click Start VPC. Now click VPC With a Single Public Subnet option on the left.

Step 4: After the configuration page opens, fill in the required details – VPC name and

subnet name. Leave the other boxes as default and click Create VPC.

Step 5: Wait for the progress-showing dialog box to complete and click OK.

You will see a page showing a list of available VPCs. You can change the settings of your

VPC here.
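For reference, the same setup can be sketched from the command line with the AWS CLI; this is only a minimal illustration, and the CIDR blocks and resource IDs shown are placeholders:

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24
$ aws ec2 create-internet-gateway
$ aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx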

● Create Additional Subnets


Step 1: Open the Amazon VPC console: https://console.aws.amazon.com/vpc/.

Step 2: Choose VPC Dashboard, then click on Subnets and finally click on Create Subnet.

Step 3: Enter the following values in the dialog box:

● Name tag: Tutorial private 2

● VPC: Select the VPC that you created above.


Example: vpc-identifier (10.0.0.0/16) | FirstVPC

● Availability Zone: us-west-2b


Choose a different Availability Zone from the one that you chose for the first private subnet.

● IPv4 CIDR block: 10.0.2.0/24


Step 4: Next, click on Create and Close on the confirmation page.

Step 5: For maintaining the same route table for both subnets, click on VPC Dashboard and

choose Subnets. Now click on your first subnet.

Step 6: Click on the Route Table tab and note its value.

Step 7: Deselect the first subnet that you created in the list of subnets. Now pick the second

subnet and click on Route Table.

Step 8: Click on Edit Route Table Association and paste the route table value that you had

copied earlier. Finally, click on Save.

To create a custom route table using the console

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.


2. In the navigation pane, choose Route Tables.
3. Choose Create route table.
4. (Optional) For Name tag, enter a name for your route table.
5. For VPC, choose your VPC.
6. (Optional) Add or remove a tag.
[Add a tag] Choose Add tag and do the following:
● For Key, enter the key name.
● For Value, enter the key value.

7. [Remove a tag] Choose the Delete button ("X") to the right of the tag’s Key and Value.
8. Choose Create.

Create and attach an internet gateway


After you create an internet gateway, attach it to your VPC.
To create an internet gateway and attach it to your VPC
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. In the navigation pane, choose Internet Gateways, and then choose Create
internet gateway.
3. Optionally name your internet gateway.
4. Optionally add or remove a tag.
[Add a tag] Choose Add tag and do the following:
● For Key, enter the key name.
● For Value, enter the key value.

5. [Remove a tag] Choose Remove to the right of the tag’s Key and Value.
6. Choose Create internet gateway.
7. Select the internet gateway that you just created, and then choose Actions, Attach
to VPC.
8. Select your VPC from the list, and then choose Attach internet gateway.
To create a security group and associate it with your instances

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.


2. In the navigation pane, choose Security Groups, and then choose Create Security
Group.
3. In the Create Security Group dialog box, specify a name for the security group and
a description. Select the ID of your VPC from the VPC list, and then choose Yes,
Create.
4. Select the security group. The details pane displays the details for the security group,
plus tabs for working with its inbound rules and outbound rules.
5. On the Inbound Rules tab, choose Edit. Choose Add Rule, and complete the
required information. For example, select HTTP or HTTPS from the Type list, and
enter the Source as 0.0.0.0/0 for IPv4 traffic, or ::/0 for IPv6 traffic. Choose Save
when you're done.
6. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
7. In the navigation pane, choose Instances.
8. Select the instance, choose Actions, then Networking, and then select Change
Security Groups.
9. In the Change Security Groups dialog box, clear the check box for the currently
selected security group, and select the new one. Choose Assign Security Groups.
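The same security group setup can be sketched with the AWS CLI; the group name, port and IDs below are placeholders:

$ aws ec2 create-security-group --group-name web-sg --description "Allow HTTP" --vpc-id vpc-xxxxxxxx
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0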
What is Amazon EC2 Instance?
An EC2 instance is nothing but a virtual server in Amazon Web services terminology. It

stands for Elastic Compute Cloud. It is a web service where an AWS subscriber can

request and provision a compute server in AWS cloud.

An on-demand EC2 instance is an offering from AWS where the subscriber/user can rent

the virtual server per hour and use it to deploy his/her own applications. The instance will be

charged per hour with different rates based on the type of the instance chosen. AWS

provides multiple instance types for the respective business needs of the user.

Thus, you can rent an instance based on your own CPU and memory requirements and use

it as long as you want. You can terminate the instance when it is no longer needed and save on

costs.

Amazon EC2 Instance types:

General Purpose

General purpose instances provide a balance of compute, memory and networking

resources, and can be used for a variety of diverse workloads. These instances are ideal for

applications that use these resources in equal proportions such as web servers and code

repositories.

Use Cases

Developing, building, testing, and signing iOS, iPadOS, macOS, WatchOS, and tvOS

applications on the Xcode IDE

Compute Optimized

Compute Optimized instances are ideal for compute bound applications that benefit from

high performance processors. Instances belonging to this family are well suited for batch

processing workloads, media transcoding, high performance web servers, high performance

computing (HPC), scientific modeling, dedicated gaming servers and ad server engines,

machine learning inference and other compute intensive applications.

Use Cases

High performance computing (HPC), batch processing, ad serving, video encoding, gaming,

scientific modelling, distributed analytics, and CPU-based machine learning inference.


Memory Optimized

Memory optimized instances are designed to deliver fast performance for workloads that

process large data sets in memory.

Use Cases

Memory-intensive applications such as open-source databases, in-memory caches, and real

time big data analytics

Accelerated Computing

Accelerated computing instances use hardware accelerators, or co-processors, to perform

functions, such as floating point number calculations, graphics processing, or data pattern

matching, more efficiently than is possible in software running on CPUs.

Use Cases

Machine learning, high performance computing, computational fluid dynamics, computational

finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.

Storage Optimized

Storage optimized instances are designed for workloads that require high, sequential read

and write access to very large data sets on local storage. They are optimized to deliver tens

of thousands of low-latency, random I/O operations per second (IOPS) to applications.

Use Cases

NoSQL databases (e.g. Cassandra, MongoDB, Redis), in-memory databases (e.g.

Aerospike), scale-out transactional databases, data warehousing, Elasticsearch, analytics

workloads.

Spot instance:

An AWS EC2 Spot Instance is an unused EC2 instance which is available for less than

the On-Demand price. Spot instances are up to 90% cheaper than On-Demand instances,

which can significantly reduce your EC2 costs. A Spot Price is the hourly rate for a Spot

instance.

Reserved Instance:
An Amazon Reserved Instance (RI) is a billing discount that allows you to save on your

Amazon EC2 usage costs. When you purchase a Reserved Instance, you can set

attributes such as instance type, platform, tenancy, Region, or Availability Zone (optional).

Dedicated Instances:
These are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a

single customer. Dedicated Hosts give you additional visibility and control over how

instances are placed on a physical server, and you can reliably use the same physical server

over time.

On-Demand Instances:

AWS On-Demand Instances (Amazon Web Services On-Demand Instances) are virtual

servers that run in AWS Elastic Compute Cloud (EC2) or AWS Relational Database

Service (RDS) and are purchased at a fixed rate per hour. ... They are also suitable for use

during testing and development of applications on EC2.

EBS volume:

An Amazon EBS volume is a durable, block-level storage device that you can attach to

your instances. After you attach a volume to an instance, you can use it as you would use a

physical hard drive. EBS volumes are flexible. ... EBS volumes persist independently from

the running life of an EC2 instance.

EFS volume:

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use

with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic, growing and

shrinking automatically as you add and remove files. Your applications can have the storage

they need, when they need it.

Login and access to AWS services

Step 1) In this step,

● Login to your AWS account and go to the AWS Services

● In the search area search for EC2.


● On the top right corner of the EC2 dashboard, choose the AWS Region in which you
want to provision the EC2 server. AWS provides 25 Regions globally.

● Once your desired Region is selected, come back to the EC2 Dashboard.

● Click on ‘Launch Instance’ button in the section of Create Instance


Choose AMI

● You will be asked to choose an AMI of your choice. (An AMI is an Amazon Machine
Image. It is a template basically of an Operating System platform which you can use

as a base to create your instance). Once you launch an EC2 instance from your

preferred AMI, the instance will automatically be booted with the desired OS.

Choose EC2 Instance Types

In the next step, you have to choose the type of instance you require based on your

business needs.

1. We will choose t2.micro instance type, which is a 1vCPU and 1GB memory server
offered by AWS.

2. Click on “Configure Instance Details” for further configurations


No. of instances- you can provision up to 20 instances at a time. Here we are launching one

instance.

Next, we have to configure some basic networking details for our EC2 server.

● You have to decide here, in which VPC (Virtual Private Cloud) you want to launch
your instance and under which subnets inside your VPC. It is better to determine and

plan this prior to launching the instance. Your AWS architecture set-up should

include IP ranges for your subnets etc. pre-planned for better management. (We will

see how to create a new VPC in Networking section of the tutorial.

● Subnetting should also be pre-planned. E.g.: If it’s a web server you should place it in
the public subnet and if it’s a DB server, you should place it in a private subnet all

inside your VPC.


Add Storage

Step 1) In this step we do following things,

● In the Add Storage step, you’ll see that the instance has been automatically
provisioned a General Purpose SSD root volume of 8GB. (The maximum volume size we
can give to a General Purpose SSD volume is 16TB)

● You can change your volume size, add new volumes, change the volume type, etc.

● AWS provides 3 types of EBS volumes- Magnetic, General Purpose SSD,


Provisioned IOPs. You can choose a volume type based on your application’s IOPs

needs.

Tag Instance

Step 1) In this step

● you can tag your instance with a key-value pair. This gives visibility to the AWS
account administrator when there are a large number of instances.

● The instances should be tagged based on their department, environment like


Dev/SIT/Prod. Etc. this gives a clear view of the costing on the instances under one

common tag.

Configure Security Groups

Step 1) In this next step of configuring Security Groups, you can restrict traffic on your

instance ports. This is an added firewall mechanism provided by AWS apart from your

instance’s OS firewall.

Review Instances

Step 1) In this step, we will review all our choices and parameters and go ahead to launch

our instance.

Step 2) In the next step you will be asked to create a key pair to login to you an instance. A

key pair is a set of public-private keys.


AWS stores the public key on the instance, and you are asked to download the private key.

Make sure you download the key and keep it safe and secured; if it is lost you cannot

download it again.

1. Create a new key pair

2. Give a name to your key

3. Download and save it in your secured folder


Installation of Tomcat on AWS ec2 linux & integration with Jenkins:

OpenJDK Installation

sudo yum install wget


sudo yum install java-1.8.0-openjdk

Configure Java HOME

● Check the java path on your instance first



export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.amzn2.0.1.x86_64
export PATH
sudo alternatives --config java

[I]. Ensuring that we have right version of Java:

[ec2-user@ip-xxx-xx-xx-xx]$ java -version

//By default Amazon Linux has JAVA version 1.7

//Lets download latest version JAVA 1.8

[II]. Downloading latest version from here

Select link -> Copy link address -> put that link below if a new version comes, and also change
the jdk path if the version is different from (jdk-8u171)

Install wget, if not already installed

sudo yum install wget

$ wget --header "Cookie: oraclelicense=accept-securebackup-cookie"


http://download.oracle.com/otn-pub/java/jdk/8u181-
b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm
wget : is a tool/program which we use to download packages from the internet. It retrieves
content from web servers and is a part of GNU project.

To install an rpm package use:


$ sudo yum localinstall jdk-8u181-linux-x64.rpm

Set environment variables:


First we need to find where JAVA is. In Linux, we can run the following command to locate
the JAVA installation spot:
$ file $(which java)
a). Set JAVA _HOME variable
$ export JAVA_HOME=/usr/java/jdk1.8.0_181/
b). Set JRE_HOME variable
$ export JRE_HOME=/usr/java/jdk1.8.0_181/jre
c). Append the path variable
$ PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
$ export PATH

Change the default path


If now you will check java version ($ java -version), you will still get java 1.7
In order to change the default version to java 1.8 :
$ sudo alternatives --config java
Type 2 and hit enter. Now if you would check the java version which your system is pointing
to then it will be java 1.8
Now, let’s add the Jenkins GPG key to our trusted keys, so that we will be able to
verify/trust the files that are being sourced (while installing Jenkins )are from trusted
site.
[ec2-user@ip-xxx-xx-xx-xx]$ sudo rpm --import http://pkg.jenkins-
ci.org/redhat/jenkins-ci.org.key
As now, the environment has been prepared and has resolved the required dependencies,
lets install Jenkins.
[ec2-user@ip-xxx-xx-xx-xx]$ sudo yum install jenkins
Jenkins services needs to be started, with the following command:
[ec2-user@ip-xxx-xx-xx-xx]$ sudo service jenkins start
Make sure to open port 8080 (default port to which Jenkins listen):

Download Tomcat package


Go to browser –> https://tomcat.apache.org/download-90.cgi (to download tomcat9) –> Copy

tar.gz from core section.

https://dlcdn.apache.org/tomcat/tomcat-9/v9.0.55/bin/apache-tomcat-9.0.55.tar.gz
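On the instance, the copied link can be downloaded with wget before extraction (the version in the URL may change over time):

$ wget https://dlcdn.apache.org/tomcat/tomcat-9/v9.0.55/bin/apache-tomcat-9.0.55.tar.gz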


[root@ip-xxx-xx-xx-xx]$ tar -zvxf apache-tomcat-9.0.10.tar.gz

Start Tomcat service


Under Apache Tomcat folder, there exists two files, namely; startup.sh and shutdown.sh

● Browse to the bin folder


[root@ip-xxx-xx-xx-xx bin]$ ls -ltr
//to check the status of the startup services
[root@ip-xxx-xx-xx-xx bin]$ chmod +x startup.sh

[root@ip-xxx-xx-xx-xx bin]$ chmod +x shutdown.sh

//For all users to execute this script


//Now lets start tomcat service
[root@ip-xxx-xx-xx-xx bin]$ ./startup.sh

Change port number from 8080 to 8090 (as Our Jenkins on AWS is also listening to
the port 8080)
Browse to conf sub-directory under Tomcat directory and open server.xml file for editing

using ‘nano’ command (vi command can also be used).


[root@ip-xxx-xx-xx-xx conf]$ nano server.xml
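Inside server.xml, the change is made to the port attribute of the HTTP Connector element; it ends up looking roughly like this (other attributes may differ in your file):

<Connector port="8090" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />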
Restart the tomcat service (browse to the bin folder)
[root@ip-xxx-xx-xx-xx bin]$ ./shutdown.sh

[root@ip-xxx-xx-xx-xx bin]$ ./startup.sh

What is AWS NAT Gateway?

NAT Gateway, also known as Network Address Translation Gateway, is used to enable

instances present in a private subnet to help connect to the internet or AWS services. In

addition to this, the gateway makes sure that the internet doesn’t initiate a connection with

the instances. NAT Gateway service is a fully managed service by Amazon, that doesn’t

require any efforts from the administrator.


They don’t support IPv6 traffic. In the case of IPv6 traffic, an egress-only internet gateway

needs to be used (which is another service).

Problem
So what if we need to install/update/upgrade software, utilities or OS on EC2 Instances

running in a private subnet? One option is to manually FTP to the box and install it, but

sometimes that is not feasible.

For scenarios like these AWS provides us NAT Gateways (previously NAT Instances which

are going to obsolete soon).

Solution
To configure NAT gateway follow these steps

1. Make sure you have Internet Gateway route defined in Routing Table

2. Get the Public Subnet ID where your NAT gateway would be deployed

3. Create NAT Gateway

4. Test the Internet connectivity

DIAGRAM COURTESY OF AWS DOCUMENTATION


In my example, I have two EC2 Instances running one (web-tier) in the Public subnet

and other (app-server) in the Private subnet as shown in the slide

Note: In my example, I am trying to install a git on my EC2 instances in private subnet. The

command will fail due to no internet connectivity.

Verify Routing Table for Internet Gateway Route


Verify in your public subnet you have internet gateway route defined as shown in the slide
Create NAT Gateway
1. Go to VPC > NAT Gateways and click Create NAT Gateways

2. Select Public subnet where your NAT Gateway is going to deploy

3. Select existing EIP or click Create Allocate Elastic IP (this will create a new EIP
and assign to NAT)

4. Wait for NAT Gateway Status to become available

Define NAT Gateway Routing in Private Subnet


1. Make sure NAT Gateway is up and running

2. Click on Routing Table and select private subnet where you want to enable
internet access

3. Click Edit routes, enter 0.0.0.0/0 as the destination and select your NAT gateway from the list

4. Click Save
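The private subnet's route table then contains an entry of the form: Destination 0.0.0.0/0 -> Target nat-xxxxxxxx (the NAT gateway ID here is a placeholder), alongside the local route for the VPC CIDR.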

Verify EC2 Instances

Once these steps are done you can connect to your instance running in the private subnet

and install updates


What is VPC peering?
A VPC peering connection is a networking connection between two VPCs that
enables you to route traffic between them using private IPv4 addresses or IPv6
addresses. Instances in either VPC can communicate with each other as if they are
within the same network. You can create a VPC peering connection between your
own VPCs, or with a VPC in another AWS account. The VPCs can be in different
regions (also known as an inter-region VPC peering connection).

AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it
is neither a gateway nor a VPN connection, and does not rely on a separate piece of
physical hardware. There is no single point of failure for communication or a
bandwidth bottleneck.

VPC peering basics


To establish a VPC peering connection, you do the following:

1. The owner of the requester VPC sends a request to the owner of the accepter VPC
to create the VPC peering connection. The accepter VPC can be owned by you, or
another AWS account, and cannot have a CIDR block that overlaps with the
requester VPC's CIDR block.
2. The owner of the accepter VPC accepts the VPC peering connection request to
activate the VPC peering connection.
3. To enable the flow of traffic between the VPCs using private IP addresses, the owner
of each VPC in the VPC peering connection must manually add a route to one or
more of their VPC route tables that points to the IP address range of the other VPC
(the peer VPC).
4. If required, update the security group rules that are associated with your instance to
ensure that traffic to and from the peer VPC is not restricted. If both VPCs are in the
same region, you can reference a security group from the peer VPC as a source or
destination for ingress or egress rules in your security group rules.
5. With the default VPC peering connection options, if EC2 instances on either side of a
VPC peering connection address each other using a public DNS hostname, the
hostname resolves to the public IP address of the instance. To change this behavior,
enable DNS hostname resolution for your VPC connection. After enabling DNS
hostname resolution, if instances on either side of the VPC peering connection
address each other using a public DNS hostname, the hostname resolves to the
private IP address of the instance.
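As a rough illustration of steps 1-3 above, the following boto3 sketch (the VPC IDs, route table IDs and CIDR blocks are placeholders) requests a peering connection, accepts it, and adds the cross-VPC routes.

import boto3

ec2 = boto3.client('ec2')

# Requester side: ask to peer VPC A with VPC B
peering = ec2.create_vpc_peering_connection(VpcId='vpc-aaaa1111',       # requester VPC
                                            PeerVpcId='vpc-bbbb2222')   # accepter VPC
pcx_id = peering['VpcPeeringConnection']['VpcPeeringConnectionId']

# Accepter side: accept the request (run with the accepter account/region credentials)
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side adds a route to the other VPC's CIDR block via the peering connection
ec2.create_route(RouteTableId='rtb-aaaa1111', DestinationCidrBlock='10.0.0.0/16',
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId='rtb-bbbb2222', DestinationCidrBlock='172.16.0.0/16',
                 VpcPeeringConnectionId=pcx_id)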
VPC peering connection lifecycle
A VPC peering connection goes through various stages starting from when the
request is initiated. At each stage, there may be actions that you can take, and at the
end of its lifecycle, the VPC peering connection remains visible in the Amazon VPC
console and API or command line output for a period of time.

● Initiating-request: A request for a VPC peering connection has been initiated. At this
stage, the peering connection can fail, or can go to pending-acceptance.
● Failed: The request for the VPC peering connection has failed. While in this state, it
cannot be accepted, rejected, or deleted. The failed VPC peering connection remains
visible to the requester for 2 hours.
● Pending-acceptance: The VPC peering connection request is awaiting acceptance
from the owner of the accepter VPC. During this state, the owner of the requester
VPC can delete the request, and the owner of the accepter VPC can accept or reject
the request. If no action is taken on the request, it expires after 7 days.
● Expired: The VPC peering connection request has expired, and no action can be
taken on it by either VPC owner. The expired VPC peering connection remains
visible to both VPC owners for 2 days.
● Rejected: The owner of the accepter VPC has rejected a pending-acceptance VPC
peering connection request. While in this state, the request cannot be accepted. The
rejected VPC peering connection remains visible to the owner of the requester VPC
for 2 days, and visible to the owner of the accepter VPC for 2 hours. If the request
was created within the same AWS account, the rejected request remains visible for 2
hours.
● Provisioning: The VPC peering connection request has been accepted, and will
soon be in the active state.
● Active: The VPC peering connection is active, and traffic can flow between the VPCs
(provided that your security groups and route tables allow the flow of traffic). While in
this state, either of the VPC owners can delete the VPC peering connection, but
cannot reject it.
● Deleting: Applies to an inter-region VPC peering connection that is in the process of
being deleted. The owner of either VPC has submitted a request to delete an active
VPC peering connection, or the owner of the requester VPC has submitted a request
to delete a pending-acceptance VPC peering connection request.
● Deleted: An active VPC peering connection has been deleted by either of the VPC
owners, or a pending-acceptance VPC peering connection request has been deleted
by the owner of the requester VPC. While in this state, the VPC peering connection
cannot be accepted or rejected. The VPC peering connection remains visible to the
party that deleted it for 2 hours, and visible to the other party for 2 days. If the VPC
peering connection was created within the same AWS account, the deleted request
remains visible for 2 hours.

Pricing for a VPC peering connection

If the VPCs in the VPC peering connection are within the same region, the charges for
transferring data within the VPC peering connection are the same as the charges for
transferring data across Availability Zones. If the VPCs are in different regions, inter-region
data transfer costs apply.

Procedure
1. Determine whether all partnership requirements are met, as described in IP
partnership requirements.
The CIDR blocks of the two VPCs that you are connecting must not overlap, or VPC
peering cannot be established.
2. Create a peering connection between the two VPCs that you want to connect.
You need VPC-related user permissions under your account.
● Log in to the AWS management console with the IAM default user profile.
● Select VPC to open the VPC Dashboard and select Peering Connection >
Create Peering.
● Choose the VPC in which your primary cluster resides as VPC (Requester), and
select the VPC in which your secondary cluster resides as VPC (Accepter).
If they are under a different account or region, select the correct account and region
to locate the VPC (Accepter).
● Enter a name for the peering connection and click Create Peering
Connection.
● Switch the AWS console to the account and region where your secondary cluster
resides, open the VPC Dashboard and select Peering Connection >
Create Peering.
● Find the peering connection request that you created, and choose Actions ->
Accept Request.
3. Update the route tables in the two VPCs that you want to connect.
You need VPC-related user permissions under your account.
● Log in to the AWS management console with the IAM default user profile.
● Select EC2 > Instance and select the primary instance that contains the
configuration.
● On the Description tab, select the link that is associated with the Subnet ID
field.
The VPC Dashboard > Subnets page opens.
● On the Description tab, select the link that is associated with the Route Table
field.
The VPC Dashboard > Route Tables page opens.
● On the Routes tab, select Edit routes > Add route.
● In the Target field, select Peering Connection and select the peering
connection that you created in step 2.
● In the Destination field, enter the CIDR block of the VPC where your
secondary configuration resides.
● Click Save Routes.
● Repeat steps a through g on your secondary cluster's route table.
For example, suppose the CIDR block of the VPC of your primary cluster is
172.16.0.0/16, the CIDR block of the VPC of your secondary cluster is
10.0.0.0/16, and you have already created a peering connection named
pcx-11112222. After making these updates, the route tables look like the following
example:

Route table   Destination       Target
VPC A         172.16.0.0/16     Local
              10.0.0.0/16       pcx-11112222
VPC B         10.0.0.0/16       Local
              172.16.0.0/16     pcx-11112222
4. Update the security groups of the instances that you want to connect. To do this, you
need security group-related user permissions under your account.
● Log in to the AWS management console with the IAM default user profile.
● Select EC2 > Instance and select the primary instance that contains the
configuration.
● On the Description tab, select the link that is associated to the Security
Groups field
The VPC Dashboard > Security Groups page opens.
● On the Inbound tab, select Edit > Add Rule.
● Under Type, select Custom TCP Rule and enter 3260 in Port Range.
● Under Source, select Custom and enter the CIDR block of the VPC where
your secondary cluster resides.
● Click Add Rule and repeat step 4.d but enter 3265.
● Click Save.
● Repeat steps a through g on your secondary cluster's security group.
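The security group changes in step 4 can also be made programmatically. A minimal sketch, assuming a placeholder security group ID and the peer VPC CIDR block from the example above:

import boto3

ec2 = boto3.client('ec2')

# Allow inbound TCP 3260 and 3265 from the peer VPC's CIDR block
for port in (3260, 3265):
    ec2.authorize_security_group_ingress(
        GroupId='sg-0123456789abcdef0',          # placeholder security group ID
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': port,
            'ToPort': port,
            'IpRanges': [{'CidrIp': '10.0.0.0/16'}],  # CIDR of the peer VPC
        }])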

What is VPC flow logs?

VPC Flow Logs is a feature that enables you to capture information about the IP traffic
going to and from network interfaces in your VPC. Flow log data can be published to
Amazon CloudWatch Logs or Amazon S3. After you create a flow log, you can retrieve and
view its data in the chosen destination.

What Is Amazon CloudWatch?


Amazon CloudWatch is the component of Amazon Web Services that provides real-time
monitoring of AWS resources and customer applications running on Amazon infrastructure.

● Enables robust monitoring of resources like :

1. Virtual instances hosted in Amazon EC2


2. Databases located in Amazon RDS
3. Data stored in Amazon S3
4. Elastic Load Balancer
5. Auto-Scaling Groups
6. Other resources

● Monitors, stores and provides access to system and application log files
● Provides a catalog of standard reports that you can use to analyze trends and
monitor system performance
● Provides various alerting capabilities, including rules and triggers, high-resolution
alarms, and notifications
● Collects and provides a real-time presentation of operational data in the form of key
metrics like CPU utilization, disk storage, etc.

Metrics

○ Metrics represents a time-ordered set of data points that are published to


CloudWatch
○ You can relate a metric to a variable that is being monitored, and the data points to the
value of that variable over time
○ Metrics are uniquely defined by a name, a namespace, and zero or more dimensions
○ Each data point has a time-stamp.

Dimensions

○ A dimension is a name/value pair that uniquely identifies a metric


○ Dimensions can be considered as categories of characteristics that describe a metric
○ Because dimensions are unique identifiers for a metric, whenever you add a unique
name/value pair to one of your metrics, you are creating a new variation of that
metric.
Statistics

○ Statistics are metric data aggregations over specified periods of time


○ Aggregations are made using the namespace, metric name, dimensions within the
time period you specify
○ Few available statistics are maximum, minimum, sum, average and sample count.

Alarm

○ An alarm can be used to automatically initiate actions on your behalf


○ It watches a single metric over a specified time period and performs one or more
specified actions
○ The action is simply a notification that is sent to an Amazon SNS topic.
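To make the metric, dimension, statistic and alarm terms concrete, here is a boto3 sketch that creates an alarm on the CPUUtilization metric of one EC2 instance; the instance ID and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='high-cpu-app-server',
    Namespace='AWS/EC2',                      # metric namespace
    MetricName='CPUUtilization',              # metric name
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    Statistic='Average',                      # statistic aggregated over each period
    Period=300,                               # 5-minute periods
    EvaluationPeriods=2,                      # breach must last 2 consecutive periods
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'])  # notify an SNS topic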

What is IAM?
● IAM stands for Identity Access Management.
● IAM allows you to manage users and their level of access to the aws console.
● It is used to set users, permissions and roles. It allows you to grant access to the
different parts of the aws platform.
● AWS Identity and Access Management is a web service that enables Amazon Web
Services (AWS) customers to manage users and user permissions in AWS.
● With IAM, Organizations can centrally manage users, security credentials such as
access keys, and permissions that control which AWS resources users can access.

Features of IAM
● Centralised control of your AWS account: You can control the creation, rotation, and
revocation of each user's security credentials. You can also control what data in the
AWS system users can access and how they can access it.
● Shared Access to your AWS account: Users can share the resources for the
collaborative projects.
● Granular permissions: It is used to set a permission that user can use a particular
service but not other services.
● Identity Federation: Identity Federation means that we can use Facebook, Active
Directory, LinkedIn, etc. with IAM. Users can log in to the AWS Console with the same
username and password as they use with Active Directory, Facebook, etc.
● Multifactor Authentication: AWS provides multi-factor authentication, as we need to
enter the username, password, and security check code to log in to the AWS
Management Console.
● Permissions based on Organizational groups: Users can be restricted to the AWS
access based on their job duties, for example, admin, developer, etc.
● Networking controls: IAM can also ensure that users access AWS resources
only from within the organization's corporate network.
● Provide temporary access for users/devices and services where necessary: If you
are using a mobile app that stores data in your AWS account, you should do this only
with temporary access credentials.
● Integrates with many different aws services: IAM is integrated with many different
aws services.
● Supports PCI DSS Compliance: PCI DSS (Payment Card Industry Data Security
Standard) is a compliance framework. If you are handling credit card information, then
you need to comply with this framework.
● Eventually Consistent: The IAM service is eventually consistent, as it achieves high
availability by replicating the data across multiple servers within Amazon's data
centers around the world.
● Free to use: AWS IAM is a feature of your AWS account offered at no additional
charge. You are charged only when you access other AWS services using your IAM
users.

Creating a Role for a service using the AWS Management Console.


● In the navigation pane of the console, click Roles and then click on "Create Role".
The screen shown below appears on clicking the Create Role button.
● Choose the service that you want to use with the role.
● Select the managed policy that attaches the permissions to the service.
● In a role name box, enter the role name that describes the role of the service, and
then click on "Create role".

What is S3?
● S3 is a safe place to store the files.
● It is Object-based storage, i.e., you can store the images, word files, pdf files, etc.
● The files which are stored in S3 can be from 0 Bytes to 5 TB.
● It has unlimited storage, meaning that you can store as much data as you want.
● Files are stored in buckets. A bucket is like a folder available in S3 that stores the
files.
● S3 uses a universal namespace, i.e., bucket names must be unique globally. A bucket
has a DNS address, so the bucket must have a unique name to generate a unique
DNS address.

If you create a bucket, the URL looks like https://bucket-name.s3.amazonaws.com (the exact form depends on the region).

● If you upload a file to an S3 bucket, you will receive an HTTP 200 code, which means
that the upload of the file was successful.

S3 is a simple key-value store


S3 is object-based. Objects consist of the following:
● Key: It is simply the name of the object. For example, hello.txt, spreadsheet.xlsx, etc.
You can use the key to retrieve the object.
● Value: It is simply the data, which is made up of a sequence of bytes. It is the actual
data inside the file.
● Version ID: Version ID uniquely identifies the object. It is a string generated by S3
when you add an object to the S3 bucket.
● Metadata: It is the data about data that you are storing. A set of a name-value pair
with which you can store the information regarding an object. Metadata can be
assigned to the objects in Amazon S3 bucket.
● Subresources: Subresource mechanism is used to store object-specific information.
● Access control information: You can put the permissions individually on your files.

Creating an S3 Bucket
● Sign in to the AWS Management Console. After signing in, move to the S3 service.
● To create an S3 bucket, click on "Create bucket".
● Enter the bucket name, which should look like a DNS address and should be
resolvable. A bucket is like a folder that stores the objects. A bucket name should be
unique, should start with a lowercase letter, must not contain any invalid characters,
and should be 3 to 63 characters long.

● Click on the "Create" button.


S3 contains four types of storage classes:
● S3 Standard
● S3 Standard IA
● S3 one zone-infrequent access
● S3 Glacier

S3 Standard
● Standard storage class stores the data redundantly across multiple devices in
multiple facilities.
● It is designed to sustain the loss of 2 facilities concurrently.
● Standard is the default storage class if no storage class is specified during upload.
● It provides low latency and high throughput performance.
● It is designed for 99.99% availability and 99.999999999% durability.

S3 Standard IA
● IA stands for infrequently accessed.
● Standard IA storage class is used when data is accessed less frequently but requires
rapid access when needed.
● It has a lower storage fee than S3 Standard, but you will be charged a retrieval fee.
● It is designed to sustain the loss of 2 facilities concurrently.
● It is mainly used for larger objects greater than 128 KB kept for at least 30 days.
● It provides low latency and high throughput performance.
● It is designed for 99.9% availability and 99.999999999% durability.

S3 one zone-infrequent access


● S3 one zone-infrequent access storage class is used when data is accessed less
frequently but requires rapid access when needed.
● It stores the data in a single availability zone while other storage classes store the
data in a minimum of three availability zones. Due to this reason, its cost is 20% less
than Standard IA storage class.
● It is an optimal choice for the less frequently accessed data but does not require the
availability of Standard or Standard IA storage class.
● It is a good choice for storing the backup data.
● It is cost-effective storage for data that is replicated from another AWS Region using
S3 Cross-Region Replication.
● It has the same durability, high performance, and low latency, with a lower storage
price and a low retrieval fee.
● It is designed for 99.5% availability and 99.999999999% durability of objects in a single
availability zone.
● It provides lifecycle management for the automatic migration of objects to other S3
storage classes.
● The data can be lost at the time of the destruction of an availability zone as it stores
the data in a single availability zone.

S3 Glacier
● S3 Glacier storage class is the cheapest storage class, but it can be used for archive
only.
● You can store any amount of data at a lower cost than other storage classes.
● S3 Glacier provides three types of models:

○ Expedited: In this model, data is retrieved within a few minutes, and it has a
higher fee.
○ Standard: The retrieval time of the standard model is 3 to 5 hours.
○ Bulk: The retrieval time of the bulk model is 5 to 12 hours.

● You can upload the objects directly to the S3 Glacier.


● It is designed for 99.999999999% durability of objects across multiple availability
zones.
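The storage class is chosen per object at upload time. A small sketch (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# Frequently accessed data: default STANDARD class
s3.put_object(Bucket='my-example-notes-bucket-2024', Key='report.pdf', Body=b'...')

# Infrequently accessed data: STANDARD_IA (lower storage cost, retrieval fee applies)
s3.put_object(Bucket='my-example-notes-bucket-2024', Key='archive/report-2020.pdf',
              Body=b'...', StorageClass='STANDARD_IA')

# Archive-only data: GLACIER
s3.put_object(Bucket='my-example-notes-bucket-2024', Key='archive/logs-2015.tar.gz',
              Body=b'...', StorageClass='GLACIER')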

Performance across the Storage classes

                                    S3 Standard     S3 Standard-IA    S3 One Zone-IA    S3 Glacier
Designed for durability             99.999999999%   99.999999999%     99.999999999%     99.999999999%
Designed for availability           99.99%          99.9%             99.5%             N/A
Availability SLA                    99.9%           99%               99%               N/A
Availability zones                  >=3             >=3               1                 >=3
Minimum capacity charge per object  N/A             128KB             128KB             40KB
Minimum storage duration charge     N/A             30 days           30 days           90 days
Retrieval fee                       N/A             per GB retrieved  per GB retrieved  per GB retrieved
First byte latency                  milliseconds    milliseconds      milliseconds      select minutes or hours
Storage type                        Object          Object            Object            Object
Lifecycle transitions               Yes             Yes               Yes               Yes

Versioning
Versioning is a means of keeping the multiple forms of an object in the same S3
bucket. Versioning can be used to retrieve, preserve and restore every version of an
object in S3 bucket.
For example, a bucket may contain two objects with the same key but with different
version IDs, such as photo.jpg (version ID 11) and photo.jpg (version ID 12).
Versioning-enabled buckets allow you to recover objects from accidental deletion or
overwrite. It serves two purposes:
● If you delete an object, instead of deleting the object permanently, S3 inserts a
delete marker which becomes the current version of the object.
● If you overwrite an object, S3 creates a new version of the object and also
preserves the previous version of the object.
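Versioning is enabled per bucket. A minimal sketch, using the placeholder bucket name from earlier:

import boto3

s3 = boto3.client('s3')

# Turn versioning on for the bucket
s3.put_bucket_versioning(Bucket='my-example-notes-bucket-2024',
                         VersioningConfiguration={'Status': 'Enabled'})

# Each subsequent overwrite of the same key now creates a new version
resp = s3.put_object(Bucket='my-example-notes-bucket-2024', Key='photo.jpg', Body=b'v2')
print(resp['VersionId'])   # version ID generated by S3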

Cross Region Replication


● Cross Region Replication is a feature that replicates the data from one bucket to
another bucket which could be in a different region.
● It provides asynchronous copying of objects across buckets. Suppose X is a source
bucket and Y is a destination bucket. If X wants to copy its objects to Y bucket, then
the objects are not copied immediately.

Some points to be remembered for Cross Region Replication


● Create two buckets: Create two buckets within the AWS Management Console, where
one bucket is the source bucket and the other is the destination bucket.
● Enable versioning: Cross Region Replication can be implemented only when the
versioning of both the buckets is enabled.
● Amazon S3 encrypts the data in transit across AWS regions using SSL: this also
provides security when data traverses different regions.
● Already uploaded objects will not be replicated: If any kind of data already exists
in the bucket, then that data will not be replicated when you perform the cross region
replication.
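Once versioning is enabled on both buckets, replication is configured on the source bucket. A sketch with placeholder bucket names and an assumed pre-created IAM replication role:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_replication(
    Bucket='source-bucket-name',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-replication-role',  # assumed to exist
        'Rules': [{
            'ID': 'replicate-everything',
            'Status': 'Enabled',
            'Priority': 1,
            'Filter': {'Prefix': ''},                       # replicate all new objects
            'DeleteMarkerReplication': {'Status': 'Disabled'},
            'Destination': {'Bucket': 'arn:aws:s3:::destination-bucket-name'},
        }],
    })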

Use cases of Cross Region Replication


● Compliance Requirements
By default, Amazon S3 stores the data across multiple availability zones to keep the
data available. Sometimes there are compliance requirements that require you to
store a copy of the data in a specific region. Cross-Region Replication allows you to
replicate the data to that specific region to satisfy the requirements.
● Minimize Latency
Suppose your customers are in two geographical regions. To minimize latency, you
need to maintain the copies of data in AWS region that are geographically closer to
your users.
● Maintain object copies under different ownership: Regardless of who owns the
source bucket, you can tell Amazon S3 to change the ownership of the replicas to the
AWS account that owns the destination bucket. This is referred to as the owner override option.

S3 Transfer Acceleration
● S3 Transfer Acceleration utilizes the CloudFront Edge Network to accelerate uploads
to S3.
● Instead of uploading the file directly to the S3 bucket, you get a distinct URL that
uploads the data to the nearest edge location, which in turn transfers the file to the S3
bucket. The distinct URL would look like:
acloudguru.s3-accelerate.amazonaws.com
where acloudguru is the bucket name.

Suppose we have an S3 bucket hosted in one region and users all around the world. If
users try to upload files to the S3 bucket directly, it is done over their internet connection.
With Transfer Acceleration, users use the distinct URL we saw earlier to upload the file to
their nearest edge location, and the edge location then sends the file on to the S3 bucket.
Therefore, we can say that Amazon optimizes the process by using the Transfer
Acceleration service.
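Transfer Acceleration is a per-bucket setting, and clients then upload through the accelerate endpoint. A sketch using the bucket name from the example above:

import boto3
from botocore.config import Config

s3 = boto3.client('s3')

# Enable Transfer Acceleration on the bucket
s3.put_bucket_accelerate_configuration(Bucket='acloudguru',
                                       AccelerateConfiguration={'Status': 'Enabled'})

# Upload through the accelerate endpoint (bucketname.s3-accelerate.amazonaws.com)
s3_accel = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))
s3_accel.put_object(Bucket='acloudguru', Key='big-upload.zip', Body=b'...')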

What is Load Balancer?

A Load Balancer is a virtual machine or appliance that balances your web application load,
which could be HTTP or HTTPS traffic that you are receiving. It balances the load across
multiple web servers so that no single web server gets overwhelmed.


Application Load Balancer

● Amazon Web Services (AWS) launched a new load balancer known as the
Application Load Balancer (ALB) on August 11, 2016.
● It is used to direct user traffic to the public AWS cloud.
● It identifies the incoming traffic and forwards it to the right resources. For example, if
a URL has an /API extension, then it is routed to the appropriate application resources.
● It operates at Layer 7 of the OSI model.
● It is best suited for load balancing of HTTP and HTTPs traffic.
● Application load balancers are intelligent, sending specific requests to specific web
servers.
● If we take the example of TESLA: there are three models of TESLA, i.e., the TESLA
Model X, TESLA Model S, and TESLA Model 3, and TESLAs have an onboard
computing facility. You would have a group of web servers that serve the Model X, a
group of web servers that serve the Model S, and similarly for the Model 3. We have
one load balancer that checks whether the incoming traffic comes from a Model X,
Model S or Model 3, and then sends it to the intended group of servers.

Network Load Balancer

● It operates at Layer 4 of the OSI model.


● It makes routing decisions at the transport layer (TCP/SSL), and it can handle
millions of requests per second.
● When a load balancer receives a connection, it then selects a target from the target
group by using a flow hash routing algorithm. It opens the TCP connection to the
selected target of the port and forwards the request without modifying the headers.
● It is best suited for load balancing the TCP traffic when high performance is required.

Amazon Route 53

● Amazon Route 53 is a highly available and scalable cloud Domain Name System
(DNS) web service. It is designed to give developers and businesses an extremely
reliable and cost effective way to route end users to Internet applications by
translating names like www.example.com into the numeric IP addresses like
192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully
compliant with IPv6 as well.
● Amazon Route 53 effectively connects user requests to infrastructure running in
AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or
Amazon S3 buckets – and can also be used to route users to infrastructure outside of
AWS. You can use Amazon Route 53 to configure DNS health checks, then
continuously monitor your applications’ ability to recover from failures and control
application recovery with Route 53 Application Recovery Controller.
● Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally

through a variety of routing types, including Latency Based Routing, Geo

DNS, Geoproximity, and Weighted Round Robin—all of which can be

combined with DNS Failover in order to enable a variety of low-latency, fault-

tolerant architectures. Using Amazon Route 53 Traffic Flow’s simple visual

editor, you can easily manage how your end-users are routed to your

application’s endpoints—whether in a single AWS region or distributed around

the globe. Amazon Route 53 also offers Domain Name Registration – you can

purchase and manage domain names such as example.com and Amazon

Route 53 will automatically configure DNS settings for your domains.

● Benefits
❖ Highly available and reliable
❖ Flexible
❖ Designed for use with other Amazon Web Services
❖ Simple
❖ Fast
❖ Cost-effective
❖ Secure
❖ Scalable
❖ Simplify the hybrid cloud

What is the work of Route 53?

➢ DNS management

➢ Traffic Management

➢ Availability Monitoring

➢ Domain Registration
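DNS management boils down to creating and updating record sets in a hosted zone. A minimal sketch (the hosted zone ID, domain name, and IP address are placeholders):

import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z0123456789EXAMPLE',        # placeholder hosted zone ID
    ChangeBatch={
        'Comment': 'Point www at the web server',
        'Changes': [{
            'Action': 'UPSERT',               # create the record or update it if it exists
            'ResourceRecordSet': {
                'Name': 'www.example.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '192.0.2.1'}],
            },
        }],
    })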

What is SNS?

● SNS stands for Simple Notification Service.


● SNS is a push based service.
● It is a web service which makes it easy to set up, operate, and send a notification
from the cloud.
● It provides developers with the highly scalable, cost-effective, and flexible capability
to publish messages from an application and sends them to other applications.
● It is a way of sending messages. For example, when you are using Auto Scaling, it can
trigger an SNS notification that emails you that "your EC2 fleet is growing".
● SNS can also send the messages to devices by sending push notifications to Apple,
Google, Fire OS, and Windows devices, as well as Android devices in China with
Baidu Cloud Push.
● Besides sending the push notifications to the mobile devices, Amazon SNS sends
the notifications through SMS or email to an Amazon Simple Queue Service (SQS),
or to an HTTP endpoint.
● SNS notifications can also trigger the Lambda function. When a message is
published to an SNS topic that has a Lambda function associated with it, Lambda
function is invoked with the payload of the message. Therefore, we can say that the
Lambda function is invoked with a message payload as an input parameter and
manipulate the information in the message and then sends the message to other
SNS topics or other AWS services.
● Amazon SNS allows you to group multiple recipients using topics, where a topic
is a logical access point that sends identical copies of the same message
to the subscribed recipients.
● Amazon SNS supports multiple endpoint types. For example, you can group together
iOS, Android and SMS recipients. Once you publish a message to the topic, SNS
delivers formatted copies of your message to the subscribers.
● To prevent the loss of data, all messages published to SNS are stored redundantly
across multiple availability zones.

Amazon SNS is a web service that manages sending messages to the subscribing endpoint.

There are two clients of SNS:

● Publishers
Publishers are also known as producers that produce and send the message to the SNS

which is a logical access point.

● Subscribers
Subscribers such as web servers, email addresses, Amazon SQS queues, AWS Lambda

functions receive the message or notification from the SNS over one of the supported

protocols (Amazon SQS, email, Lambda, HTTP, SMS).
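A small boto3 sketch of the publisher/subscriber flow: create a topic, subscribe an email endpoint, and publish a message (the topic name and email address are placeholders; an email subscription must be confirmed before messages are delivered).

import boto3

sns = boto3.client('sns')

# Create a topic (the logical access point)
topic_arn = sns.create_topic(Name='ec2-scaling-alerts')['TopicArn']

# Subscribe an endpoint; SNS also supports SMS, SQS, Lambda, HTTP, etc.
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='ops@example.com')

# Publish a message; SNS pushes a copy to every subscriber of the topic
sns.publish(TopicArn=topic_arn,
            Subject='Auto Scaling notification',
            Message='Your EC2 fleet is growing.')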

What is SQS?

● SQS stands for Simple Queue Service.


● SQS is a Pull based service.
● SQS was the first service available in AWS.
● Amazon SQS is a web service that gives you access to a message queue that can
be used to store messages while waiting for a computer to process them.
● Amazon SQS is a distributed queue system that enables web service applications to
quickly and reliably queue messages that one component in the application
generates to be consumed by another component where a queue is a temporary
repository for messages that are awaiting processing.
● With the help of SQS, you can send, store and receive messages between software
components at any volume without losing messages.
● Using Amazon SQS, you can decouple the components of an application so that they
can run independently, easing message management between components.
● Any component of a distributed application can store the messages in the queue.
● Messages can contain up to 256 KB of text in any format such as JSON, XML, etc.
● Any component of an application can later retrieve the messages programmatically
using the Amazon SQS API.
● The queue acts as a buffer between the component producing and saving data and
the component receiving the data for processing. This means that the queue resolves
issues that arise if the producer is producing work faster than the consumer can
process it, or if the producer or consumer is only intermittently connected to the
network.
● Suppose you have two EC2 instances which are polling the SQS queue. You can
configure an Auto Scaling group to scale if the number of messages goes over a
certain limit. Suppose the number of messages exceeds 10; then you can add an
additional EC2 instance to process the jobs faster. In this way, SQS provides
elasticity.

Let's look at an example of SQS, i.e., Travel Website.


Suppose a user wants to look for a package holiday and the best possible flight. The user
types a query in a browser, which then hits an EC2 instance. The EC2 instance looks at
what the user is looking for and puts a message in an SQS queue. Another EC2 instance
continuously polls the queue, looking for jobs to do. Once it gets the job, it processes it: it
interrogates the airline service to get all the best possible flights. It sends the result to the
web server, and the web server sends the result back to the user. The user then selects
the best flight according to his or her budget.

There are two types of Queue:

● Standard Queues (default)


● SQS offers a standard queue as the default queue type.
● It allows you to have an unlimited number of transactions per second.
● It guarantees that a message is delivered at least once. However, sometimes more
than one copy of a message might be delivered, or messages may arrive out of order.
● It provides best-effort ordering, which ensures that messages are generally delivered
in the same order as they are sent, but it does not provide a guarantee.
● FIFO Queues (First-In-First-Out)
● The FIFO Queue complements the standard Queue.
● It guarantees ordering, i.e., the order in which they are sent is also received in the
same order.
● The most important features of a FIFO queue are ordering and exactly-once
processing, i.e., a message is delivered once and remains available until a consumer
processes and deletes it.
● FIFO Queue does not allow duplicates to be introduced into the Queue.
● It also supports message groups that allow multiple ordered message groups within a
single Queue.
● FIFO Queues are limited to 300 transactions per second but have all the capabilities
of standard queues.

SQS Visibility Timeout


● The visibility timeout is the amount of time that the message is invisible in the SQS
Queue after a reader picks up that message.
● If the provided job is processed before the visibility time out expires, the message will
then be deleted from the Queue. If the job is not processed within that time, the
message will become visible again and another reader will process it. This could
result in the same message being delivered twice.
● The Default Visibility Timeout is 30 seconds.
● Visibility Timeout can be increased if your task takes more than 30 seconds.
● The maximum Visibility Timeout is 12 hours.
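A consumer-side sketch showing the visibility timeout in practice (the queue URL is a placeholder and process() stands in for your own processing logic): the message is hidden from other readers while it is being processed and is deleted only after processing succeeds.

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue'  # placeholder

resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=1,
                           WaitTimeSeconds=10,       # long polling
                           VisibilityTimeout=60)     # hide the message for 60 seconds

for msg in resp.get('Messages', []):
    process(msg['Body'])                             # placeholder for your own processing
    # Delete before the visibility timeout expires, or the message becomes visible again
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])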

Important points to remember:

● SQS is pull-based, not push-based.


● Messages can be up to 256 KB in size.
● Messages are kept in a queue from 1 minute to 14 days.
● The default retention period is 4 days.
● It guarantees that your messages will be processed at least once.

VPC Endpoint
● A VPC endpoint allows you to privately connect your VPC to supported AWS
services and VPC endpoint services powered by PrivateLink without requiring an
internet gateway, NAT device, VPN Connection, or AWS Direct Connect connection.
● Instances in your VPC do not require public addresses to communicate with the
resources in the service. Traffic between your VPC and the other service does not
leave the Amazon network.
● VPC endpoints are virtual devices.
● VPC Endpoints are horizontally scaled, redundant and highly available VPC
components that allow communication between instances in your VPC and services
without imposing availability risks or bandwidth constraints on your network traffic.

Types of VPC Endpoints

● Interface Endpoints
● Gateway Endpoints

Interface Endpoints

● Interface Endpoint is an Elastic Network Interface with a private IP address which will
act as an entry point for the traffic destined to a particular service.
● An interface endpoint supports services such as Amazon CloudWatch, Amazon SNS,
etc.
Gateway Endpoints

● A Gateway Endpoint is a gateway that you specify as the target for a specific route in
your route table.
● It can be used to route the traffic to a destined service.
● Amazon S3 and DynamoDB are the only services which are supported by Gateway
Endpoints.
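A gateway endpoint for S3 is created against the VPC and wired into one or more route tables. A sketch with placeholder VPC and route table IDs:

import boto3

ec2 = boto3.client('ec2')

ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0aaa1111bbb22222c',                         # placeholder VPC ID
    ServiceName='com.amazonaws.us-east-1.s3',              # S3 in the VPC's region
    RouteTableIds=['rtb-0ccc3333ddd44444e'])               # route tables to update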

Let's look at the architecture of VPC without VPC Endpoints.

Let's look at the architecture of VPC that includes VPC Endpoint.


What is a VPC FlowLog?
● VPC FlowLog is a feature of aws that captures the information about the IP traffic
going to or from the network interfaces in a VPC.
● Flow log data can be stored either by using Amazon CloudWatch Logs or an
Amazon S3 bucket.
● After you have created a FlowLog, you can view and retrieve the data from the
Amazon CloudWatch Logs.
● In short, we can say that VPC FlowLog is a way of storing the traffic going in a VPC.
● FlowLogs serve a number of purposes:

○ Troubleshoot the problem "why specific traffic is not reaching an instance".


○ VPC FlowLog can also be used as a security tool to monitor the traffic which
is reaching your instance.

Limitations of VPC FlowLog:


● You cannot enable flow logs for VPCs that are peered with your VPC unless the peer
VPC is in the same account.
● You cannot tag a flow log while creating it.
● Once you have created a flow log, you cannot change its configuration. For
example, if you associate an IAM role with the flow log, then you cannot change that
IAM role. In such cases, you need to delete the flow log and create a new flow log with
the desired configuration.
VPC FlowLogs can be created at three levels:
● VPC
● Subnet
● Network Interface Level
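Flow logs are created against a VPC, a subnet, or a network interface. A sketch that publishes VPC-level flow logs to an S3 bucket (the VPC ID and bucket ARN are placeholders):

import boto3

ec2 = boto3.client('ec2')

ec2.create_flow_logs(
    ResourceType='VPC',                                     # or 'Subnet' / 'NetworkInterface'
    ResourceIds=['vpc-0aaa1111bbb22222c'],                  # placeholder VPC ID
    TrafficType='ALL',                                      # ACCEPT, REJECT, or ALL
    LogDestinationType='s3',
    LogDestination='arn:aws:s3:::my-flow-log-bucket')       # placeholder bucket ARN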

What is AWS Auto Scaling?


AWS Auto Scaling enables you to configure automatic scaling for the scalable resources that
are part of your application in a matter of minutes. The AWS Auto Scaling console provides
a single user interface to use the auto scaling features of multiple services in the AWS
Cloud. You can configure automatic scaling for individual resources or for whole
applications.
With AWS Auto Scaling, you configure and manage scaling for your resources through a
scaling plan. The scaling plan uses dynamic scaling and predictive scaling to automatically
scale your application's resources. This ensures that you add the required computing power
to handle the load on your application and then remove it when it's no longer required. The
scaling plan lets you choose scaling strategies to define how to optimize your resource
utilization. You can optimize for availability, for cost, or a balance of both. Alternatively, you
can create custom scaling strategies.

AWS Auto Scaling is useful for applications that experience daily or weekly variations in
traffic flow, including the following:

● Cyclical traffic such as high use of resources during regular business hours and low
use of resources overnight
● On and off workload patterns, such as batch processing, testing, or periodic analysis
● Variable traffic patterns, such as marketing campaigns with periods of spiky growth
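As one concrete flavour of this, the sketch below attaches a target-tracking scaling policy to an existing EC2 Auto Scaling group (the group and policy names are placeholders, and this uses the EC2 Auto Scaling API rather than an AWS Auto Scaling scaling plan); the group then adds or removes instances to keep average CPU near the target.

import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-tier-asg',                  # assumed existing group
    PolicyName='keep-cpu-near-50',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {'PredefinedMetricType': 'ASGAverageCPUUtilization'},
        'TargetValue': 50.0,                              # scale to keep average CPU at ~50%
    })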

AWS CloudTrail

It is an AWS service that helps you enable governance, compliance, and operational and
risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are
recorded as events in CloudTrail. Events include actions taken in the AWS Management
Console, AWS Command Line Interface, and AWS SDKs and APIs.

CloudTrail is enabled on your AWS account when you create it. When activity occurs in your
AWS account, that activity is recorded in a CloudTrail event. You can easily view recent
events in the CloudTrail console by going to Event history.
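Recent management events can also be queried programmatically. A small sketch that lists the latest console sign-in events (the attribute values are illustrative):

import boto3

cloudtrail = boto3.client('cloudtrail')

resp = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'ConsoleLogin'}],
    MaxResults=10)

for event in resp['Events']:
    print(event['EventTime'], event['EventName'], event.get('Username'))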

AWS CloudFormation: Concepts, Templates, EC2 Use Case & More:


AWS CloudFormation provides you with a simple way to create and manage a
collection of AWS resources by provisioning and updating them in an orderly and
predictable way. In simple terms, it allows you to create and model your
infrastructure and applications without having to perform actions manually.

AWS CloudFormation enables you to manage your complete infrastructure or AWS


resources in a text file, or template. A collection of AWS resources is called a stack.
AWS resources can be created or updated by using a stack.

All the resources you require in an application can be deployed easily using
templates. Also, you can reuse your templates to replicate your infrastructure in
multiple environments. To make templates reusable, use the parameters, mappings
and conditions sections in the template so that you can customize your stacks when
you create them.

● Create a new template or use an existing CloudFormation template using


the JSON or YAML format.
● Save your code template locally or in an S3 bucket.
● Use AWS CloudFormation to build a stack on your template.
● AWS CloudFormation constructs and configures the stack resources that
you have specified in your template.

AWS CloudFormation Concepts


An AWS CloudFormation template is a formatted text file in JSON or YAML language that
describes your AWS infrastructure. To create, view and modify templates, you can use AWS
CloudFormation Designer or any text editor tool. An AWS CloudFormation template consists
of nine main objects:
1. Format version: Format version defines the capability of a template.
2. Description: Any comments about your template can be specified in
the description.
3. Metadata: Metadata can be used in the template to provide further
information using JSON or YAML objects.
4. Parameters: Templates can be customized using parameters. Each
time you create or update your stack, parameters help you give your
template custom values at runtime.
5. Mappings: Mapping enables you to map keys to a corresponding
named value that you specify in a conditional parameter. Also, you can
retrieve values in a map by using the “Fn:: FindInMap” intrinsic function.
6. Conditions: In a template, conditions define whether certain resources
are created or when resource properties are assigned to a value during
stack creation or updating. Conditions can be used when you want to
reuse the templates by creating resources in different contexts. You
can use intrinsic functions to define conditions.
7. Resources: Using this section, you can declare the AWS resource that
you want to create and specify in the stack, such as an Amazon S3
bucket or AWS Lambda.
8. Output: In a template, the output section describes the output values
that you can import into other stacks or the values that are returned
when you view your own stack properties. For example, for an S3
bucket name, you can declare an output and use the "describe-stacks"
command from the AWS CloudFormation service to make the
bucket name easier to find.
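A tiny end-to-end sketch: an inline YAML template with a parameter, a resource, and an output, deployed as a stack with boto3 (the stack and bucket names are placeholders).

import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Parameters:
  BucketName:
    Type: String
Resources:
  NotesBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
Outputs:
  BucketArn:
    Value: !GetAtt NotesBucket.Arn
"""

cf = boto3.client('cloudformation')
cf.create_stack(StackName='notes-example-stack',
                TemplateBody=template,
                Parameters=[{'ParameterKey': 'BucketName',
                             'ParameterValue': 'my-example-notes-bucket-2024'}])

# Later, inspect the stack status (and its outputs) with describe-stacks
print(cf.describe_stacks(StackName='notes-example-stack')['Stacks'][0]['StackStatus'])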
