Linux Reference Document

Installation and Initialization

Introduction:
Linux is a family of open-source, Unix-like operating systems based on the Linux
kernel. It was initially released by Linus Torvalds on September 17, 1991. It is a free and
open-source operating system, and the source code can be modified and distributed by
anyone, commercially or non-commercially, under the GNU General Public License.
Initially, Linux was created for personal computers, and gradually it came to be used on other
machines such as servers, mainframe computers, supercomputers, etc. Nowadays, Linux is also
used in embedded systems like routers, automation controls, televisions, digital video
recorders, video game consoles, smartwatches, etc. The biggest success of Linux is Android,
an operating system based on the Linux kernel that runs on smartphones and tablets.
Thanks to Android, Linux has the largest installed base of all general-purpose operating
systems. Linux is generally packaged in a Linux distribution.

Installation Of Linux OS:

Features of RHEL:
• XFS File system supports copy-on-write of file extents
• Introduction of Stratis filesystem, Buildah, Podman, and Skopeo
• Yum utility is based on DNF
• Chrony replaces NTP.
• Cockpit is the default Web Console tool for Server management.
• OpenSSL 1.1.1 & TLS 1.3 support
• PHP 7.2
• iptables replaced by nftables

Minimum System Requirements for RHEL 8:


• 4 GB RAM
• 20 GB unallocated disk space
• 64-bit x86 or ARM System
Note: RHEL 8 supports the following architectures:
• AMD or Intel x86 64-bit
• 64-bit ARM
• IBM Power Systems (Little Endian) & IBM Z

Step:1) Download RHEL 8.0 ISO file


Download the RHEL 8 ISO file from the official website:
https://access.redhat.com/downloads/

Step:2) Create Installation bootable media (USB or DVD)


Once you have downloaded the RHEL 8 ISO file, make it bootable by burning it to either a USB
drive or a DVD. Reboot the target system on which you want to install RHEL 8, go into its
BIOS settings, and set the boot medium to USB or DVD.

Step:3) Choose “Install Red Hat Enterprise Linux 8.0” option


When the system boots from the installation media (USB or DVD), we will get the following
screen. Choose “Install Red Hat Enterprise Linux 8.0” and press Enter.
Step:4) Choose your preferred language for RHEL 8 installation
In this step, you need to choose the language that you want to use for the RHEL 8 installation,
so select the one that suits your setup.
Step:5) Preparing RHEL 8 Installation
In this step we will decide the installation destination for RHEL 8; apart from this, we can
configure the following:
• Time Zone
• Kdump (enabled/disabled)
• Software Selection (Packages)
• Networking and Hostname
• Security Policies & System purpose
By default, the installer automatically picks a time zone and enables kdump. If you wish to
change the time zone, click on the “Time & Date” option, set your preferred time zone,
and then click Done.

To configure the IP address and hostname, click on the “Network & Hostname” option on the
installation summary screen.
If your system is connected to a switch or modem, it will try to get an IP address from a DHCP
server; otherwise we can configure the IP manually.
Enter the hostname that you want to set and then click “Apply”. Once you are done
with the IP address and hostname configuration, click “Done”.
To define the installation disk and partition scheme for RHEL 8, click on the “Installation
Destination” option.

Click on Done
As we can see, I have around 60 GB of free disk space on the sda drive; I will be creating the
following custom LVM-based partitions on this disk:
• /boot = 2GB (xfs file system)
• / = 20 GB (xfs file system)
• /var = 10 GB (xfs file system)
• /home = 15 GB (xfs file system)
• /tmp = 5 GB (xfs file system)
• Swap = 2 GB (swap)

Note: If you don’t want to create manual partitions, select the “Automatic” option from the
Storage Configuration tab.
Let’s create our first partition, /boot, of size 2 GB. Select LVM as the mount point partitioning
scheme and then click the + (plus) symbol.

Click on “Add mount point”


To create the next partition, / of size 20 GB, click the + symbol and specify the details as shown
below.

Click on add mount point


As we can see, the installer has created the volume group “rhel_rhel8“. If you want to change
this name, click on the Modify option, specify the desired name, and then click Save.

From now on, all partitions will be part of the volume group (renamed here to “VolGrp”).


Similarly, create the next three partitions /home, /var and /tmp of size 15 GB, 10 GB and 5 GB
respectively.
/home partition:
/var partition

/tmp partition:
Now, finally, create the last partition, swap, of size 2 GB.
Click on “Add mount point”
Once you are done creating partitions, click Done on the next screen; an example is shown
below.

In the next window, choose “Accept Changes”


Step:6) Select Software Packages and Choose Security Policy and System purpose
After accepting the changes in the above step, we will be redirected to the installation summary
window.
By default, the installer selects “Server with GUI” as the software package set. If you want to
change it, click on the “Software Selection” option and choose your preferred “Basic
Environment”.

Click on Done
If you want to set security policies during the installation, then choose the required
profile from the Security Policies option; otherwise you can leave it as it is.
From the “System Purpose” option, specify the Role, Red Hat Service Level Agreement, and
Usage, though you can leave this option as it is.
Click on Done to proceed further.
Step:7) Choose “Begin Installation” option to start installation
From the Installation summary window click on “Begin Installation” option to start the
installation,

As we can see below, the RHEL 8 installation has started and is in progress.


Set the root password,

Specify the local user details, such as the full name, username, and password.

Once the installation is completed, the installer will prompt us to reboot the system.
Click “Reboot” to restart your system, and don’t forget to change the boot medium in the BIOS
settings so that the system boots from the hard disk.
Step:8) Initial Setup after installation
When the system is rebooted for the first time after a successful installation, we will get the
window below, where we need to accept the license agreement (EULA).

Click on Done,
In the next Screen click on “Finish Configuration”
Step:9) Login screen of RHEL 8 server after installation
Since we installed the RHEL 8 server with a GUI, we will get the login screen below; use the
same username and password that we created during the installation.

After logging in, we will get a couple of welcome screens; follow the on-screen instructions,
and finally we will get the following screen.
Click on “Start Using Red Hat Enterprise Linux”

This confirms that we have successfully installed RHEL 8


Architecture of Linux

Linux architecture has the following components:

1. Kernel: The kernel is the core of a Linux-based operating system. It virtualizes the common
hardware resources of the computer to provide each process with its own virtual resources. This
makes each process seem as if it is the sole process running on the machine. The kernel is also
responsible for preventing and mitigating conflicts between different processes. The different
types of kernel are:

• Monolithic Kernel

• Hybrid kernels

• Exo kernels

• Micro kernels

2. System Library: System libraries are special types of functions that are used to implement the
functionality of the operating system.

3. Shell: It is an interface to the kernel which hides the complexity of the kernel’s functions
from the users. It takes commands from the user and executes the kernel’s functions.

4. Hardware Layer: This layer consists of all peripheral devices such as RAM, HDD, CPU, etc.

5. System Utility: It provides the functionalities of an operating system to the user.


Boot Process in LINUX
The following are the 6 high level stages of a typical Linux boot process.

1. BIOS
▪ BIOS stands for Basic Input/Output System
▪ Performs some system integrity checks
▪ Searches, loads, and executes the boot loader program.
▪ It looks for the boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically
F12 or F2, but it depends on your system) during BIOS startup to change the boot
sequence.
▪ Once the boot loader program is detected and loaded into the memory, BIOS gives
the control to it.
▪ So, in simple terms BIOS loads and executes the MBR boot loader.

2. MBR
▪ MBR stands for Master Boot Record.
▪ It is located in the 1st sector of the bootable disk. Typically, /dev/hda, or /dev/sda
▪ MBR is less than 512 bytes in size. This has three components 1) primary boot loader
info in 1st 446 bytes 2) partition table info in next 64 bytes 3) mbr validation check in
last 2 bytes.
▪ It contains information about GRUB (or LILO in old systems).
▪ So, in simple terms MBR loads and executes the GRUB boot loader.

3. GRUB
▪ GRUB stands for Grand Unified Bootloader.
▪ If you have multiple kernel images installed on your system, you can choose which
one to be executed.
▪ GRUB displays a splash screen and waits for a few seconds; if you don’t enter anything, it
loads the default kernel image as specified in the GRUB configuration file.
▪ GRUB has knowledge of the filesystem (the older Linux loader LILO did not
understand filesystems).
▪ Grub configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to this).
▪ The following is sample grub.conf of CentOS.
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5PAE)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
initrd /boot/initrd-2.6.18-194.el5PAE.img

4. Kernel
▪ Mounts the root file system as specified in the “root=” in grub.conf
▪ Kernel executes the /sbin/init program
▪ Since init was the 1st program to be executed by Linux Kernel, it has the process id
(PID) of 1. Do a ‘ps -ef | grep init’ and check the pid.
▪ initrd stands for Initial RAM Disk.
▪ initrd is used by kernel as temporary root file system until kernel is booted and the
real root file system is mounted. It also contains necessary drivers compiled inside,
which helps it to access the hard drive partitions, and other hardware.

5. Init
▪ Looks at the /etc/inittab file to decide the Linux run level.
▪ Following are the available run levels
▪ 0 – halt
▪ 1 – Single user mode
▪ 2 – Multiuser, without NFS
▪ 3 – Full multiuser mode
▪ 4 – unused
▪ 5 – X11
▪ 6 – reboot
▪ Init identifies the default run level from /etc/inittab and uses that to load all the
appropriate programs.
▪ Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level.
▪ If you set the default run level to 0 or 6 you will get into trouble; since you know
what 0 and 6 mean, you probably won’t do that.
▪ Typically, you would set the default run level to either 3 or 5.

6. Runlevel programs
▪ When the Linux system is booting up, you might see various services getting started.
For example, it might say “starting sendmail …. OK”. Those are the runlevel
programs, executed from the run level directory as defined by your run level.
▪ Depending on your default init level setting, the system will execute the programs
from one of the following directories.
▪ Run level 0 – /etc/rc.d/rc0.d/
▪ Run level 1 – /etc/rc.d/rc1.d/
▪ Run level 2 – /etc/rc.d/rc2.d/
▪ Run level 3 – /etc/rc.d/rc3.d/
▪ Run level 4 – /etc/rc.d/rc4.d/
▪ Run level 5 – /etc/rc.d/rc5.d/
▪ Run level 6 – /etc/rc.d/rc6.d/
▪ Please note that there are also symbolic links available for these directories directly under
/etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.
▪ Under the /etc/rc.d/rc*.d/ directories, you will see programs whose names start with S
and K.
▪ Programs starting with S are used during startup (S for startup).
▪ Programs starting with K are used during shutdown (K for kill).
▪ There are numbers right next to S and K in the program names. Those are the
sequence numbers in which the programs should be started or killed.
▪ For example, S12syslog starts the syslog daemon, which has sequence number 12, and
S80sendmail starts the sendmail daemon, which has sequence number 80. So, the syslog
program will be started before sendmail.

The Kernel of Linux:


The main purpose of a computer is to run a predefined sequence of instructions, known as
a program. A program under execution is often referred to as a process. Now, most special-purpose
computers are meant to run a single process, but a sophisticated system such as a
general-purpose computer is intended to run many processes simultaneously. Any kind of
process requires hardware resources such as memory, processor time, storage space, etc. In
a general-purpose computer running many processes simultaneously, we need a middle
layer to manage the distribution of the hardware resources of the computer efficiently and
fairly among all the various processes running on it. This middle layer is referred
to as the kernel. Basically, the kernel virtualizes the common hardware resources of the
computer to provide each process with its own virtual resources. This makes each process
seem as if it is the sole process running on the machine. The kernel is also responsible for
preventing and mitigating conflicts between different processes. This is represented
schematically below:
The Core Subsystems of the Linux Kernel are as follows:
1. The Process Scheduler
2. The Memory Management Unit (MMU)
3. The Virtual File System (VFS)
4. The Networking Unit
5. Inter-Process Communication Unit
The basic functioning of each of the first three subsystems is elaborated below:
• The Process Scheduler: This kernel subsystem is responsible for fairly distributing
the CPU time among all the processes running on the system simultaneously.
• The Memory Management Unit: This kernel sub-unit is responsible for proper
distribution of the memory resources among the various processes running on the
system. The MMU does more than just simply provide separate virtual address
spaces for each of the processes.
• The Virtual File System: This subsystem is responsible for providing a unified
interface to access stored data across different filesystems and physical storage
media.

GRUB (Grand Unified Boot Loader)


GRUB is the default system bootloader for Linux: it boots the machine and loads the operating
system kernel. Although it is the first thing that runs when a machine is turned on, it is not
normally visible to regular users.
To make GRUB boot a specific kernel by default, the user must edit the /etc/default/grub file as
a superuser or root. The relevant line is GRUB_DEFAULT=0; set it to the desired entry (see
below) before regenerating the GRUB 2 configuration file using the following command:
sudo update-grub.
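For example, the workflow described above looks roughly like this (the value 0 simply selects
the first entry in the GRUB menu):
sudo vi /etc/default/grub
GRUB_DEFAULT=0
sudo update-grub
Note that update-grub is the Debian/Ubuntu helper; on RHEL-family systems the configuration
is regenerated with grub2-mkconfig -o /boot/grub2/grub.cfg.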

LINUX GUI
Linux GUI (Graphical User Interface) is a utility or feature that provides an interface for users,
allows them to interact with the system through windows, icons, graphics, etc., and responds
to manipulation of the mouse and keyboard.
Components of GUI :
• Window Manager: This is the first component that builds the Linux desktop
environment, and it provides the options on how applications need to be presented
to users. They are broadly classified into 3 categories listed below:
o Compositing: This is the most widely used category, where different window panes
can appear on top of each other but also snap side by side, making it pleasing
to the eye. It combines the best of both worlds, stacking and tiling.
o Stacking: This is a bit old-fashioned, where panes stack exactly on top of each
other.
o Tiling: In this category, panes are placed side by side without overlapping.
• Panels: In Linux there can be multiple panels on the screen, containing
items like the menu, quick launch items, minimized applications, or a notification
area.
• Menu: The menu in Linux is a list containing various categories that gives users the
flexibility to select options as required. This component also
provides a feature to search for applications.
• System Tray: This component is generally attached to the panel and gives access to
the user to key settings like audio, network, power, etc.
• Icons: Icons in Linux are for the convenience of users, giving instant access to
applications. An icon can be thought of as a visual representation that executes an
application.
• Widgets: This component provides utility for showing useful information on the
desktop itself. Some examples are the clock, weather, etc.
• Launcher: This is specific to some environments like Unity & GNOME, where a
customizable list of quick launch items is provided to users for easy access.
• Dashboards: This is also specific to some environments like Unity & GNOME, where
a dash type interface is provided for easy user interaction.
• File Manager: As the name suggests, this component helps users manage files by
providing utility like move, edit, rename, copy, etc.
• Terminal Emulator: This component will be of interest to those who would prefer to
work in the command line within the Linux GUI.
• Text Editor: This component allows users to create text files and a utility to edit
configuration files when needed.
• Display Manager: This component is the screen which allows the user to log in to the
system.
• Configuration Tools: This component is mostly for making aesthetic changes, i.e. to the
look and feel of the desktop environment in use.

Command Line Interface (CLI):


The Command Line Interface (CLI) is a non-graphical, text-based interface to the computer
system, where the user types in a command and the computer then executes it.
The terminal is the program that provides the command line interface (CLI)
environment to the user.
The CLI terminal accepts the commands that the user types and passes them to a shell. The shell
then interprets what the user has typed into instructions that can be
executed by the OS (operating system). If the command produces output,
the text is displayed in the terminal. If a problem with the command is
found, an error message is displayed.
Graphical and the non-Graphic Interface:
Linux can be used in two ways: graphically and non-graphically. In graphical mode, the
applications live in windows that we can resize and move around according to our needs.
We have menus and tools to help us find what we’re looking for. This is where
we use a web browser, graphics editing tools, and email clients. Here we can
see an example of the graphical desktop, with a menu bar of popular applications to the
left.
In graphical mode (GUI) we can have many shells open, which is useful when we are
performing tasks on multiple or remote computers. We can even log in with our
username/ID and password/keys through the GUI.
After successfully logging in, we are taken to the OS desktop where we can use the installed
applications.
Non-graphical mode starts off with a text-based login, as shown below. We are generally
prompted for our username/ID and, after entering that, we are prompted for our
password. If the login is successful, we are taken straight to a shell.
In the command line interface (CLI), there are no windows to move
around. Even though there are text editors, web browsers, and email
clients, they are text-based. This is how UNIX got its start before graphical
environments became the norm. Most servers also run in command-line mode (CLI),
because a GUI would be a waste of resources.

Package Management and Process Monitoring


Linux Single User Mode
Single user mode is one of the run levels in the Linux operating system; Linux has run
levels 0 to 6, which are used for different requirements or situations. Single user
mode is mainly used for administrative tasks such as cleaning the file system, managing
quotas, recovering the file system, and recovering a lost root password. In this mode
services are not started, no users other than root are allowed to log in, and the system
does not ask for a password to log in.
Single user mode can be activated by a command, through the inittab file, or by editing the
kernel arguments while booting. The first two methods require the root password, because you
have to log in to enter the command or edit the inittab file, whereas the boot-argument method
does not require any password.
Command Method:
The command method is very simple: just enter the following commands after logging in as
root. This is very useful when you are enabling quotas for users, because it does not restart
the server; it only stops the services running at the current run level. init is the command to
change the run level, and 1 is the target run level.
Log in as root:
su -l
init 1
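On systemd-based releases (RHEL 7 and later), the same effect can be achieved with the rescue
target instead of init 1:
systemctl isolate rescue.target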
Inittab File Method:
inittab is the flat configuration file that controls the mode in which the system starts; it can be
found under the /etc directory. While the system boots, this file is read and the init process
starts programs according to the entries made in it. The inittab run level entries range from
0 to 6; here we require only single user mode.
0 – Halt
1 – Single user mode
2 – Multi user mode, without NFS
3 – Full multi user mode
4 – Unused
5 – Graphical mode (X11)
6 – Reboot
So open up the inittab file.

vi /etc/inittab
change id:5:initdefault: to id:1:initdefault:

Once you have made the change to the file, reboot your machine. The machine will start in
single user mode; do all your administrative tasks, and remember to undo the change afterwards
(otherwise the machine will always start in single user mode).

RPM
Red Hat Package Manager or RPM is a free and open-source package management system
for Linux. RPM files use the .rpm file format. The RPM package manager was created for use
with Red Hat Linux, but it is now supported by multiple Linux distributions such as Fedora,
openSUSE, Ubuntu, etc.
RPM packages can be cryptographically verified with GPG and MD5. They support automatic
build-time dependency evaluation.
In this section, we discuss how to install RPM packages on Linux.

Installation
Step 1: First, you need to download the installation file. This file can be downloaded using a
browser or wget.
To download it using wget, the terminal command is
wget http://example.com/downloads/mypackage.rpm

Step 2: Next, install the package using the following command in the terminal window –
sudo rpm -i mypackage.rpm
if the package is already installed, then you can use the following command to upgrade it –
sudo rpm -U mypackage.rpm

Another process is to use yum for the installation of RPM packages.


To install using yum, use the following command in terminal –
sudo yum install packagename

Removal of RPM Packages


If you want to remove or uninstall RPM packages from the system, then you can use the
following terminal command –
sudo rpm -e packagename

Another process to uninstall RPM packages is using yum.


To uninstall using yum, the terminal command will be –
sudo yum remove packagename
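To check whether a package is installed, or to list the files it owns, the rpm query options can
also be used (packagename is a placeholder):
rpm -q packagename
rpm -ql packagename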
Processes in LINUX
When a program/command is executed, the system provides a special instance to the
process. This instance consists of all the services/resources that may be utilized by the
process under execution.
• Whenever a command is issued in Unix/Linux, it creates/starts a new process. For
example, when pwd is issued, which is used to print the current directory the
user is in, a process starts.
• Unix/Linux keeps track of processes through a 5-digit ID number; this
number is called the process ID or PID. Each process in the system has a unique PID.
• Used-up PIDs can be reused for newer processes once all the possible
numbers have been used.
• At any point of time, no two processes with the same pid exist in the system because
it is the pid that Unix uses to track each process.

Initializing a process
A process can be run in two ways:
Method 1: Foreground Process: Every process, when started, runs in the foreground by default;
it receives input from the keyboard and sends output to the screen. For example, when issuing
the pwd command:
$ pwd
Output:
/home/geeksforgeeks/root
When a command/process is running in the foreground and is taking a lot of time, no other
processes can be run or started because the prompt would not be available until the
program finishes processing and comes out.

Method 2: Background Process: It runs in the background without keyboard input and waits
till keyboard input is required. Thus, other processes can be done in parallel with the
process running in the background since they do not have to wait for the previous process
to be completed.
Adding & along with the command starts it as a background process

$ pwd &
Since pwd does not want any input from the keyboard, it goes to the stop state until moved
to the foreground and given any data input. Thus, on pressing Enter:
Output:
[1] + Done pwd
$
That first line contains information about the background process – the job number and the
process ID. It tells you that the pwd background process finished successfully. The
second line is a prompt for another command.
Tracking ongoing processes
ps (Process status) can be used to see/list all the running processes.
$ ps

PID TTY TIME CMD


19 pts/1 00:00:00 sh
24 pts/1 00:00:00 ps

For more information -f (full) can be used along with ps


$ ps -f
UID PID PPID C STIME TTY TIME CMD
52471 19 1 0 07:20 pts/1 00:00:00 sh
52471 25 19 0 08:04 pts/1 00:00:00 ps -f

For single-process information, ps along with process id is used


$ ps 19
PID TTY TIME CMD
19 pts/1 00:00:00 sh
For a running program (a named process), pidof finds the process IDs (PIDs).
The fields displayed by ps are described as follows:
• UID: User ID that this process belongs to (the person running it)

• PID: Process ID

• PPID: Parent process ID (the ID of the process that started it)

• C: CPU utilization of process

• STIME: Process start time

• TTY: Terminal type associated with the process

• TIME: CPU time taken by the process

• CMD: The command that started this process

There are other options which can be used along with ps command :
• -a: Shows information about all users
• -x: Shows information about processes without terminals
• -u: Shows additional information like -f option
• -e: Displays extended information
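These options are commonly combined; for example, the BSD-style invocation below (note the
missing leading dash) lists every process on the system together with its owner and CPU/memory
usage:
$ ps aux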

Stopping a process:
When running in the foreground, hitting Ctrl + C (the interrupt character) will exit the command.
For processes running in the background, the kill command can be used if the PID is known.
$ ps -f
UID PID PPID C STIME TTY TIME CMD
52471 19 1 0 07:20 pts/1 00:00:00 sh
52471 25 19 0 08:04 pts/1 00:00:00 ps -f

$ kill 19
Terminated

If a process ignores a regular kill command, you can use kill -9 followed by the process ID.
$ kill -9 19
Terminated

Other process commands:


bg: A job control command that resumes suspended jobs while keeping them running in the
background
Syntax:

bg [ job ]
For example:
bg %19

fg: It continues a stopped job by running it in the foreground.


Syntax:

fg [ %job_id ]
For example
fg 19

top: This command is used to show all the running processes within the working
environment of Linux.
Syntax:
top

nice: It starts a new process (job) and assigns it a priority (nice) value at the same time.
Syntax:
nice [-nice value]
nice value ranges from -20 to 19, where -20 is of the highest priority.

renice : To change the priority of an already running process renice is used.


Syntax:
renice [-nice value] [process id]
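For example, to start a long-running backup with a lower priority and then lower the priority of
an already running process (the PID 1234 and the script path are illustrative):
nice -n 10 /home/maverick/full-backup
renice -n 5 -p 1234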
df: It shows the amount of available disk space being used by file systems
Syntax:
df
Output:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop0 18761008 15246876 2554440 86% /
none 4 0 4 0% /sys/fs/cgroup
udev 493812 4 493808 1% /dev
tmpfs 100672 1364 99308 2% /run
none 5120 0 5120 0% /run/lock
none 503352 1764 501588 1% /run/shm
none 102400 20 102380 1% /run/user
/dev/sda3 174766076 164417964 10348112 95% /host

free: It shows the total amount of free and used physical and swap memory in the system,
as well as the buffers used by the kernel
Syntax:
free
Output:
total used free shared buffers cached
Mem: 1006708 935872 70836 0 148244 346656
-/+ buffers/cache: 440972 565736
Swap: 262140 130084 132056

Types of Processes
1. Parent and Child process : The 2nd and 3rd column of the ps –f command shows
process id and parent’s process id number. For each user process, there’s a parent
process in the system, with most of the commands having shell as their parent.
2. Zombie and Orphan process : After completing its execution, a child process is
terminated or killed, and SIGCHLD notifies the parent process about the termination,
so the parent can continue with the task assigned to it. But at times the parent process
is killed before the termination of the child process; the child processes then become
orphan processes, and the “init” process, the parent of all processes, becomes their new
parent.
A process which has been killed but still shows an entry in the process status or the process
table is called a zombie process; zombies are dead and are not used.
3. Daemon process : These are system-related background processes that often run
with the permissions of root and service requests from other processes. Most of
the time they run in the background and wait for work they can act on, for
example the print daemon.
When ps -ef is executed, the processes with ? in the TTY field are daemon processes.

Important Files, Directories and Utilities


Control service and daemon
1. systemd
1.1 Systemd Introduction
• systemd is a system and service manager for the Linux operating system; its purpose is to
replace the init system inherited from the UNIX era.
• systemd is the first user-space application started by the kernel, namely /sbin/init
The types of init program:
• SysV style: init (CentOS 5). System initialization is ultimately carried out by shell scripts.
o Features:
▪ The scripts contain many commands; each command starts a process,
and that process terminates when the command finishes. As a result,
the system creates and destroys a large number of processes during
initialization, and works very inefficiently.
▪ There may be dependencies between services, so services must be
started in a fixed order: a later service cannot start before the services
it depends on. Startup cannot be parallelized.
o Configuration file: /etc/inittab

• Upstart style: init (CentOS 6), developed by Ubuntu. It works through a message bus
and starts services close to in parallel, so it is more efficient than SysV.
o Features:
▪ A D-Bus based message bus allows processes to communicate with
each other.
▪ A service does not have to be fully started; as soon as it is initialized
it can report its status to other processes.
o Configuration files: /etc/inittab, /etc/init/*.conf
• systemd style: systemd (CentOS 7)
o Features: startup is faster than SysV and Upstart
▪ Services do not need to be started through any script; systemd itself
can start services, so sh/bash is not involved when a service is
started.
▪ systemd does not actually start every service when the system is
initialized.
▪ As long as a service is not needed, it is only marked as started; the
service is really started only when it is first accessed.
o Configuration directories: /usr/lib/systemd/system, /etc/systemd/system

1.2 New Features of Systemd


• Services are started in parallel during system boot
• Processes are activated on demand
• System state snapshots
• Service control logic defined on the basis of dependencies

1.3 Sytemd core concept Unit


systemd uses the concept of a unit to manage services; each unit is represented by a
configuration file. systemd identifies and parses these configuration files in order to manage
services:
• The unit files mainly describe system services, listening sockets, saved system
snapshots
• and other init-related information, and are placed in the following directories:
/usr/lib/systemd/system
/run/systemd/system
/etc/systemd/system
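As a minimal sketch, a custom service unit placed under /etc/systemd/system might look like
the following (myapp.service and /usr/local/bin/myapp are hypothetical names used only for
illustration):
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target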

Unit key features


• Socket-based activation mechanism:
o The socket is separated from the service program, and the service is truly
started only when someone accesses the socket. This provides on-demand
activation and parallel startup of services.
• Bus-based activation mechanism:
o All services that communicate over D-Bus can be activated on
demand when they are first accessed.
• Device-based activation mechanism:
o System services that depend on specific hardware can be activated when
that type of hardware is plugged into the system.
• Path-based activation mechanism:
o A service can be activated when a file path becomes available, or when
a new file appears in a watched directory.
• System snapshots:
o The current state of each unit can be saved to a persistent storage device
and automatically loaded back later.
• Backwards compatible with SysV init scripts

Incompatibility
• The set of systemctl commands is fixed and cannot be extended
• systemctl cannot communicate with services that were not started by systemd
• When switching runlevels (targets), only services that are actually running are stopped;
in CentOS 6 and earlier, all S-prefixed scripts of the new level were started and all
K-prefixed scripts were stopped
• System services do not read any data streams from standard input
• Every unit operation on a service is subject to a 5-minute timeout limit

2. Use SystemCTL management services


• Syntax: systemctl COMMAND name[.service | .target]
• Common commands:
o systemctl start name.service // start a service
o systemctl stop name.service // stop a service
o systemctl restart name.service // restart a service
o systemctl status name.service // view the status of a service
o systemctl enable name.service // enable a service to start automatically at boot
o systemctl disable name.service // prevent a service from starting automatically at boot
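For example, managing the OpenSSH server with these commands (assuming the sshd service
is installed):
systemctl start sshd.service
systemctl status sshd.service
systemctl enable sshd.service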

AWK Command
The awk command is used for text processing in Linux. Although the sed command is also
used for text processing, it has some limitations, so the awk command becomes a handy
option. It provides powerful control over the data.
Awk is a powerful scripting language used for text processing. It can search and replace
text and sort, validate, and index data.
It is one of the most widely used tools by programmers, as they can write small but effective
programs in the form of statements that define text patterns and the actions to take on them.
It acts as a filter in Linux. The GNU implementation is referred to as gawk (GNU awk).

The Awk command is used as follows:


awk options 'selection_criteria {action}' input-file > output-file
The options can be:
o -f program-file: It reads the awk script from the given file instead of from the command line.
o -F fs: It sets fs as the input field separator.

Built-in variables in AWK command


The awk command supports many built-in variables. $1, $2, and so on refer to the individual
fields into which each input line is broken.
NR: It holds the current count of input lines. The awk command performs its action
once for each line. These lines are called records.
NF: It holds the number of fields within the current record.
FS: It holds the field separator character used to divide the input lines into fields.
OFS: It is used to store the output field separator. It separates the output fields.
ORS: It is used to store the output record separator. It separates the output records, and its
content is printed automatically after each output record.
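For example, the command below prints the record number (NR) and the first field (the
username) of every line in /etc/passwd, using ':' as the field separator:
awk -F':' '{ print NR, $1 }' /etc/passwd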

Linux sed Command


Linux 'sed' command stands for stream editor. It is used to edit streams (files) using regular
expressions. This editing is not permanent by default: the changes appear only in the displayed
output, while the actual file content remains the same.
Primarily, it is used for text substitution; additionally, it can be used for other text
manipulation operations like insert, delete, search, and more. The sed command allows us
to edit files without opening them. Regular expression support makes it a more powerful
text manipulation tool.
Syntax:
sed [OPTION]... {script-only-if-no-other-script} [input-file] ...
The following are some command line options of the sed command:
-n, --quiet, --silent: It suppresses the automatic printing of the pattern space.
-e script, --expression=script: It is used to add the script to the commands to be executed.
-f script-file, --file=script-file: It is used to add the contents of script-file to the commands to
be executed.
--follow-symlinks: it is used to follow symlinks when processing in place.
-i[SUFFIX], --in-place[=SUFFIX]: it is used to edit files in place (creates backup if SUFFIX
option is supplied).
-l N, --line-length=N: It is used to specify the desired line-wrap length for the `l' command.
--posix: it is used to disable all GNU extensions.
-E, -r, --regexp-extended: It allows us to use the extended regular expressions in the script
(for portability use POSIX -E).
-s, --separate: It is used to consider files as separate rather than as a single continuous long
stream.
--sandbox: It is used to operate in sandbox mode.
-u, --unbuffered: It is used for loading the minimal amounts of data from the input files and
flushes the output buffers more often.
-z, --null-data: It is used to separate lines by NUL characters.
--help: it is used to display the help manual.
--version: It is used to display version information.
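For example, the following substitution replaces every occurrence of "unix" with "linux" in
the output, without changing the file on disk (file.txt is a hypothetical file name):
sed 's/unix/linux/g' file.txt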
GREP Command
The 'grep' command stands for "global regular expression print". The grep command filters the
content of a file, which makes our search easy.
grep with pipe
The 'grep' command is generally used with pipe (|).
command | grep <searchWord>
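For example, the command below prints only the lines of /etc/passwd that contain the word
"root":
cat /etc/passwd | grep root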

Linux Archive Tools (tar, star, gzip, bzip2, zip, cpio)


• tar
• star
• gzip
• bzip2
• zip
• cpio
All the commands in this section have many options in addition to the basic ones being used
here. Please check the man pages for each command. The examples will use the following
files.
mkdir -p /tmp/test-dir/subdir1
mkdir -p /tmp/test-dir/subdir2
mkdir -p /tmp/test-dir/subdir3
mkdir -p /tmp/extract-dir
touch /tmp/test-dir/subdir1/file1.txt
touch /tmp/test-dir/subdir1/file2.txt
touch /tmp/test-dir/subdir2/file3.txt
touch /tmp/test-dir/subdir2/file4.txt
touch /tmp/test-dir/subdir3/file5.txt
touch /tmp/test-dir/subdir3/file6.txt
Extracts assume the "/tmp/extract-dir" directory is empty.
tar
Create an archive.
# cd /tmp
# tar -cvf archive1.tar test-dir

Check the contents.


# tar -tvf /tmp/archive1.tar

Extract it.
# cd /tmp/extract-dir
# tar -xvf /tmp/archive1.tar

star
The star command may not be installed by default, but you can install it with the following
command.
# yum install star

Create an archive.
# cd /tmp
# star -cv f=archive2.star test-dir

Check the contents.


# star -tv f=/tmp/archive2.star
Extract it.
# cd /tmp/extract-dir
# star -xv f=/tmp/archive2.star

gzip
The gzip command compresses the specified files, giving them a ".gz" extension. In this case
we will use it to compress a ".tar" file.
# cd /tmp
# tar -cvf archive3.tar test-dir
# gzip archive3.tar
The "-z" option of the tar command allows you to do this directly.
# cd /tmp
# tar -cvzf archive3.tar.gz test-dir
The files are uncompressed using the gunzip command.
# gunzip archive3.tar.gz
The "-z" option of the tar command allows you to directly ungzip and extract a ".tar.gz" file.
# cd /tmp/extract-dir
# tar -xvzf /tmp/archive3.tar.gz
bzip2
The bzip2 command is similar to the gzip command. It compresses the specified files, giving
them a ".bz2" extension. In this case we will use it to compress a ".tar" file.
# cd /tmp
# tar -cvf archive4.tar test-dir
# bzip2 archive4.tar
The "-j" option of the tar command allows you to do this directly.
# cd /tmp
# tar -cvjf archive4.tar.bz2 test-dir
The files are uncompressed using the bunzip2 command.
# bunzip2 archive4.tar.bz2
The "-j" option of the tar command allows you to directly bunzip2 and extract a ".tar.bz2"
file.
# cd /tmp/extract-dir
# tar -xvjf /tmp/archive4.tar.bz2
zip
Create an archive.
# cd /tmp
# zip -r archive5.zip test-dir
Check the contents.
# unzip -l archive5.zip
Extract it.
# cd /tmp/extract-dir
# unzip /tmp/archive5.zip
cpio
Create an archive.
# cd /tmp
# find test-dir | cpio -ov > archive6.cpio
Check the contents.
# cpio -t < /tmp/archive6.cpio
Extract it.
# cd /tmp/extract-dir
# cpio -idmv < /tmp/archive6.cpio
System Services

NTP Protocol
Network Time Protocol (NTP) is a protocol that helps computers' clocks stay synchronized
over a network. It is an application protocol responsible for the
synchronization of hosts on a TCP/IP network. This is required so that communicating
machines share a seamless, consistent notion of time.
Features of NTP :
Some features of NTP are –
• NTP servers have access to highly precise atomic clocks and GPS clocks
• It uses Coordinated Universal Time (UTC) to synchronize CPU clock time.
• It helps avoid vulnerabilities in information exchange that arise from inconsistent
clocks.
• Provides consistent timekeeping for file servers
Working of NTP :
NTP works at the application layer. It uses a hierarchical system of time
sources and provides synchronization between stratum servers. At the topmost
level there are highly accurate time sources, e.g. atomic or GPS clocks. These clock
sources are called stratum 0 servers, and they feed the NTP servers below them, called
stratum 1, 2, 3 and so on. These servers then provide an accurate date and time so that
communicating hosts are synchronized with each other.

Architecture of Network Time Protocol :


Applications of NTP :
• Used in a production system where the live sound is recorded.
• Used in the development of Broadcasting infrastructures.
• Used where file system updates needed to be carried out across multiple computers
depending on synchronized clock times.
• Used to implement security mechanism which depend on consistent time keeping
over the network.
• Used in network acceleration systems which rely on timestamp accuracy to calculate
performance.
Advantages of NTP :
• It provides internet synchronization between the devices.
• It provides enhanced security within the premises.
• It is used in the authentication systems like Kerberos.
• It provides network acceleration which helps in troubleshooting problems.
• Used in file systems that are difficult in network synchronization.
Disadvantages of NTP :
• When the servers are down the sync time is affected across a running
communication.
• Servers are prone to error due to various time zones and conflict may occur.
• Minimal reduction of time accuracy.
• When NTP packets are increased synchronization is conflicted.
• Manipulation can be done in synchronization.
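On RHEL 8, time synchronization is implemented by chrony rather than the classic ntpd (see
the RHEL features list above); as a quick sketch, the current synchronization sources and
status can be inspected with:
chronyc sources
timedatectl status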
SSH
ssh stands for “Secure Shell”. It is a protocol used to securely connect to a remote
server/system. ssh is secure in the sense that it transfers the data in encrypted form
between the host and the client. It transfers inputs from the client to the host and relays
back the output. ssh runs at TCP/IP port 22.
Syntax:
ssh user_name@host(IP/Domain_name)

ssh command consists of 3 different parts:


• ssh command instructs the system to establish an encrypted secure connection with
the host machine.
• user_name represents the account that is being accessed on the host.
• host refers to the machine which can be a computer or a router that is being
accessed. It can be an IP address (e.g. 192.168.1.24) or domain name (e.g.
www.domainname.com).
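For example, connecting to a remote machine at 192.168.1.24 as the user admin (both values
are illustrative):
ssh admin@192.168.1.24
On the first connection you are asked to accept the host's key fingerprint, after which you are
prompted for the account password (or a key pair is used if one has been set up).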

SSH is significantly more secure than the other protocols such as telnet because of the
encryption of the data. There are three major encryption techniques used by SSH:
• Symmetrical encryption: This encryption works on the principle of the generation of
a single key for encrypting as well as decrypting the data. The secret key generated is
distributed among the clients and the hosts for a secure connection. Symmetrical
encryption is the most basic encryption and performs best when data is encrypted
and decrypted on a single machine.
• Asymmetrical encryption: This encryption is more secure because it generates two
different keys: Public and Private key. A public key is distributed to different host
machines while the private key is kept securely on the client machine. A secure
connection is established using this public-private key pair.
• Hashing: One-way hashing is an authentication technique which ensures that the
received data is unaltered and comes from a genuine sender. A hash function is used
to generate a hash code from the data. It is impossible to regenerate the data from
the hash value. The hash value is calculated at the sender as well as the receiver’s
end. If the hash values match, the data is authentic.
CRONTAB

The crontab is a list of commands that you want to run on a regular schedule, and also the
name of the command used to manage that list. Crontab stands for "cron table", because it
uses the job scheduler cron to execute tasks; cron itself is named after "chronos", the Greek
word for time. cron is the system process which will automatically perform tasks for you
according to a set schedule. The schedule is called the crontab, which is also the name of
the program used to edit that schedule.
Linux Crontab Format:
MIN HOUR DOM MON DOW CMD

Crontab Fields and Allowed Ranges (Linux Crontab Syntax)


Field Description Allowed Value
MIN Minute field 0 to 59
HOUR Hour field 0 to 23
DOM Day of Month 1-31
MON Month field 1-12
DOW Day Of Week 0-6
CMD Command Any command to be executed.

1. Scheduling a Job For a Specific Time: The basic usage of cron is to execute a job at a
specific time, as shown below. This will execute the full backup shell script (full-backup) on
10th June at 08:30 AM. The time field uses the 24-hour format, so for 8 AM use 8, and for 8 PM
use 20.
30 08 10 06 * /home/maverick/full-backup
30 – 30th Minute 08 – 08 AM 10 – 10th Day 06 – 6th Month (June) * – Every day of the
week

2.To view the Crontab entries


View Current Logged-In User’s Crontab entries : To view your crontab entries type crontab -l
from your unix account.
3. To edit Crontab entries: Edit the current logged-in user's crontab entries. To edit crontab
entries, use crontab -e. By default this will edit the current logged-in user's crontab.

4.To schedule a job for every minute using Cron. Ideally you may not have a requirement to
schedule a job every minute. But understanding this example will help you understand the
other examples.
* * * * * CMD
The * means every possible value of that unit, i.e. every minute of every hour throughout the
year. Rather than using * directly, you will find it more useful in the following forms: when you
specify */5 in the minute field it means every 5 minutes, and when you specify 0-10/2 in the
minute field it means every 2 minutes within the first 10 minutes. The same convention can be
used for the other 4 fields.

5. To schedule a job for more than one time (e.g. twice a day): The following entry takes an
incremental backup twice a day, every day. It executes the specified incremental
backup shell script (incremental-backup) at 11:00 and 16:00 every day. The comma-separated
values in a field specify that the command needs to be executed at all of the
mentioned times.
00 11,16 * * * /home/maverick/bin/incremental-backup
00 – 0th Minute (Top of the hour) 11,16 – 11 AM and 4 PM * – Every day * – Every month *
– Every day of the week

6. To schedule a job for a certain range of time (e.g. only on weekdays): If you want a job
to be scheduled for every hour within a specific range of time, then use the following.
• Cron Job everyday during working hours : This example checks the status of the
database everyday (including weekends) during the working hours 9 a.m – 6 p.m
00 09-18 * * * /home/maverick/bin/check-db-status
• 00 – 0th Minute (Top of the hour) 09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3
pm, 4 pm, 5 pm, 6 pm * – Every day * – Every month * – Every day of the week
• Cron Job every weekday during working hours : This example checks the status of
the database every weekday (i.e excluding Sat and Sun) during the working hours 9
a.m – 6 p.m.
00 09-18 * * 1-5 /home/maverick/bin/check-db-status
• 00 – 0th Minute (Top of the hour) 09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3
pm, 4 pm, 5 pm, 6 pm * – Every day * – Every month 1-5 – Mon, Tue, Wed, Thu and
Fri (Every Weekday)
7.To schedule a background Cron job for every 10 minutes.
Use the following, if you want to check the disk space every 10 minutes.
*/10 * * * * /home/maverick/check-disk-space
It executes the specified command check-disk-space every 10 minutes throughout the year.
But you may have a requirement to execute the command only during certain hours, or
vice versa; the above examples show how to do that. Instead of specifying values in
the 5 fields, we can use a single keyword as mentioned below. There are special
cases in which, instead of the above 5 fields, you can use @ followed by a keyword — such as
reboot, midnight, yearly, hourly.
Cron special keywords and its meaning
Keyword Equivalent
@yearly 0 0 1 1 *
@daily 0 0 * * *
@hourly 0 * * * *
@reboot Run at startup.

8. To schedule a job for the first minute of every year using @yearly: If you want a job to be
executed in the first minute of every year, then you can use the @yearly cron keyword as
shown below. This will execute the system annual maintenance using the annual-maintenance
shell script at 00:00 on Jan 1st every year.
@yearly /home/maverick/bin/annual-maintenance

9. To schedule a Cron job at the beginning of every month using @monthly: It is similar to
@yearly above, but executes the command once a month using the @monthly cron
keyword. This will execute the shell script tape-backup at 00:00 on the 1st of every month.
@monthly /home/maverick/bin/tape-backup

10. To schedule a background job every day using @daily: Using the @daily cron keyword,
this will do a daily log file cleanup using the cleanup-logs shell script at 00:00 every day.
@daily /home/maverick/bin/cleanup-logs "day started"
11. To execute a Linux command after every reboot using @reboot: Using the @reboot cron
keyword, this will execute the specified command once each time the machine boots.
@reboot CMD
ANACRON
The anacron command is used to execute commands periodically with a frequency specified in
days. Its main advantage over cron is that it can be used on a machine which is not running
continuously. With cron, if the machine is not running at the time of a scheduled job, the job is
skipped; anacron is a bit different, as it first checks the timestamp of the job and then decides
whether to run it: if the time elapsed since the last run is >= n (where n is the defined number
of days), it runs the job after a specified delay.
It mainly constitutes of two important Files:
1] /etc/anacrontab : It contains specifications of job.
2] /var/spool/anacron : This directory is used by Anacron for storing timestamp files. It
represents timestamp for different category of jobs i.e. daily, weekly, monthly, etc.

Options:
• -f : Forces execution of the jobs, ignoring the timestamps.
• -u : Only updates the timestamps of the jobs to the current date, but doesn't run
anything.
• -s : Serializes execution of jobs. Anacron will not start a new job before the previous
one has finished.
• -n : Runs jobs now, ignoring any delay.
• -d : Doesn't fork to the background. In this mode, anacron will output informational
messages to standard error, as well as to syslog. The output of jobs is mailed as
usual.
• -q : Suppresses messages to standard error. Only applicable with -d.
• -V : Prints version information and exits.
• -h : Prints a short usage message and exits.
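An /etc/anacrontab job entry has four fields: period (in days), delay (in minutes), a job
identifier, and the command. A sketch of a weekly job (the backup script path is hypothetical):
7   15   weekly.backup   /home/maverick/bin/weekly-backup
This runs the command once every 7 days, 15 minutes after anacron starts, and records the last
run time under the identifier weekly.backup in /var/spool/anacron.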
User Administration

User Management
A user is an entity in a Linux operating system that can manipulate files and perform
several other operations. Each user is assigned an ID that is unique within the
operating system. In this section, we will learn about users and the commands used to
get information about them. After installation of the operating system, ID 0 is
assigned to the root user and IDs 1 to 999 (both inclusive) are assigned to system
users; hence the IDs for local users begin from 1000 onwards.
On a single system we can create about 60,000 users. Now we will discuss the important
commands to manage users in Linux.
1. To list all the users in Linux, use the awk command with the -F option. Here, we read the
/etc/passwd file and print only the first column with the help of print $1.
awk -F':' '{ print $1}' /etc/passwd

2. Using id command, you can get the ID of any username. Every user has an id assigned to
it and the user is identified with the help of this id. By default, this id is also the group id of
the user.
id username

3. The command to add a user. useradd command adds a new user to the directory. The
user is given the ID automatically depending on which category it falls in. The username of
the user will be as provided by us in the command.
sudo useradd username

4. Using passwd command to assign a password to a user. After using this command we
have to enter the new password for the user and then the password gets updated to the
new password.
passwd username

5. Accessing a user configuration file.


cat /etc/passwd
This command prints the contents of the configuration file. The file contains information about
each user in the following format:
username : x : user_id : group_id : comment : /home/username : /bin/bash
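A concrete (hypothetical) line for a local user named john would look like:
john:x:1001:1001:John Doe:/home/john:/bin/bash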

6. The command to change the user ID for a user.


usermod -u new_id username
This command can change the user ID of a user. The user with the given username will be
assigned with the new ID given in the command and the old ID will be removed.

7. Command to Modify the group ID of a user.


usermod -g new_group_id username
This command can change the group ID of a user and hence it can even be used to move a
user to an already existing group. It will change the group ID of the user whose username is
given and sets the group ID as the given new_group_id.

8. You can change the user login name using usermod command. The below command is
used to change the login name of the user. The old login name of the user is changed to the
new login name provided.
sudo usermod -l new_login_name old_login_name

9. The command to change the home directory. The command below changes the home
directory of the user whose username is given and sets the new home directory to the
directory whose path is provided.
usermod -d new_home_directory_path username

10. You can also delete a user name. The below command deletes the user whose
username is provided. Make sure that the user is not part of a group. If the user is part of a
group then it will not be deleted directly, hence we will have to first remove him from the
group and then we can delete him.
userdel -r username
Group Management
There are 2 categories of groups in the Linux operating system,
i.e. primary and secondary groups. The primary group is generated automatically while
creating a user: along with the unique user ID, a group with an ID the same as the user ID is
created, the user is added to that group and becomes its first and only member. This group is
called the primary group. A secondary group is a group that can be created separately with the
help of commands, and we can then add users to it.
1. Command to Make a group (Secondary Group): The command below creates a group with
the provided name. On creation the group gets a group ID, and we can find out everything
about the group, such as its name, ID, and the users in it, from the file
“/etc/group”.
groupadd group_name
Example:
groupadd Group1

2. Command to Set the Password for the Group: Below command is used to set the
password of the group. After executing the command we have to enter the new password
which we want to assign to the group. The password has to be given twice for confirmation
purposes.
gpasswd group_name
Example:
gpasswd Group1

3. Command to Display the Group Password File: The command below gives us the group
password file as output. The file is stored in a form such that the password information is not
readable by viewers. Instead of this, try “cat /etc/group” to get more information about
the groups.
cat /etc/gshadow

4. Command to Add a User to an Existing Group: The command below is used to add a user to
an existing group. A user who is currently in other secondary groups will be removed from
those groups and will become part of only this group.
usermod -G group_name username
Example:
usermod -G group1 abcd

5. Command to Add User to Group Without Removing From Existing Groups: This
command is used to add a user to a new group while preventing him from getting removed
from his existing groups.
usermod -aG group_name username
Example:
usermod -aG group1 John_Doe

6. Command to Add Multiple Users to a Group at once:


gpasswd -M username1,username2,username3,...,usernameN group_name
Example:
gpasswd -M Person1,Person2,Person3 Group1
7. Command to Delete a User From a Group: The below command removes a user from
a group. The user remains a valid user on the system and keeps its membership in any other
groups it belongs to, including its primary group, but it is no longer part of this group.
gpasswd -d username group_name
Example:
gpasswd -d Person1 Group1

8. Command to Delete a Group: Below Command is used to delete the group. The users
present in the group will not be deleted. They will remain as they were, but now they will no
more be part of this group as the group will be deleted.
groupdel group_name
Example:
groupdel Group1
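As a combined illustration, the hypothetical group devs and user alice below show the typical flow of the group commands above:
sudo groupadd devs              # create a secondary group
sudo usermod -aG devs alice     # add alice without removing her from her existing groups
getent group devs               # verify the group and its members
sudo gpasswd -d alice devs      # remove alice from the group again
sudo groupdel devs              # delete the group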
ACCESS CONTROL LISTS
What is ACL ?
Access control lists (ACLs) provide an additional, more flexible permission mechanism for file
systems. They are designed to assist with UNIX file permissions. ACLs allow you to give
permissions to any user or group for any disk resource.
Use of ACL :
Think of a scenario in which a particular user is not a member of a group created by you, but
you still want to give that user some read or write access. How can you do it without making
the user a member of the group? This is where Access Control Lists come into the picture:
ACLs help us do this trick. Basically, ACLs are used to provide a flexible permission
mechanism in Linux.
From Linux man pages, ACLs are used to define more fine-grained discretionary access rights
for files and directories.
setfacl and getfacl are used for setting up ACL and showing ACL respectively.
For example :
getfacl test/declarations.h

List of commands for setting up ACL (a concrete example follows the list) :


1) To add permission for user
setfacl -m "u:user:permissions" /path/to/file

2) To add permissions for a group


setfacl -m "g:group:permissions" /path/to/file

3) To allow all files or directories to inherit ACL entries from the directory it is within
setfacl -dm "entry" /path/to/dir

4) To remove a specific entry


setfacl -x "entry" /path/to/file
5) To remove all entries
setfacl -b path/to/file
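For instance, to give read and write access on test/declarations.h (the file from the earlier example) to a hypothetical user named dummy, and then verify the result:
setfacl -m "u:dummy:rw" test/declarations.h
getfacl test/declarations.h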

Modifying ACL using setfacl :


To add permissions for a user (user is either the user name or ID):
# setfacl -m "u:user:permissions" /path/to/file
To add permissions for a group (group is either the group name or ID):
# setfacl -m "g:group:permissions" /path/to/file
To allow all files or directories to inherit ACL entries from the directory it is within:
# setfacl -dm "entry" /path/to/dir

View ACL :
To show permissions :
# getfacl filename
Observe the difference between output of getfacl command before and after setting up ACL
permissions using setfacl command.

Remove ACL :
If you want to remove the set ACL permissions, use setfacl command with -b option.
Using Default ACL :
The default ACL is a specific type of permission assigned to a directory. It doesn’t change
the permissions of the directory itself, but ensures that the specified ACLs are set by default on
all files created inside it. Let’s demonstrate it: first we create a directory and assign a
default ACL to it using the -d option:
$ mkdir test && setfacl -d -m u:dummy:rw test
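To verify the effect of the default ACL, we can create a file inside the directory and inspect its ACL; an entry for the user dummy (from the example above) should appear with rw permissions:
$ touch test/file.txt
$ getfacl test/file.txt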
File System Security and Management

Disk Management :
Disk management helps in utilizing the storage devices effectively.

Identifying a device :
IDE drives are identified as /dev/hdX and SCSI/SATA drives as /dev/sdX,
where X is a letter starting from a.

Identifying a partition:
Partitions are the logical divisions of a hard disk used to segregate data. Hard disk partitions
are identified as /dev/hdXN or /dev/sdXN, where X is the drive letter (a, b, c, ...) and N is the
partition number starting from 1.

Types of File Systems:
Ext2 : The extended 2 file system was once the default file system in Linux. It was introduced to
increase the maximum size of the file system and of a single file.

Ext3 : The extended 3 file system was developed in 2001 and is available from kernel version 2.4.15.
The main feature of this file system is journaling, which makes the file system more reliable:
journaling tracks all changes to the file system and stores them in a separate location,
which helps to recover the file system after a crash.

Ext4 : The extended 4 file system was declared stable in 2008 (it first appeared as a development
feature in kernel 2.6.19). Some new features of this file system are large file system support,
multi-block allocation, delayed allocation, fast fsck, journal checksums, etc.

In Linux, every partition type is identified by a partition ID.


Regularly used partition IDs are listed below:
83 – Linux
82 – Linux swap
5 or f – Extended
8e – Linux LVM
fd – Linux RAID autodetect

To list down partition table following command is used


fdisk -l

Disk Partitioning :
Each MBR-partitioned hard disk can have a maximum of 4 primary partitions by default. If more
than four partitions are needed, then instead of the fourth primary partition an extended
partition can be created to accommodate logical partitions.

Creating partition :
fdisk is a command-line utility for managing hard disks; it supports Linux partitions as well as
other DOS partition types.
Information about the partitions is maintained in the partition table. Any modification to the
partition table can be propagated to the running kernel using the partprobe command.
Partitions are created using :
fdisk <device name>
n – new partition
d – delete partition
p – print partition
w – save the changes
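A typical sequence, assuming a hypothetical second disk /dev/sdb, might look like the following sketch:
fdisk /dev/sdb                        # use n to create a partition, p to print, w to save
partprobe /dev/sdb                    # update the running kernel with the new partition table
mkfs.ext4 /dev/sdb1                   # create an ext4 file system on the new partition
mkdir /data && mount /dev/sdb1 /data  # mount the partition on /data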

Fields in fstab file :


Device – partition to mount
Mount Point – the directory to mount the partition
Fstype – file system type used by partition
Options – options to mount the partition
Dump – whether the dump backup utility should back up the file system (0 = skip, 1 = back up)
Fsck_order – the order in which fsck checks the file system at boot (0 = do not check, 1 = root file system, 2 = other file systems)
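As an illustration, an /etc/fstab entry for the hypothetical partition /dev/sdb1 created above could look like this:
/dev/sdb1    /data    ext4    defaults    0    2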
Shell Scripting and Kerberos Authentication

Kerberos Protocol

Kerberos is a network authentication protocol developed at MIT, which uses an encryption technique called
symmetric key encryption together with a key distribution center. Kerberos is ubiquitous in the digital world and is
widely used in secure systems that rely on reliable auditing and authentication features. Kerberos is used in POSIX
authentication, as well as in Active Directory, NFS, and Samba. It is also an alternative authentication system for SSH,
POP, and SMTP.

Kerberos Protocol Flow:

This works on the client-server model. Kerberos makes use of symmetric key cryptography and a key
distribution center (KDC) to authenticate and verify user identities. The same symmetric key is used
for encryption and decryption. The KDC is a database of all the secret keys. A KDC entails 3 aspects:

• A ticket-granting server (TGS) that connects the client with the service server (SS).

• A Kerberos database that stores the password and identification of all verified users.

• An authentication server (AS) that performs the preliminary authentication.

Let’s say we have a user (client) and a server whose network services we require. The user must be
an authorised user. The exchange proceeds as follows (a small client-side illustration is given after these steps).

• The user sends a message to KDC, requesting keys so that the user can prove its authenticity and
access the services of the Network.

• The AS (Authentication Server) in the KDC then sends a ticket back to the user. The ticket is in
encrypted form.

• The user decrypts the message and obtains the hash code.

• The hash code is sent back to the AS, which then checks its authenticity.

• If the user is authorized, the AS issues a Ticket Granting Ticket (TGT) to the user.

• The user presents the TGT to the TGS, which issues a service ticket (secret key) for the requested service.

• Using this service ticket, the client communicates with the server.
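
On a Linux client with the MIT Kerberos utilities installed, this flow can be observed from the command line; the realm EXAMPLE.COM, the user alice, and the host server.example.com below are hypothetical:

kinit alice@EXAMPLE.COM      # request a Ticket Granting Ticket from the KDC (AS exchange)
klist                        # list the tickets currently held in the credential cache
ssh -o GSSAPIAuthentication=yes server.example.com   # use the ticket to authenticate to a Kerberized service
kdestroy                     # discard the cached tickets when finished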


Advantages of Kerberos:

• Access Control: The Kerberos authentication protocol permits effective access control. Users
benefit from a single point for tracking all logins and enforcing security policies.

• Mutual Authentication: Kerberos authentication permits service systems and clients to
authenticate each other. During all steps of the process, the user and the server know that
the counterparts they are interacting with are authentic.

• Limited Ticket Lifetime: Each ticket in Kerberos carries timestamps and lifetime data, and the duration of
authentication is managed by admins.

• Reusable Authentication: Kerberos authentication is durable and reusable. Each user is effectively
authenticated by the system only once.

• Security: Multiple secret keys, third-party authorization, and cryptography make Kerberos a secure
verification protocol. Passwords are not sent over the networks, and secret keys are encrypted,
making it hard for attackers to impersonate users or services.

• Performance: With respect to performance, Kerberos caches client information after
verification. This means it can perform better than NTLM, especially on large farms. Kerberos can also
pass client credentials from a front-end web server to other back-end servers such as SQL
Server.
Shell Scripting
Usually shells are interactive, which means they accept commands as input from users and
execute them. However, sometimes we want to execute a bunch of commands routinely, and
typing all of the commands each time in the terminal becomes tedious.
As the shell can also take commands as input from a file, we can write these commands in a file
and execute them in the shell to avoid this repetitive work. Such files are called Shell
Scripts or Shell Programs. Shell scripts are similar to batch files in MS-DOS. Each shell
script is conventionally saved with the .sh file extension, e.g. myscript.sh.
A shell script has syntax just like any other programming language. If you have prior
experience with a programming language like Python or C/C++, it is very easy to
get started.
A shell script comprises the following elements (a minimal example is shown after this list) –
• Shell Keywords – if, else, break etc.
• Shell commands – cd, ls, echo, pwd, touch etc.
• Functions
• Control flow – if..then..else, case and shell loops etc.
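A minimal sketch of such a script, combining a keyword, shell commands, a function, and control flow (the file name myscript.sh is just an example):

#!/bin/bash
# greet: a small function that prints a greeting
greet() {
    echo "Hello, $1"
}

if [ -d "$HOME" ]; then        # shell keyword and control flow
    cd "$HOME" && pwd          # shell commands
    greet "$(whoami)"          # function call
fi

Run it with bash myscript.sh, or make it executable with chmod +x myscript.sh and run ./myscript.sh.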
Why do we need shell scripts
There are many reasons to write shell scripts:
• To avoid repetitive work and automation
• System admins use shell scripting for routine backups
• System monitoring
• Adding new functionality to the shell etc.
Advantages of shell scripts
• The commands and syntax are exactly the same as those directly entered on the
command line, so the programmer does not need to switch to an entirely different syntax
• Writing shell scripts is much quicker
• Quick start
• Interactive debugging etc.
Disadvantages of shell scripts
• Prone to costly errors; a single mistake can change the command, which might be
harmful
• Slow execution speed
• Design flaws within the language syntax or implementation
• Not well suited for large and complex tasks
• Provides minimal data structures unlike other scripting languages, etc.

Basic Operators
There are 5 basic operators in bash/shell scripting:
• Arithmetic Operators
• Relational Operators
• Boolean Operators
• Bitwise Operators
• File Test Operators
1. Arithmetic Operators: These operators are used to perform normal
arithmetic/mathematical operations (a short example follows this list). There are 7 arithmetic operators:
• Addition (+): Binary operation used to add two operands.
• Subtraction (-): Binary operation used to subtract two operands.
• Multiplication (*): Binary operation used to multiply two operands.
• Division (/): Binary operation used to divide two operands.
• Modulus (%): Binary operation used to find remainder of two operands.
• Increment Operator (++): Unary operator used to increase the value of operand by
one.
• Decrement Operator (--): Unary operator used to decrease the value of an operand
by one
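In bash these operators are typically used inside arithmetic expansion $(( )); a small sketch:

a=7
b=3
echo $(( a + b ))   # addition: 10
echo $(( a % b ))   # modulus: 1
(( a++ ))           # increment
echo $a             # 8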

2. Relational Operators: Relational operators are those operators which define the relation
between two operands. They return either true or false depending upon the relation (a short
example follows this list). They are of 6 types:
• ‘==’ Operator: The double equal to operator compares the two operands. It returns true
if they are equal, otherwise it returns false.
• ‘!=’ Operator: The not equal to operator returns true if the two operands are not equal,
otherwise it returns false.
• ‘<‘ Operator: The less than operator returns true if the first operand is less than the second
operand, otherwise it returns false.
• ‘<=’ Operator: The less than or equal to operator returns true if the first operand is less than
or equal to the second operand, otherwise it returns false.
• ‘>’ Operator: The greater than operator returns true if the first operand is greater than
the second operand, otherwise it returns false.
• ‘>=’ Operator: The greater than or equal to operator returns true if the first operand is
greater than or equal to the second operand, otherwise it returns false.
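Inside bash arithmetic evaluation (( )) these operators can be used directly; inside single [ ] test brackets, numeric comparisons are normally written with -eq, -lt, -ge and so on. A small sketch:

x=5
y=10
if (( x < y )); then
    echo "x is less than y"
fi
if [ "$x" -eq 5 ]; then
    echo "x equals 5"
fi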

3. Logical Operators : They are also known as boolean operators. These are used to perform
logical operations (a short example follows this list). They are of 3 types:
• Logical AND (&&): This is a binary operator, which returns true if both the operands
are true, otherwise it returns false.
• Logical OR (||): This is a binary operator, which returns true if either or both of the operands
are true, and returns false only when both are false.
• Logical NOT (!): This is a unary operator which returns true if the operand is false
and returns false if the operand is true.
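A small sketch of &&, || and ! in a bash condition (the path /tmp/demo.txt is just an example):

file=/tmp/demo.txt
if [ -e "$file" ] && [ -r "$file" ]; then
    echo "file exists and is readable"
elif [ ! -e "$file" ] || [ ! -r "$file" ]; then
    echo "file is missing or not readable"
fi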

4. Bitwise Operators: A bitwise operator is an operator used to perform bitwise operations
on bit patterns (a short example follows this list). They are of 6 types:
• Bitwise And (&): Bitwise & operator performs binary AND operation bit by bit on the
operands.
• Bitwise OR (|): Bitwise | operator performs binary OR operation bit by bit on the
operands.
• Bitwise XOR (^): Bitwise ^ operator performs binary XOR operation bit by bit on the
operands.
• Bitwise complement (~): Bitwise ~ operator performs binary NOT operation bit by
bit on the operand.
• Left Shift (<<): This operator shifts the bits of the left operand to left by number of
times specified by right operand.
• Right Shift (>>): This operator shifts the bits of the left operand to right by number
of times specified by right operand.
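A quick sketch inside $(( )):

a=12                # binary 1100
b=10                # binary 1010
echo $(( a & b ))   # 8  (binary 1000)
echo $(( a | b ))   # 14 (binary 1110)
echo $(( a ^ b ))   # 6  (binary 0110)
echo $(( a << 1 ))  # 24 (shift left by one)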
5. File Test Operators: These operators are used to test a particular property of a file (a short example follows this list).
• -b operator: This operator checks whether a file is a block special file or not. It returns
true if the file is a block special file, otherwise false.
• -c operator: This operator checks whether a file is a character special file or not. It
returns true if it is a character special file, otherwise false.
• -d operator: This operator checks if the given directory exists or not. If it exists, the
operator returns true, otherwise false.
• -e operator: This operator checks whether the given file exists or not. If it exists, this
operator returns true, otherwise false.
• -r operator: This operator checks whether the given file has read access or not. If it
has read access, it returns true, otherwise false.
• -w operator: This operator checks whether the given file has write access or not. If it
has write access, it returns true, otherwise false.
• -x operator: This operator checks whether the given file has execute access or not. If
it has execute access, it returns true, otherwise false.
• -s operator: This operator checks the size of the given file. If the size of the given file is
greater than 0, it returns true, otherwise false.
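A short sketch using /etc/passwd (which normally exists, is readable, and is non-empty) and the /tmp directory:

f=/etc/passwd
[ -e "$f" ] && echo "exists"
[ -r "$f" ] && echo "readable"
[ -s "$f" ] && echo "non-empty"
[ -d /tmp ] && echo "/tmp is a directory"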

Conditional Statements
Conditional Statements: There are a total of 5 conditional statements which can be used in bash
programming:
1. if statement
2. if-else statement
3. if..elif..else..fi statement (Else If ladder)
4. if..then..else..if..then..fi..fi..(Nested if)
5. switch statement

1] if statement
This block is executed only if the specified condition is true.
Syntax:
if [ expression ]
then
statement
fi
if-else statement
If the specified condition is not true in the if part, then the else part will be executed.
Syntax

if [ expression ]
then
statement1
else
statement2
fi
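
A small sketch that uses the if-else form shown above (the threshold 10 is arbitrary):

read -p "Enter a number: " num
if [ "$num" -gt 10 ]
then
    echo "greater than 10"
else
    echo "10 or less"
fi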

if..elif..else..fi statement (Else If ladder)


To use multiple conditions in one if-else block, the elif keyword is used in shell. If
expression1 is true then it executes statement1 and statement2, and this process continues. If none
of the conditions is true, then it processes the else part.
Syntax
if [ expression1 ]
then
statement1
statement2
.
.
elif [ expression2 ]
then
statement3
statement4
.
.
else
statement5
fi

if..then..else..if..then..fi..fi..(Nested if)
A nested if-else block can be used when one condition is satisfied and then another
condition needs to be checked. In the syntax below, if expression1 is false then the else part is
processed, where expression2 is checked in turn.
Syntax:
if [ expression1 ]
then
statement1
statement2
.
else
if [ expression2 ]
then
statement3
.
fi
fi

switch statement
The case statement works like a switch statement: if the specified value matches a pattern, then the
block of that particular pattern is executed.
When a match is found, all of the associated statements up to the double semicolon (;;) are
executed.
The case terminates when the last command of the matching block is executed.
If there is no match, the exit status of the case is zero.

Syntax:
case $variable in
pattern1) statement1 ;;
patternN) statementN ;;
esac
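
A small sketch of the case statement (the fruit names are arbitrary):

read -p "Enter a fruit: " fruit
case $fruit in
    apple)  echo "apples are red" ;;
    banana) echo "bananas are yellow" ;;
    *)      echo "unknown fruit" ;;
esac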
Looping Statements
Looping Statements in Shell Scripting: There are a total of 3 looping statements which can be
used in bash programming:

1. while statement
2. for statement
3. until statement
To alter the flow of loop statements, two commands are used; they are:

1. break
2. continue
Their descriptions and syntax are as follows (a combined example is given at the end of this section):

• while statement
Here the command/condition is evaluated, and based on the result the loop body is executed;
when the condition evaluates to false, the loop terminates.
Syntax:
while [ condition ]
do
statement(s)
done

for statement
The for loop operates on lists of items. It repeats a set of commands for every item in a list.
Syntax:
for var in word1 word2 ... wordN
do
statement(s)
done
Here var is the name of a variable and word1 to wordN are sequences of characters
separated by spaces (words). Each time the for loop executes, the value of the variable var is
set to the next word in the list of words, word1 to wordN.

until statement
The until loop is executed as long as the condition/command evaluates to false. The
loop terminates when the condition/command becomes true.
Syntax:
until [ condition ]
do
statement(s)
done
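
A small sketch combining the three loops together with break and continue (the values are arbitrary):

# while: count from 1 to 3
i=1
while [ $i -le 3 ]
do
    echo "while: $i"
    i=$(( i + 1 ))
done

# for: skip the item "b" with continue
for item in a b c
do
    [ "$item" = "b" ] && continue
    echo "for: $item"
done

# until: stop early with break
n=0
until [ $n -ge 10 ]
do
    n=$(( n + 1 ))
    [ $n -eq 3 ] && break
done
echo "until stopped at n=$n"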
