
FOSS Unit IV Notes

Wikipedia is a free online encyclopedia created collaboratively by volunteers. Anyone can edit articles, though reliability is a concern due to lack of oversight. To contribute, one creates an account, learns content guidelines, and helps improve accuracy. Maintaining an open source project requires documenting the project, organizing feedback, and engaging the community to encourage contributions that further the project.

Uploaded by

Vaibhav Pearson
Copyright
© All Rights Reserved

UNIT IV FOSS

Introduction to Wikipedia

Wikipedia is a free, open content online encyclopedia created through the collaborative effort of a community of users known as Wikipedians. Anyone registered on the site can create an article for publication; registration is not required to edit articles. The site's name comes from wiki, a server program that enables anyone to edit Web site content through their Web browser.

Jimmy Wales and Larry Sanger co-founded Wikipedia as an offshoot of an earlier encyclopedia project, Nupedia, in January 2001. Originally, Wikipedia was created to provide content for Nupedia. However, as the wiki site became established, it soon grew beyond the scope of the earlier project. As of January 2015, the website provided well over five million articles in English and more than that number in all other languages combined. At that time, Alexa ranked Wikipedia as the seventh-most popular site on the Internet; it was the only non-commercial site in the top ten.

Criticisms of Wikipedia include assertions that its openness makes it unreliable and unauthoritative. Because articles don't include bylines, authors aren't publicly accountable for what they write. Similarly, because anyone can edit any article, the site's entries are vulnerable to unscrupulous edits. In August 2007, Virgil Griffith created a site, WikiScanner, where users could track the sources of edits to Wikipedia entries. Griffith reported that self-serving edits typically involved whitewashing or removal of criticism of a person or organization or, conversely, insertion of negative comments into the entry about a competitor. Wikipedia depends upon the vigilance of editors to find and reverse such changes to content.

Contributing to Wikipedia

Getting started

As a new editor, also known as a contributor, you may feel a little overwhelmed by
the sheer size and scope of this project called Wikipedia. Don't worry too much if
you don't understand everything at first, as it is acceptable to use common sense as
you go about editing. Wikipedia not only allows you to create, revise, and edit
articles, but it wants you to do so.

Creating an account is free of charge and has several benefits (for example, the
ability to create pages, upload media and edit without one's IP address being
visible to the public).

The basics of contributing

Wikipedia is the result of the contributions of thousands of editors, each of whom brings something unique to the table, including research skills, technical know-how, writing talent, and nuggets of information, but most significantly, a willingness to help.

Article development and content protocols

Each article focuses on a single subject, rather than on a term and its meaning, which typically belong in Wiktionary. Original research is not published on Wikipedia.

The quality of Wikipedia articles varies widely; many are very good, but some lack depth and clarity, contain bias, or are out of date. In general, high-quality articles have a lead section that gives an easy-to-understand overview, a clear structure, balanced coverage, and neutral content, and they are based on verifiable information found in reliable sources.

Benefits of contributing to Wikipedia

● Sharing knowledge and information: By contributing to Wikipedia, you can help share knowledge and information with others. This can be particularly beneficial for people who may not have access to other sources of information.
● Improving accuracy and reliability: By contributing to Wikipedia, you can
help ensure that the information on the site is accurate and reliable. This can
be especially important for people who rely on Wikipedia as a source of
information.
● Building community: Contributing to Wikipedia can be a great way to
connect with other people who have similar interests. This can help build a
sense of community and belonging.
● Personal growth and development: Contributing to Wikipedia can be a great
way to learn new skills and develop your knowledge. This can be
particularly beneficial for students and researchers.
● Giving back: By contributing to Wikipedia, you can help make the world a
better place by giving back to others. This can be a rewarding and fulfilling
experience.
● Please note that Wikipedia has specific guidelines and policies that must be
followed when contributing to the site. It's important to familiarize yourself
with these guidelines and policies before making any contributions.

Starting and Maintaining Your Own Open Source Project



How do you start an open-source project? The process can be divided into three phases:

● Individual senses the need for the project: This is the phase when a developer thinks about developing open-source software that is required by people in the community, by corporations, or by day-to-day users. He or she senses the need for a certain kind of software that should be available in the market so that everyone can benefit from its development.
● Announcing the intent to develop it to the public: When a developer thinks of developing certain software, there are multiple hurdles he might face, including a lack of resources; here, resources means the time to invest, the tools required, and the utilities that might help with development. In such cases, the developer releases the idea to the public, proposing the technologies required, the areas of specialization, and the tools needed to develop that particular idea into a fully functional project.
● Source code of mature software is made available to the public: No one is going to contribute until you show some intent and progress toward the development of the software. The developer builds software that will then be modified and updated by people in the community and by those who use it.

Now, Let's talk about maintaining an Open-Source Project. This is a vast topic and
needs to be understood very clearly.

Introduction:

When you maintain an open-source project, you take on a leadership role. Whether you are the sole developer of an idea who released it to the public for use and contributions, or you manage a team and maintain one specific part of the project, you are providing a significant service to the larger developer community.

While open-source contributions from the community are vital for ensuring that software is as useful as it can be for end users, maintainers shape the overall direction of the project. Project maintainers are deeply engaged with the open-source software they oversee, from day-to-day organization and development to interacting with the public and giving prompt, effective feedback to contributors.

This article will take you through some tips for maintaining open-source projects. Being the leader of an open-source project comes with both technical and non-technical responsibilities to help foster a user base and community around your project. Taking on the role of a maintainer is an opportunity to learn from others, get involved in project management, and watch your project grow and change as your users become potential contributors.

Maintain Documentation

Documentation that is thorough, well organized, and serves the intended communities of your project will help grow your user base. Over time, your user base will turn into contributors to your open-source software.

Since you'll be thinking through the code you are writing anyway, and may even be jotting down notes, it can be worthwhile to incorporate documentation into your development process while the code is fresh in your mind. You may even want to consider writing the documentation before the code, following the philosophy of documentation-driven development, which documents features first and builds those features after working out what they will do.
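As a small illustration of documentation-driven development, the sketch below writes a feature's documentation (the docstring) before its implementation; the function and its behavior are hypothetical examples, not part of any real project:

```python
def slugify(title):
    """Turn an article title into a URL slug.

    Documented first, implemented second: this docstring was written
    before any code, fixing the feature's behavior up front.

    - Lowercase the title.
    - Replace runs of spaces with single hyphens.
    - Drop characters that are not letters, digits, or hyphens.
    """
    words = title.lower().split()          # lowercase, split on whitespace runs
    slug = "-".join(words)                 # single hyphens between words
    return "".join(ch for ch in slug if ch.isalnum() or ch == "-")

print(slugify("Maintain   Documentation!"))  # maintain-documentation
```

The docstring doubles as the feature's specification, so the implementation can be checked against it line by line.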

Documentation can come in many forms and can target various audiences. As part of your documentation, and depending on the scope of your work, you may choose to produce at least one of the following:

1. A guide that introduces the project you have developed to the public.
2. Tutorials that give people a brief walkthrough of what you've developed.
3. The most requested document: one that answers Frequently Asked Questions.
4. A troubleshooting document that helps users resolve errors.
5. Video tutorials, which are a plus if provided.

Together, these documents will make your user base very strong.

Organize Feedback

Feedback is normally a way to track or report bugs, or to request new features to be added to the code. Open-source project hosting services like GitHub, GitLab and Bitbucket will give you an interface for yourself and others to track feedback within your repository. When releasing open-source code to the public, you should expect to have issues opened by the community of users. Sorting and prioritizing this feedback will give you a good roadmap for upcoming work on your project.

Since any user can file feedback, not all of it will report bugs or request features; you may get questions through the issue tracker, or requests for smaller improvements to the UI, for instance. It is best to organize this feedback as much as reasonably possible and to be open with the users who submit it.

Each issue should represent a concrete task to be done on the source code, and you should prioritize issues accordingly. You and your community of developers will have an understanding of how much time you can dedicate to filed issues, and together you can work collaboratively to make decisions and map out meaningful updates. When you know you won't be able to get to an issue within a short period, you can still comment on it to tell the user that you have read it and will address it when you can, and, if possible, give an estimated timeline for when you will act on the feedback.
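The sorting-and-prioritizing step above can be sketched in a few lines of Python. The labels and issue fields here are hypothetical and not tied to any particular tracker's API:

```python
# Hypothetical rank order for issue labels; unknown labels sort last (99).
PRIORITY = {"bug": 0, "feature": 1, "enhancement": 2, "question": 3}

def triage(issues):
    """Sort issues by label priority, then by age (older issues first)."""
    return sorted(issues,
                  key=lambda i: (PRIORITY.get(i["label"], 99), i["opened_day"]))

issues = [
    {"id": 7, "label": "question", "opened_day": 1},
    {"id": 3, "label": "bug", "opened_day": 5},
    {"id": 9, "label": "feature", "opened_day": 2},
]
print([i["id"] for i in triage(issues)])  # [3, 9, 7]
```

The sorted list then serves as the roadmap: bugs first, then features, then smaller requests and questions.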

Understanding the Open Source Ecosystem: Open Source Operating Systems

What is an Open-Source Operating System?

The term "open source" refers to computer software or applications where the owners or copyright holders enable users or third parties to use, see, and edit the product's source code. The source code of an open-source OS is publicly visible and editable. Popular operating systems such as Apple's iOS, Microsoft's Windows, and Apple's macOS are closed operating systems. Open-source software is licensed in such a way that it is permissible to produce as many copies as you want and to use them wherever you like. It generally uses fewer resources than its commercial counterpart because it lacks any code for licensing, promoting other products, authentication, attaching advertisements, etc.

An open-source operating system allows the use of code that is freely distributed and available to anyone, including for commercial purposes. Being an open-source program, an open-source OS makes its source code available, and the user may modify that code and develop new applications according to the user's requirements. Some basic examples of open-source operating systems are Linux, OpenSolaris, FreeRTOS, OpenBSD, FreeBSD, Minix, etc.

In 1997, the first open-source software was released. Regardless of the industry, there are now open-source alternatives for every software program. Thanks to technological developments and innovations, many open-source operating systems have been developed since the dawn of the 21st century.

How does Open-Source Operating System work?

It works similarly to a closed operating system, except that the user may modify
the source code of the program or application. There may be a difference in
function even if there is no difference in performance.

For instance, in a proprietary (closed) operating system, information is packed and stored in a particular way. In an open-source OS, the same thing happens. However, because the source code is visible to you, you may better understand the process and change how data is processed.

While the former kind of operating system is secure and hassle-free, the latter requires some technical knowledge, but you may customize it and increase performance.

There is no specific way or framework for working on an open-source OS; it may be customized to the user's requirements.

GNU/Linux

GNU/Linux is a Unix-like operating system made up of different OS components and services that create the Linux OS.

GNU stands for GNU's not Unix, which makes the term a recursive acronym, or an
acronym in which one of the letters stands for the acronym itself. The GNU Project
initially created most of the components and services used in GNU/Linux and later
added the Linux kernel to create the GNU/Linux OS. The Linux kernel is the core
component of GNU/Linux, as it provides basic services and allocates OS
resources.

GNU/Linux is not one organization's product, as several organizations and individuals contribute to it. The OS comes with source code that can be copied,
modified and redistributed. GNU/Linux also branches off into many different
software packages, called distributions. Distributions change the appearance and
function of GNU/Linux, making it an especially flexible OS.

Although there are numerous distributions, Debian, Fedora and Ubuntu are three
user-friendly examples of GNU/Linux desktop distributions.

Debian was developed by the community-supported Debian Project and is one of
the oldest OSes based around the Linux kernel. It is developed openly and
distributed following the principles of the GNU Project. The Free Software
Foundation (FSF) sponsored Debian between 1994 and 1995.

Fedora was developed by the Fedora Project and is sponsored by Red Hat Inc. Its
goal is to lead in open source technologies by focusing on integrating new
technologies and working closely with Linux-based communities.

The Ubuntu OS, which is based on the Debian Linux distribution, is composed of
free and open source software. Ubuntu is an OS typically used for cloud computing
and is supported by OpenStack.

Free software movement activist and programmer Richard Stallman announced the GNU Project and, with others, formed the FSF in 1985.

How is GNU/Linux used?

GNU/Linux is not much different from Microsoft Windows. Commercial-quality software is available for users to work with, along with additional free, high-quality applications users can find and install.

The original purpose of the GNU Project was to create a free OS. Free -- not in the
context of cost -- but in terms of giving users the freedom to run, copy, distribute,
study, change and improve the software as needed. As such, individuals can change
the OS and exchange its components however they want. The Linux community
participates in the development and improvement of the OS.

Software developers profit by selling support and services around their own
GNU/Linux distribution. Corporate customers buy security updates and support.

Other organizations contribute to GNU/Linux by pre-installing the OS on servers they sell.

What are the advantages of GNU/Linux?

GNU/Linux comes with the following benefits:

● Software customization. Users can customize the OS's software to their liking. For example, users can choose from different command-line shells, which are programs that enable them to give text commands to a computer program. It is referred to as a shell because it is an outer layer of the OS.
● Stability. The OS is stable and rarely crashes.
● Open standards. GNU/Linux integrates with other open source platforms, as it supports open standards.
● Community. The GNU/Linux user base is a wide and varied group that can create, distribute and help support software.
● Transparency. Users can study the source code, as well as modify and share it. Distributions are also developed in the open.
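The shell mentioned in the first advantage can be seen in action from any language. This Python sketch hands a text command to the Bourne shell (`sh`), which interprets it exactly as it would at an interactive prompt (assumes a Unix-like system with `sh` installed):

```python
import subprocess

# sh -c "<command>" asks the shell to interpret the text command for us,
# the same way it would if typed at a terminal prompt.
result = subprocess.run(
    ["sh", "-c", "echo hello from the shell"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # hello from the shell
```

Swapping `sh` for `bash`, `zsh`, or another installed shell is exactly the customization the bullet describes.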

What are the disadvantages of GNU/Linux?

Some disadvantages of GNU/Linux include the following:

● Learning curve. If a user is accustomed to Windows or macOS, it might take time to get used to the new system and applications.

● Different software. Users might miss familiar applications, such as Microsoft Office or the Adobe Creative Suite.
● Potential lack of hardware support. Even though a lot of hardware
supports GNU/Linux, not all does. Users must know beforehand if the
hardware they want supports their OS.

Android

Android OS is a Linux-based mobile operating system that primarily runs on smartphones and tablets.

The Android platform includes an operating system based upon the Linux kernel, a
GUI, a web browser and end-user applications that can be downloaded. Although
the initial demonstrations of Android featured a generic QWERTY smartphone and
large VGA screen, the operating system was written to run on relatively
inexpensive handsets with conventional numeric keypads.

Android was released under the Apache v2 open source license; this allows for
many variations of the OS to be developed for other devices, such as gaming
consoles and digital cameras. Android is based on open source software, but most
Android devices come preinstalled with a suite of proprietary software, such as
Google Maps, YouTube, Google Chrome and Gmail.

History and development of Android OS

Android began its life as a Palo Alto-based startup company called Android Inc., in 2003. Originally, the company set out to develop an operating system for digital cameras, but it abandoned those efforts in favor of reaching a broader market.

Google acquired Android Inc. and its key employees in 2005 for at least $50
million. Google marketed the early mobile platform to handset manufacturers and
mobile carriers with its major benefits as flexibility and upgradability.

Google was discreetly developing Android OS when Apple released the iPhone in
2007. Previous prototypes of an Android phone closely resembled a BlackBerry,
with a physical keyboard and no touchscreen. The launch of the iPhone, however,
changed the mobile computing market significantly and forced Android creators to
support touchscreens more heavily. Nevertheless, the HTC Dream, which was the first commercially available smartphone to run Android OS, featured a QWERTY keyboard and received mixed critical reception at its 2008 release.

In late 2007, the Open Handset Alliance (OHA) announced its formation. The
OHA was a coalition of more than 30 hardware, software and telecommunications
companies, including Google, Qualcomm, Broadcom, HTC, Intel, Samsung,
Motorola, Sprint, Texas Instruments and Japanese wireless carriers KDDI and NTT
DoCoMo. The alliance's goal was to contribute to the development of the first open
source platform for mobile devices.

Google released the public beta version of Android 1.0 for developers around the
same time of the alliance's announcement, in November 2007. It wasn't until
Google released Android 1.5 in April 2009 that Google introduced Android's
signature dessert-themed naming scheme; the name of Android 1.5 was "Cupcake."
Around the time of the release of Android 4.4 KitKat, Google released an official statement to explain the naming: "Since these devices make our lives so sweet, each Android version is named after a dessert."

What are Android OS features?

The default UI of Android relies on direct manipulation inputs such as tapping, swiping and pinching to initiate actions. The device provides haptic feedback to the user via alerts such as vibrations in response to actions. If a user presses a navigation button, for example, the device vibrates.

When a user boots a device, Android OS displays the home screen, which is the primary navigation hub for Android devices and comprises widgets and app icons. Widgets are informational displays that automatically update content such as
weather or news. The home screen display can differ based on the device
manufacturer that is running the OS. Users can also choose different themes for the
home screen via third-party apps on Google Play.

A status bar at the top of the home screen displays information about the device
and its connectivity, such as the Wi-Fi network that the device is connected to or
signal strength. Users can pull down the status bar with a swipe of a finger to view
a notification screen.

Android OS also includes features to save battery usage. The OS suspends applications that aren't in use to conserve battery power and CPU usage. Android includes memory management features that automatically close inactive processes stored in its memory.
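The memory-management behavior described above can be mimicked with a toy least-recently-used scheme (purely illustrative; Android's real policy is more sophisticated):

```python
from collections import OrderedDict

class ProcessTable:
    """Toy process table that closes the least recently used apps
    when the number of resident processes exceeds a limit."""

    def __init__(self, limit):
        self.limit = limit
        self.procs = OrderedDict()  # name -> resident; order = recency

    def touch(self, name):
        """Launch or foreground an app, evicting stale ones if needed."""
        if name in self.procs:
            self.procs.move_to_end(name)  # mark as most recently used
        else:
            self.procs[name] = True
        while len(self.procs) > self.limit:
            evicted, _ = self.procs.popitem(last=False)  # close oldest
            print("closed inactive process:", evicted)

table = ProcessTable(limit=2)
for app in ["maps", "mail", "maps", "camera"]:
    table.touch(app)
print(list(table.procs))  # ['maps', 'camera']
```

Here "mail" is the least recently used process when "camera" launches, so it is the one closed automatically.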

Android runs on both of the most widely deployed cellular standards, GSM/HSDPA and CDMA/EV-DO. Android also supports:

● Bluetooth
● EDGE
● 3G communication protocols, like EV-DO and HSDPA
● Wi-Fi
● Autocorrect
● SMS and MMS messaging
● video/still digital cameras
● GPS
● compasses
● accelerometers
● accelerated 3D graphics
● multitasking applications

FreeBSD

FreeBSD is a free and open-source Unix-like OS developed from the Berkeley Software Distribution (BSD). The initial version of FreeBSD was released in 1993. By 2005 it was the most popular open-source BSD OS, responsible for over three-quarters of all installed BSD systems, and it carries a simple, permissive license. It may not be labeled a UNIX OS due to legal constraints, while still being compatible with UNIX internals and APIs. Because FreeBSD's license rules allow developers a great deal of freedom, much FreeBSD code has been reused by other operating systems (such as macOS). However, FreeBSD itself is not certified as UNIX, whereas macOS does carry official UNIX branding.

Developers Lynne Jolitz and William Jolitz named the predecessor OS 386BSD after porting it to Intel's 80386 CPUs. FreeBSD is described as a feature-complete operating system thanks to its well-known components, which include full-fledged documentation, tools, a kernel, and device drivers. This functional design makes it suitable for several applications; as a result, it works in both desktop environments and on servers. It is widely reported that FreeBSD code is used in the development of Apple's operating systems.

Most of FreeBSD's codebase has found its way into other operating systems like
Darwin, TrueNAS, PlayStation 3, PlayStation 4, and Nintendo Switch gaming
consoles system software. Additional third-party software can be installed using
pkg, FreeBSD Ports, or manually compiling source code. A security team oversees
all software supplied in the base distribution as part of its initiatives.

Features of FreeBSD Operating System

FreeBSD can serve as an operating system in a variety of roles. Let's take a look at them one by one.

1. Server

A FreeBSD system usually includes many server-related software packages in the base system. This availability of important software allows you to configure the FreeBSD operating system easily and use it as a web server, DNS server, firewall, FTP server, mail server, or router.

2. Networking

The FreeBSD TCP/IP stack contributed considerably to the widespread use of these protocols. FreeBSD supports a wide range of networking technologies, such as IPsec, SCTP, IPv6, and wireless networking, and even outdated protocols like IPX and AppleTalk. In addition, FreeBSD currently supports CARP (Common Address Redundancy Protocol), which was imported from the OpenBSD OS. CARP enables numerous nodes to share a common set of IP addresses; the main benefit of this is that if one node fails, others are available to handle requests.
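The failover idea behind CARP can be sketched as a toy election (an illustration of the concept, not the protocol itself): several nodes advertise for one shared virtual IP, and when the current master stops responding, the next healthy node takes over. The node names and dictionary fields here are made up, though `advskew` (lower = preferred) is a real CARP parameter:

```python
def elect_master(nodes):
    """Return the name of the healthy node with the lowest advertising skew."""
    healthy = [n for n in nodes if n["alive"]]
    return min(healthy, key=lambda n: n["advskew"])["name"] if healthy else None

virtual_ip = "192.0.2.10"  # the shared address the winning node answers for
nodes = [
    {"name": "fw1", "advskew": 0, "alive": True},
    {"name": "fw2", "advskew": 100, "alive": True},
]
print(elect_master(nodes), "answers for", virtual_ip)  # fw1 answers for 192.0.2.10

nodes[0]["alive"] = False  # fw1 fails
print(elect_master(nodes), "answers for", virtual_ip)  # fw2 answers for 192.0.2.10
```

Clients keep talking to the same virtual IP throughout; only the node answering for it changes.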

3. Embedded System

FreeBSD can be used in embedded systems because it is easily extended to support PowerPC, MIPS, and ARM.

4. Portability

Usually, the FreeBSD project splits supported architectures into tiers, which provide different levels of support. Tier 1 architectures are mature and fully supported. Tier 2 architectures are under major development. Tier 3 architectures are experimental. Finally, Tier 4 architectures are unsupported.

5. Storage

Storage is an important feature of FreeBSD. Soft updates protect the consistency of a UFS (UNIX File System) filesystem, which helps if the computer crashes. Filesystem snapshots let you capture the state of a filesystem instantaneously while performing other tasks, and these snapshots allow you to take a reliable backup of a live filesystem. GEOM is a modular storage framework that currently offers RAID levels 0, 1, and 3, caching, concatenation, full disk encryption, and network-backed storage. GEOM also allows you to create complicated storage solutions by chaining various mechanisms together.
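Of the GEOM features above, RAID 0 (striping) is the easiest to sketch: data is split into fixed-size chunks dealt across the disks in turn. A toy byte-string model in Python (real GEOM operates on block devices, not strings):

```python
def stripe(data, disks, chunk=4):
    """Deal fixed-size chunks of `data` across `disks` round-robin (RAID 0)."""
    layout = [[] for _ in range(disks)]
    for i in range(0, len(data), chunk):
        layout[(i // chunk) % disks].append(data[i:i + chunk])
    return layout

def restripe(layout):
    """Reassemble the original data by reading chunks back round-robin."""
    out, i = [], 0
    while any(layout):                   # stop when every disk list is empty
        disk = layout[i % len(layout)]
        if disk:
            out.append(disk.pop(0))
        i += 1
    return b"".join(out)

disks = stripe(b"ABCDEFGHIJKL", disks=3, chunk=4)
print(disks)  # [[b'ABCD'], [b'EFGH'], [b'IJKL']]
```

Striping boosts throughput because chunks on different disks can be read in parallel, but with no redundancy: losing any one disk loses part of every file.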

6. FreeBSD bhyve

Its base system now contains a new BSD-licensed, legacy-free hypervisor. It may
currently run all supported versions of OpenBSD OS, FreeBSD OS, and Linux via
the grub-bhyve port.

7. Kernel

The FreeBSD kernel handles various important tasks such as process management, communication, booting, and filesystems. It is a monolithic kernel with a modular design: modules are used to implement various parts of the kernel, including device drivers, and these modules may be loaded and unloaded at any time by the user.
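The load/unload lifecycle can be mimicked with a toy registry (purely illustrative; on a real FreeBSD system the corresponding tools are kldload(8), kldstat(8), and kldunload(8)):

```python
loaded = {}  # module name -> init result, standing in for kernel state

def kldload(name, init):
    """Load a module: run its init hook and register it."""
    if name in loaded:
        raise RuntimeError(f"{name} already loaded")
    loaded[name] = init()

def kldunload(name):
    """Unload a module at any time, releasing its slot."""
    loaded.pop(name)

kldload("snd_hda", init=lambda: "audio driver ready")
print(sorted(loaded))  # ['snd_hda']
kldunload("snd_hda")
print(sorted(loaded))  # []
```

The point of the sketch is the dynamic part: drivers come and go while the "kernel" (here, the `loaded` dict) keeps running.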

Advantages and disadvantages of FreeBSD Operating System

The FreeBSD operating system has various advantages and disadvantages. Some of them are as follows:

Advantages

1. It is a free and open-source operating system, so the users can use and
develop the OS for free.

2. FreeBSD offers detailed installation guides for different platforms. Even users who are not familiar with other OSes like Linux and UNIX may install it with the help of the documentation. It may be installed from a DVD or CD-ROM, or over FTP or NFS.

3. FreeBSD gives a high priority to security, and its developers are always
working to make the OS as secure as possible.

4. It provides high stability for the database, internet server, client-server, etc.

5. It has the potential to be a suitable alternative to existing UNIX platforms.

6. It uses ipfw as the firewall.

7. It is a monolithic kernel.

Disadvantages

1. It has less developer support.

2. It is very complex to understand.

3. It requires a good amount of practice.

4. It has an issue with hardware compatibility.

5. It has limited third-party software.

6. There is no support for plug-and-play.

OpenSolaris

OpenSolaris is an open source operating system, similar in scope to GNU/Linux and BSD, but descended from the proprietary Solaris operating system from Sun Microsystems. It is helpful to think of OpenSolaris as divided into three distinct but related aspects: the code, the distributions, and the community.

Solaris is a UNIX-based operating system initially developed by Sun Microsystems and first released in 1992. It was originally licensed software, and users had to obtain licenses to install it on their systems. In 2010, Oracle acquired Sun Microsystems, and the OS was renamed Oracle Solaris; Oracle then discontinued the open-source OpenSolaris. Solaris was written in the C and C++ programming languages and was designed to work with SPARC and PowerPC systems.

Oracle is offering a free 90-day trial version of the software. You would have to
buy a license from Oracle to utilize Solaris as a development platform if you
wanted to keep using the software after the free trial period ended.

Oracle Solaris is considered simple to update cloud installations. It has been used
for legacy apps on the cloud by offering the highest security and performance.
Over time, Oracle has added new capabilities and additions to Solaris, including
the service management facility, kernel zones, and other services.

Open Source Hardware

Open-source hardware consists of physical artifacts of technology designed and offered by the open design movement.

The term typically refers to tangible machines and other physical systems which
are designed and released to the public in such a way that anyone can study,
modify, distribute, build, and sell the design or hardware based on that design.

Open source hardware has much in common with open-source software in that it
has many of the same benefits.

Open source hardware offers relatively inexpensive alternatives to its closed, proprietary counterparts.

Some good examples of open source hardware are Arduino boards. These boards
are part of a complete open source electronics prototyping platform including a
software development environment. A complete Arduino system is made up of
both open source software and hardware. Because the supporting software of
Arduino systems can be downloaded for free and the reference designs for the
hardware are available under an open source license, people could easily create
their own boards or build devices out of the Arduino software and hardware at a
minimal cost.

Just like open source software (OSS), open source hardware uses licenses. A
majority of these licenses are based on existing OSS licenses. Some of the widely
used licenses for open source hardware include the TAPR Open Hardware License,
Balloon Open Hardware License and the Hardware Design Public License.

Virtualization Technologies

Virtualization is the creation of a virtual -- rather than actual -- version of something, such as an operating system (OS), a server, a storage device or network resources.

Virtualization uses software that simulates hardware functionality to create a virtual system. This practice allows IT organizations to operate multiple operating systems, more than one virtual system and various applications on a single server. The benefits of virtualization include greater efficiencies and economies of scale.

OS virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

How virtualization works

Virtualization describes a technology in which an application, guest OS or data
storage is abstracted away from the true underlying hardware or software.

A key use of virtualization technology is server virtualization, which uses a
software layer -- called a hypervisor -- to emulate the underlying hardware. This
often includes the CPU's memory, input/output (I/O) and network traffic.

Hypervisors take the physical resources and separate them so they can be utilized
by the virtual environment. They can sit on top of an OS or they can be directly
installed onto the hardware. The latter is how most enterprises virtualize their
systems.

The Xen hypervisor is an open source software program that is responsible for
managing the low-level interactions that occur between virtual machines (VMs)
and the physical hardware. In other words, the Xen hypervisor enables the
simultaneous creation, execution and management of various virtual machines in
one physical environment.

With the help of the hypervisor, the guest OS, normally interacting with true
hardware, is now doing so with a software emulation of that hardware; often, the
guest OS has no idea it's on virtualized hardware.

While the performance of this virtual system is not equal to the performance of the
operating system running on true hardware, the concept of virtualization works
because most guest operating systems and applications don't need the full use of
the underlying hardware.

This allows for greater flexibility, control and isolation by removing the
dependency on a given hardware platform. While initially meant for server
virtualization, the concept of virtualization has spread to applications, networks,
data and desktops.

The virtualization process follows the steps listed below:

1. Hypervisors detach the physical resources from their physical
environments.
2. Resources are taken and divided, as needed, from the physical
environment to the various virtual environments.
3. System users work with and perform computations within the virtual
environment.
4. Once the virtual environment is running, a user or program can send an
instruction that requires extra resources from the physical environment.
In response, the hypervisor relays the message to the physical system and
stores the changes. This process will happen at an almost native speed.

The virtual environment is often referred to as a guest machine or virtual machine.
The VM acts like a single data file that can be transferred from one computer to
another and opened in both; it is expected to perform the same way on every
computer.

Types of virtualization

You probably know a little about virtualization if you have ever divided your hard
drive into different partitions. A partition is the logical division of a hard disk drive
to create, in effect, two separate hard drives.
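The same idea is visible from the command line. A minimal sketch using the POSIX `df` utility (output varies per machine); on a partitioned disk, the left column shows device names such as /dev/sda1 and /dev/sda2, one per partition:

```shell
# List mounted filesystems; the Filesystem column shows the backing device,
# which on a partitioned disk is typically one entry per partition.
df -P | head -n 5
```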

There are six areas of IT where virtualization is making headway:

1. Network virtualization is a method of combining the available
resources in a network by splitting up the available bandwidth into
channels, each of which is independent from the others and can be
assigned -- or reassigned -- to a particular server or device in real time.
The idea is that virtualization disguises the true complexity of the
network by separating it into manageable parts, much like your
partitioned hard drive makes it easier to manage your files.
2. Storage virtualization is the pooling of physical storage from multiple
network storage devices into what appears to be a single storage device
that is managed from a central console. Storage virtualization is
commonly used in storage area networks.
3. Server virtualization is the masking of server resources -- including the
number and identity of individual physical servers, processors and
operating systems -- from server users. The intention is to spare the user
from having to understand and manage complicated details of server
resources while increasing resource sharing and utilization and
maintaining the capacity to expand later.
The layer of software that enables this abstraction is often referred to as
the hypervisor. The most common hypervisor -- Type 1 -- is designed to
sit directly on bare metal and provide the ability to virtualize the
hardware platform for use by the virtual machines. KVM virtualization is
a Linux kernel-based virtualization hypervisor that provides Type 1
virtualization benefits like other hypervisors. KVM is released under an
open source license. A Type 2 hypervisor requires a host operating system and is
more often used for testing and labs.
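On a Linux host you can check whether the CPU extensions that Type 1 hypervisors such as KVM rely on are present. This is only a diagnostic sketch; it is safe to run anywhere and simply reports "not detected" on systems without /proc:

```shell
# vmx = Intel VT-x, svm = AMD-V; the loaded KVM module exposes /dev/kvm.
if grep -q -E 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "CPU virtualization extensions: present"
else
    echo "CPU virtualization extensions: not detected"
fi
if [ -e /dev/kvm ]; then
    echo "/dev/kvm exists: KVM is usable on this host"
else
    echo "/dev/kvm missing: the KVM module is not loaded here"
fi
```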

4. Data virtualization is abstracting the traditional technical details of data
and data management, such as location, performance or format, in favor
of broader access and more resiliency tied to business needs.
5. Desktop virtualization is virtualizing a workstation load rather than a
server. This allows the user to access the desktop remotely, typically
using a thin client at the desk. Since the workstation is essentially
running in a data center server, access to it can be both more secure and
portable. The operating system license does still need to be accounted for
as well as the infrastructure.
6. Application virtualization is abstracting the application layer away
from the operating system. This way, the application can run in an
encapsulated form without depending on the operating
system underneath. This can allow a Windows application to run on
Linux and vice versa, in addition to adding a level of isolation.

Virtualization can be viewed as part of an overall trend in enterprise IT that
includes autonomic computing, a scenario in which the IT environment will be
able to manage itself based on perceived activity, and utility computing, in which
computer processing power is seen as a utility that clients can pay for only as
needed. The usual goal of virtualization is to centralize administrative tasks while
improving scalability and workloads.

Advantages of virtualization

The advantages of utilizing a virtualized environment include the following:

● Lower costs. Virtualization reduces the number of hardware servers
necessary within a company and data center. This lowers the overall cost
of buying and maintaining large amounts of hardware.
● Easier disaster recovery. Disaster recovery is very simple in a
virtualized environment. Regular snapshots provide up-to-date data,
allowing virtual machines to be feasibly backed up and recovered. Even
in an emergency, a virtual machine can be migrated to a new location
within minutes.
● Easier testing. Testing is less complicated in a virtual environment. Even
if a large mistake is made, the test does not need to stop and go back to
the beginning. It can simply return to the previous snapshot and proceed
with the test.
● Quicker backups. Backups can be taken of both the virtual server and
the virtual machine. Automatic snapshots are taken throughout the day to
guarantee that all data is up-to-date. Furthermore, the virtual machines
can be easily migrated between each other and efficiently redeployed.
● Improved productivity. Fewer physical resources result in less time
spent managing and maintaining the servers. Tasks that can take days or
weeks in a physical environment can be done in minutes. This allows
staff members to spend the majority of their time on more productive
tasks, such as raising revenue and fostering business initiatives.

Benefits of virtualization

Virtualization provides companies with the benefit of maximizing their output.
Additional benefits for both businesses and data centers include the following:

● Single-minded servers. Virtualization provides a cost-effective way to
separate email, database and web servers, creating a more comprehensive
and dependable system.
● Expedited deployment and redeployment. When a physical server
crashes, the backup server may not always be ready or up to date. There
also may not be an image or clone of the server available. If this is the
case, then the redeployment process can be time-consuming and tedious.
However, if the data center is virtualized, then the process is quick and
fairly simple. Virtual backup tools can expedite the process to minutes.
● Reduced heat and improved energy savings. Companies that use a lot
of hardware servers risk overheating their physical resources. The best
way to prevent this from happening is to decrease the number of servers
used for data management, and the best way to do this is through
virtualization.
● Better for the environment. Companies and data centers that utilize
copious amounts of hardware leave a large carbon footprint; they must
take responsibility for the pollution they are generating. Virtualization
can help reduce these effects by significantly decreasing the necessary
amounts of cooling and power, thus helping clean the air and the
atmosphere. As a result, companies and data centers that virtualize will
improve their reputation while also enhancing the quality of their
relationship with customers and the planet.
● Easier migration to the cloud. Virtualization brings companies closer to
experiencing a completely cloud-based environment. Virtual machines
may even be deployed from the data center in order to build a
cloud-based infrastructure. The ability to embrace a cloud-based mindset
with virtualization makes migrating to the cloud even easier.
● Lack of vendor dependency. Virtual machines are agnostic in hardware
configuration. As a result, virtualizing hardware and software means that
a company does not need to depend on a vendor for these physical
resources.

Limitations of virtualization

Before converting to a virtualized environment, it is important to consider the
various upfront costs. The necessary investment in virtualization software, as well
as hardware that might be required to make the virtualization possible, can be
costly. If the existing infrastructure is more than five years old, an initial renewal
budget will have to be considered.

Fortunately, many businesses have the capacity to accommodate virtualization
without spending large amounts of cash. Furthermore, the costs can be offset by
collaborating with a managed service provider that provides monthly leasing or
purchase options.

There are also software licensing considerations to take into account when
creating a virtualized environment.
understanding of how their vendors view software use within a virtualized
environment. This is becoming less of a limitation as more software providers
adapt to the increased use of virtualization.

Converting to virtualization takes time and may come with a learning curve.
Implementing and controlling a virtualized environment requires each IT staff
member to be trained and to possess expertise in virtualization. Furthermore, some
applications do not adapt well when brought into a virtual environment. The IT
staff will need to be prepared to face these challenges and should address them
prior to converting.

There are also security risks involved with virtualization. Data is crucial to the
success of a business and, therefore, is a common target for attacks. The chances of
experiencing a data breach can increase when using virtualization, since the
hypervisor and management layer add attack surface.

Finally, in a virtual environment, users lose control of what they can do because
there are several links that must collaborate to perform the same task. If any part is
not working, then the entire operation will fail.

Containerization Technologies:
Containerization technology grew out of improvements to virtualization. It is also
commonly described as OS-level virtualization. Confusing? Let’s break it down.

Containerization is all about packaging the requirements of an application under
development in the form of a base image. This image can run in an isolated space
(containers) on different systems. It is crucial to remember that these containers
share the same OS. Most IT leaders are intrigued by this technology because it is
often used for deploying and running a distributed app without having to use a
Virtual Machine (VM).

VMs had the problem of error-prone behavior when transferring application
infrastructure from one computing environment to another. For example, many
enterprises found it difficult to build apps when one developer shared a
development file from a Linux-based to a Windows-based OS.

In 2013, Docker was introduced, eliminating this problem by disrupting
software development processes. It allowed Linux-based code to run efficiently on
Windows-based systems via Docker. Though it was theoretical back then,
Microsoft was the first to make this possible in 2016. Ever since, many websites
and leading tech giants such as Shopify, Pinterest, Riot Games, and more have
been using containerization technology.

Docker

Docker overview

Docker is an open platform for developing, shipping, and running applications.
Docker enables you to separate your applications from your infrastructure so you
can deliver software quickly. With Docker, you can manage your infrastructure in
the same ways you manage your applications. By taking advantage of Docker's
methodologies for shipping, testing, and deploying code, you can significantly
reduce the delay between writing code and running it in production.

The Docker platform

Docker provides the ability to package and run an application in a loosely isolated
environment called a container. The isolation and security lets you run many
containers simultaneously on a given host. Containers are lightweight and contain
everything needed to run the application, so you don't need to rely on what's
installed on the host. You can share containers while you work, and be sure that
everyone you share with gets the same container that works in the same way.

Docker provides tooling and a platform to manage the lifecycle of your containers:

● Develop your application and its supporting components using containers.
● The container becomes the unit for distributing and testing your application.
● When you're ready, deploy your application into your production
environment, as a container or an orchestrated service. This works the same
whether your production environment is a local data center, a cloud provider,
or a hybrid of the two.

What can I use Docker for?

Fast, consistent delivery of your applications

Docker streamlines the development lifecycle by allowing developers to work in
standardized environments using local containers which provide your applications
and services. Containers are great for continuous integration and continuous
delivery (CI/CD) workflows.

Consider the following example scenario:

● Your developers write code locally and share their work with their
colleagues using Docker containers.
● They use Docker to push their applications into a test environment and run
automated and manual tests.
● When developers find bugs, they can fix them in the development
environment and redeploy them to the test environment for testing and
validation.
● When testing is complete, getting the fix to the customer is as simple as
pushing the updated image to the production environment.

Responsive deployment and scaling


Docker's container-based platform allows for highly portable workloads. Docker
containers can run on a developer's local laptop, on physical or virtual machines in
a data center, on cloud providers, or in a mixture of environments.

Docker's portability and lightweight nature also make it easy to dynamically
manage workloads, scaling up or tearing down applications and services as
business needs dictate, in near real time.

Running more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to
hypervisor-based virtual machines, so you can use more of your server capacity to
achieve your business goals. Docker is perfect for high density environments and
for small and medium deployments where you need to do more with fewer
resources.

Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker
daemon, which does the heavy lifting of building, running, and distributing your
Docker containers. The Docker client and daemon can run on the same system, or
you can connect a Docker client to a remote Docker daemon. The Docker client
and daemon communicate using a REST API, over UNIX sockets or a network
interface. Another Docker client is Docker Compose, which lets you work with
applications consisting of a set of containers.

The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages
Docker objects such as images, containers, networks, and volumes. A daemon can
also communicate with other daemons to manage Docker services.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact
with Docker. When you use commands such as docker run, the client sends these
commands to dockerd, which carries them out. The docker command uses the
Docker API. The Docker client can communicate with more than one daemon.
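The client/daemon split described above is visible in `docker version`, which reports the client and the server (daemon) separately. A sketch, guarded so it degrades gracefully on machines where no daemon is reachable:

```shell
# Show both halves of the architecture: the CLI client and the daemon.
if docker version >/dev/null 2>&1; then
    docker version   # prints a Client: section and a Server: (daemon) section
else
    echo "no reachable Docker daemon; the client would normally connect over"
    echo "a UNIX socket (/var/run/docker.sock) or a TCP endpoint (DOCKER_HOST)"
fi
```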

Docker Desktop

Docker Desktop is an easy-to-install application for your Mac, Windows or Linux
environment that enables you to build and share containerized applications and
microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker
client (docker), Docker Compose, Docker Content Trust, Kubernetes, and
Credential Helper. For more information, see Docker Desktop.

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that
anyone can use, and Docker looks for images on Docker Hub by default. You can
even run your own private registry.

When you use the docker pull or docker run commands, Docker pulls the required
images from your configured registry. When you use the docker push command,
Docker pushes your image to your configured registry.
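The registry round trip can be sketched as follows. `registry.example.com` is a hypothetical private registry, and the block is guarded so it only contacts a daemon when one is actually reachable:

```shell
# Guarded: only talk to a daemon when one is reachable.
if docker info >/dev/null 2>&1; then
    # Fetched from the default registry (Docker Hub) unless already cached:
    docker pull alpine || echo "pull failed (offline?)"
    # Retag the image for a private registry (hypothetical hostname):
    docker tag alpine registry.example.com/alpine 2>/dev/null || true
    # docker push registry.example.com/alpine   # would upload it there
else
    echo "no Docker daemon here; commands shown for illustration only"
fi
```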

Docker objects

When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of
those objects.

Images

An image is a read-only template with instructions for creating a Docker container.
Often, an image is based on another image, with some additional customization.
For example, you may build an image which is based on the ubuntu image, but
installs the Apache web server and your application, as well as the configuration
details needed to make your application run.

You might create your own images or you might only use those created by others
and published in a registry. To build your own image, you create a Dockerfile with
a simple syntax for defining the steps needed to create the image and run it. Each
instruction in a Dockerfile creates a layer in the image. When you change the
Dockerfile and rebuild the image, only those layers which have changed are
rebuilt. This is part of what makes images so lightweight, small, and fast, when
compared to other virtualization technologies.
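A minimal Dockerfile along the lines of the ubuntu-plus-Apache example above (the tag, package and file names here are illustrative, not from the original text):

```dockerfile
# Each instruction below produces one cached image layer.
FROM ubuntu:22.04                       # start from an existing base image
RUN apt-get update && \
    apt-get install -y apache2          # add the Apache web server
COPY ./site /var/www/html               # hypothetical application files
CMD ["apachectl", "-D", "FOREGROUND"]   # what containers run by default
```

Rebuilding with `docker build -t my-web .` replays only the layers whose instructions (or inputs) changed since the last build.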

Containers

A container is a runnable instance of an image. You can create, start, stop, move, or
delete a container using the Docker API or CLI. You can connect a container to one
or more networks, attach storage to it, or even create a new image based on its
current state.

By default, a container is relatively well isolated from other containers and its host
machine. You can control how isolated a container's network, storage, or other
underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you
provide to it when you create or start it. When a container is removed, any changes
to its state that aren't stored in persistent storage disappear.

Example docker run command

The following command runs an ubuntu container, attaches interactively to your
local command-line session, and runs /bin/bash.

$ docker run -i -t ubuntu /bin/bash



When you run this command, the following happens (assuming you are using the
default registry configuration):

1. If you don't have the ubuntu image locally, Docker pulls it from your
configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container
create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer.
This allows a running container to create or modify files and directories in
its local filesystem.
4. Docker creates a network interface to connect the container to the default
network, since you didn't specify any networking options. This includes
assigning an IP address to the container. By default, containers can connect
to external networks using the host machine's network connection.
5. Docker starts the container and executes /bin/bash. Because the container is
running interactively and attached to your terminal (due to the -i and -t
flags), you can provide input using your keyboard while Docker logs the
output to your terminal.
6. When you run exit to terminate the /bin/bash command, the container stops
but isn't removed. You can start it again or remove it.

The underlying technology

Docker is written in the Go programming language and takes
advantage of several features of the Linux kernel to deliver its functionality.
Docker uses a technology called namespaces to provide the isolated workspace
called the container. When you run a container, Docker creates a set of namespaces
for that container.

These namespaces provide a layer of isolation. Each aspect of a container runs in a
separate namespace and its access is limited to that namespace.
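On Linux you can see the namespaces your own shell lives in; Docker gives each container its own set of these entries. A small Linux-specific sketch (it prints a fallback message elsewhere):

```shell
# Each symlink is one isolation dimension: mnt, pid, net, uts, ipc, user, ...
ls -l /proc/self/ns 2>/dev/null || echo "/proc/self/ns not available on this OS"
```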

Development tools
Software development tools are computer programs used by software development
teams to create, debug, manage and support applications, frameworks, systems,
and other programs. These tools are also commonly referred to as software
programming tools.

Examples of software development tools include:

● Linkers
● Code editors
● GUI designers
● Performance analysis tools
● Assemblers
● Compilers

In some cases, one tool can house multiple functions. For example, one tool can act
as a code editor, a performance analysis tool, and a compiler. But in other cases,
you might have to purchase multiple tools to cover each function.

10 top open source development tools

Now that we've examined the benefits of open source, let's look at some of the top
options available.

1. Git

Git is a distributed code management and version-control system, often used with
web-based code management platforms like GitHub and GitLab. The integration
with these platforms makes it easy for teams to contribute and collaborate;
however, getting the most out of Git will require some kind of third-party platform.
Some claim, however, that Git support for Windows is not as robust as it is for
Linux, which is potentially a turnoff for Windows-centric developers.
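The day-to-day Git loop is small enough to show in full. A minimal sketch in a throwaway directory (the file name, identity and message are arbitrary):

```shell
# Create a repository, record one commit, and inspect the history.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "hello" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first commit"
git log --oneline   # shows: <hash> first commit
```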

2. Apache Subversion

Also known as SVN, Subversion is another open source option for code
management. It's very similar to Git, although their major differences lie in the
code repositories: Git uses local repositories by default, whereas Subversion stores
code on a remote server. However, you can use SVN and Git together by
connecting them through git-svn, which allows you to interact with Subversion
repositories through your Git tooling.

3. Eclipse IDE

Eclipse is an open source IDE that features a wide ecosystem of plugins and
extensions. It's written primarily in Java -- and is most popular with Java
development -- but can be used to write code in almost any major programming
language. Eclipse features a continually growing plugin marketplace to support
customization and extension of its capabilities. However, some Eclipse plugins are
dependent on others, which can make it tricky to add and remove those plugins
without breaking existing functionality.

4. Apache NetBeans

NetBeans is a Java-based IDE similar to Eclipse, and also supports development in
a wide range of programming languages. However, NetBeans focuses on providing
functionality out of the box, whereas Eclipse leans heavily on its plugin ecosystem
to help developers set up needed features.

5. Emacs

Emacs is an open source text editor written by GNU project members in the
mid-1980s. It has the ability to automate complex key entry sequences using
macros, and developers can use it as a full-fledged IDE. The disadvantage to Emacs,
however, is the time it can take to configure Emacs and integrate it into your
environment. Some also say that the tool has a steep learning curve -- although
others argue it is easier than other text editors like Vim.

6. Vim

Vim is another decades-old open source text editor with an entrenched set of users.
Vim reportedly starts up a bit faster than Emacs, and some say it has a lower
learning curve. Other developers also claim that it requires less time to customize
to individual software environments, but there are developers that argue the
opposite too. But overall, Vim and Emacs are both excellent choices if you want a
tried-and-true open source development tool for editing code.

7. Atom

Atom is billed by GitHub as a "hackable" text editor that, like Emacs and Vim, can
be turned into a complete IDE. Atom offers features that cater to modern coding
needs, such as easy integration with GitHub and built-in support for collaborative
coding. However, some claim its performance is on the slow side: it takes a little
while to start and consumes slightly more memory than expected for a typical text
editor.

8. Jenkins

Jenkins is a CI server that advertises a very large plugin ecosystem. These plugins
make it possible to integrate Jenkins with various source code management
systems and deployment environments. They also extend its functionality with
features like email notifications and timestamps that track how long various
Jenkins operations take to complete. Jenkins offers broad platform support and can
run on any modern OS, as well as inside a Docker container.

9. Chef

Chef is an open source configuration management tool which enables admins to
create "cookbooks" that describe the ideal configuration of their environment. Chef
can also automatically configure that environment for you based on the
specifications you provide. It's written in Ruby, and fully supports Windows, Linux
and macOS. That's an advantage over some comparable tools, which limit support
to Linux and macOS.

10. Ansible

Ansible, another open source configuration management tool, is claimed to be one
of the biggest competitors to Chef. Developers say it offers somewhat better
performance than Chef, and many say it's easier to set up. However, Ansible offers
fewer customization options, and isn't always well-suited to complex environments
or niche configuration management. Its support for Windows is also somewhat
limited.

IDEs
10 Open Source Editors And IDEs

Every line of code for a website starts in one tool: a text editor. Developers
consider many text editors, but here is a list of the top five trusted
platforms anyone can use, along with the top five open source IDEs that
work with development tools and text editors.

These tools provide smooth integration with documentation and version
control systems. Open source editors and IDEs are closely connected in
serving web development, which is why they are covered together in this
blog.

Let’s look at the 10 open source editors and IDEs below.

1. Atom

URL: https://atom.io/

Atom is an open source text editor developed by GitHub. It is compatible
with Linux, macOS and Windows. It ships with built-in packages and is
easy to customize, being built with HTML, CSS, JavaScript and Node.js
web technologies. It offers developers quick community support and a huge
collection of themes and plugins. It is free to use, with multiple features
that may please developers more than other text editors.

2. Brackets

URL: http://brackets.io/

Brackets has been popular due to its connection with Adobe since 2014. It
is a great platform for text editing with CSS, HTML and JavaScript, and
helps produce polished websites on completion. It supports features such
as code specification, tabs between files, live session preview, options to
change browsers and much more.

3. Notepad++

URL: https://notepad-plus-plus.org/

Notepad++ is a free text editor that gives developers high speed for web
development. It handles C++ programs no matter their size, and it is
user-friendly while providing multiple benefits to developers. The editor
occupies only a few megabytes and supports more than 40 languages.
Users can define their own language and view the web page at their
convenience.

4. Sublime Text

URL: www.sublimetext.com/

One of the top text editors that many developers prefer is Sublime Text.
It is lightweight and fast, with exceptional performance and support for
many features. Its plugins provide ultimate convenience to users and
developers optimizing a web page. Features include the Goto option and
tab management, and it can be customized easily by users. (Strictly
speaking, Sublime Text is proprietary software with a free evaluation
rather than open source.)

5. Vim

URL: https://www.vim.org/

Vim is an open source text editor that was released earlier than most
other platforms. It is old yet still powerful today. Through continuous
optimization and by following technology trends, it has adapted quickly
to fast-paced web development, and it remains a favorite platform for
many web developers. Features include text coding, tab options, an easy
interface, documentation, how-to guides and much more. Users can easily
change their page through multiple themes and backgrounds of their
choice.

Open Source IDEs

6. Eclipse

URL: www.eclipse.org/

Eclipse is one of the top trending IDEs, supporting Java development. It
is a primary choice for developers working with editors and IDEs. It also
supports apps in other languages like PHP, C++ and Python. Eclipse is
licensed through the Eclipse Foundation and supports apps cross-platform.
It is widely available for Linux, Windows and macOS.

7. NetBeans

URL: https://netbeans.org/about/os/

NetBeans supports Java along with HTML5 and C++. It has functions like
editing, verification support, language modules and more. Now maintained
by the Apache Software Foundation (after years under Oracle), it enjoys
huge popularity and works on all major operating systems.

8. KDevelop

URL: https://www.kdevelop.org/

KDevelop is a great IDE to work with, running on Windows, macOS and
Linux. It supports QML, PHP, C++ and Python programs. The workflow
gets smooth through this platform, with coding becoming easier to
manage, and code quality improves through the checks applied
continuously during development. It is licensed under the GNU GPL and
also supports JavaScript.

9. Geany

URL: https://www.geany.org/

Geany went through many transformations before arriving at its current
text-editor form. It is lightweight, combining solid text-editing
automation with a small IDE for web development. Plugins are available
for working with Java, C++, C, PHP, Perl, Python and more. It runs
conveniently on Windows, Linux and macOS with a smooth interface. Its
smart, fast integration makes it a first choice for some web developers.

10. Code Blocks

URL: www.codeblocks.org/

Code::Blocks readily supports C, C++ and Fortran. It has features such
as a debugger, compiler information, documentation, syntax highlighting
and more. It works with multiple compilers, including GCC and Digital
Mars. It installs and runs easily on Windows, Linux and macOS.

Debuggers
A debugger is a tool that allows you to examine the state of a running program.
Debugging is the methodical process of locating and then removing bugs, or
defects, in a computer program. An interactive debugging system gives
programmers tools to help them test and debug their programs.

Types of Debuggers:
● Static debugger: A static debugger does not rely on any specific
software; the debugging can be completed by the user by inspecting
the program without running it.
● Dynamic debugger: A dynamic debugger can be either software or
hardware. There are several types of dynamic debuggers, including the
following:
● Breakpoint debugger: adds conditional and unconditional
breakpoints to the program at various points.
● Kernel debugger: a debugger with kernel debugging capabilities,
provided to debug the operating system itself.
● Meta debugger: a debugger that includes both debugging and
meta-debugging features.
● Same-process debugger: the debugger and debuggee processes are
identical and share main memory. This type of debugger is simple
and straightforward to implement, and it executes more quickly.

Need for Debugging:


When errors in program code are identified, it is necessary to first locate the
precise program statements responsible for the errors and then fix them.
Debugging is the process of identifying and correcting errors in program code.

Features of Breakpoint Debuggers:


1. A breakpoint debugger supports unit testing.
2. A breakpoint debugger controls the flow of program execution.
3. The programmer may use unconditional statements in the program; the
debugger supports such programs.
4. It is possible to trace the flow of execution logic, and data
modifications, at different levels.
5. A checkpoint provides a snapshot of program output.
6. Watchpoints can be added to the source code.
7. The debugger can step execution back to a previous state of
execution.
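The breakpoint idea above can be sketched with Python's built-in interactive debugger, pdb. This is a minimal illustration, not a definitive workflow; the function name and values are invented for the example.

```python
# A minimal sketch of (conditional) breakpoint debugging with Python's pdb.
# In a real session you would uncomment the breakpoint() call; execution
# then pauses at that exact statement, and pdb lets you inspect variables,
# step line by line, and continue.

def average(values):
    total = 0
    for v in values:
        # Conditional breakpoint: pause only when a suspicious value appears.
        # if v < 0:
        #     breakpoint()  # drops into the pdb debugger right here
        total += v
    return total / len(values)

print(average([2, 4, 6]))  # prints 4.0
```

Run the script normally and it completes; with the breakpoint enabled and a negative input, pdb takes over at the flagged statement.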

Process of Debugging:
The following are the various steps involved in debugging:
● Recognize the Error: Misidentifying an error can waste time.
Production errors reported by users are often difficult to interpret,
and the information received can be misleading. As a result,
identifying the actual error is required.
● Locate the Error: Once the error has been correctly identified, you will
need to thoroughly review the code several times to locate the position of
the error. This step, in general, focuses on locating the error rather than
perceiving it.
● Evaluate the Error: The third step is error analysis, a bottom-up
approach that begins at the location of the error and works outward
through the code. This step facilitates understanding of the errors.
Essentially, error analysis has two major goals: re-checking for
further existing bugs around the error, and assessing the risk of
collateral damage a fix might introduce.
● Verify the Analysis: After analyzing the primary bugs, look for any
additional errors that may appear on the application. The fourth step is
used to write automated tests for such areas by incorporating the test
framework.
● Cover Lateral Damage: The fifth phase involves collecting all of the
unit tests for the code that needs to be modified. When you run these unit
tests, they must succeed.
● Fix & Validate: The final stage is fix and validation, which focuses on
fixing bugs before running all of the test scripts to see if they pass.
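The "Verify the Analysis" and "Fix & Validate" steps can be sketched in Python with the standard unittest framework. The function, the bug, and the inputs here are all hypothetical, chosen only to show the pattern of capturing a bug as an automated regression test.

```python
# Sketch: once a bug is analysed, capture it as an automated test so the
# fix can be validated and the bug cannot silently return.
import unittest

def word_count(text):
    # Fixed implementation: split() with no argument collapses repeated
    # whitespace. (The imagined buggy version used text.split(" "), which
    # counted the empty string produced by a double space as a word.)
    return len(text.split())

class WordCountRegressionTest(unittest.TestCase):
    def test_bug_report_input(self):
        # The exact input from the (hypothetical) bug report.
        self.assertEqual(word_count("two  words"), 2)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the tests with: python -m unittest <this_file>
```

Because the failing input is now a permanent test case, the final fix-and-validate step is simply rerunning the suite.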

Advantages of Debugging
● Identifying and fixing errors: The primary advantage of debugging is
that it helps identify and fix errors in software code. By locating and
correcting bugs, developers can ensure that their code performs as
intended and meets the required standards.
● Enhancing code quality: Debugging also helps improve the overall
quality of the code. By detecting and fixing errors, developers can
eliminate issues that could cause performance problems or security
vulnerabilities.
● Saving time and resources: Debugging can save time and resources by
reducing the need for trial-and-error testing. By pinpointing the source of
the problem, developers can more quickly and accurately address the
issue.
● Gaining insights into software behavior: Debugging can provide
valuable insights into how software behaves under different conditions.
Developers can use this information to refine their code, optimize
performance, and enhance the user experience.

Disadvantages of Debugging
● Time-consuming: Debugging can be a time-consuming process,
especially if the issue is complex or difficult to reproduce. This can slow
down the development process and delay the release of software.

● Costly: Debugging can be a costly process, as it requires skilled
developers and specialized tools. This can add to the overall cost of
software development and maintenance.
● Difficulty in reproducing bugs: Sometimes, it can be difficult to
reproduce bugs, especially if they occur under specific conditions or
with specific data inputs. This can make debugging more challenging
and time-consuming.
● Over-reliance on debugging: If developers rely too heavily on
debugging, they may miss opportunities to design more efficient, reliable,
and secure code. Debugging should be used in combination with other
software development practices, such as testing and code reviews, to
ensure the highest quality code.

Programming languages

As we know, to communicate with a person we need a common language; similarly,
to communicate with computers, programmers also need a language, called a
programming language.

Before learning about programming languages, let's understand what a language is.

What is Language?

Language is a mode of communication used to share ideas and opinions with each
other. For example, to teach someone, we need a language that both
communicators understand.

What is a Programming Language?

A programming language is a computer language used by programmers (developers)
to communicate with computers. It is a set of instructions written in a
specific language (C, C++, Java, Python) to perform a specific task.

A programming language is mainly used to develop desktop applications,
websites, and mobile applications.

Most Popular Programming Languages –

● C
● Python
● C++
● Java
● SCALA
● C#
● R
● Ruby
● Go
● Swift
● JavaScript

Characteristics of a programming Language –

● A programming language must be simple, easy to learn and use, have
good readability, and be human recognizable.
● Abstraction is a must-have characteristic for a programming language,
covering the ability to define complex structures and the degree to
which they remain usable.
● A portable programming language is always preferred.
● Programming language’s efficiency must be high so that it can be easily
converted into a machine code and its execution consumes little space in
memory.
● A programming language should be well structured and documented so
that it is suitable for application development.
● Necessary tools for the development, debugging, testing, maintenance of
a program must be provided by a programming language.
● A programming language should provide a single environment known as
Integrated Development Environment(IDE).
● A programming language must be consistent in terms of syntax and
semantics.

Basic Terminologies in Programming Languages:


● Algorithm: A step-by-step procedure for solving a problem or performing
a task.
● Variable: A named storage location in memory that holds a value or data.
● Data Type: A classification that specifies what type of data a variable can
hold, such as integer, string, or boolean.
● Function: A self-contained block of code that performs a specific task
and can be called from other parts of the program.
● Control Flow: The order in which statements are executed in a program,
including loops and conditional statements.
● Syntax: The set of rules that govern the structure and format of a
programming language.
● Comment: A piece of text in a program that is ignored by the compiler or
interpreter, used to add notes or explanations to the code.
● Debugging: The process of finding and fixing errors or bugs in a
program.
● IDE: Integrated Development Environment, a software application that
provides a comprehensive development environment for coding,
debugging, and testing.
● Operator: A symbol or keyword that represents an action or operation to
be performed on one or more values or variables, such as + (addition), –
(subtraction), * (multiplication), and / (division).
● Statement: A single line or instruction in a program that performs a
specific action or operation.
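The terms defined above can be seen together in one tiny Python program. The function name and data here are made up purely for illustration.

```python
# A small program annotated with the terminologies defined above.

# Function: a self-contained block of code that performs a specific task.
def classify(numbers):          # "numbers" is a variable (a parameter)
    evens = 0                   # Variable holding an integer data type
    odds = 0
    # Control flow: a loop containing a conditional statement.
    for n in numbers:
        if n % 2 == 0:          # Operator: % computes the remainder
            evens += 1          # Statement: one instruction in the program
        else:
            odds += 1
    return evens, odds

# Comment: this line is ignored by the interpreter.
print(classify([1, 2, 3, 4, 5]))  # prints (2, 3)
```

The algorithm here is the step-by-step counting procedure; the syntax is the set of Python rules (indentation, colons) the code must follow to run at all.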

Advantages of programming languages:

1. Increased Productivity: Programming languages provide a set of
abstractions that allow developers to write code more quickly and
efficiently.
2. Portability: Programs written in a high-level programming language can
run on many different operating systems and platforms.
3. Readability: Well-designed programming languages can make code
more readable and easier to understand for both the original author and
other developers.
4. Large Community: Many programming languages have large
communities of users and developers, which can provide support,
libraries, and tools.

Disadvantages of programming languages:

1. Complexity: Some programming languages can be complex and difficult
to learn, especially for beginners.
2. Performance: Programs written in high-level programming languages
can run slower than programs written in lower-level languages.
3. Limited Functionality: Some programming languages may not have
built-in support for certain types of tasks or may require additional
libraries to perform certain functions.
4. Fragmentation: There are many different programming languages,
which can lead to fragmentation and make it difficult to share code and
collaborate with other developers.

LAMP

LAMP is an open-source web development platform that uses Linux as the
operating system, Apache as the web server, MySQL as the relational database
management system, and PHP, Perl or Python as the server-side scripting
language.

Sometimes LAMP is referred to as a LAMP stack because the platform has four
layers. Stacks can be built on different operating systems.

LAMP is an example of a web service stack, named as an acronym. The LAMP
components are largely interchangeable and not limited to the original
selection. LAMP is suitable for building dynamic web sites and web
applications.

Since its creation, the LAMP model has been adapted with other components,
though it typically still consists of free and open-source software.

Developers who use these tools on a Windows operating system instead of Linux
are said to be using WAMP; on a Macintosh system, MAMP; and on a Solaris
system, SAMP.

Linux, Apache, MySQL and PHP each add something unique to the development of
high-performance web applications. Originally popularized as shorthand for
Linux, Apache, MySQL, and PHP, the acronym LAMP now refers to a generic
software stack model.

The modularity of a LAMP stack may vary. Still, this particular software
combination has become popular because it is sufficient to host a wide variety of
website frameworks, such as Joomla, Drupal, and WordPress.

The components of the LAMP stack are present in the software repositories of
most Linux distributions. The LAMP bundle can also be combined with many other
free and open-source software packages.

LAMP Stack Components



Linux-based web servers consist of four software components. Arranged in
layers that support one another, these components make up the software stack;
websites and web applications run on top of this underlying stack. The common
software components are as follows:

1. Linux: Linux started in 1991. It sets the foundation for the stack model,
and all other layers run on top of this layer.
It is a free, open-source operating system. It has endured partly because
it is flexible, while other operating systems are harder to configure.

2. Apache: The second layer consists of web server software, typically Apache
Web Server. This layer resides on top of the Linux layer.
Apache HTTP Server is a free web server software package made available
under an open-source license. It used to be known as Apache Web Server
when it was created in 1995.
It offers a secure and extensible web server that keeps pace with current
HTTP standards. The web server's job is to accept requests from web
browsers and serve back the correct website content.

3. MySQL: MySQL is a relational database management system used to store
application data. It is open source and keeps all the data in a format
that can easily be queried with the SQL language.
SQL works well with well-structured business domains and is a great
workhorse that can handle even the most extensive and most complicated
websites with ease.
MySQL stores details that can be queried by scripting to construct a website.
MySQL usually sits on top of the Linux layer alongside Apache. In high-end
configurations, MySQL can be offloaded to a separate host server.

4. PHP: The scripting layer consists of PHP and other similar web
programming languages.
The PHP open-source scripting language works with Apache to create
dynamic web pages. HTML alone cannot perform dynamic processes such as
pulling data out of a database; to provide this type of functionality,
PHP code is dropped into the parts of a page that should be dynamic.
Websites and web applications run within this layer.
PHP is designed for efficiency. It makes programming easier by letting
developers write new code, hit refresh, and immediately see the
resulting changes, with no compilation step.
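The dynamic-page idea behind LAMP can be sketched in a few lines of Python. Here Python's standard sqlite3 module stands in for MySQL and a plain function stands in for the PHP-in-HTML layer; both are stand-ins chosen only so the example is self-contained, and the table and data are invented.

```python
# Sketch of the LAMP dynamic-page flow: pull rows out of a database and
# drop them into the HTML -- the part static HTML alone cannot do.
import sqlite3

def render_page(db):
    # Query the database, then embed each row in the page markup.
    rows = db.execute("SELECT title FROM posts ORDER BY id").fetchall()
    items = "".join(f"<li>{title}</li>" for (title,) in rows)
    return f"<html><body><ul>{items}</ul></body></html>"

# An in-memory database plays the role of the MySQL layer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
db.executemany("INSERT INTO posts (title) VALUES (?)",
               [("Hello LAMP",), ("Second post",)])

print(render_page(db))
```

In a real LAMP deployment the same shape appears with Apache receiving the request, PHP running the query against MySQL, and the generated HTML sent back to the browser.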

Open Source database technologies

Open source databases are databases whose source code is open: anyone may view
the code, study it, or even modify it. Open source databases can be relational
(SQL) or non-relational (NoSQL).
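The relational/non-relational split above can be pictured with a single record. The field names here are made up for illustration; the point is the fixed-column row shape of SQL versus the self-describing, nestable document shape of NoSQL.

```python
# The same record, in the two shapes the text distinguishes.

# Relational (SQL): a row with fixed columns defined up front by a schema.
sql_row = ("u42", "Ada", "ada@example.org")

# Non-relational (NoSQL, document style): a self-describing document
# where nested data such as a list of tags is natural.
nosql_doc = {
    "_id": "u42",
    "name": "Ada",
    "email": "ada@example.org",
    "tags": ["admin", "editor"],
}
```

Relational stores enforce the column layout for every row; document stores let each record carry its own structure.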

Why use Open Source Databases?

Creating and maintaining a database is quite expensive for any company; a huge
chunk of total software expenditure goes toward handling databases. Switching
to low-cost open source databases is therefore attractive, and it saves
companies a lot of money in the long run.

Open Source Databases in use



There are many different open source databases on the market, each with its
own pros and cons. The decision to use a particular open source database
depends on one's requirements.

Some examples of open source databases are −

MySQL

This is the world's most successful open source database. A free Community
Edition of MySQL is available, but the project came under Oracle's ownership
in 2010, and Oracle now charges for support and enterprise offerings.

MariaDB

This is a drop-in replacement for MySQL that is intended to remain free,
unlike MySQL. MariaDB is highly compatible with MySQL, and its structure
matches MySQL's APIs and commands.

PostgreSQL

This is an object-relational database management system. PostgreSQL is more
robust and, in many workloads, performs better than MySQL. It is also known
for its reliability and data integrity.

PostgresPURE

This is built on PostgreSQL but adds extra functionality. It is available
from Splendid Data on a subscription basis.

EnterpriseDB
This is also based on PostgreSQL but has extra features and tools, such as
performance, security and manageability enhancements.

MongoDB

This is a free, open source NoSQL database program. It provides document
validation, an encrypted storage engine and more. MongoDB is widely used,
particularly in mobile apps and similar applications.

1. Open Source Database:

An open-source database is a database whose source code anyone can easily
view, and which is open and free to download; for some community versions,
small and affordable additional costs apply. Open source databases provide
limited technical support to end users, and installation and updates are
administered by the user. Examples: MySQL, PostgreSQL, MongoDB.

Advantages of Open Source Databases:

● Cost: Open source databases are generally free, which means they can be
used without any licensing fees.
● Customization: Since the source code is available, developers can modify
and customize the database to meet specific requirements.
● Community Support: Open source databases have a large community of
users who contribute to documentation, bug fixes, and improvements.
● Security: With open source databases, security vulnerabilities can be
detected and fixed quickly by the community.
● Scalability: Open source databases are typically designed to be scalable,
which means they can handle large amounts of data and traffic.

Disadvantages of Open Source Databases:

● Limited Technical Support: While there is a large community of users
who can help troubleshoot issues, there is no guarantee of professional
technical support.
● Complexity: Open source databases can be more difficult to set up and
configure than commercial databases, especially for users who are not
experienced in database administration.
● Lack of Features: Open source databases may not have all the features
that are available in commercial databases, such as advanced analytics
and reporting tools.
