
CHAPTER II

CONCEPTUAL FRAMEWORK

This chapter presents the review of related literature and studies

underlying the framework of the study. It includes the conceptual model of the

study and operational definition of terms.

REVIEW OF RELATED LITERATURE AND STUDIES

Introduction

Complex servers parse whatever people wish to know by means of a browser. The information is centralized so that end users, wherever they are, get the same information. Fast access at low cost is the main reason the Internet is used as a source of information. These servers must be error free and must be protected from intrusion. The web host in particular plays a big role in providing accurate information to anyone who asks for it. Security is essential to a company's integrity in disseminating knowledge to its end users.

Pricing is the main concern of this study. Maintenance need not be expensive, since open-source software is open for the public to use. Today, companies buy high-priced security tools and appliances to protect their networks and applications. Web servers are built for production to give Internet users the information they need, running services such as Apache, DNS, the MySQL database, FTP, mail services, and CGI. Hackers are there to interrupt these services and postpone jobs, causing downtime and system undependability. This leads to basic administration and maintenance to prevent damage.

The challenge for administrators is to create a set of good system criteria, one that is practical, economical, and of high integrity, to address the risks a company faces from data loss and attacks. This requires great attention to system security and good performance that protects not just the client's needs but also the company's own interest as a whole.

OPEN SOURCE – An Overview

Open Source is defined in Wikipedia (2006) as the principles and

methodologies to promote open access to the production and design process for

various goods, products and resources. The term is most commonly applied to

the source code of software that is made available to the general public with

either relaxed or non-existent intellectual properties restrictions. This allows

users to create user-generated software content through either incremental

individual effort, or collaboration. Some consider open source as one of various

possible design approaches, while others consider it a critical strategic element

of their operations. Before open source became widely adopted, developers and

producers used a variety of phrases to describe the concept; the term open

source gained popularity with the rise of the Internet and its enabling of diverse

production models, communication paths, and interactive communities.

Subsequently, open source software became the most prominent face of open
7

source practices. The open source model can allow for the concurrent use of

different agendas and approaches in production, in contrast with more

centralized models of development such as those typically used in commercial

software companies "Open source" as applied to culture defines a culture in

which fixations are made generally available. Participants in such a culture are

able to modify those products and redistribute them back into the community.

According to Antonio (2004), Linux is an operating system that was created at the University of Helsinki in Finland by a young student named Linus Torvalds, who at the time was working on a UNIX system that ran on an expensive platform. Because of his low budget and his need to work at home, he decided to create a version of the UNIX system that would run on a less expensive platform, such as an IBM PC. He began his work in 1991, when he released version 0.02, and worked steadily until 1994, when version 1.0 of the Linux kernel was released.

It has been said that, technically speaking, Linux is a kernel, the core part of an operating system that handles networking and hardware management and makes the whole thing run. Most people, however, refer to Linux as the entire operating system and applications together, an alternative to Microsoft Windows or Apple's Mac OS. Linux can replace Windows as a desktop operating system, or Windows NT in server environments. It has all the features of a modern, fully fledged UNIX, including true, stable multitasking, virtual memory, shared libraries, demand loading, shared copy-on-write executables, proper memory management, and TCP/IP networking. It runs all the applications that a UNIX server system can, including web servers like Apache, mail serving software like Sendmail, and database servers like Oracle and Informix, or more open applications like MySQL and PostgreSQL. Linux supports a wide range of file system types and, through programs like Samba, can seamlessly replace NT as a Windows file server and Primary Domain Controller (PDC). With clustering technology, Linux can scale up to handle the supercomputing loads required by many scientific and engineering applications, and those required in high-availability environments.

The GNU/GPL

"GPL" stands for "General Public License". The most widespread such

license is the GNU General Public License, or GNU GPL for short. This can be

further shortened to "GPL", when it is understood that the GNU GPL is the one

intended.

The GPL grants the recipients of a computer program the rights of the free software definition and uses copyleft to ensure that the freedoms are preserved even when the work is changed or added to.

Linux Distributions

There are many distributions of Linux. The most common are from Mandriva, Red Hat, and Debian. Its flavors include RHEL, CentOS, Fedora, Mandriva, SUSE, Ubuntu, and FreeBSD.

There is quite a variety of Linux distributions from which to choose. Each distribution offers the same base Linux kernel and system tools but differs in installation method and bundled applications. Each distribution has its own advantages as well as disadvantages, so it is wise to spend a bit of time researching which features are available in a given distribution before deciding on one.

The Red Hat distribution, by commercial vendor Red Hat Software, Inc., is one of the most popular distributions. With a choice of GUI- and text-based installation

procedures, Red Hat 6.1 is possibly the easiest Linux distribution to install. It

offers easy upgrade and package management via the “RPM” utility, and includes

both the GNU Network Object Model Environment (GNOME) and the “K Desktop

Environment” (KDE), both popular GUI window managers for the X Window

System. This distribution is available for the Intel, Alpha, and Sparc platforms.

The Debian distribution, by a non-profit organization known as "The Debian Project", is the darling of the Open Source community. It also offers easy upgrade

and package management via the “dpkg” utility. This distribution is available for

the Intel, Alpha, Sparc, and Motorola (Macintosh, Amiga, Atari) platforms.

The S.U.S.E. distribution, by commercial vendor S.U.S.E., is another

popular distribution, and is the leading distribution in Europe. It includes the “K

Desktop Environment” (KDE), and also offers easy upgrade and package

management via the “YaST” utility. This distribution is available for both Intel and

Alpha platforms.

The OpenLinux distribution, by commercial vendor Caldera, is aimed

towards corporate users. With the new OpenLinux 2.2 release, Caldera has

raised the bar with what appears to be the easiest to install distribution of Linux

available today. In addition, it comes standard with the “K Desktop Environment”

(KDE). This distribution is available for the Intel platform only.

The Mandrake distribution, now Mandriva, by commercial vendor MandrakeSoft S.A., integrates the Red Hat or Debian distribution (your choice) with more value-added software packages than are included with the original distributions.

The Slackware distribution, by Patrick Volkerding of Walnut Creek Software, is the grandfather of modern Linux distributions. It offers a fairly simple installation procedure, but poor upgrade and package management. It is still based on the libc libraries, but the next version will probably migrate to the newer glibc. It is recommended for users who are more technical and familiar with Linux. This distribution is available for the Intel platform only.

Linux Misconceptions

Frampton (2001) stressed that he had been using Linux for several years and would like to think that he knows a bit about the operating system and what it can and cannot do. As an avid USENET reader, he has followed the latest developments and, of course, the various flame-wars that invariably crop up, as well as the misconceptions that more than a few people believe. He therefore ran down a few of the more common ones and attempted to shatter them.

"Linux is free, hence, it is a toy." He explained that some people seem to have the notion that, because a piece of software was written by volunteers with no profit motive in mind, the results must clearly be inferior to commercial-grade offerings. This may have been true in the past, when there was a lot of freeware in the DOS and early Windows world that was absolute garbage, but it is most certainly not true today. The power of the Internet has made it possible to bring together some of the brightest minds on the globe, allowing collaboration on projects they find interesting. The people who have put a hand into developing Linux or the thousands of GNU utilities and application packages come from diverse backgrounds, and all of them have different personal reasons for wanting to contribute. Some are hard-core hackers who develop for the love of coding; others have a need for something, for example a network traffic monitor for a LAN at work, and decide to write it themselves; others are academics and computer scientists who are using Linux for its research qualities. Unlike a commercial offering, where a package is developed and sold, source code excluded, to the end user, code used in Linux is scrutinized, debugged, and improved upon by anyone who has the interest and ability. This act of peer review is one of the reasons that Linux offers the high reliability and high performance that it does. Do not forget: the Internet itself was built on and runs almost exclusively on Open Source projects. The e-mail you exchange on a daily basis with people around the world has an 80% chance of being handled on one or both ends by Sendmail, and the web pages you browse while "surfin' the Web" are served to you by Apache on over 50% of the world's web sites.

"There is no support for Linux." This myth, and the implication that the "other" vendors do offer support, somewhat sickens him. He has had personal experience with one very popular commercial operating system where the vendor's so-called "support" was completely useless. First of all, there is support for Linux. Yes, commercial support. There are companies that can provide as much support as you are willing to pay for, offering telephone and e-mail support, with many offering to come right to your door to deal with the problem. However, in 99% of the situations you will run into with Linux, you will be able to accomplish what you wish if you can simply get the answer to a question or two. This is easily accomplished on USENET or on any of the many mailing lists available. There are also plenty of forums to consult when it comes to bugs and fixes; an issue need only be posted once, and it is looked into by people in the community who have already encountered and fixed that specific problem.

SERVER APPLICATION SERVICES

Apache

The name "Apache" appeared during the early development of the software because it was "a patchy" server, made out of patches for the freely available source code of the NCSA HTTPd Web Server. For a while after the NCSA HTTPd project was discontinued, a number of people wrote a variety of patches for the code, either to fix bugs or to add features that they wanted. There was a lot of this code floating around, and people were freely sharing it, but it was completely unmanaged. After a while, Brian Behlendorf and Cliff Skolnick set up a centralized repository of these patches, and the Apache project was born. The project is still composed of a rather small core group of programmers, but anyone is welcome to submit patches to the group for possible inclusion in the code. In the last couple of years, there has been a surge of interest in the Apache project, partially buoyed by new interest in Open Source. It is also due, in part, to IBM's commitment to support and use Apache as the basis for the company's Web offerings; IBM has dedicated substantial resources to the project because it made more sense to use an established, proven Web server than to try to write its own. The consequences of this interest have been a stable version for the Windows NT operating system and an accelerated release schedule. In mid-1999, the Apache Software Foundation was incorporated as a not-for-profit company. A board of directors, elected annually by ASF members, oversees the company, which provides a foundation for several different Open Source software development projects, including the Apache Web Server project (Ball et al., 2002).

The next table shows current statistics on web server usage:

Table 1.
Percentage of Web Server Users

DEVELOPER        USERS         PERCENTAGE
Apache           64,747,516    58.62 %
Microsoft        34,265,321    31.02 %
Sun               1,851,269     1.68 %
Zeus                525,405     0.48 %
Source: [Link]

FTP ( File Transfer Protocol )

Using the File Transfer Protocol (FTP) is a popular way to transfer files

from machine to machine across a network. Clients and servers have been

written for all the popular platforms, thereby often making FTP the most

convenient way of performing file transfers.

You can configure FTP servers in one of two ways. The first is as a private, user-only site, which is the default configuration for the FTP server; this configuration is covered here. A private FTP server allows only users with accounts on the system to connect via FTP and access their files.

You can place access controls on these users so that certain users can be

explicitly denied or granted access.

The other kind of FTP server is anonymous. An anonymous FTP server allows anyone on the network to connect to it and transfer files without having an account. Due to the potential security risks involved with this setup, you should take precautions to allow access only to certain directories on the system (Pitts et al., 1998).
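To illustrate the anonymous access model described above, the short sketch below uses Python's standard ftplib module to connect to an FTP server, log in anonymously, and list a public directory. The host name and directory are hypothetical placeholders, not values from this study.

# Minimal sketch of an anonymous FTP session using Python's standard library.
# The host name and remote directory below are hypothetical placeholders.
from ftplib import FTP

HOST = "ftp.example.org"      # hypothetical anonymous FTP server

def list_public_directory(host: str, directory: str = "/pub") -> list[str]:
    """Connect anonymously, change to a public directory, and return its listing."""
    with FTP(host) as ftp:
        ftp.login()                 # no arguments = anonymous login
        ftp.cwd(directory)          # only public directories should be exposed
        return ftp.nlst()           # names of files in the directory

if __name__ == "__main__":
    for name in list_public_directory(HOST):
        print(name)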

Email Server

At least two components are involved in electronic mail: MTAs and MUAs. MTA stands for Mail Transfer Agent, and MUA stands for Mail User Agent.

The MTA is the server application that handles sending and receiving e-mail. Whenever you send an e-mail message, it is handled by your Internet provider's MTA after you press the Send button. Likewise, any incoming mail for you is handled by the MTA. The MTA's responsibilities include things such as the following:

 Accepting and delivering mail sent from clients.

 Queuing outgoing mail so that clients do not have to wait for the mail to actually be sent.

 Accepting mail for clients and placing that mail in a holding area until the user connects to pick up the mail.

 Selectively relaying, or denying relaying of, messages received that are intended for a different host.

Mail transfer is done with the protocol called SMTP, which stands for Simple Mail Transfer Protocol. As the name suggests, the protocol is really quite simple. It can send and receive only plain text, and it uses relatively simple commands to communicate with other mail servers (Ball et al., 1998).

The other necessary part of the e-mail system is an MUA, or Mail User Agent. The MUA is the client that the user actually interacts with. Common MUAs with which you might be familiar are Microsoft Outlook, Eudora, Outlook Express, Thunderbird, Mac Mail, KMail, and Evolution.

SMTP

The Simple Mail Transfer Protocol (SMTP) is the established standard way of transferring mail over the Internet. The sendmail program provides the services needed to support SMTP connections on Linux. Armed with a better understanding of the protocols, you can take on understanding sendmail itself, beginning with the various tasks that sendmail performs (such as mail routing, header rewriting, and so on) as well as its corresponding configuration files. As with any large software package, sendmail has its share of bugs. Although the bugs that cause sendmail to fail or crash the system have been almost completely eliminated, security holes that provide root access are still found from time to time.

Internet Mail Protocols

To understand the different jobs that sendmail performs, you need to know a little about Internet protocols. Protocols are simply agreed-upon standards that software and hardware use to communicate. Protocols are usually layered, with higher levels using the lower ones as building blocks. For example, the Internet Protocol (IP) sends packets of data back and forth without building an end-to-end connection such as those used by SMTP and other higher-level protocols. The Transmission Control Protocol (TCP), which is built on top of IP, provides connection-oriented services such as those used by programs like Telnet and the Simple Mail Transfer Protocol (SMTP). Together, TCP/IP provides the basic network services for the Internet. Higher-level protocols such as the File Transfer Protocol (FTP) and SMTP are built on top of TCP/IP. The advantage of such layering is that programs which implement the SMTP or FTP protocols do not have to know anything about transporting packets on the network and making connections to other hosts. They can use the services provided by TCP/IP for that job. SMTP defines how programs exchange e-mail on the Internet. It does not matter whether the program exchanging the e-mail is sendmail running on an HP workstation or an SMTP client written for an Apple Macintosh. As long as both programs implement the SMTP protocol correctly, they can exchange mail.
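As a concrete illustration of a client handing a message to an MTA, the sketch below uses Python's standard smtplib to submit a plain-text message over SMTP. The server name and addresses are hypothetical placeholders, and a real deployment would normally also require STARTTLS and authentication.

# Minimal sketch of submitting a message to an MTA over SMTP.
# Server name and addresses are hypothetical; real servers usually
# require STARTTLS and authentication before accepting mail.
import smtplib
from email.message import EmailMessage

def send_plain_text(smtp_host: str, sender: str, recipient: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "SMTP test"
    msg.set_content(body)                    # SMTP itself carries plain text

    with smtplib.SMTP(smtp_host, 25) as smtp:
        smtp.send_message(msg)               # the MTA queues and relays the message

if __name__ == "__main__":
    send_plain_text("mail.example.org", "alice@example.org",
                    "bob@example.org", "Hello over SMTP.")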

Structured Query Language

The Structured Query Language (SQL) is an international (and American National Standards Institute, ANSI) standard for the definition and manipulation of databases. Almost all database vendors support SQL while adding their own SQL extensions. Originally developed by IBM as a result of the work on the relational model, it was first commercialized by Relational Software, now called Oracle Corporation, in the late 1970s. D. Chamberlin of IBM first defined a language called Structured English Query Language (SEQUEL), in which the programmer or the business user could define, query, or manipulate a database with simple English-like statements. These English-like statements could be embedded within other procedural languages (e.g., COBOL, C, Pascal), saving a significant amount of the coding that programmers typically wrote.

SQL has been designed to be intuitive, relatively simple, and non-procedural (that is, one need not specify step-by-step instructions to execute certain actions), and to map to the human cognitive model. Ideally, programmers and business users need not know how or where data is stored. They should be able to specify what they want given their requirements (e.g., conditions or additional computations such as sum, average, and maximum), expressed the way the human mind conceptualizes the query. Translating certain complex queries into SQL statements may be difficult, and the constructs may not be powerful enough to express certain requirements. Despite these limitations, however, SQL provides a convenient method to create and manipulate databases, and it makes developing applications relatively easier. Anyone who has done programming in COBOL or other languages and then used SQL (and SQL within other languages) will appreciate the power of SQL.
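To make the idea of English-like SQL statements embedded in a host language concrete, the sketch below uses Python's built-in sqlite3 module (standing in for any SQL database) to create a table, insert rows, and run a query with an aggregate. The table and column names are illustrative only.

# Minimal sketch of embedding English-like SQL statements in a host language.
# Uses the in-memory SQLite engine from Python's standard library; table and
# column names are illustrative placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Define a table, then manipulate and query it with declarative statements.
cur.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [("Acme", 120.0), ("Acme", 80.0), ("Globex", 45.5)])

# The query states *what* is wanted (a sum per customer), not *how* to compute it.
cur.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer")
for customer, total in cur.fetchall():
    print(customer, total)

conn.close()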



MYSQL

MySQL is a database management system. A database is a structured

collection of data. It may be anything from a simple shopping list to a picture

gallery or the vast amounts of information in a corporate network. To add,

access, and process data stored in a computer database, you need a database

management system such as MySQL. Since computers are very good at

handling large amounts of data, database management plays a central role in

computing, as stand-alone utilities, or as parts of other applications.

MySQL is a relational database management system.

The SQL part of MySQL stands for "Structured Query Language" - the

most common standardised language used to access databases. MySQL is

Open Source Software. Open Source means that it is possible for anyone to use and modify it. Anybody can download MySQL from the Internet and use it without paying anything. Anybody so inclined can study the source code and change it to suit their needs. MySQL uses the GPL (GNU General Public License) to define what you may and may not do with the software in different situations.

MYSQL Usability

MySQL is very fast, reliable, and easy to use. It has a practical set of features developed in close cooperation with its users, and performance comparisons of MySQL with other database managers can be found on the MySQL benchmark page. MySQL was originally developed to handle large databases much faster than existing solutions and has been successfully used in highly demanding production environments for several years. Though under constant development, MySQL today offers a rich and useful set of functions. Its connectivity, speed, and security make MySQL highly suited for accessing databases over the Internet.

Technically, MySQL is a client/server system that consists of a multi-threaded SQL server supporting different back ends, several different client programs and libraries, administrative tools, and several programming interfaces. MySQL is also provided as a multi-threaded library which can be linked into an application to get a smaller, faster, easier-to-manage product.

MySQL has a lot of contributed software available, and it is very likely that one's favorite application or language already supports MySQL. The official way to pronounce MySQL is "My Ess Que Ell" (not "my sequel"), but pronouncing it "my sequel" or in some other localized way is also acceptable (Urban, 2002).
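As an illustration of the client/server interfaces mentioned above, the sketch below connects to a MySQL server from Python. It assumes the third-party mysql-connector-python driver is installed, and the host, credentials, and database name are hypothetical placeholders rather than values from this study.

# Minimal sketch of a MySQL client session from Python.
# Assumes the third-party package mysql-connector-python is installed
# (pip install mysql-connector-python); credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.org",      # hypothetical MySQL server
    user="webapp",
    password="change-me",
    database="inventory",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")          # ask the server for its version string
print("Connected to MySQL", cur.fetchone()[0])
cur.close()
conn.close()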

History of MYSQL

MySQL's developers started out with the intention of using mSQL to connect to their tables using their own fast low-level (ISAM) routines. However, after some testing they came to the conclusion that mSQL was neither fast enough nor flexible enough for their needs. This resulted in a new SQL interface to their database, but with almost the same API as mSQL; this API was chosen to ease the porting of third-party code. The derivation of the name MySQL is not perfectly clear. The base directory and a large number of the libraries and tools had carried the prefix "my" for well over ten years. However, Monty's daughter (some years younger) is also named My. Which of the two gave its name to MySQL is still a mystery, even for its users and developers.

MySQL Main Features:

 Internals and Portability

 Written in C and C++. Tested with a broad range of different compilers.

 No memory leaks. MySQL has been tested with Purify, a commercial memory leakage detector.

 Works on many different platforms.

 Uses GNU Automake, Autoconf, and Libtool for portability.

 APIs for C, C++, Eiffel, Java, Perl, PHP, Python, and Tcl.

 Fully multi-threaded using kernel threads. This means it can easily use multiple CPUs if available.

 Very fast B-tree disk tables with index compression.

 A very fast thread-based memory allocation system.

 Very fast joins using an optimized one-sweep multi-join.

 In-memory hash tables, which are used as temporary tables.

 SQL functions are implemented through a highly optimized class library and should be as fast as possible. Usually there is no memory allocation at all after query initialization.

DNS and BIND

It is often convenient to refer to networked computers by name rather than by IP address, and various translation mechanisms have been devised to make this possible. The DNS (Domain Name Service) is one such method, now used almost universally on the Internet. Hostnames are merely a convenience for users. Communication with other computers still requires knowledge of their IP addresses, and to allow a host to be referred to by name, it must be possible to translate a name into an equivalent IP address. This process is called name resolution and is usually performed by software known as a resolver. Because it is a very common operation, whatever translation method is used must be very fast and reliable.
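As a small illustration of name resolution from an application's point of view, the sketch below asks the system resolver (which typically consults DNS) to translate a hostname into IP addresses using Python's standard socket module. The hostname is an arbitrary example.

# Minimal sketch of name resolution as seen by an application.
# The system resolver (usually backed by DNS) translates a hostname
# into one or more IP addresses; the hostname here is just an example.
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses the resolver reports for a hostname."""
    results = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in results})

if __name__ == "__main__":
    for address in resolve("www.example.com"):
        print(address)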

Hostname-to-address mappings were once maintained by SRI (Stanford Research Institute) in the HOSTS.TXT file, each line of which contains the name and address of a host. Anyone could obtain a copy of this file via FTP and let their resolver use it locally. This scheme worked well while there were only a few machines, but it quickly grew impractical as more people began connecting to the Internet.

A lot of bandwidth was wasted in keeping the ever-growing HOSTS.TXT file synchronized among the increasing number of hosts. Name resolution was progressively slowed down because the resolver took longer to search the list of hosts each time. Changes to the database took forever to make and propagate because SRI was inundated by requests for additions and changes.

DNS was designed to address these problems and provide a consistent, portable namespace for network resources. Its database is maintained in a distributed fashion to accommodate its size and the need for frequent updates. Performance and bandwidth utilization are improved by the extensive use of local caches. Authority over portions of the database is delegated to people who are able and willing to maintain them in a timely manner, so that updates are no longer constrained by the schedules of a central authority.

DNS is a simple but delicate system that is vital to today's Internet. Errors may manifest themselves in far from obvious ways, long after the changes that caused them were made, often leading to unacceptable and embarrassing service disruptions. An understanding of the concepts and processes involved will help make one's experiences as a DNS administrator pleasant ones (Blum, 2002).

Network

According to Duff (2002), in the late 1960s the U.S. Department of Defense (DOD) recognized an electronic communication problem developing within the department. Communicating the ever-increasing volume of electronic information among DOD staff, research labs, universities, and contractors had hit a major obstacle. The various entities had computer systems from different computer manufacturers, running different operating systems, and using different networking topologies and protocols. How could information be shared? The Advanced Research Projects Agency (ARPA) was assigned to resolve the problem of dealing with different networking equipment and topologies. ARPA formed an alliance with universities and computer manufacturers to develop communication standards. This alliance specified and built a four-node network that is the foundation of today's Internet. During the 1970s, this network migrated to a new core protocol design that became the basis for TCP/IP. The mention of TCP/IP requires a brief introduction to the Internet, a huge network of networks that allows computers all over the world to communicate. It is growing at such a phenomenal rate that any estimate of the number of computers and users on the Internet would be out of date by the time it was printed. Nodes include universities, major corporations, research labs in the United States and abroad, schools, businesses both large and small, and individually owned computers. The explosion in past years of the World Wide Web has driven the Internet's expansion. In addition, the Internet is also a repository for millions of shareware programs, news on any topic, public forums and information exchanges, and e-mail. Another feature is remote login to any computer system on the network by using the Telnet protocol. Because of the number of systems that are interconnected, massive computer resources can be shared, enabling large programs to be executed on remote systems. Massively distributed processing projects such as the 1997 decryption of the Data Encryption Standard are possible only with the "everything is connected to everything else" behavior of the Internet.

Open Systems Interconnection Model

Moreover, as ISO (2000) notes, many different types of computers are used today, varying in operating systems, CPUs, network interfaces, and many other qualities. These differences make the problem of communication between diverse computer systems important. In 1977, the International Organization for Standardization (ISO) created a subcommittee to develop data communication standards to promote multi-vendor interoperability. The result is the Open Systems Interconnection (OSI) model. The OSI model does not specify any communication standards or protocols; instead, it provides guidelines that communication tasks follow.

To simplify matters, the ISO subcommittees took the divide-and-conquer approach. By dividing the complex communication process into smaller subtasks, the problem becomes more manageable, and each subtask can be optimized individually. The OSI model is divided into seven layers:

 Application

 Presentation

 Session

 Transport

 Network

 Data Link

 Physical

The next table (Table 2) identifies the services provided at each OSI layer:

Table 2.
OSI Layers

Layer                    Description
Physical (Layer 1)       Provides the physical connection between a computer
                         system and the network. It specifies connector and pin
                         assignments, voltage levels, and so on.
Data Link (Layer 2)      "Packages" and "unpackages" data for transmission. It
                         forms the information into frames. A frame represents
                         the exact structure of the data physically transmitted
                         across the wire or other medium.
Network (Layer 3)        Provides routing of data through the network.
Transport (Layer 4)      Provides sequencing and acknowledgment of transmission.
Session (Layer 5)        Establishes and terminates communication links.
Presentation (Layer 6)   Does data conversion and ensures that data is exchanged
                         in a universal format.
Application (Layer 7)    Provides an interface to the application that a user
                         executes: a "gateway" between user applications and the
                         network communication process.

Each layer communicates with its peer in other computers. For example,

layer 3 in one system communicates with layer 3 in another computer system.

When information is passed from one layer down to the next, a header is added

to the data to indicate where the information is coming from and going to. The

header-plus-data block of information from one layer becomes the data for the

next. For example, when layer 4 passes data to layer 3, it adds its own header.

When layer 3 passes the information to layer 2, it considers the header-plus-data

from layer 4 as data and adds its own header before passing that combination

down.
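To make the encapsulation idea concrete, the toy sketch below models each layer as adding its own header in front of the payload it receives from the layer above. The layer names follow the OSI list given earlier, and the header contents are purely illustrative.

# Toy sketch of layered encapsulation: each layer treats everything it
# receives from the layer above as opaque data and prepends its own header.
# Header contents are purely illustrative.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def encapsulate(payload: bytes) -> bytes:
    """Wrap the payload with one illustrative header per layer, top to bottom."""
    data = payload
    for layer in LAYERS:
        header = f"[{layer}-hdr]".encode()
        data = header + data          # header-plus-data becomes the next layer's data
    return data

if __name__ == "__main__":
    frame = encapsulate(b"hello")
    print(frame.decode())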

Before the advent of the OSI model, the U.S. Department of Defense

defined its own networking model, known as the DOD model. The DOD model is

closely related to the TCP/IP suite of protocols. TCP/IP does not make as fine

distinctions between the top layers of the protocol stack as does OSI. The top

three OSI layers are roughly equivalent to the Internet process protocols. Some

examples of process protocols are Telnet, FTP, SMTP, NFS, SNMP, and DNS.

The Transport layer of the OSI model is responsible for reliable data delivery. In the Internet protocol stack, this corresponds to the host-to-host protocols, examples of which are TCP and UDP. TCP is used to translate variable-length messages from upper-layer protocols and provides the necessary acknowledgment and connection-oriented flow control between remote systems. UDP is similar to TCP, except that it is not connection-oriented and does not acknowledge data receipt; UDP only receives messages and passes them along to the upper-level protocols. Because UDP does not have any of the overhead related to TCP, it provides a much more efficient interface for such actions as remote disk services. The Internet Protocol (IP) is responsible for connectionless communication between systems. It maps onto the OSI model as part of the Network layer, which is responsible for moving information around the network. This communication is accomplished by examining the Network layer address, which determines the systems and the path used to send the message. IP provides the same functionality as the Network layer and helps get the messages between systems, but it does not guarantee the delivery of these messages. IP may also fragment the messages into chunks and then reassemble them at the destination. Each fragment may take a different network path between systems. If the fragments arrive out of order, IP reassembles the packets into the correct sequence at the destination (Ball, 2002).
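The difference between the two host-to-host protocols can be seen directly in the socket API. The sketch below, using Python's standard socket module, creates one connection-oriented TCP socket and one connectionless UDP socket; the address used is the local loopback, purely for illustration.

# Minimal sketch contrasting connection-oriented TCP with connectionless UDP.
# Uses the local loopback address purely for illustration.
import socket

# TCP: a stream socket must connect before data flows, and delivery is acknowledged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: a datagram socket can send immediately; there is no connection or acknowledgment.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("127.0.0.1", 9999))   # fire-and-forget datagram

tcp.close()
udp.close()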

IP Addresses

The Internet Protocol requires that an address be assigned to every device on the network. This address, known as the IP address, is organized as a series of four octets. These octets together define a unique address, with part of the address representing a network (and optionally a subnetwork) and another part representing a particular node on the network.

Several addresses have special meanings on the Internet:

 An address starting with a zero references the local node within its current network. For example, 0.0.0.23 references workstation 23 on the current network, and 0.0.0.0 references the current workstation.

 The loopback network, 127, is important in troubleshooting and network diagnosis. The address 127.0.0.1 is the local loopback inside a workstation.

 The ALL address is represented by turning on all bits, giving a value of 255. Therefore, 192.18.255.255 sends a message to all nodes on network 192.18; similarly, 255.255.255.255 sends a message to every node on the Internet. These addresses are important for multicast messages and service announcements.
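The special addresses above can be checked programmatically; the sketch below uses Python's standard ipaddress module to split an address into its network and host parts and to inspect loopback and broadcast addresses. The example network is arbitrary.

# Minimal sketch of inspecting IP addresses with Python's standard ipaddress module.
# The example network is arbitrary.
import ipaddress

net = ipaddress.ip_network("192.18.0.0/16")
print(net.network_address)                              # 192.18.0.0 (network part)
print(net.broadcast_address)                            # 192.18.255.255 (all-hosts address)

print(ipaddress.ip_address("127.0.0.1").is_loopback)    # True
print(ipaddress.ip_address("192.18.0.23") in net)       # host 0.0.23 on network 192.18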

IP Addressing Classes

The IP addresses are assigned in ranges referred to as classes, depending on the application and the size of an organization. The three most common classes are A, B, and C. These three classes represent the number of locally assignable bits available for the local network. Table 3 shows the relationships among the different address classes, the available number of nodes, and the initial address settings.

Table 3.
Network Classes Table

Class   Available Nodes   Initial Bits   Starting Address
A       16,777,216        0xxx           0-127
B       65,536            10xx           128-191
C       256               110x           192-223
D                         1110           224-239
E                         1111           240-255

Class A addresses are used for very large networks or collections of

related networks. Class B addresses are used for large networks having more

than 256 nodes (but fewer than 65,536 nodes). Class C addresses are used by

most organizations. It is a better idea for an organization to get several class C

addresses because the number of class B addresses is limited. Class D is

reserved for multicast messages on the network, and class E is reserved for

experimentation and development. The administration of Internet addresses is

currently handled by the Network Information Center (NIC).
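Following the table above, a sketch of how the class of a traditional (classful) address can be derived from its first octet is shown below. This reflects the historical classful scheme only; modern routing is classless (CIDR).

# Minimal sketch of classifying a traditional (classful) IPv4 address by its
# first octet. Modern networks use classless addressing (CIDR) instead.
def address_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"
    if first_octet <= 191:
        return "B"
    if first_octet <= 223:
        return "C"
    if first_octet <= 239:
        return "D (multicast)"
    return "E (experimental)"

if __name__ == "__main__":
    for addr in ("10.1.2.3", "172.16.0.1", "192.18.0.23", "224.0.0.1"):
        print(addr, "-> class", address_class(addr))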

Naming Network

According to Burnett (2003), the naming of network nodes requires some planning. When you select names, keep network management and user acceptance in mind. Many organizations have network-naming standards. If your organization has such standards in place, it is best to follow them to prevent confusion. If not, there is plenty of room for imagination. Computer and network names can be as simple as naming the workstations after the users, such as Diane, Beth, or John. If you have many similar computers, numbering them (for example, PC1, PC2, and PC128) may be appropriate. Naming must be done in a way that gives unique names to computer systems. Do not name a computer "the computer in the north office" and expect users not to complain; after all, even the system administrator must type the names of computers from time to time. Also avoid names like oiiomfw932kk. Although such a name may prevent network intruders from connecting to your computer, it may also prevent you from connecting to your own workstation. Names that are distinctive and follow a theme work well, helping the coordination of future expansion and giving the users a sense of connection with their machines. After all, it is a lot easier to have a good relationship with a machine called sparky than with a machine called OF1284.

Remember the following points when selecting a naming scheme:

 Keep names simple and short, six to eight characters at most. Although the Internet Protocol allows names up to 255 characters long, you should avoid this, as some systems cannot handle long names. (Each label can be up to 63 characters long; each part of a period-separated full domain name for a node is a label.)

 Consider using a theme such as stars, flowers, or colors, unless other naming standards are required at your site.

 Do not begin the name with numbers.

 Do not use special characters in the name.

 Do not duplicate names.

 Be consistent in your naming policy.

If you follow these guidelines, you can establish a successful naming methodology. Internet names represent the organizations and the functionality of the systems within the network. Following are examples of names that are easy to use:

[Link]

[Link]

The following are examples of names that are difficult to use or remember:

[Link]

[Link]

The latter of these could encode information about a workstation in room 345 on network 56 with network executive functions, but this type of naming scheme is usually considered poor practice because it can lead to confusion and misdirected messages. An Internet name such as Eddie@[Link] enables you to reference a user on a particular node.
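A sketch of a simple check for the length rules mentioned in the guidelines above (labels of at most 63 characters, full names of at most 255) is shown below; the limits come from the text, while the allowed-character pattern is a common additional convention added for illustration.

# Minimal sketch of validating a hostname against the length rules discussed
# above (labels up to 63 characters, full name up to 255). The allowed-character
# check is a common convention added for illustration.
import re

LABEL_RE = re.compile(r"^[a-z]([a-z0-9-]*[a-z0-9])?$", re.IGNORECASE)

def is_valid_hostname(name: str) -> bool:
    if len(name) > 255:
        return False
    labels = name.split(".")
    return all(len(label) <= 63 and LABEL_RE.match(label) for label in labels)

if __name__ == "__main__":
    for candidate in ("sparky", "accounting.example.com", "3com-box", "oiiomfw932kk"):
        print(candidate, is_valid_hostname(candidate))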

Network Security in Linux Environment

According to Weeks (2002), security consists of multi-tiered hardening and monitoring methodologies that exist as outer shells of protection around more central, inner layers. The outer shells consist of systems such as router configurations, firewalls, and Network Intrusion Detection Systems (NIDS), which form common methods for securing and watching entire LANs or WANs. His article examines and illustrates the implementation of the inner shells, or host-centric layers, of server security.

Although the outer security layers play a major role in overall security, the inner layers are often overlooked. It is important to remember that network security risks do not always stem from the outside, but from what your own users, employees, and contractors are trying to do with your internal systems and networks. Almost half of all system attacks come from within your LAN/WAN, and because these attackers know more about your internal systems, they typically pose more dangerous challenges (as compared to outside viruses, trojans, or scans). These host-centric layers of protection are especially important on unshielded or exposed Internet/Web servers, not to mention any internal systems.

To help protect your systems, you do not need to go out and buy some big, expensive, comprehensive Host Intrusion Detection System (HIDS) suite with a pretty GUI. In fact, using off-the-shelf security can be a security risk in itself, because the product can also be bought or downloaded and dissected by crackers. To successfully implement a fairly good host-centric security system, start by employing two basic strategies:

• Harden your systems

• Monitor or watch your servers

Server Hardening

Of course, the first step in hardening any server is always to be sure the server is fully patched and that the required foundational security steps are properly applied. Executing these steps in an automated fashion is key. In an open source environment, there is a plethora of tools to help harden the network stack and the services on servers. Linux is particularly convenient because it comes with its own kernel-level IP/networking tools, ipchains and iptables, for doing stateful packet inspection and control. These tools allow incoming traffic to be monitored and decisions to be made based on what is seen coming into the server from the network, and they can be tied to a good scan detection, control, and automation package (Weeks, 2002).
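As an illustration of the kind of host-level packet filtering described above, the sketch below drives the iptables command from Python to allow established connections and SSH while dropping other inbound traffic. This is a simplified, hypothetical policy, not the configuration used in the study, and it must be run with root privileges on a host where iptables is present.

# Minimal sketch of applying a simple host firewall policy with iptables.
# Hypothetical policy for illustration only; requires root and iptables.
import subprocess

RULES = [
    ["iptables", "-F", "INPUT"],                                        # clear old INPUT rules
    ["iptables", "-A", "INPUT", "-m", "state",
     "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],                 # keep existing sessions
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22",
     "-j", "ACCEPT"],                                                   # allow SSH
    ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],            # allow loopback
    ["iptables", "-P", "INPUT", "DROP"],                                # drop everything else
]

for rule in RULES:
    subprocess.run(rule, check=True)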

Network Security in Win Environment

According to Lambert (1997), computer networks are indispensable to most businesses. Networked computer systems are used to share and access key information and resources among millions of users throughout all types and sizes of organizations. Frequently, the information stored on these systems is confidential and/or intended for use only by specific authorized individuals. The ability of the network system to prevent unauthorized access and control authorized access to such information is critical to both the security and the competitiveness of an organization. Network security comprises all aspects of protection for all components of a computer network system (hardware, software, and stored data); this includes protection from damage, theft, and unauthorized access, making authorized use of system resources simple and unauthorized access nearly impossible.

Protecting confidential, sensitive data from being lost or exposed is a top priority for any organization. Whether it is a large corporation, a SOHO (small office/home office), a bank, or the government of a country, those in charge want to be assured that critical data is protected from malicious tampering, unauthorized access, and user errors. With the use of Windows NT Workstation and Windows NT Server, such assurance is real. Windows NT provides a full range of security options. The security implementation is easy for both administrators and end users: a simple password-based logon procedure gives users access to authorized resources. Users do not see the internal complexities of system-level encryption; they simply log on.

Control Access and Auditing

Microsoft included security as part of the initial design specifications for Windows NT, and it is prevalent throughout the operating system. Windows NT is loaded with features and tools that make it easy to customize security for your needs. The security model includes components to:

 Govern access to objects (such as files and printers)

 Control the actions an individual user can take on objects (such as Read or Write access)

 Specify which events are audited

Access to objects is the key objective of the Windows NT security model. The security model maintains security information for all user and group accounts as well as all objects. Access is controlled by the assignment of permissions. Owners or other authorized users may, at their discretion, change permissions and the use of user accounts. You can create an almost unlimited number of accounts or groups of accounts. You can permit or restrict an account's access to any computer resource. An administrator assigns permissions to users and groups to grant or deny access to particular objects.

For every account, a range of security attributes can be set on a per-file or per-directory basis. These attributes can be set on a per-user or per-group basis. In addition to controlling access, these mechanisms allow you to identify access attempts that are made directly by a user, or indirectly by a program or another process that is running.

In support of all this control over the actions of a large number of users, it is important to maintain consistent, reliable desktop environments that provide the flexibility for network users to accomplish their daily tasks via a friendly interface. Policies and profiles are used for this in the NT security subsystem.

You can define security policies that apply to the domain as a whole. The trust relationship policy defines relationships with other domains. The user rights policy controls the access rights given to groups and user accounts. The account policy controls how user accounts use passwords. The audit policy controls the types of events recorded in the security log.

Auditing is built into Windows NT. This allows you to track which user account was used to attempt a particular kind of access to files or other objects. Auditing can also be used to track logon attempts, system shutdowns and restarts, and similar events. These features support the monitoring of events related to system security, helping you identify any security breaches and determine the extent and location of any damage. The level of audited events is customizable to your needs. The security log in the Event Viewer can list audited events by category and by event ID (Patel, 1997).

SECURITY LEVELS

The operating system provides a full range of security options, from no security at all to the C2 level of security required by U.S. government agencies. This section describes three security categories, low (or none), medium, and maximum, and the security measures used to obtain each level. These categories are arbitrary, and you will probably combine characteristics of them in creating your own security policy.

SECURITY OF A NETWORK NEEDED

Moreover, Lambert (1997) claimed that there is much hullabaloo in the computer industry about security and levels of security. Why not just set maximum security at all times? Setting various limits on access to computer resources complicates users' work with those resources. It is also extra work for security administrators to set up and maintain the various levels.

Here is what can happen: suppose only members of the Accounts Payable user group are allowed to access AP records, and a new person is hired into that group. For starters, someone needs to create an account for the new user and add it to the Accounts Payable group account. If the new account is not added to the Accounts Payable group, the new user cannot access the AP records and will be prevented from contributing any meaningful work. Or, if the account is created and made a member of the Accounts Payable group, you will need to consider what other access privileges are thereby granted to this new person.

Another possible problem may occur when security is too tight: users will try to "beat the system" in order to get work done. Take passwords, for instance. You might decide to make them long and require that they be changed often. Users may find it hard to remember their passwords and will write them down to avoid forgetting them and being locked out of the network. Another dangerous password workaround is a very common occurrence in corporate environments: users who are denied access to files they truly need to use are "loaned" other employees' accounts and passwords so that the work can be done (Patel, 1997).

Low Security

Security may not be of much concern to you or your organization when the computer is not used to store or access sensitive data, or if the machine is in a very secure location. For example, if a computer is in the SOHO of a sole-proprietor business, or if it is used as a testing machine within a locked lab, then security precautions might be unnecessary: the system allows you full access with no security protection at all, if you desire.

Physical Consideration for Low Level

Lambert (1997) also claims that, for the simplest level of security, you should take the same precautions you would with any piece of valuable equipment to protect against theft. This can include locking the room when no one is using the computer, or using a locked cable to attach the unit to a wall. You might also want to establish a procedure for moving or repairing the computer so that the hardware cannot be stolen or altered under false pretenses.

Use a surge protector or power conditioner to protect the computer and all peripherals from power spikes. Also perform regular disk scans and defragmentation, and always maintain backups of important data.

Configuration for Low-Level Security

For low security, none of the server security features are used. You might even allow automatic logon to the Administrator account (or any other user account). This allows anyone with physical access to the computer to turn it on and immediately have full access to its resources. Even if you choose low-level security, do not let a bug get you: take adequate precautions against viruses, which can damage your data and prevent programs from operating, and a virus may spread from your low-security computer to a more secure machine.

Bear in mind that low security is not the norm. Most computers are used to store sensitive and/or valuable data. This could be financial data, personal files, confidential correspondence…just about anything. In addition, most computers need to be protected against configuration changes, whether accidental or deliberate.

Finally, if you do choose low security, keep in mind that the computer's users need to be able to do their work, with minimal barriers to the resources they need (Patel, 1997).

Medium Security

Most environments require more than low security. Although every implementation of security will be different, a broad spectrum of security requirements, here called medium security, can be outlined. For most implementations of security, you will want to ensure you include all of the following guidelines and suggestions.

Warning Banners

Moreover, Lambert (1997) claims that protecting against intrusion is, of course, the goal of all security systems; unfortunately, however, simply making the system secure may not be enough. You may also need to post warnings that it is against company policy, or even against the law, to intrude upon your system. In recent court cases, the argument has been put forth that a logon screen "invites" you to log on, and therefore a clever hacker is "justified" in working hard to accomplish what you invited.

Before a user logs on to the system, the server can display a message box with the caption and text of your choice. This mechanism is often used to issue a warning that users will be held liable if they attempt to use the computer without authorization. Such a warning is considered necessary for both medium- and high-security implementations.

Physical Considerations for Medium Security

As with low-level security, the medium-security computer should be physically secured and protected like any other valuable equipment. Keep the computer in a locked environment, particularly if it is a server. A building that is locked to unauthorized users, as most homes and offices are, is generally acceptable.

If you use a physical lock (a cable from the computer to a wall, for instance), keep the key in a safe place for additional security. Remember, if the key is lost or unavailable, authorized users might be unable to work.

Configuration for Medium-Level Security

First, security system administrators must set appropriate account policies. Users must form good logon habits, such as logging off at the end of each day and memorizing (rather than writing down) their passwords.

In the security system, a series of specific steps is taken to set up and ensure medium-level security. Following are brief descriptions of these concepts.

Set Up User Accounts and Groups

In the medium-security environment, a user account and password are required to use the computer. The server provides a GUI tool, User Manager, for creating, deleting, and disabling user accounts. User Manager also allows you to set password policies and other security system policies, and to organize user accounts into groups.

Maintain Separate Administrative and User Accounts

The least-privilege use axiom is used at the medium-security level. This axiom advocates that, to avoid accidental changes to secure resources, the account with the least privilege that can accomplish the task should be used. Separate accounts are used for administrative activities and user activity. All administrators should have two user accounts: one for administrative tasks and one for general activity. For example, viruses can do much more damage if activated from an account with administrator privileges.

The built-in administrator account should be renamed to something else. This account is the only one that cannot be locked out and is thus attractive to hackers. By renaming the account, you force hackers to guess the account name as well as the password.

The Guest Account



In medium security, it is not necessarily required that the Guest account

be disabled. However, the Guest account should be prohibited from writing or

deleting any files, directories, or Registry keys. If the computer is for public use,

the Guest account can be used for public logons.

Precaution for Logging On and Logging Off

Under medium security, all users should always press Ctrl+Alt+Del before logging on. "Trojan horse" programs are designed to collect account passwords while appearing as a logon screen. Pressing Ctrl+Alt+Del foils such programs and provides a secure logon screen. Users should also either log off or lock the workstation if they will be away from the computer for any length of time.

Passwords

In medium security, a protected username and password are your greatest allies. Anyone who knows a username and the correct password can log on. Here are a few tips for using passwords:

 Change passwords frequently, and do not reuse passwords.

 Avoid passwords that are easily guessed or that are common words appearing in a dictionary. A phrase or a combination of letters and numbers works well.

 Do not write a password down; choose one that is easy to remember.

Backups for Medium-Level Security

Regular backups are a must to protect your data from hardware failures, accidents, viruses, and other malicious tampering. Since files must be read to be backed up, and they must be written to be restored, backup rights should be limited to administrators. Also, you will want to assign accountability for the proper operation of the backup and restore procedures.

Auditing for Medium Security

When you establish an audit policy, you’ll need to weigh the overhead in disk space and CPU usage of the auditing options against the advantages of these options. For medium security, you’ll want to at least audit failed logon attempts, attempts to access sensitive data, and changes to security settings.
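On the Linux side of this study, a comparable audit of failed logon attempts can be drawn from the system’s authentication log. The sketch below assumes a Debian-style log at /var/log/auth.log and the “Failed password” message produced by the SSH service; both are assumptions about the target server.

    <?php
    // Count failed logon attempts recorded in the authentication log.
    $lines  = file('/var/log/auth.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $failed = 0;

    foreach ($lines as $line) {
        // Each failed SSH password attempt is reported with this phrase.
        if (strpos($line, 'Failed password') !== false) {
            $failed++;
        }
    }

    echo "Failed logon attempts found in the log: $failed\n";
    ?>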

Maximum Security

Although medium security is sufficient for most installations, additional

security precautions are required for computers that contain highly sensitive data,

or that are at risk for theft or malice.

Physical Considerations for High-Level Security

In addition to the physical security considerations for the low- and medium-security configurations, the requirements for high-level security include

examining your physical network links, where the lines come into your office or

building. You may also want to control who has physical access to the computer.

As soon as you put a computer on the network, you add a funnel or route into your system. This access port must be secured. Maintenance of user account validation and object permissions is sufficient for medium-level security, but for maximum security you’ll need to make sure the network itself is secure.

The two risks to network connections are unauthorized network users and unauthorized network taps. By containing the network entirely within a secure building, you prevent, or at least minimize, the chance of unauthorized taps. If the cabling must pass through unsecured areas, use optical fiber links rather than twisted pair to foil attempts to tap the wire and collect transmitted data. Use data encryption techniques on all transmissions outside your site.

Controlling Access to the Computer

No computer will ever be completely secure if people other than authorized users can physically access it. For maximum security on a computer that is not physically secured (locked safely away), consider the following security measures:

 Disable or remove the floppy disk drive.

 The CPU should have a case that cannot be opened without a key. Store

the key in a safe location away from the computer.

Controlling Access to the Power Switch

Users without shutdown rights should be barred from the computer’s power and reset switches. The most secure computers (other than those in locked and guarded rooms) expose only the computer’s keyboard, monitor, mouse, and (when appropriate) printer to users. The CPU and the removable media drives can be locked away where only authorized personnel can access them.

On many of today’s new machines, the system can be protected using a power-on password. This prevents unauthorized users from starting an operating system other than Windows NT. Power-on passwords are a function of the computer hardware, not the operating system, so check with your hardware vendor for choices.

Other Maximum Security Options

Some maximum-security options can be implemented only by using the Registry Editor. The following list of topics also needs to be addressed for maximum security. Bear them in mind as you read this book.

 Assigning user rights

 Protecting files and directories

 Protecting the Registry

 Using the Schedule Service (AT Command)

 Hiding the last user name

 Restricting the boot process

 Allowing only logged-on users to shut down the computer

 Controlling access to removable media



The C2 Evaluation Process

The most recognized (at least, in the U.S.) baseline measurement for a

secure operating system in the U.S. Department of Defense (DOD) criteria for a

C2 level secure system. C2 security sis a requirement for many U.S. government

installations, and its standards are of immense value to all organizations

concerned about the security of business-sensitive data.

Scripting Languages

Scripting languages, which are commonly called scripting programming languages or script languages, are computer programming languages created to shorten the traditional edit-compile-link-run process. The name comes from a written script such as a screenplay, where dialog is repeated verbatim for every performance.

PHP

PHP (PHP: Hypertext Preprocessor) is an open source, reflective programming language. It was originally designed as a high-level scripting language for producing dynamic web pages. PHP is used mainly in server-side application software.

PHP was created by Danish-Canadian programmer Rasmus Lerdorf in 1994, initially as a simple set of Perl scripts for tracking accesses to his résumé. Lerdorf initially created PHP to display his résumé and to collect certain data, such as how much traffic his page was receiving. “Personal Home Page Tools” was publicly released on June 8, 1995, after Lerdorf combined it with his Form Interpreter to create PHP/FI.

PHP generally runs on a web server, taking PHP code as its input and

creating web pages as output.
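To illustrate this behavior, the minimal page below embeds PHP code in HTML; when a browser requests it, the server executes the embedded code and returns only the generated HTML. The page is purely a hypothetical example and not part of any existing system.

    <html>
      <body>
        <h1>Server Status</h1>
        <!-- The PHP block is executed on the server; the visitor's browser
             receives only the resulting HTML text. -->
        <?php
          echo '<p>Page generated on ' . date('F j, Y, g:i a') . '</p>';
          echo '<p>Served by host ' . php_uname('n') . '</p>';
        ?>
      </body>
    </html>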

This study will utilize the Hypertext Preprocessor (PHP) scripting language. It is an open source scripting language that is also compatible with the MySQL database server. It can be downloaded freely over the internet and is adaptable to different operating systems.



CONCEPTUAL MODEL OF THE STUDY

On the basis of the foregoing concepts, theories, and findings of related

literature and insights taken from them, a conceptual model is developed as

shown below:
INPUT

Knowledge Requirements: Strategic Planning; Linux Operating System; Application Services; Networking and Firewalling; Concerns and Aspects of Servers; Server Security; Running and Processing Server Applications

Software Requirements: LINUX; Apache / Bind / MySQL / FTP / Mail / CGI; Shell Scripting; PHP; HTML; GCC, Vi, Bourne Shell and Perl

Hardware Requirements: Computer (clone/branded); Installers; Network Peripherals

PROCESS

Design and Development (Project Identification and Selection; Project Initiation and Planning; Analysis; Logical Design; Implementation; Maintenance); Testing and Revision; Evaluation

OUTPUT

Server Application Services; System Resource Management; Network Security and Protection; Open Source Centralized Server Maintenance System

Figure 1. Conceptual Model of the Study



The centralized server maintenance system will be developed on an open source platform to improve administration and lessen the burden on system and network administrators. It will replace and simplify the task of manually injecting scripts by providing a preconfigured script that runs on the server to optimize, protect, and secure it against attacks and instability. The system administrator will specify the tasks to be automated, and these will run at specified times. Regular reports will also be e-mailed to the administrator or to other people authorized to receive them. Since all activity is logged on the server, an automatic filter will extract a daily, weekly, or monthly report, depending on how the user configures the script. Running the scripts requires only basic server maintenance skills. Once everything is in place, only updates need to be run, and these can be scheduled using a cron job. Unauthorized access attempts will be recorded by the system, which will then apply the appropriate functions to correct the problem automatically. Errors and intrusions identified from the logs will be processed to arrive at the proper design and development of the system, turning an open source centralized server maintenance system into reality.
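To make this flow concrete, the sketch below shows one possible form of such a preconfigured report script. The log path, the filtering pattern, and the recipient address are illustrative assumptions rather than fixed parts of the proposed system, and the mail() call presumes a working local mail service.

    #!/usr/bin/php
    <?php
    // Read the authentication log (assumed location) and extract failed accesses.
    $log     = file_get_contents('/var/log/auth.log');
    $matches = array();
    preg_match_all('/Failed password.*/', $log, $matches);

    // Build a short summary for the daily report.
    $report = 'Daily maintenance report for ' . date('Y-m-d') . "\n"
            . count($matches[0]) . " failed logon attempts were recorded.\n\n"
            . implode("\n", array_slice($matches[0], 0, 20));   // first 20 entries

    // E-mail the report to the administrator (hypothetical address).
    mail('admin@example.com', 'Server maintenance report', $report);
    ?>

A script of this kind could then be registered as a cron job, for example with an entry such as 0 6 * * * /usr/local/bin/daily-report.php, so that the report is generated and mailed every morning without manual intervention.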

Figure 1 illustrates the conceptual model of the proposed project. The inputs include the knowledge, software, and hardware requirements. The knowledge requirements are system processes, good strategic planning, application services, basic networking and firewalling, the internet, and the Linux operating system core.

In creating the system, the Hypertext Preprocessor (PHP) will be used, together with Perl and GCC, to develop and execute code. PHP is a scripting language that can be freely downloaded over the internet and can run with any operating system on the client side. It is supported by program utilities and a scheduler to maximize and optimize system processes. The system will utilize different Linux distributions (server side) as the operating system, supported by hardware ranging from a Pentium 4 to a dual Xeon machine (cloned or branded). The system will also use Apache as the web server and Internet Explorer and Firefox as the web browsers.
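Because PHP’s compatibility with MySQL is one reason it was chosen, the sketch below shows one way a maintenance script might record an event in a MySQL table. The host, account, database, and table names are hypothetical and would be replaced by an actual deployment’s own values.

    <?php
    // Hypothetical connection details for illustration only.
    $db = mysqli_connect('localhost', 'maintenance', 'secret', 'serverlogs');
    if (!$db) {
        die('Could not connect to MySQL: ' . mysqli_connect_error());
    }

    // The events table is assumed to have columns (logged_at DATETIME, message VARCHAR).
    $message = 'Nightly maintenance script completed';
    $stmt    = mysqli_prepare($db, 'INSERT INTO events (logged_at, message) VALUES (NOW(), ?)');
    mysqli_stmt_bind_param($stmt, 's', $message);
    mysqli_stmt_execute($stmt);

    mysqli_close($db);
    ?>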
