2025 Intro To Computer 500 Study Guide
• Introduction to Computers
• Introduction to Information Technology
• System Software
• e-Commerce
• Business Communication
• Introduction to Web Technologies
• Introduction to Programming
• IT Help Desk
• Work Integrated Learning
Students gain practical experience through hands-on laboratory sessions and real-world
projects that reinforce theoretical concepts. The curriculum emphasizes understanding
fundamental IT concepts, basic programming principles, and essential business applications.
The e-Commerce modules provide students with insights into modern digital business
practices, while the Work Integrated Learning component offers valuable industry exposure.
The program's focus on practical skills development is evident through modules like IT Help
Desk and Introduction to Web Technologies, which prepare students for real-world technical
support and web development scenarios. The Business Communication module ensures
graduates can effectively communicate technical concepts in a business environment.
This Higher Certificate serves as an excellent foundation for students planning to pursue
further qualifications at Richfield, such as the Diploma in Information Technology (DIT) or
Bachelor of Science in Information Technology (BSc IT). The comprehensive nature of the
program, combining technical skills with business applications, ensures graduates are well-
prepared for either entry-level IT positions or continued academic advancement in the field.
Introduction to computers
The module's practical components focus on developing essential computer literacy skills that
students will need throughout their academic journey and future careers. Students gain
hands-on experience with common computer applications, learn basic troubleshooting
techniques, and develop problem-solving skills that are crucial for IT professionals. This
practical exposure helps build student confidence in using computer systems effectively and
prepares them for the more advanced technical challenges they will encounter in later
modules.
As students advance through the HCIT program, the knowledge and skills acquired in this
introductory module become increasingly valuable, particularly during their Work Integrated
Learning experience. The module's comprehensive approach to basic computing concepts
ensures that students develop technical competencies and the foundational understanding
necessary for continued learning and adaptation in the rapidly evolving field of information
technology.
PRESCRIBED OR RECOMMENDED BOOK
LEARNING OUTCOMES
After reading this Section of the guide, the learner should be able to:
An information system (IS) functions through the interaction of its core components—people,
software, hardware, data, procedures, and communication networks—to efficiently manage and
process information.
• People: End users who interact with the system. They are critical to the system's success,
because the goal is to help them work more effectively and productively.
• Software: Programs or sets of instructions that control how hardware operates, processes
data, and produces outputs.
• Hardware: The physical devices like computers, servers, and networking equipment that
run the software and store/process data.
• Data: Raw, unorganized facts that the system gathers. When processed, this data turns
into information, which is structured and meaningful, aiding decision-making.
• Procedures: The established guidelines and instructions that define how users and
systems interact to complete tasks, ensuring smooth operation.
• Communication Networks: The systems that allow data and information to be transferred
across various locations or departments, ensuring connectivity within the organization.
What is a computer?
A computer is an electronic device that processes, stores, and retrieves data, transforming raw facts
and figures into meaningful information through a combination of hardware and software. This
transformation is the essence of computing, where unprocessed data becomes organized and useful
information.
Figure 1.1 Desktop Computer (Adapted from Rainer and Prince, 2022).
The computer accomplishes this through five basic operations: input, where it captures data from
users or other sources; processing, where it transforms that input into useful results; output, where it displays
or produces the results; storage, where it retains data, information, or instructions for future use; and
control, which directs the sequence of all these operations within the system. These functions work
in concert to make computers powerful tools for handling and manipulating data, enabling them to
perform a wide range of tasks and solve complex problems in our increasingly data-driven world.
Figure 1.2: Information processing life cycle (Adapted from Rainer and Prince, 2022).
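The five operations can be traced in even a trivial Python script; the sales figures and variable names below are invented purely for illustration.

```python
# A minimal sketch of the five basic operations. The script's own
# control flow plays the role of the control operation, sequencing
# the other four steps.

storage = {}                         # storage: retains results for future use

raw_sales = [1200, 950, 1430]        # input: data captured from a user or file
total = sum(raw_sales)               # processing: raw data -> information
storage["total_sales"] = total       # storage: keep the result for later
print(f"Total sales: {total}")       # output: present the result to the user
```

Each comment labels which of the five operations that line performs; a real system interleaves these operations millions of times per second.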
Using a computer offers various advantages, such as enhanced efficiency and speed in data
processing, vast storage capacity for managing information, and internet connectivity for
communication and access to resources. They also facilitate automation, multimedia creation, data
analysis, remote work, and online education. However, notable disadvantages include overreliance
leading to decreased problem-solving skills, health issues like eye strain, and security risks from viruses
and hacking. Increased computer usage can also result in social isolation, high acquisition and
maintenance costs, distractions, and environmental concerns related to electronic waste. In summary,
while computers significantly improve productivity and connectivity, users should be mindful of the
associated challenges.
Information literacy is the ability to identify, locate, evaluate, and use information effectively,
requiring critical thinking to assess the reliability and relevance of various sources. It enables
individuals to recognize their information needs, search for data efficiently, and organize it
responsibly. Meanwhile, computer literacy refers to the ability to operate computers and related
technologies, including understanding software, hardware, and safe internet practices. Both literacies
are essential in today's digital world, allowing individuals to navigate professional and personal
environments, make informed decisions, and function as responsible digital citizens. In the context of
information systems, these skills facilitate the transformation of raw data into meaningful
information, driving better decision-making and operational efficiency.
A computer processes data by utilizing both hardware and software, working together to perform
tasks efficiently. Hardware refers to the physical components of a computer system that you can
touch and see. These components work together to perform various functions necessary for
processing data. Software refers to the collection of programs and instructions that tell the hardware
what to do. It cannot be physically touched but is essential for the operation of a computer system.
Hardware serves as the foundation of a computer system, enabling data processing, while software
acts as the brain, providing the necessary instructions for that processing to occur. Together, they
allow computers to perform a wide range of tasks effectively.
Computer hardware refers to the physical components of a computer system that enable it to operate
and perform tasks. Key hardware components include:
1. Central Processing Unit (CPU): Often referred to as the "brain" of the computer, the CPU
processes instructions and manages tasks, executing calculations and controlling other
components.
2. Motherboard: The main circuit board that serves as the backbone of the system unit. It
connects CPU, memory, storage devices, and other peripherals, allowing communication
among all hardware components.
3. Memory (RAM): Random Access Memory is a type of temporary storage that holds data and
instructions for quick access by the CPU. It is essential for multitasking and overall system
performance.
4. Storage Devices: Hardware like hard disk drives (HDD) and solid-state drives (SSD) provide
permanent data storage. HDDs use magnetic storage to read and write data, while SSDs use
flash memory for faster access and reliability.
5. Power Supply Unit (PSU): This component converts electrical power from an outlet into
usable power for the internal components of the computer, ensuring that all parts receive the
correct voltage and current.
6. Graphics Processing Unit (GPU): A specialized processor that renders images and graphics,
making it essential for gaming, video editing, and graphic design tasks. Some systems may
have integrated graphics, while others include dedicated GPUs.
7. Input Devices: Tools such as keyboards, mice, and scanners that allow users to interact with
the computer. These devices enable data entry and user commands.
8. Output Devices: Components like monitors, printers, and speakers that display or output
information from the computer, converting digital signals into visual or auditory formats.
9. Cooling Systems: These components, including fans and heat sinks, help dissipate heat
generated by the CPU and other hardware. Effective cooling is vital for maintaining optimal
operating temperatures and preventing damage.
10. Network Interface Card (NIC): This hardware component enables the computer to connect to
a network, facilitating communication with other devices and access to the internet.
11. Ports and Connectors: These are essential for enabling communication between a computer and
external devices like printers, keyboards, monitors, and storage. Common ports include USB
for data transfer, HDMI for video and audio, Ethernet for network connections, and audio
jacks for sound. Connectors ensure stable connections between devices. Proper use of ports
and connectors enhances a computer's functionality, improves data transfer, and supports
user experience. Understanding their types is key for efficient setup and troubleshooting.
Figure 1.3 System Unit and Computer Hardware (Adapted from Rainer and Prince, 2022).
Computer software is a collection of programs and instructions that enable a computer to perform
specific tasks. It is categorized into system software, application software, and programming software.
In particular, the Microsoft Office suite includes a set of popular tools designed to streamline a variety
of tasks:
• Microsoft Word: A word processor that allows users to create, edit, and format documents.
It is widely used for reports, essays, letters, and other text-heavy tasks.
• Microsoft Excel: A powerful spreadsheet application that enables users to organize, analyze,
and visualize data using tools like formulas, charts, and pivot tables. It is commonly used for
financial analysis, data management, and reporting.
• Microsoft PowerPoint: A presentation software that helps users create visually engaging
slideshows. It offers tools for inserting text, images, animations, and transitions, making it
ideal for lectures, business pitches, and meetings.
• Microsoft Outlook: An email client and personal information manager that helps users send
and receive emails, manage calendars, schedule appointments, and organize contacts.
• Microsoft Access: A database management system that allows users to create and manage
databases, design forms, generate reports, and perform data queries. It is often used for small
to medium-sized business databases.
• Microsoft OneNote: A digital note-taking application that helps users organize notes, ideas,
and research in one place. It supports various input formats like images, links, and handwritten
notes.
• Microsoft Teams: A collaboration platform that integrates chat, video conferencing, and file
sharing, allowing teams to work together remotely and communicate in real-time.
Programming software is written in programming languages, which fall into two broad categories:
1. High-level languages: These are closer to human language and easier to read and write.
Examples include:
o Python: Known for its simplicity and versatility, widely used in web development, data
science, and automation.
o Java: A popular, object-oriented language used for building large-scale applications
and Android development.
o C++: A powerful language often used in game development, system software, and
high-performance applications.
o JavaScript: Primarily used for web development, enabling interactivity in websites.
2. Low-level languages: These are closer to machine code and offer more control over hardware.
Examples include:
o Assembly language: Used for tasks requiring direct hardware control and efficient
resource management.
o C: A foundational programming language used in system programming, operating
systems, and embedded systems.
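The difference between the two categories is visible even in a trivial task: the Python below states the computation directly, whereas the equivalent assembly would manage registers and memory addresses explicitly.

```python
# In a high-level language, intent is stated directly: no registers,
# no memory addresses, no explicit instruction sequencing.

def average(values):
    return sum(values) / len(values)

print(average([10, 20, 30]))  # -> 20.0
```

This readability is why high-level languages dominate application development, while low-level languages are reserved for work where direct hardware control matters.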
Programming skills support a wide range of IT careers:
• Software Developers: Create applications, from desktop to web and mobile, using
programming languages to meet users' needs.
• System Programmers: Work on low-level system software, developing and maintaining
operating systems or hardware drivers.
• Front-end Developers: Specialize in creating the visual and interactive aspects of websites and
applications, often using languages like HTML, CSS, and JavaScript.
• Back-end Developers: Focus on server-side development, working with databases, APIs, and
ensuring smooth communication between front-end and server, often using languages like
Python, Java, or PHP.
• Full-stack Developers: Have knowledge of both front-end and back-end development,
creating complete web applications.
• Data Scientists and Analysts: Use programming languages like Python, R, and SQL to analyze
data, build algorithms, and make data-driven decisions.
• DevOps Engineers: Bridge the gap between development and operations, automating
processes and ensuring continuous integration and delivery using scripting languages and
tools.
System software is a type of computer software designed to manage and control the
hardware components of a computer and provide a platform for other software to run. It acts
as a bridge between hardware and user applications, ensuring the efficient operation of the
system. The key types of system software include:
1. Operating Systems (OS): These are the most critical components of system software,
responsible for managing hardware resources and providing an environment where
applications can run. Examples include Windows, macOS, Linux, and Android; these will be
discussed in detail in the next chapter.
2. Device Drivers: These are specialized programs that allow the operating system to
communicate with hardware devices, such as printers, graphic cards, and network adapters,
ensuring they work correctly with the computer.
3. Utility Software: These are programs that perform maintenance tasks or provide system
management tools, such as antivirus software, disk cleanup tools, and file management
utilities.
Applications
Applications are crucial components of information systems, designed to facilitate specific tasks and
enhance user experiences. These systems collect, store, and process data to support decision-making
and operations within organizations. Applications are user-centric, ensuring they are efficient and easy
to use across desktop, web, and mobile platforms.
• Desktop applications leverage a computer's power for tasks like word processing and data
analysis.
• Web applications provide remote access through browsers, promoting collaboration and
efficient data management.
• Mobile applications offer on-the-go functionality for tasks such as data entry and
communication. Additionally, applications enable automatic task execution, with underlying
programs managing data processing and system operations, exemplified by web applications
that interact with databases to display information while handling back-end processes.
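The database interaction described above can be sketched with Python's built-in sqlite3 module; the table, column names, and data are invented for illustration.

```python
import sqlite3

# A sketch of the back-end work an application performs: store records
# in a database, then query them to display information to the user.

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Keyboard", 250.0), ("Mouse", 120.0)])

# The "front end" asks the back end for data and presents it.
rows = list(conn.execute("SELECT name, price FROM products ORDER BY name"))
for name, price in rows:
    print(f"{name}: {price:.2f}")
conn.close()
```

A real web application follows the same pattern, with the database query running on a server and the results rendered as a web page.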
1.3 Input, output, processing, and storage devices
Input, output, processing, and storage devices are key components of a computer system. Here's a
breakdown of each:
Input Devices
These are used to enter data and instructions into a computer system.
• Examples:
o Keyboard: For typing text and commands.
o Mouse: For pointing and selecting objects on a screen.
o Scanner: For converting physical documents into digital form.
o Microphone: For audio input.
o Camera: For capturing images or video input.
Output Devices
These devices receive data from the computer and present it to the user.
• Examples:
o Monitor: Displays text, images, and videos.
o Printer: Produces physical copies of digital documents.
o Speakers: Output sound.
o Projector: Projects visual content onto larger surfaces.
Processing Devices
Processing devices perform calculations and execute instructions. The Central Processing Unit (CPU)
is the primary processing device.
• Examples:
o CPU: Processes data and runs programs.
o Graphics Processing Unit (GPU): Specialized in rendering images and video.
Storage Devices
Storage devices are used to save data for future use, either temporarily (short-term) or permanently
(long-term).
• Examples:
• Hard Disk Drives (HDDs): HDDs are a traditional form of storage media that use spinning
magnetic disks to store data. They are commonly found in desktop computers, laptops, and
external drives. HDDs offer high capacity and relatively low cost per gigabyte.
• Solid State Drives (SSDs): SSDs are a newer type of storage media that use flash memory to
store data. They are faster and more durable than HDDs because they have no moving parts.
SSDs are commonly used in laptops, desktops, servers, and increasingly in portable devices
like smartphones and tablets.
• USB Flash Drives: Also known as thumb drives or USB sticks, these are small, portable storage
devices that use flash memory to store data. USB flash drives are widely used for transferring
files between computers, backing up important data, and storing portable applications.
• Memory Cards: Memory cards are small, removable storage devices commonly used in digital
cameras, smartphones, tablets, and other portable devices. They come in various formats
such as Secure Digital (SD), microSD, CompactFlash (CF), and Memory Stick. Memory cards
offer high capacity and are often used for storing photos, videos, and other multimedia files.
• Optical Discs: Optical discs, such as CDs, DVDs, and Blu-ray discs, use optical technology to
store data. They are commonly used for distributing software, movies, music, and archival
purposes. While optical discs have been largely replaced by other storage media for everyday
use, they are still used in certain applications where long-term data retention is important.
• Tape storage: Used for long-term archival of large amounts of data, often in enterprise
settings.
• Network Attached Storage (NAS): NAS devices are specialized storage devices that are
connected to a network and provide centralized storage and file sharing services to multiple
users and devices. NAS devices are often used in homes and businesses for data backup, file
storage, and media streaming.
Output devices and display devices are closely related, but they serve distinct purposes in a computer
system. An output device is any hardware that conveys processed data from the computer to the user,
which can be in various formats such as visual, audio, or physical form. Examples include printers that
produce physical copies of documents, and speakers that output sound. A display device, on the other
hand, is a specific type of output device that presents visual information, such as text, images, and
videos, directly to the user. Common display devices include monitors, projectors, and touchscreens,
which show visual content generated by the computer. While all display devices are output devices,
not all output devices are display devices, as some output data in other formats, such as sound or
printed media.
Cloud storage has transformed data management practices in the digital era by enabling users to store
and access data remotely via the internet. This technology offers significant flexibility and
convenience, allowing for efficient data handling without the constraints of physical storage devices.
Prominent cloud storage services, such as Dropbox, Google Drive, and Microsoft OneDrive, provide
online storage spaces that can be accessed from any location with an internet connection. These
platforms enable users to upload, store, and share files, thereby streamlining data management
processes and enhancing collaborative opportunities. Cloud storage offers the following services:
▪ Data Backup and Recovery: Cloud storage offers a secure way to back up important files and
data, ensuring they are protected in case of device failure, theft, or other disasters.
▪ Accessibility: Cloud storage allows users to access their files from any device with an internet
connection. This level of accessibility is particularly useful for people who work across multiple
devices or locations.
▪ Collaboration: Cloud storage services often include features that facilitate collaboration, such
as file sharing and simultaneous editing. This is beneficial for teams working on projects
together, as it allows for seamless communication and file management.
▪ Scalability: Cloud storage services typically offer flexible storage plans that can be easily scaled
up or down based on the user's needs. This makes it suitable for individuals and businesses of
all sizes.
▪ Cost-Effectiveness: Many cloud storage providers offer subscription plans that are more cost-
effective than investing in on-premises storage solutions. Users can pay for only the storage
they need, without the additional costs of hardware maintenance and upgrades.
▪ Security: Cloud storage providers invest heavily in security measures to protect users' data
from unauthorized access, such as encryption, firewalls, and multi-factor authentication. This
can provide users with greater peace of mind regarding the safety of their files.
▪ Automatic Syncing: Cloud storage services often include automatic syncing features, ensuring
that files are always up to date across all devices. This eliminates the need for manual file
transfers and reduces the risk of version conflicts.
Figure 1.4 Cloud Storage (Adapted from Rainer and Prince, 2022).
1.3.2 The Central Processing Unit (CPU)
The CPU is often referred to as the "brain" of a computer. It is a crucial
component responsible for executing instructions from programs, performing calculations, and
managing data processing tasks. The CPU carries out the following primary functions:
1. Instruction Execution: The CPU retrieves instructions from the computer's memory and
executes them sequentially. These instructions can include arithmetic operations, logical
comparisons, and data manipulation tasks.
2. Data Processing: It processes data by performing mathematical calculations and logical
operations, enabling the computer to carry out tasks such as running applications and
manipulating files.
3. Control Unit: The CPU includes a control unit that manages the flow of data within the system.
It directs how data moves between the CPU, memory, and other hardware components,
ensuring that each part operates in harmony.
4. Registers: The CPU contains small, high-speed storage locations called registers, which
temporarily hold data and instructions that are currently being processed. This allows for
quick access and execution.
5. Clock Speed: The performance of a CPU is often measured in terms of its clock speed, typically
expressed in gigahertz (GHz). A higher clock speed indicates that the CPU can execute more
instructions per second, enhancing overall performance.
6. Multicore Architecture: Modern CPUs often feature multiple cores, allowing them to work on
multiple tasks simultaneously (parallel processing). This improves multitasking capabilities and
overall efficiency.
The machine cycle, also known as the instruction cycle, refers to the fundamental operational process
that a CPU follows to execute instructions. It consists of a series of steps that enable the CPU to
perform tasks efficiently. The machine cycle typically includes the following stages:
1. Fetch: In this initial stage, the CPU retrieves an instruction from memory. The address of the
instruction is stored in the program counter (PC), which keeps track of the sequence of
instructions. The instructions are then loaded into the instruction register (IR).
2. Decode: Once the instruction is fetched, the CPU decodes it to determine what action is
required. This involves interpreting the instructions to understand the operation to be
performed and identifying any operands (data) needed for the operation.
3. Execute: During the execution stage, the CPU performs the operation specified by the
instruction. This could involve arithmetic calculations, data transfer, or logic operations. The
relevant data is processed using the arithmetic logic unit (ALU) or other functional units within
the CPU.
4. Store: After executing the instruction, the CPU may need to store the result back into memory
or a register. This stage involves writing the processed data to its designated location for
future reference or use.
Figure 1.6 Machine Cycle (Adapted from Rainer and Prince, 2022).
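The four stages can be mimicked with a toy instruction set in Python; the instruction format below is invented purely to make the cycle visible, since real CPUs operate on binary machine code.

```python
# A toy machine cycle: fetch an instruction, decode it, execute it,
# and store the result. Instructions are (operation, operand) pairs.

memory = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("STORE", 0)]
data = [0]          # data memory
acc = 0             # accumulator register
pc = 0              # program counter

while pc < len(memory):
    instruction = memory[pc]     # Fetch: read the instruction at the PC
    pc += 1
    op, operand = instruction    # Decode: identify operation and operand
    if op == "LOAD":             # Execute: perform the operation
        acc = operand
    elif op == "ADD":
        acc += operand
    elif op == "STORE":          # Store: write the result back to memory
        data[operand] = acc

print(data[0])  # -> 10
```

The program counter and accumulator here play the same roles as the PC and registers described above: the PC tracks the next instruction, and the accumulator holds the value currently being processed.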
Input and output devices are crucial components of a computer system, enabling interaction between
the user and the machine. They facilitate the entry of data into the computer and the presentation of
processed information back to the user.
1. Motion Input Devices: These devices capture user movements and gestures to control the
computer. Examples include motion sensors and gaming controllers, which allow for more
immersive interactions, particularly in gaming and virtual reality applications.
2. Voice Input Devices: Voice recognition technology enables users to interact with computers
through spoken commands. Devices such as microphones and smart speakers allow for hands-
free operation and facilitate tasks like voice dictation and virtual assistant commands.
3. Video Input Devices: These devices capture video and images for processing by the computer.
Webcams and digital cameras are common examples, often used for video conferencing,
streaming, and content creation.
4. Scanners and Reading Devices: Scanners convert physical documents and images into digital
formats, allowing for easy storage and editing. Optical character recognition (OCR) scanners
can read text from printed materials and convert it into editable digital text.
5. Displays: Output devices, such as monitors and projectors, present visual information to the
user. They vary in size and resolution, with modern displays offering high-definition and 4K
options for enhanced clarity.
6. Assistive Technology: These specialized input and output devices cater to individuals with
disabilities, ensuring accessibility and usability. Examples include screen readers, adaptive
keyboards, and switch devices, which help users interact with computers in ways that suit
their needs.
7. Printers: Output devices that produce physical copies of digital documents and images.
Printers come in various types, including inkjet, laser, and dot matrix, each serving different
printing needs and quality requirements.
Computers can be classified into various categories based on their design, processing power, and
intended use. This classification includes personal computers, mobile game computers, server
computers, mobile devices, internet appliances, and specialized systems like supercomputers.
1. Personal Computers
Personal computers are general-purpose computers designed for individual use. They can be further
categorized into:
• Laptops: Portable computers with integrated screens, keyboards, and batteries, making them
suitable for on-the-go use.
• Desktops: Stationary computers typically consist of separate components such as a monitor,
CPU, keyboard, and mouse. They offer more power and upgradeability compared to laptops.
• Tablets: Lightweight and portable touchscreen devices that combine features of both laptops
and smartphones.
2. Mobile Game Computers
Mobile game computers are specialized devices optimized for gaming on the go. They often feature
powerful processors, high-quality graphics, and dedicated gaming functionalities. This category
includes:
• Gaming Laptops: High-performance laptops specifically designed for gaming, equipped with
enhanced graphics and advanced cooling systems.
• Handheld Consoles: Portable gaming devices like the Nintendo Switch or Steam Deck,
designed for gaming with built-in controls.
• Smartphones: Mobile phones that support gaming applications and are equipped with robust
hardware to provide quality gaming experience.
3. Server Computers
Server computers provide resources and services to other computers over a network. Common types include:
• File Servers: These store and manage files for multiple users over a network, enabling
centralized access to data.
• Database Servers: Responsible for managing databases and providing database services to
client applications, facilitating data storage, retrieval, and management.
• Web Servers: Host websites and deliver web content to users over the internet, processing
requests from web browsers.
• Application Servers: Provide software applications to client devices, managing business logic
and processing user requests.
• Virtual Servers: Created by virtualization software, allowing multiple server environments to
run on a single physical server.
5. Internet Appliances
Internet appliances are devices designed for internet connectivity and smart functionalities, including:
• Smart TVs: Internet-enabled televisions that allow streaming of content and direct access to
applications.
• Smart Speakers: Voice-activated devices that connect to the internet, enabling music
playback, smart home control, and answering questions.
• Digital Media Players: Devices such as Roku or Apple TV that stream online content to
televisions.
• Home Automation Devices: Internet-connected devices like smart thermostats, lights, and
security systems that can be controlled remotely.
6. Supercomputers
Supercomputers are the most powerful class of computers, built for computationally intensive tasks such as scientific simulations and large-scale data analysis. Key characteristics include:
• High Performance: Capable of executing trillions of operations per second, suitable for rapid
data processing.
• Parallel Processing: Utilize multiple processors to work on different parts of a problem
simultaneously, significantly reducing computation time.
• Large Memory Capacity: Equipped with vast amounts of memory and storage for handling
extensive datasets and performing large-scale simulations.
1.5 Internet, World Wide Web, Web Browsing and Search Engines
The Internet is a vast network connecting millions of devices. It provides access to the World Wide
Web, where users browse various types of websites, including informative, e-commerce, social
media, and educational platforms, and where online social networks facilitate connections and
interactions among users. This
interconnectedness allows for the seamless sharing of information and resources across geographical
boundaries. From an information systems perspective, the Internet serves as the backbone for various
applications and services, including:
Email: A fundamental communication tool that enables the exchange of messages and files between
users.
File Sharing: Systems that facilitate the transfer of files and documents between users or systems,
enhancing collaboration.
Online Gaming: A form of interactive entertainment that relies on real-time data exchange between
players and servers.
Figure 1.8 The internet (Adapted from Rainer and Prince, 2022).
The World Wide Web (WWW) represents a significant aspect of the Internet, functioning as a system
of interlinked hypertext documents and multimedia content. The Web operates using HTTP (Hypertext
Transfer Protocol), which allows users to retrieve web pages from servers. This system of interlinked
documents provides a rich source of information and resources, making it a vital component of
modern information systems. The WWW can be understood through several key concepts:
• Interactivity: Users can interact with web content, allowing for dynamic exchanges of
information, such as filling out forms or participating in discussions.
• Content Management: Organizations can manage and update their web content to provide
users with current and relevant information, supporting decision-making processes.
• Accessibility: The Web enables access to information from anywhere in the world, enhancing
the ability of individuals and organizations to gather data and share knowledge.
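The request-response exchange that HTTP defines can be sketched in miniature. The Python sketch below shows the shape of a browser's request and a server's reply; the host name, path, and page content are invented for illustration only.

```python
# A minimal sketch of what happens "under the hood" of web browsing:
# the browser sends an HTTP request line plus headers, and the server
# answers with a status line, headers, and the page content.
request = (
    "GET /index.html HTTP/1.1\r\n"   # method, path, protocol version
    "Host: www.example.com\r\n"      # which site we want (one server may host many)
    "\r\n"                           # blank line ends the request
)

response = (
    "HTTP/1.1 200 OK\r\n"            # status line: 200 means success
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# A browser splits the reply into headers and body before rendering the body.
headers, body = response.split("\r\n\r\n", 1)
status_code = int(headers.split()[1])
print(status_code, body)
```

A real browser performs this exchange over a network connection (today usually encrypted as HTTPS), but the message format is the same idea.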
Web Browsing is the process of navigating the World Wide Web using a web browser, a software
application designed to access and display web content. Popular web browsers, such as Google
Chrome and Mozilla Firefox, act as interfaces between users and the vast resources of the Web. Web
browsing involves several critical functions:
• Information Retrieval: Users can search for specific information quickly, enhancing the
efficiency of data access.
• User Experience: Browsers provide features like bookmarking and history, improving user
interaction with information systems.
• Data Visualization: Web browsers interpret HTML (Hypertext Markup Language) to render
content visually, making complex data more understandable.
Websites are collections of interconnected web pages hosted on the Internet and accessed through
web browsers. They serve various purposes and can be categorized into different types, such as
informative, e-commerce, social media, and educational sites, based on their functionality and
content.
Web Searching is a key function within information systems that allows users to find specific
information online using search engines. Search engines index content from various websites and
employ algorithms to rank results based on relevance and authority. This indexing system is critical
for effective information retrieval. [Link]
Figure 1.9 Search Engines (Adapted from Rainer and Prince, 2022).
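As a rough illustration of indexing and ranking, the Python sketch below builds a tiny inverted index over three made-up pages and ranks results by how many query terms each page contains. Real search engines combine many more signals (links, freshness, authority); this shows only the core idea.

```python
# A toy model of search engine indexing and ranking.
pages = {
    "page1": "python programming tutorial for beginners",
    "page2": "python snakes and other reptiles",
    "page3": "web programming with python and javascript",
}

# Build an inverted index: each word maps to the set of pages containing it.
index = {}
for page, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(page)

def search(query):
    words = query.lower().split()
    # Score each page by how many of the query words its text contains.
    scores = {}
    for word in words:
        for page in index.get(word, set()):
            scores[page] = scores.get(page, 0) + 1
    # Highest score first: a crude stand-in for relevance ranking.
    return sorted(scores, key=scores.get, reverse=True)

print(search("python programming"))
```

Searching for "python programming" ranks page1 and page3 (both words present) above page2 (only one word present).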
Online social networks are platforms that allow users to create profiles, share content, and connect
with others based on shared interests, with examples including Facebook, Twitter, Instagram,
LinkedIn, and TikTok.
Malware, or malicious software, refers to harmful programs designed to damage, disrupt, or gain
unauthorized access to computer systems. Here are the common types of malware:
1. Viruses: Attach to legitimate files and spread through shared infected files, potentially
corrupting or deleting data.
2. Worms: Standalone malware that replicates itself across networks without attaching to other
files, consuming bandwidth and exploiting vulnerabilities.
3. Trojan Horses: Disguise themselves as legitimate software, tricking users into installation, and
can create backdoors for unauthorized access.
4. Ransomware: Encrypts files and demands ransom for restoration, posing significant risks to
individuals and organizations.
5. Spyware: Monitors user activities and collects sensitive information like passwords and
browsing habits, compromising privacy.
6. Adware: Displays unwanted advertisements and may track browsing habits, affecting system
performance.
7. Rootkits: Gain unauthorized control over systems while remaining hidden, modifying
operating system functionality.
8. Keyloggers: Record keystrokes to capture sensitive information, such as passwords and credit
card numbers.
9. Bots and Botnets: Automated programs for repetitive tasks; botnets consist of networks of
infected devices used for attacks or spam.
Viruses, a type of malware, come in various forms, each with distinct characteristics and behaviors.
• File Infector Viruses attach themselves to executable files and spread when the infected files
are executed.
• Macro Viruses exploit macros in applications like Microsoft Word and Excel, executing
malicious code when a document is opened.
• Polymorphic Viruses change their code each time they infect a new file, making them difficult
to detect by traditional antivirus software.
• Metamorphic Viruses go a step further by rewriting their own code with each infection,
presenting an even greater challenge for detection systems.
• Boot Sector Viruses infect the master boot record of a storage device, executing code before
the operating system loads, which can severely compromise the system.
• Resident Viruses embed themselves in the computer's memory, allowing them to infect other
files without the need for an executable file.
Each type of virus poses unique risks and challenges, emphasizing the importance of robust
security measures to protect against their harmful effects. [Link]
Information Privacy
Information privacy is the right of individuals to manage access to their personal data and how it is
collected, stored, and used. As digital technologies evolve, protecting personal information is vital to
prevent identity theft and unauthorized access. Organizations must adhere to regulations like the
General Data Protection Regulation (GDPR), while individuals can safeguard their privacy by utilizing
strong passwords and privacy settings. [Link]
[Link]
Health Concerns
Technological advancements, while beneficial, also pose health risks. Extended screen time can lead
to digital eye strain, causing blurred vision and discomfort. Additionally, excessive use of technology
can contribute to a sedentary lifestyle, increasing the risk of obesity and cardiovascular issues. There
are also ongoing concerns about the effects of electromagnetic fields from devices. Encouraging
healthy technology habits and ergonomic practices can mitigate these health issues.
[Link]
Communication Technologies
encompass the tools, systems, and platforms that enable the transmission of information between
individuals, organizations, or devices. They are fundamental to modern communication, supporting
everything from personal interactions to business operations and global information exchange.
These technologies include wired and wireless methods, mobile communication, internet-based
communication, and satellite systems.
• Wi-Fi: Enables wireless internet connectivity for devices within a certain range.
• Bluetooth: Short-range wireless communication for devices like headphones, keyboards, and
smartphones.
• Cellular Networks (3G, 4G, 5G): Provide mobile data and voice services over long distances.
• Satellite Communication: Enables communication in remote areas via satellites.
• NFC (Near Field Communication): Allows for short-range communication, often used in
contactless payments.
Software licenses and availabilities are vital in the information systems (IS) world, regulating how
software is used, distributed, and modified. They influence legal compliance, cost management,
customization, scalability, and security within IS operations. Organizations must adhere to license
terms to avoid legal risks and ensure governance, risk, and compliance (GRC) standards.
[Link]
• A customer reports unauthorized transactions on their account. Explain step by step
how Standard Bank's information system would be used to:
a) Investigate the incident
b) Prevent similar occurrences in the future
• Compare and contrast information literacy and computer literacy in the context of
Standard Bank. Why are both types of literacy crucial for bank employees to provide
effective customer service?
5. PROBLEM-SOLVING QUESTION
Short Questions
1. What is the difference between the Internet and the World Wide Web?
2. How do search engines rank websites, and what factors influence their search algorithms?
3. What privacy concerns arise from the use of online social networks?
4. How can organizations adopt green computing practices to reduce environmental impact?
5. What are the main types of software licenses, and how do they differ?
LEARNING OUTCOMES
After reading this Section of the guide, the learner should be able to:
An operating system is a set of programs that coordinates all activities among computer or mobile
device hardware. Most operating systems perform similar functions, which include starting and
shutting down a computer or mobile device, providing a user interface, managing programs,
managing memory, and coordinating tasks.
Although an operating system can run from an external drive, in most cases an operating system
resides inside the computer or mobile device. Firmware consists of ROM chips or flash memory chips
that store permanent instructions. Different sizes of computers typically use different operating
systems because the operating systems generally are written to run on a specific type of computer.
The operating system that a computer uses is sometimes called the platform.
Between the hardware and the application software lies the operating system. The operating system
is a program that conducts the communication between the various pieces of hardware, such as the
video card, sound card, printer, and motherboard, and the applications.
Operating Systems
An operating system (OS) is a set of programs that coordinate all the activities among
computer or mobile device hardware:
❖ Manage programs
❖ Manage memory
❖ Coordinate tasks
❖ Configure devices
❖ Monitor performance
❖ Control a network
❖ Administer security
Every computer and mobile device has an operating system. Regardless of the type of
computer or device, however, operating systems provide many similar
functions. Today most operating systems perform the following important functions:
Figure 2.1 Operating Systems and its function (Adapted from Rainer and Prince, 2022).
The operating system directs the traffic inside the computer, deciding what resource will be used
and for how long.
Time Time in the CPU is divided into time slices which are measured in
milliseconds. Each task the CPU does is assigned a certain number of time
slices. When time expires, another task gets a turn. The first task must
wait until it has another turn. Since time slices are so small, you usually
can't tell that any sharing is going on. Tasks can be assigned priorities so
that high priority (foreground) tasks get more time slices than low
priority (background) tasks.
Memory Memory must also be managed by the operating system. All those
rotating turns of CPU use leave data waiting around in buffers. Care must
be taken not to lose data! One way to ease the traffic jam is to use
virtual memory, which includes disk space as part of main memory. While
it is slower to put data on a hard disk, it increases the amount of data
that can be held in memory at one time. When the memory chips get full,
some of the data is paged out to the hard disk. This is called swapping.
Windows uses a swap file for this purpose.
Input & Output Flow control is also part of the operating system's responsibilities. The
operating system must manage all requests to read data from disks or
tape and all writes to these and to printers. To speed up the output to
printers, most operating systems now allow for print spooling, where the
data to be printed is first put in a file. This frees up the processor for other
work in between the times data is going to the printer. A printer can only
handle so much data at a time. Without print spooling you'd have to wait
for a print job to finish before you can do anything else. With it you can
request several print jobs and go on working. The print spool will hold all
the orders and process them in turn.
System Performance A user or administrator can check to see whether the computer or
network is getting overloaded. Changes could be made to the way tasks
are allocated, or maybe a shopping trip is in order! System performance
measures include response time (how long it takes for the computer to
respond when data is entered) and CPU utilization (comparing the time
the CPU is working to the time it is idle).
System Security Some system security is part of the operating system, though
additional software can add more security functions. For multiple users
who are not all allowed access to everything, there must be a logon or
login procedure where the user supplies a user ID and a secret
password. An administrator sets up and manages these user accounts and
their access rights.
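The time slicing described above can be simulated in a few lines. In this Python sketch, each task needs a made-up number of time slices, and the scheduler gives each task one slice in turn until it finishes; real schedulers add priorities and preemption on top of this round-robin idea.

```python
# A minimal round-robin time-slicing simulation.
from collections import deque

# Each task is (name, number of time slices still needed); values are invented.
tasks = deque([("browser", 3), ("printer", 1), ("virus_scan", 2)])

order = []                      # records which task ran during each time slice
while tasks:
    name, remaining = tasks.popleft()
    order.append(name)          # the task gets one time slice on the CPU
    if remaining > 1:
        tasks.append((name, remaining - 1))   # not finished: back of the queue

print(order)
# → ['browser', 'printer', 'virus_scan', 'browser', 'virus_scan', 'browser']
```

Because each slice is only milliseconds long, a user perceives all three tasks as running at the same time.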
When a computer or mobile device is off, you press the power button to turn it on. The process of
starting or restarting a computer is called booting. There are two types of booting: a cold boot
and a warm boot. A cold boot means turning on a computer that has been powered off completely; a
warm boot means using the operating system to restart the computer. An operating system includes
various shut-down options. Sleep mode saves any open documents and running programs or apps to
memory, turns off all unneeded functions, and then places the computer in a low-power state.
Hibernate mode, by contrast, saves any open documents and running programs or apps to an
internal hard drive before removing power from the computer or device. A boot drive is the drive
from which your personal computer starts, which typically is an internal hard drive, such as a hard
disk or SSD.
Figure 2.2 Shutting down a computer or mobile device (Adapted from Rainer and Prince, 2022).
A Graphical User Interface (GUI) is a type of user interface that allows people to
interact with devices, such as computers, hand-held devices (MP3 players, portable
media players, gaming devices), household appliances, and office equipment, using
images rather than text commands. A GUI offers graphical icons and visual indicators,
as opposed to text-based interfaces, typed command labels, or text navigation. A
pointer is a symbol that appears on the display screen and that you move to select
objects and commands. Usually, the pointer appears as a small, angled arrow, though
text processing applications use an I-beam pointer shaped like a capital I. A pointing
device, such as a mouse or trackball, enables you to select objects on the display screen.
Menus: Most graphical user interfaces let you execute commands by selecting a choice
from a menu.
Some consider command-line interfaces difficult to use because they require exact
spelling, form, and punctuation. When working with a command-line interface, the set
of commands used to control actions is called the command language.
Managing Programs
Figure 2.3 Managing Programs (Adapted from Rainer and Prince, 2022).
Managing Memory
Figure 2.4 Managing Memory (Adapted from Rainer and Prince, 2022).
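The virtual memory and swapping described earlier can be sketched as follows: when physical memory fills up, the least recently used page is moved out to disk. The three-page capacity and the page names are invented for illustration, and real operating systems use more sophisticated replacement policies.

```python
# A toy model of memory management with swapping to disk.
from collections import OrderedDict

MEMORY_CAPACITY = 3
memory = OrderedDict()          # pages currently in RAM, least recently used first
disk = set()                    # pages swapped out to the hard disk

def access(page):
    if page in memory:
        memory.move_to_end(page)        # recently used: keep it in RAM
        return
    disk.discard(page)                  # bring the page back in if it was swapped out
    if len(memory) >= MEMORY_CAPACITY:
        victim, _ = memory.popitem(last=False)  # evict the least recently used page
        disk.add(victim)                        # ...out to the swap file
    memory[page] = True

for p in ["A", "B", "C", "A", "D"]:     # accessing "D" forces "B" out to disk
    access(p)

print(list(memory), disk)
# → ['C', 'A', 'D'] {'B'}
```

The disk is much slower than RAM, which is why heavy swapping (thrashing) makes a computer feel sluggish.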
Coordinating Tasks
The operating system determines the order in which tasks are processed. A task, or job,
is an operation the processor manages. Tasks include receiving data from an input
device, processing instructions, sending information to an output device, and
transferring items from storage to memory and from memory to storage.
While waiting for devices to become idle, the operating system places items in buffers.
A buffer is a segment of memory or storage in which items are placed while waiting to
be transferred from an input device or to an output device. An operating system
commonly uses buffers with printed documents. This process, called spooling, sends
documents to be printed to a buffer instead of sending them immediately to the printer.
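The buffering and spooling just described can be sketched as a simple queue. In this Python illustration, print requests return immediately after joining the spool, and the printer later drains the jobs in the order they arrived; the document names are made up.

```python
# A toy model of print spooling.
from collections import deque

spool = deque()                 # the buffer holding documents waiting to print

def print_request(document):
    spool.append(document)      # hand the job to the spool and return at once,
                                # so the program can keep working

def printer_run():
    printed = []
    while spool:
        printed.append(spool.popleft())   # the printer processes jobs in turn
    return printed

print_request("essay.docx")
print_request("budget.xlsx")
print_request("photo.png")

printed = printer_run()
print(printed)
# → ['essay.docx', 'budget.xlsx', 'photo.png']
```

Without the spool, the program would have to wait for each document to finish printing before doing anything else.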
Configuring Devices
A driver, short for device driver, is a small program that tells the operating system how
to communicate with a specific device. Each device connected to a computer, such as a
mouse, keyboard, monitor, printer, card reader/writer, digital camera, webcam,
portable media player, smartphone, or tablet, has its own specialised set of commands
and thus requires its own specific driver. If you attach a new device, such as a portable
media player or smartphone, to a computer, its driver must be installed before you can
use the device. Today most devices and operating systems support Plug and Play, which
means the operating system automatically configures new devices as you install
them.
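Conceptually, Plug and Play works like a lookup from device identifiers to drivers. The Python sketch below is a toy illustration; the device IDs and driver names are invented, and real operating systems match devices through hardware IDs and signed driver packages.

```python
# A toy model of Plug and Play driver lookup.
# The OS keeps a table of known drivers; when a device is attached,
# it looks up and "loads" the matching one.
drivers = {
    "usb:mouse": "generic_mouse_driver",
    "usb:printer": "inkjet_driver",
}

def attach_device(device_id):
    driver = drivers.get(device_id)
    if driver is None:
        return f"{device_id}: no driver found, please install one"
    return f"{device_id}: loaded {driver}"

print(attach_device("usb:mouse"))    # a known device configures automatically
print(attach_device("usb:webcam"))   # an unknown device needs a driver installed
```

This mirrors the user experience: known devices just work, while unrecognized ones prompt for a driver installation.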
Figure 2.5 Establishing Internet connection (Adapted from Rainer and Prince, 2022).
Monitoring Performance
Many of the first operating systems were device dependent and proprietary. Device-dependent
programs run only on a specific type or make of computer or mobile device.
Device-independent operating systems, by contrast, run on computers and mobile devices provided
by a variety of manufacturers; the advantage is that users can retain their existing applications
if they change computers.
Proprietary software is privately owned and limited to a specific vendor or computer or device
model. Backward compatible means that new versions of the operating system recognize and work
with applications written for earlier versions.
Upward compatible means that applications can run on new versions of the operating system.
Figure 2.7 Operating System categories (Adapted from Rainer and Prince, 2022).
For example, Apple iOS is available only for mobile devices, not for servers
and desktops. A desktop is an on-screen work area. A desktop operating system is a
complete operating system that works on desktops, laptops, and some tablets.
✓ Windows
✓ Mac OS
✓ UNIX
✓ Linux
✓ Chrome OS
Windows OS: The Windows operating system is a widely used OS developed by
Microsoft. It provides a variety of features that make it user-friendly, efficient, and
versatile. Here are some of the key features of the Windows operating system:
Graphical User Interface (GUI): Windows OS provides a visually rich graphical interface,
including icons, windows, and menus, making it easy for users to interact with the
system. The Start Menu, Taskbar, and File Explorer allow users to easily navigate, search,
and launch programs and manage files.
Multitasking: Windows supports multitasking, allowing users to run multiple
applications simultaneously. Users can switch between open programs via the Taskbar
or use Alt + Tab for quicker navigation.
Windows Update: Windows OS regularly receives security updates, bug fixes, and new
features via the Windows Update service. This ensures the system stays up-to-date and
secure.
File System: Windows uses the NTFS (New Technology File System), which provides
advanced features like file compression, encryption, and improved security. The File
Explorer makes it easy to manage files, folders, and storage devices.
User Accounts and Permissions: Windows allows users to create multiple user accounts,
each with its own settings and preferences. Users can have Administrator or Standard
User rights, controlling access to system settings and files.
Virtual Desktops: Windows supports virtual desktops, allowing users to organize their
workspace by creating multiple desktop environments for different tasks or projects.
Cortana: Cortana is the virtual assistant built into Windows. It allows users to perform
tasks, set reminders, search the web, and interact with their system using voice
commands.
Support for Touch and Pen Input: Windows provides support for touchscreens and stylus
input, making it ideal for tablet PCs and hybrid devices.
Microsoft Store: The Microsoft Store is an online marketplace where users can download
apps, games, movies, and other content for their device.
Networking and Internet Connectivity: Windows has robust support for networking
through Ethernet and Wi-Fi connections, allowing users to share files, connect to the
internet, and manage network settings. It also includes Remote Desktop to connect to
other computers over a network.
System Restore and Recovery: Windows includes System Restore to return the computer
to a previous state in case of issues. Windows Recovery Environment (WinRE) helps
users troubleshoot and fix problems, including resetting the system or restoring from a
backup.
Task Manager: The Task Manager allows users to monitor system performance,
including CPU, memory, disk, and network usage. It also provides information on
running processes and allows users to terminate unresponsive applications.
DirectX for Gaming: DirectX is a set of application programming interfaces (APIs) that
optimizes the system for gaming and multimedia applications. It ensures better
performance and compatibility with games and media content.
Support for Legacy Software: Windows supports older software applications through
compatibility modes, allowing users to run programs designed for previous versions of
Windows.
Accessibility Features: Windows includes built-in tools like Narrator (screen reader),
Magnifier (screen zoom), and Speech Recognition to assist users with disabilities.
Cloud Integration: Windows integrates with OneDrive, Microsoft's cloud storage service,
to store and sync files across devices. The OS also integrates with other cloud services,
making it easier for users to access their files from multiple devices.
Windows continues to evolve with each version, introducing new features and
improvements in areas such as security, performance, and user experience.
Figure 2.8 Window Operating Systems (Adapted from Rainer and Prince, 2022).
Mac OS: The Macintosh operating system is also known as Mac OS. It has earned a
reputation for its ease of use. Its latest version, macOS (formerly OS X), is a
multitasking operating system available for computers manufactured by Apple.
✓ Safari browser
✓ Open multiple desktops at once
✓ Dictated words convert to text
✓ Built in Facebook and Twitter support
✓ Mail, calendar, contacts and other items sync with iCloud, Apple’s cloud service
✓ Support Braille displays
✓ Mac App Store
Figure 2.9 MAC Operating Systems (Adapted from Rainer and Prince, 2022).
UNIX: UNIX, pronounced "YOU-nix," is a multitasking operating system developed in the
early 1970s. Power users often work with UNIX because of its flexibility and
capabilities.
Figure 2.10 UNIX Operating Systems (Adapted from Rainer and Prince, 2022).
LINUX: Linux is a popular, multitasking, UNIX-based operating system whose open-source code is
provided for use, modification, and redistribution.
Figure 2.11 LINUX Operating Systems (Adapted from Rainer and Prince, 2022).
Chrome OS: Chrome OS is a Linux-based operating system designed to work primarily with
web apps.
Figure 2.12 Google (Adapted from Rainer and Prince, 2022).
A server operating system is a multiuser operating system that organises and coordinates how
multiple users access and share resources on a network. Client computers on a network rely on
servers for access to resources. Server operating systems can handle high numbers of transactions,
support large-scale messaging and communications, and have enhanced security and backup
capabilities.
✓ Windows Server
✓ OS X Server
✓ UNIX
✓ Linux
The operating system on mobile devices and many consumer electronics is called a mobile operating
system and resides on firmware.
✓ iOS: Supported devices include the iPhone, iPod Touch, and iPad. Features
unique to recent versions of the iOS operating system include the following:
➢ Siri, a voice recognition app, enables you to speak instructions or
questions to which it takes actions or responds with speech output.
➢ Apple Pay provides a centralized, secure location for credit and debit
cards, coupons, boarding passes, loyalty cards, and mobile payment
accounts.
➢ iCloud enables you to sync mail, calendars, contacts, and other items.
➢ iTunes Store provides access to music, books, podcasts, ringtones, and
movies.
➢ Integrates with iPod to play music, video, and other media.
➢ Improves connectivity with other devices running the Mac operating
system.
➢ Mac App Store provides access to additional apps and software
updates.
➢ iOS, developed by Apple, is a mobile operating system specifically made
for Apple’s mobile devices
Figure 2.14 Mobile operation systems (Adapted from Rainer and Prince, 2022).
Figure 2.15 Window mobile OS (Adapted from Rainer and Prince, 2022).
✓ Harmony mobile OS: HarmonyOS is a mobile operating system developed by
Huawei. Initially launched in 2019, it is designed to provide a seamless and
unified experience across a variety of devices, including smartphones, tablets,
smartwatches, smart TVs, and IoT (Internet of Things) devices.
Figure 2.16 Harmony mobile OS (Adapted from Rainer and Prince, 2022).
[Link] (more on mobile
OS)
Group Activity: "OS Feature Face-Off"
Topic: Operating Systems Comparison
Duration: 30 minutes
Total Marks: 10
STEP 1: Choose one: Windows, macOS, or Linux (Pick one envelope from the front desk containing
your OS card)
your OS card)
STEP 2: Mission Brief "Your mission: Become experts of your chosen OS!" Investigation:
• How users interact with it
• How it keeps information safe
• How it organizes files
STEP 3: Create Your Battle Card "Design your OS Champion Card!" Draw and fill out:
• OS Name
• Three Superpowers (Strengths)
• One Secret Weapon (Unique Feature)
• One Weakness (Limitation)
STEP 4: Show and Tell "Time to showcase your OS Champion!" Share with the class:
• Show one cool thing your OS can do
• Tell us where it works best
• Convince us why users love it
Remember: "Have fun! Everyone's input counts!" "Be creative!" "Keep it simple and clear!"
Ready? Let's begin! 🚀
REVIEW QUESTIONS
1. Define the term, operating system. List the functions of an operating system.
2. Define the term, firmware. Name another term for an operating system.
3. List methods to start a computer or device.
4. Identify the five steps in the start-up process.
5. The ______ is the core of an operating system. Differentiate between resident and non-
resident, with respect to memory.
6. Explain the role of a boot drive.
7. List reasons why users might shut down computers or mobile devices regularly.
Differentiate between sleep mode and hibernate mode.
8. Define the term, user interface. Distinguish between GUI, natural-user, and
command-line interfaces.
9. Define the terms, foreground and background, in a multitasking operating system.
10. List steps for removing a program or app.
11. Describe how a computer manages memory. Define the term, virtual memory.
12. The technique of swapping items between memory and storage is called ______.
13. Explain what occurs during thrashing, and list steps to prevent it.
14. List actions you should take if a mobile device displays a message that it is running
low on memory.
15. Explain how a computer coordinates tasks. Define these terms: buffer, spooling, and
queue.
16. Describe the role of a driver. Explain how to find the latest drivers for a device.
17. Describe the role of a performance monitor.
18. Explain how an operating system establishes an Internet connection.
19. Explain the issues surrounding an operating system’s inclusion of additional
software.
20. Identify changes that may be made to an operating system during an automatic
update. List security concerns regarding automatic updates.
21. List file and disk management tools and describe the function of each.
References
1. Apple Inc. (2024) 'iOS Operating System Documentation', Technical Documentation Series,
14(2), pp. 45-60.
2. Asadi, A. and Kenyon, R. (2024) 'Understanding Modern Operating Systems', Journal of
Computer Science, 18(3), pp. 112-126.
3. GeeksforGeeks (2024) Mobile Operating Systems. Available at:
[Link] (Accessed: 11
December 2024).
4. GeeksforGeeks (2024) Operating Systems. Available at:
[Link] (Accessed: 11 December
2024).
5. Google LLC (2024) 'Android Platform Guide', Android Developer Documentation, 12(1), pp.
78-94.
6. Huawei Technologies (2024) 'HarmonyOS Technical Overview', Mobile Operating Systems
Review, 5(2), pp. 156-170.
7. Microsoft Corporation (2024) 'Windows Operating System Architecture', Windows Technical
Journal, 25(4), pp. 34-48.
8. Silberschatz, A., Galvin, P.B. and Gagne, G. (2024) Operating System Concepts. 11th edn. New
York: John Wiley & Sons.
9. Tanenbaum, A.S. and Bos, H. (2024) Modern Operating Systems. 5th edn. Upper Saddle River:
Pearson Education.
Chapter 3: Ethics and Privacy
LEARNING OUTCOMES
After reading this Section of the guide, the learner should be able to:
• Define ethics and explain its three fundamental tenets and the four
categories of ethical issues related to information technology.
• Discuss at least one potential threat to the privacy of the data stored in
each of three places that store personal data.
Throughout your professional career, you'll face various ethical and privacy challenges, particularly in
relation to information technology. These interconnected issues have become increasingly complex in
the digital era, with technology often making ethical decisions more complicated rather than simpler.
Take for example the Boston Red Sox case, where technology enabled unethical behavior, or consider
how implementing social computing tools for product development raises new privacy and ethical
concerns.
Understanding these issues will enable you to assess how information systems impact people both
inside and outside your organization.
A particular challenge exists for small businesses and startups: while they must protect sensitive
customer data, they often lack established ethical frameworks. The key lies in finding the right balance
between necessary information access and appropriate information use. While hiring trustworthy
employees who follow ethical guidelines helps, a fundamental question remains: do smaller
organizations have proper ethical guidelines in place to begin with?
The essence is that all organizations, regardless of size, must prioritize ethical considerations in their
operations, especially regarding information handling and privacy protection.
Defining Ethics
Ethics can be understood as the moral guidelines people follow when making decisions about what's
right and what's wrong. Making these moral choices isn't always straightforward or obvious. However,
we have access to various decision-making frameworks that can assist us in navigating these ethical
choices.
The core idea remains the same - ethics helps guide human behavior through established principles,
even though determining the right course of action can be challenging. The existence of various ethical
frameworks provides helpful structure for tackling complex moral decisions.
While many ethical frameworks exist, five key approaches stand out:
1. Utilitarian Approach
• Judges an action by its consequences, favoring the option that produces the greatest good
2. Rights Approach
• Holds that an ethical action is one that best protects the moral rights of those affected
3. Fairness Approach
• Advocates equal treatment unless unequal treatment can be justified
4. Common Good Approach
• Holds that ethical actions are those that benefit the community as a whole
5. Deontology Approach
• Judges whether the act itself is right or wrong, rather than evaluating its outcomes
This systematic approach helps transform abstract ethical principles into practical decision-making
tools, particularly valuable in business settings.
In the business world, organizations establish unique ethical guidelines to direct their members'
professional conduct. These ethical codes serve as foundational frameworks for decision-making
within organizations. A notable example is the Association for Computing Machinery (ACM), which
maintains comprehensive ethical standards for its computing professionals.
However, ethical guidelines often present complex challenges. Members of multiple professional
organizations may encounter conflicting ethical requirements - for instance, one organization might
demand strict adherence to all laws, while another might encourage resistance to laws deemed unjust.
This highlights the potential complexity in navigating different ethical standards simultaneously.
The implications of ethical choices ripple through society, affecting not just individuals but entire
organizations and communities. This underscores the importance of careful ethical consideration in
corporate decision-making, even when actions fall within legal boundaries.
The early 2000s witnessed several major corporate scandals highlighting severe ethical breaches in
financial reporting. Enron, WorldCom, and Tyco became notorious examples of corporate fraud
through illegal accounting practices. These scandals prompted the 2002 Sarbanes-Oxley Act, requiring
stricter financial controls and personal executive accountability for financial reports.
The financial sector continued to face ethical challenges. The subprime mortgage crisis revealed
widespread unethical lending practices, exposing regulatory weaknesses in both U.S. and global
financial systems, ultimately triggering a global recession. The Bernie Madoff Ponzi scheme in 2009,
resulting in a 150-year prison sentence, further exemplified financial sector misconduct.
Wells Fargo represents a more recent case of ethical failures. In 2016, the bank faced scandal when
employees, pressured by aggressive sales quotas, created approximately 2 million fraudulent accounts.
This resulted in $185 million in fines, a minimal sum compared with the bank's $5.6 billion quarterly earnings.
Further misconduct emerged in 2018 when the SEC found Wells Fargo improperly encouraging high-
fee trading, resulting in additional penalties and client reimbursements.
The rapid advancement of information technology has introduced new ethical complexities. For example, computing power doubles approximately every 18 months, which continually lowers the cost of collecting, storing, and analyzing personal data.
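That doubling rate compounds quickly, which a little arithmetic makes concrete (an illustrative calculation, not a figure from the source):

```python
def growth_factor(doubling_months: float, elapsed_months: float) -> float:
    """Multiplicative growth after `elapsed_months`, given capacity
    doubles every `doubling_months`."""
    return 2 ** (elapsed_months / doubling_months)

# Doubling every 18 months means roughly a thousand-fold increase in 15 years:
print(round(growth_factor(18, 18)))       # 2    (one doubling period)
print(round(growth_factor(18, 15 * 12)))  # 1024 (ten doublings in 15 years)
```

This is why ethical questions about data collection keep resurfacing: what was technically or economically impossible a decade ago becomes routine.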
The case of Google analyzing credit card data to track offline purchases exemplifies these emerging
ethical challenges in the digital age.
Market Context and Challenge: In 2019, retail sales data revealed a significant pattern: 90% of
purchases occurred in physical stores, with only 10% through e-commerce. This distribution posed
a challenge for digital advertising platforms like Google and Facebook in proving their advertising
effectiveness. Facebook's partnership with Square and Marketo to track in-store visits intensified
competition, responding to Google's existing AdWords store visit metrics.
Data Collection Strategy: Google implemented a multi-faceted approach to track consumer
behavior:
1. Location Tracking: Using Google Maps to monitor physical store visits
2. Digital Footprint Analysis: Collecting data from various Google services (YouTube, Gmail,
Google Play)
3. Website Monitoring: Princeton research revealed Google's tracking presence on 70% of
popular websites through Google Analytics and 50% through DoubleClick
Advanced Data Integration: Google enhanced its tracking capabilities by:
• Integrating credit card transaction data with online behavior
• Developing Google Attribution to link ad views to actual purchases
• Monitoring purchase patterns even when location tracking is disabled
Privacy Measures and Concerns: Google claims to protect user privacy through:
• Custom encryption technology
• "Double-blind" encryption process for purchase matching
• Patent-pending mathematical formulas for data anonymization
However, privacy advocates raise concerns:
• Marc Rotenberg (Electronic Privacy Information Centre) warns about increasing data
collection secrecy
• Paul Stephens (Privacy Rights Clearinghouse) questions data anonymity effectiveness
• Limited transparency about third-party partnerships handling 70% of U.S. card transactions
Consent and Transparency Issues:
• Google maintains users consent through service agreements
• Questions remain about merchant consent for credit card data sharing
• Limited disclosure about data handling processes and partnerships
• Historical reliance on loyalty program data with explicit consumer consent
The case highlights the ongoing tension between advanced marketing capabilities and privacy
concerns in the digital age.
Questions
1. Explain how Google utilizes information technology in combining and analyzing customers'
online search behavior with their real-world purchasing patterns. What technological tools
and systems enable this integration?
2. Evaluate Google's data integration practices through the lens of the three core ethical
principles: responsibility (accepting consequences), accountability (determining who is
answerable), and liability (legal obligations). How does each principle apply to Google's
handling of consumer data across digital and physical platforms?
These questions aim to examine both the technical implementation and ethical implications of
Google's comprehensive consumer tracking system.
Information Technology Ethics in the Workplace
In today's digital workplace, employees must promote responsible use of information technology, and many of the most common ethical dilemmas in business now center on how technology is used.
The growing complexity of IT applications has generated four main categories of ethical concerns:
1. Privacy Management: How organizations collect, store, and share personal information
2. Data Accuracy: Ensuring information authenticity, reliability, and correctness in collection and processing
3. Information Property: Determining ownership and control of information, including intellectual property
4. Information Access: Deciding who gets access to what information and whether access should involve fees
These categories encompass various ethical challenges faced by modern organizations. Understanding
these issues helps professionals navigate complex situations where the line between ethical and
unethical behavior may not be immediately clear. The practical scenarios provided in WileyPLUS
further illustrate these ethical challenges through real-world examples, offering context for
understanding ethical decision-making in technology-related situations.
This framework helps organizations and individuals make informed decisions about technology use
while maintaining ethical standards in the digital age.
Figure 3.1: A framework for ethical Issues (Adapted from Rainer and Prince, 2022).
CASE STUDY: QUIZLET - DIGITAL LEARNING TOOLS AND ACADEMIC INTEGRITY IN MODERN
EDUCATION
Background: Quizlet, a digital study platform, serves over 30 million users worldwide through web
and mobile applications. The platform's free learning tools reach half of U.S. high school students
and one-third of college students, generating revenue through advertising and premium
subscriptions.
Platform Ethics and Controls:
• Honor code implementation
• Anti-cheating guidelines
• Copyright protection measures
• Test material detection systems
• User-based violation reporting
The Cheating Controversy: Digital transformation of learning materials has raised new academic
integrity concerns. While most use the platform legitimately, some students have:
• Used Quizlet during online tests
• Accessed actual exam questions through the platform
Texas Christian University (TCU) Case 2018: Challenge:
• 12 students faced suspension for alleged cheating
• Students claimed inadvertent use of exam content
• University-employed tutors allegedly recommended the platform
Response:
• Initial yearlong suspensions
• Student appeals and legal challenges
• Debate over digital learning responsibility
Resolution:
• Some suspensions overturned
• Academic penalties maintained
• Ongoing appeals for other sanctions
Key Issues and Implications:
1. Educational Evolution
• Need for updated teaching methods
• Question reuse practices
• Digital resource management
2. Academic Integrity
• Defining cheating in digital age
• Student responsibility
• Faculty adaptation
3. Platform Growth
• 300 million study sets
• 50 million monthly users
• Multi-language availability
Lessons Learned: The case demonstrates the complex challenges educational institutions face in
balancing digital learning tools with academic integrity, highlighting the need for clear policies and
adapted teaching methods in the modern educational landscape.
Questions
1. Evaluate the ethical implications of using digital study platforms like Quizlet for exam preparation.
What are the moral boundaries between legitimate study aid usage and academic dishonesty?
2. Assess whether students have an ethical obligation to report when they discover actual exam
questions on digital learning platforms. Does their silence on finding current test material constitute
a form of academic misconduct?
These questions explore the intersection of digital learning tools, academic integrity, and ethical
student behavior in modern education.
3.3 Privacy
In today's digital age, privacy has evolved into a complex concept encompassing both personal and
informational dimensions. While personal privacy protects individuals from unwanted intrusion,
information privacy focuses on controlling how personal data is collected and shared. These rights,
though protected by state and federal laws, face increasing challenges in the modern technological
landscape. Courts have established that privacy rights must be balanced against broader societal
needs, sometimes yielding to public interest. The digital revolution has dramatically expanded data
collection capabilities, with personal information now constantly generated through surveillance
systems, financial transactions, communications, internet activities, and government records. This has
given rise to a sophisticated digital profiling industry, where companies like LexisNexis and Acxiom
aggregate data from various sources to create comprehensive "digital dossiers." These detailed
electronic profiles are then marketed to law enforcement agencies, employers conducting background
checks, and businesses seeking deeper customer insights, highlighting the growing tension between
privacy rights and the commodification of personal information in our increasingly connected world.
The ACLU has identified technology-enabled tracking as a major privacy threat, with surveillance
technologies creating unprecedented monitoring capabilities across multiple sectors. These
ubiquitous monitoring systems now span public spaces like airports, subways, and banks; digital
sensors in webcams, gaming devices, and smartphones; street-level surveillance through traffic
cameras and toll systems; aerial photography from mapping services; and personal identification
through passports and employee badges.
The rapid evolution of surveillance technology has been driven by several key factors: decreasing costs
of digital equipment, improved sensor technology, reduced data storage expenses, and enhanced
processing capabilities. Modern smartphones exemplify this advancement, serving as powerful
surveillance tools with processing power that has increased 13,000% since 2000, along with multi-
functional capabilities including video, photos, email, and GPS tracking.
GPS-enabled devices present unique privacy challenges through automatic geotagging of photos and
videos, location data embedded in shared images, and potential exploitation by criminals through
social media posts, photo-sharing platforms, and location metadata. This comprehensive surveillance
ecosystem represents a significant shift in privacy expectations, creating new vulnerabilities in our
connected world.
Facial recognition technology has evolved from limited checkpoint use to widespread application, with
companies like IBM and Microsoft developing smart billboards that track consumer behavior and
enable personalized in-store experiences. China's Social Credit System (SCS) represents an extreme
application of surveillance technology, monitoring citizen behavior across spending patterns, bill
payment history, and social interactions, with scores impacting access to employment, mortgages, and
education opportunities.
Social media platforms have integrated sophisticated facial recognition features, with Google and Meta
employing automated photo tagging systems, facial feature indexing, and cross-reference capabilities
that often operate without user awareness. These systems create significant privacy implications, as
photos can be matched across internet databases and potentially misused for commercial profiling,
unauthorized information gathering, and loss of public anonymity.
The emergence of affordable drone technology has introduced new surveillance challenges, with
complex legal implications as the FAA's authority extends to ground level, making privacy violations
difficult to prove within an unclear liability framework. The convergence of these various surveillance
technologies creates unprecedented privacy challenges, fundamentally altering how personal
information can be collected and used in modern society.
Questions
1. Examine both the ethical considerations and legal frameworks surrounding the use of automated license plate reading (LPR) technology. How do these systems align with current laws and moral standards?
2. Assess the ethical implications and legal considerations of Vigilant Solutions' business model -
specifically their practice of providing free LPR technology to law enforcement in exchange for
access to enforcement data and collection fees. What are the potential conflicts of interest?
3. What privacy concerns arise from the systematic collection and analysis of license plate data?
Consider both immediate and long-term implications of this data gathering and processing
capability
The legal framework surrounding workplace surveillance heavily favors employer interests, providing
minimal protection for employee privacy. Courts consistently support employer rights to monitor email
communications, review electronic documents, and track internet usage patterns. This has led to
widespread adoption of monitoring practices, with over 75% of organizations tracking employee
internet activity and two-thirds implementing URL filtering to block inappropriate sites. Organizations
typically justify this surveillance as necessary for enhanced security against malware, improved
productivity, and network protection. A telling case example involved a CIO's monitoring initiative that
tracked 13,000 employees over three months, sharing findings with CEO, HR, and Legal departments.
The results revealed questionable website visits and significant time spent on non-work activities,
ultimately leading to the implementation of URL filtering. These surveillance concerns extend beyond
the workplace, as multiple entities—including corporate organizations, government agencies, and
criminal actors—engage in various forms of monitoring. The United States continues to grapple with
finding an appropriate balance between privacy and security, determining surveillance limitations, and
addressing national security considerations, highlighting the persistent tension between organizational
security needs and individual privacy rights in the modern workplace.
In today's digital world, numerous institutions maintain vast databases of personal information. Credit
reporting agencies represent the most visible of these data custodians, but they are just one part of a
complex network of information holders. This network includes financial institutions, utility
companies, healthcare providers, educational institutions, retailers, and various government agencies,
all maintaining detailed records of individual information.
Beyond basic data management issues, deeper concerns exist about how organizations use and share
personal information. The scope of institutional data collection becomes evident when considering
examples like India's Aadhaar system, which stands as the world's largest biometric database. This
system exemplifies both the potential benefits and privacy challenges inherent in large-scale personal
data management systems.
These concerns reflect a growing awareness of the value and vulnerability of personal information in
our increasingly connected world. The challenge lies in balancing the practical needs of institutions to
maintain accurate records with the individual's right to privacy and data security. This balance becomes
more critical as organizations continue to collect and store expanding volumes of personal information.
The Freedom of Speech vs. Privacy Dilemma A fundamental challenge emerges in regulating content on online platforms. Society faces the complex task of balancing two crucial rights: freedom of
expression and individual privacy. This tension creates particular difficulties in controlling potentially
offensive or false information while preserving free speech principles.
Impact on Personal Reputation The internet has become a powerful medium where anonymous users
can post derogatory content about individuals, often leaving the targeted persons with limited options
for recourse. This situation becomes especially problematic in the employment context, as most
American companies now routinely:
Professional Consequences Negative online information can significantly impact career opportunities.
The widespread corporate practice of internet-based background checking means that unfavorable
online content, whether true or false, can seriously diminish an individual's employment prospects.
This situation highlights how the internet's role in information sharing creates new challenges for
protecting individual privacy while maintaining open communication platforms.
Corporate Privacy Guidelines Organizations increasingly recognize their responsibility to protect the
vast amounts of personal data they collect. These protections take the form of formal privacy policies
and codes that outline how customer, client, and employee information is safeguarded.
Consent Models for Data Collection: Two primary approaches exist for managing user consent:
1. Opt-out Model Currently common practice where companies collect data until customers
explicitly request them to stop. This approach automatically includes customers in data
collection.
2. Opt-in Model Preferred by privacy advocates, requiring explicit customer authorization before
any data collection begins. This approach prioritizes customer choice from the outset.
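The practical difference between the two models shows up when a customer has expressed no preference at all. A hypothetical sketch (the function and values are illustrative, not from any specific regulation):

```python
def may_collect(model, user_choice):
    """user_choice is 'yes', 'no', or None (the user never responded)."""
    if model == "opt-out":
        # Collection proceeds unless the customer explicitly said no.
        return user_choice != "no"
    if model == "opt-in":
        # Collection requires an explicit yes.
        return user_choice == "yes"
    raise ValueError("unknown consent model: " + model)

# The models diverge exactly when the user has said nothing:
print(may_collect("opt-out", None))  # True  (included by default)
print(may_collect("opt-in", None))   # False (excluded by default)
```

Silence favors the company under opt-out and the customer under opt-in, which is why privacy advocates prefer the latter.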
Technological Privacy Tools The Platform for Privacy Preferences (P3P) represents a significant advancement in privacy protection. This protocol lets websites publish their privacy practices in a standard, machine-readable format that browsers can automatically compare against individual user preferences.
Security Implementation Privacy policies, while essential, must be backed by robust security measures.
Even the most comprehensive privacy guidelines prove ineffective without proper security
enforcement mechanisms. This critical connection between privacy and security highlights the need
for both policy development and practical protection measures.
This framework demonstrates how organizations balance data collection needs with privacy
protection, emphasizing the importance of both policy creation and practical implementation.
Figure 3.2: Privacy policy guidelines (Adapted from Rainer and Prince, 2022).
GLOBAL PRIVACY REGULATIONS AND CHALLENGES
Current Global Privacy Landscape The expansion of internet usage globally has led to diverse and often
conflicting privacy regulations. About 50 countries maintain various forms of data protection laws,
while others have none. This inconsistency creates significant challenges for international businesses
navigating multiple regulatory frameworks.
The General Data Protection Regulation (GDPR), implemented in May 2018, represents the world's
most comprehensive data protection law. This regulation modernized the 1995 Data Protection
Directive to address rapid technological advancement, establishing a robust framework for data
protection and privacy rights.
The regulation defines crucial distinctions between data categories, separating basic personal
identifying information from more sensitive data such as genetic, racial, religious, and political
information. It also establishes key roles within the data protection framework: data controllers who
manage user relationships, data processors who handle data for controllers, and data subjects whose
information is being processed.
Individual rights under GDPR are extensive and include information access and transparency, data copy
requests, understanding data retention policies, correction of inaccurate information, and the "right
to be forgotten" through data deletion. These rights represent a significant advancement in personal
data protection and control.
Organizations face substantial compliance challenges, including high implementation costs averaging
$10 million for many companies, the required appointment of data protection officers, significant fines
for violations, and complex security requirements. These challenges are further complicated by
transborder data flow issues, including jurisdictional challenges in international data transfer,
questions of legal authority across borders, and the complexity of multi-country data transmission.
U.S.-EU privacy relations present particular challenges due to different approaches to privacy
protection. This has led to the development of "safe harbor" frameworks and specific regulations
governing U.S. companies handling European data. Looking to the future, the increasing complexity of
international data transfer and privacy protection necessitates continued development of
international standards, enhanced cooperation between nations, adaptation to technological changes,
and a careful balance between business needs and privacy rights.
This comprehensive framework demonstrates the ongoing challenge of protecting individual privacy
in an increasingly interconnected global digital environment, highlighting the need for continued
evolution and adaptation of privacy protection measures.
Each business discipline faces unique ethical challenges in the digital age, requiring specific attention
to privacy, security, and ethical considerations. The common thread across all fields is the need to
balance operational efficiency with ethical responsibilities and privacy protection.
SUMMARY
Ethics comprises principles guiding right/wrong choices in human behavior, founded on three
fundamental tenets: responsibility (accepting consequences of actions), accountability (identifying
who bears responsibility), and liability (legal framework for damage recovery). In the context of IT, four
major ethical categories emerge: privacy (protection of personal data), accuracy (information
correctness), property (ownership rights, including intellectual property), and access (information
availability and control).
Organizations can mitigate legal risks through comprehensive privacy policies that address data
collection practices, information accuracy, and confidentiality measures. These policies become
increasingly important as technology advances and creates new challenges for privacy protection.
Privacy, defined as the individual right to avoid unreasonable intrusion and maintain personal space,
faces several primary threats in today's digital landscape. Technological advancement presents the first
major challenge through improved data collection capabilities, enhanced surveillance methods, and
increased storage capacity. Electronic surveillance constitutes another significant threat, manifesting
through workplace monitoring, public space observation, and digital tracking systems.
Database security represents a third critical concern, encompassing personal information storage, data
aggregation risks, and unauthorized access concerns. Online platforms present the fourth major
threat, where social media exposure, information oversharing, and public accessibility of personal data
create additional vulnerabilities.
This comprehensive overview emphasizes how modern technology creates both new ethical
challenges and privacy vulnerabilities, requiring careful consideration of protection measures and
proper data management. The intersection of these various threats highlights the complex nature of
privacy protection in our increasingly digital world.
Problem-Solving Activities
a) What are the ethical implications of managers monitoring employee web activity, even
though it's legal?
b) How should we evaluate employees accessing inappropriate "sinful six" websites from an
ethical standpoint?
c) Was the security manager's decision to report browsing histories to management ethically
sound?
e) What are the best approaches for companies to handle this type of situation?
2. How might the Computer Ethics Institute's "Ten Commandments" be improved or expanded?
3. Evaluate the ACM's code of ethics - is it comprehensive enough for today's technology landscape?
Explain your reasoning.
4. What practical privacy protection measures does the Electronic Frontier Foundation recommend for
individual users?
6. Does the netiquette code provide adequate ethical guidance? Should it be more specific or broader?
8. Should universities have the right to monitor institutional email systems? Defend your position.
Chapter 4: Computer Security
LEARNING OUTCOMES
After reading this Section of the guide, the learner should be able to:
• Define the term digital (computer) security risk, and briefly describe the types of cybercriminals.
• Describe various types of Internet and network attacks, and explain ways to safeguard against these attacks.
• Discuss techniques to prevent unauthorized computer access and use.
• Discuss the types of devices available that protect computers from system failure.
• Explain the ways that software manufacturers protect against software piracy.
• Discuss how encryption, digital signatures, and digital certificates work.
• Explain the options available for backing up computer resources.
• Identify safeguards against hardware theft, vandalism, and failure.
4.1 Digital Security Risks
Digital (computer) security risks are the threats and vulnerabilities that may jeopardize the confidentiality, integrity, and availability of information and systems in the digital realm. These threats originate from multiple sources, including malware, phishing attempts, and data breaches, and can affect individuals, businesses, and governments, resulting in financial losses, reputational harm, and legal repercussions. Common types of perpetrators include:
• Hackers: Often categorized into white-hat (ethical hackers), black-hat (malicious hackers), and
gray-hat (in between). Black-hat hackers exploit vulnerabilities for personal gain, while white-hat
hackers work to improve security.
• Cybercriminals: Individuals or groups engaged in illegal activities through the internet. They may
steal personal information, financial data, or intellectual property.
• Phishers: These attackers use deceptive emails or websites to trick individuals into providing
sensitive information, such as login credentials or credit card numbers.
• Ransomware Attackers: Cybercriminals who deploy malware that encrypts a victim’s files,
demanding a ransom for the decryption key. This has become a prevalent threat in recent years.
• Insider Threats: Employees or contractors who exploit their access to systems and data for
malicious purposes, whether for personal gain or due to negligence.
• State-Sponsored Hackers: Cybercriminals employed by government agencies to conduct
espionage or sabotage against other nations or organizations.
Figure 4.1: Types of cybercrime Diagram (Adapted from Rainer and Prince, 2022).
As of 2023, cybercrime has continued to evolve with the advancement of technology. Internet and network attacks are malicious actions aimed at computer systems, networks, and data, intended to disrupt, damage, or gain unauthorized access. Common attack types, and the safeguards against them, include the following.
Phishing: Attackers disseminate deceptive emails to manipulate users into disclosing personal
information, including passwords and credit card data (Verizon, 2024).
Implement safeguards by educating users on identifying phishing efforts, utilizing email filters, and
adopting multi-factor authentication (MFA).
DDoS (Distributed Denial of Service) Attacks: Attackers inundate a target with traffic from numerous
sources, making it inaccessible to genuine users (Cloudflare, 2024).
Implement traffic monitoring and filtering, utilize load balancers, and consider DDoS mitigation
services as safeguards.
Man-in-the-Middle (MitM) Attacks: Attackers intercept and may modify communication between
two parties without their awareness (Zhang et al., 2023).
Implement safeguards by utilizing encryption protocols such as SSL/TLS, securing Wi-Fi networks with
WPA3, and employing VPNs for secure communication.
SQL Injection: Attackers exploit vulnerabilities in web applications by injecting malicious SQL code into
input fields (OWASP, 2024).
Implement safeguards such as parameterized queries, input validation, and routine security
assessments to identify vulnerabilities.
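To make the parameterized-query safeguard concrete, the following Python sketch (using the standard-library sqlite3 module and a hypothetical users table) contrasts a string-concatenated query, which crafted input can subvert, with a parameterized one that treats the same input purely as data:

```python
import sqlite3

# In-memory database with one sample account (hypothetical data for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# UNSAFE: string concatenation lets the input rewrite the query's logic.
malicious = "' OR '1'='1"
unsafe_query = "SELECT * FROM users WHERE username = '" + malicious + "'"
print(len(conn.execute(unsafe_query).fetchall()))  # 1 -- every row leaks

# SAFE: a parameterized query binds the input as a value, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE username = ?", (malicious,)
).fetchall()
print(len(safe_rows))  # 0 -- no user is literally named "' OR '1'='1"
```

The same injected string that returned every row in the concatenated version matches nothing when bound as a parameter, which is exactly the property the safeguard relies on.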
Credential Stuffing: Attackers exploit compromised usernames and passwords from one breach to
illicitly access accounts on different services (Akamai, 2023).
Implement safeguards by promoting the utilization of distinct passwords for various services and
enforcing multi-factor authentication (MFA).
Trojan Horses: Malicious software that masquerades as genuine while undermining system
security (McAfee, 2023).
Safeguards: Acquire software exclusively from official sources, keep security software regularly
updated, and inform users about the risks of trojanized software.
Insider Threats: Employees or contractors exploit their access to adversely affect the organization,
whether deliberately or inadvertently (IBM, 2024).
Implement stringent access controls, monitor user activities, and provide frequent security awareness
training.
Figure 4.2: Cybersecurity Diagram (Adapted from Rainer and Prince, 2022).
General safeguards against Internet and network attacks include:
• Firewalls: Employ both hardware and software firewalls to oversee and regulate incoming and
outgoing network traffic (Cisco, 2024).
• Encryption: Secure sensitive data with encryption during transmission and storage to prevent
unwanted access (NIST, 2023).
• Periodic Audits: Perform systematic security audits and vulnerability assessments to detect and
address deficiencies (SANS Institute, 2023).
• Patch Management: Maintain all software and systems current to safeguard against identified
vulnerabilities (Microsoft, 2024).
• Incident Response strategy: Formulate and consistently revise an incident response strategy to
promptly and efficiently manage possible breaches (CIS, 2024).
Preventing unauthorized computer access and use is crucial for maintaining the security and integrity
of information systems. Here are several effective techniques:
• Strong Password Policies: Implementing strong password requirements can significantly reduce
the risk of unauthorized access. Passwords should be complex, with a mix of letters, numbers, and
special characters, and users should be encouraged to change their passwords regularly
(Mansfield-Devine, 2020).
• Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to
provide two or more verification factors to gain access. This could include something they know
(password), something they have (a smartphone or token), or something they are (biometric
verification) (Cheng et al., 2021).
• User Access Controls: Limiting user access based on roles within the organization can prevent
unauthorized use. Implementing the principle of least privilege ensures that users have only the
access necessary for their job functions (Tso et al., 2021).
• Firewalls and Intrusion Detection Systems (IDS): Firewalls act as barriers between trusted internal
networks and untrusted external networks, while IDS can monitor network traffic for suspicious
activities and alert administrators to potential breaches (Bertino & Islam, 2021).
• Regular Software Updates: Keeping operating systems and applications up to date is vital for
security. Many updates include patches for vulnerabilities that could be exploited by unauthorized
users (Kumar et al., 2020).
• Employee Training: Conducting regular training sessions for employees on security best practices
can reduce the likelihood of human errors that lead to unauthorized access. This includes educating
them about phishing attacks and safe browsing habits (Furnell, 2020).
• Data Encryption: Encrypting sensitive data ensures that even if unauthorized individuals gain
access to the data, they cannot read it without the proper decryption keys (Wang & Li, 2022).
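One way to see how a strong password policy can be enforced in software is the Python sketch below, which checks length and character mix. The specific thresholds (12 characters, four character classes) are illustrative assumptions for this guide, not an official standard:

```python
import re

def is_strong_password(pw: str, min_length: int = 12) -> bool:
    """Check a password against a simple policy: minimum length plus a
    mix of character classes. Rules here are illustrative only."""
    if len(pw) < min_length:
        return False
    checks = [
        r"[a-z]",         # at least one lowercase letter
        r"[A-Z]",         # at least one uppercase letter
        r"[0-9]",         # at least one digit
        r"[^A-Za-z0-9]",  # at least one special character
    ]
    return all(re.search(pattern, pw) for pattern in checks)

print(is_strong_password("password123"))       # False: too short, no upper/special
print(is_strong_password("C0rrect-Horse-9!"))  # True: length and all four classes
```

A check like this is typically run at registration and password-change time, alongside MFA rather than instead of it.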
4.5 Protection Against Software Piracy
Software manufacturers employ several strategies to protect against software piracy:
• Licensing Agreements: Manufacturers often use End User License Agreements (EULAs) that legally
bind users to terms of use. This includes restrictions on copying, sharing, or modifying the software.
• Activation Keys and Serial Numbers: Many software products require users to enter a unique
activation key or serial number during installation. This key is verified against a database, and its use
can be limited to a specific number of installations.
• Digital Rights Management (DRM): DRM technologies restrict how software can be used and
shared. They may limit the number of devices on which the software can be installed, require
periodic online verification, or restrict access to certain features if the software is deemed
unlicensed.
• Obfuscation Techniques: Software developers use code obfuscation to make it difficult for
unauthorized users to understand or modify the software. This involves transforming the code into
a version that is harder to read while maintaining its functionality.
• Software Updates and Patches: Regular updates can help prevent piracy by improving security and
introducing features that may require a valid license. If users have to keep updating their software,
it creates a stronger incentive to purchase legitimate versions.
• Cloud-Based Software Models: Increasingly, manufacturers are moving to cloud-based software
solutions (Software as a Service, or SaaS), where the software is accessed online rather than installed
locally. This model reduces piracy opportunities since users need to authenticate with a server to
access the software.
• Fingerprinting and Tracking: Some software uses fingerprinting technology to identify unique
hardware configurations or usage patterns. This allows manufacturers to detect unusual activity that
may indicate unauthorized use.
• Legal Action and Education: Manufacturers often pursue legal action against major offenders and
invest in education campaigns to inform users about the risks and consequences of using pirated
software.
• Community Reporting Tools: Some companies encourage their user communities to report piracy.
This helps them identify and take action against unauthorized users or distributors.
Several devices and measures protect computers against system failure:
❖ Uninterruptible Power Supply (UPS): A UPS provides backup power during outages, allowing
computers to continue operating temporarily and shut down safely to prevent data loss or
corruption (Khan et al., 2020).
❖ Surge Protectors: These devices protect computers from voltage spikes and surges that can
occur due to lightning strikes or power fluctuations, which can damage hardware components
(Johnson & Smith, 2021).
❖ RAID (Redundant Array of Independent Disks): RAID systems use multiple hard drives to store
data redundantly. If one drive fails, the data can still be accessed from the remaining drives,
minimizing the risk of data loss (Patel et al., 2022).
❖ Backup Drives and NAS (Network Attached Storage): Regularly backing up data to external
drives or NAS systems ensures that critical information can be recovered in case of a system
failure (Lee & Chen, 2023).
❖ Cooling Systems: Proper cooling devices, such as fans or liquid cooling systems, help prevent
overheating, which can lead to system failures (Nguyen, 2024).
❖ Redundant Power Supplies: Some servers come with redundant power supplies, which means
that if one power supply fails, the other can take over without interrupting the system
(Anderson, 2021).
❖ Disk Imaging Software: This software creates a complete image of the system's hard drive,
allowing for quick recovery in the event of a system failure (O'Brien, 2023).
❖ Cloud Backup Services: Utilizing cloud-based solutions for data backup provides an offsite
recovery option, ensuring that data is safe even if local hardware fails (Williams & Patel, 2022).
❖ Firewalls and Security Software: Protecting systems from malware and cyber threats helps
prevent failures caused by malicious attacks (Miller, 2024).
❖ System Monitoring Tools: These tools provide alerts about potential hardware issues or
system performance problems, allowing for proactive maintenance before failures occur
(Davis, 2023).
Encryption, Digital Signatures, and Digital Certificates
Encryption
Encryption is the process of converting plain text into ciphertext to prevent unauthorized access to
the information. It ensures that only authorized users can read the data. There are two main types of
encryption:
1. Symmetric Encryption: This method uses the same key for both encryption and decryption.
It’s efficient for large amounts of data but requires secure key management to prevent
unauthorized access. Common algorithms include AES (Advanced Encryption Standard) and
DES (Data Encryption Standard) (Rivest, 2021).
2. Asymmetric Encryption: This method uses a pair of keys: a public key for encryption and a
private key for decryption. The public key can be shared openly, while the private key is kept
secret. This approach is often used for secure communications and digital signatures. RSA
(Rivest-Shamir-Adleman) is a widely used asymmetric encryption algorithm (Kumar & Singh,
2023).
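The defining property of symmetric encryption, that the same key both encrypts and decrypts, can be seen in a deliberately toy sketch below. The XOR cipher shown is NOT a real algorithm and is insecure; production systems use vetted ciphers such as AES. It is here only to make the shared-key idea tangible:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Demonstrates the same-key-encrypts-and-decrypts property only;
    real systems use vetted algorithms such as AES, never this."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
ciphertext = xor_cipher(b"transfer R1000 to account 42", key)
plaintext = xor_cipher(ciphertext, key)  # the SAME key reverses it
print(plaintext)  # b'transfer R1000 to account 42'
```

In asymmetric encryption, by contrast, the key used to encrypt (the public key) cannot reverse the operation; only the separate private key can.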
Digital Signatures
Digital signatures provide a way to verify the authenticity and integrity of a message or document. The
process involves:
1. Hashing: The sender generates a hash (a fixed-size string of characters) from the message
using a hash function, which converts the original data into a unique representation (Zhang,
2022).
2. Signing: The sender encrypts the hash with their private key to create the digital signature.
This signature is unique to both the message and the sender.
3. Verification: The recipient decrypts the digital signature using the sender's public key to
retrieve the hash. They also generate a new hash from the received message. If both hashes
match, it confirms that the message is authentic and has not been altered (Patel et al., 2024).
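The hashing and comparison steps of this process can be sketched with Python's standard hashlib module. Note that a real digital signature also encrypts the digest with the sender's private key (e.g. using RSA), which the standard library does not provide; this sketch shows only how re-hashing the received message detects tampering:

```python
import hashlib

def sha256_hash(message: bytes) -> str:
    # Step 1 of signing: derive a fixed-size digest from the message.
    return hashlib.sha256(message).hexdigest()

original = b"Pay R500 to supplier A"
digest_at_signing = sha256_hash(original)   # this is what the sender would sign

# Verification: the recipient re-hashes what they received and compares.
received_ok = b"Pay R500 to supplier A"
received_tampered = b"Pay R5000 to supplier A"

print(sha256_hash(received_ok) == digest_at_signing)        # True: unaltered
print(sha256_hash(received_tampered) == digest_at_signing)  # False: tampering detected
```

Even a one-character change produces a completely different digest, which is why the hash comparison in step 3 is a reliable integrity check.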
Digital Certificates
Digital certificates are electronic documents used to prove the ownership of a public key. They are
issued by trusted entities known as Certificate Authorities (CAs) and contain the following information:
1. Public Key: The public key of the entity to whom the certificate is issued.
2. Identity Information: Details about the entity, such as name, organization, and address.
3. Expiration Date: The date when the certificate will no longer be valid.
4. Signature of the CA: The CA’s digital signature, which verifies that the certificate has not been
tampered with and is legitimate (Smith & Johnson, 2023).
Backing up computer resources is crucial for safeguarding data and facilitating recovery in the event
of hardware malfunctions, data corruption, or cyberattacks. A variety of options exist for backing up
computer resources:
• Local Backups: This approach stores copies of data on physical devices, such as external hard
drives, USB flash drives, or network-attached storage (NAS). Local backups are readily
accessible and efficient but may be susceptible to physical damage or theft (Klein & Schneider,
2020).
• Cloud Backups: Cloud-based solutions allow users to store data on remote servers managed
by third-party providers. This approach offers scalability, remote accessibility, and protection
against local disasters. However, it requires a reliable internet connection
and may incur recurring subscription fees (Jones & Smith, 2021).
• Incremental Backups: This method stores only the changes made since the previous backup,
reducing storage requirements and backup time. It is frequently used alongside full backups,
in which a complete copy is created periodically, followed by routine incremental backups
(Brown et al., 2022).
• Full Backups: A full backup generates a comprehensive duplicate of all designated data at a
particular moment in time. This method is thorough and facilitates restoration, but it
necessitates considerable storage capacity and time to execute (Lee, 2023).
• Snapshot Backups: This feature records the condition of a system at a particular instant,
facilitating rapid recovery of data and system configurations. Snapshots are especially
beneficial in virtualized settings (Garcia, 2024).
• Disaster Recovery Solutions: These are holistic strategies that combine multiple backup
methods and include plans for restoring data after a major incident. They
are crucial for enterprises to ensure business continuity (Anderson & Peters, 2022).
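The core selection step of an incremental backup, deciding which files have changed since the last backup, can be sketched in Python by comparing each file's content hash against a stored manifest. The file names and directory here are hypothetical stand-ins for real documents:

```python
import hashlib
import os
import tempfile

def file_digest(path: str) -> str:
    """Hash a file's contents so changes can be detected reliably."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(paths, manifest):
    """Return the files whose digest differs from the last-backup manifest --
    only these need copying in the next incremental backup."""
    return [p for p in paths if manifest.get(p) != file_digest(p)]

# Demo: two temporary files standing in for real documents.
tmp = tempfile.mkdtemp()
a, b = os.path.join(tmp, "a.txt"), os.path.join(tmp, "b.txt")
for p in (a, b):
    with open(p, "w") as f:
        f.write("original contents")

manifest = {p: file_digest(p) for p in (a, b)}  # state after the last full backup
with open(b, "w") as f:
    f.write("edited contents")                   # only b changes afterwards

print(changed_files([a, b], manifest))  # only b.txt needs backing up
```

Real backup tools add compression, retention schedules, and restore chains on top of this idea, but the change-detection principle is the same.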
Safeguard Measures
• Physical Security Measures: Utilize locks, security cameras, and access control systems to prevent
unauthorized access to hardware (Higgins, 2020). This includes securing server rooms and using
cable locks for laptops and other portable devices.
• Environmental Controls: Ensure that hardware is protected from environmental factors such as
extreme temperatures, humidity, and dust. This can be achieved through proper HVAC systems and
regular maintenance (Smith, 2021).
• Data Encryption and Backup: Implement encryption for sensitive data stored on hardware to
prevent data breaches in case of theft. Regular backups can also protect against data loss due to
hardware failure (Jones, 2022).
• Asset Management: Maintain an inventory of all hardware assets, which helps in tracking and
recovering stolen items. Regular audits can also help identify any discrepancies (Lee, 2023).
• Insurance: Investing in insurance policies that cover hardware theft and damage can provide
financial protection and facilitate recovery (Taylor, 2022).
Case study
Phishing Attack on a Financial Institution
Scenario: In 2019, a major bank reported that thousands of its customers had fallen victim to phishing
attacks. Cybercriminals sent fake emails that appeared to come from the bank, prompting recipients
to click on malicious links. Once clicked, users were redirected to a fake login page, where their
account credentials were stolen.
Risk: Phishing is a form of social engineering that targets individuals by tricking them into providing
sensitive information, such as usernames, passwords, and credit card details. Once attackers gain
access to the accounts, they can siphon funds or commit identity fraud.
Impact: The bank had to reimburse customers for stolen funds and invest in further security measures,
such as multifactor authentication, to prevent future incidents.
Ransomware Attack on a Hospital System
Scenario: In 2021, a ransomware attack hit a hospital system in Ireland, encrypting critical data,
including patient records. The attackers demanded a large ransom to unlock the data. The hospital's
operations were severely disrupted, causing delays in surgeries and treatments.
Risk: Ransomware attacks involve malware that encrypts a victim's data, making it inaccessible.
Attackers demand a ransom to restore access, and often target healthcare institutions because their
services are critical, and they may be more likely to pay.
Impact: The hospital faced operational downtime, loss of critical patient data, and the risk of
compromised patient privacy. Some patients had to be transferred to other facilities, and the hospital
incurred significant recovery costs.
Data Breach at a Social Media Company
Scenario: In 2018, Facebook experienced a major data breach where the personal information of 50
million users was compromised. Hackers exploited a vulnerability in the "View As" feature, allowing
them to take over user accounts and access private messages, photos, and posts.
Risk: A data breach occurs when unauthorized individuals access confidential information. In this case,
hackers exploited a software vulnerability, gaining access to sensitive user data.
Impact: Facebook faced legal and reputational consequences, with the breach leading to public
outrage, governmental scrutiny, and hefty fines under data protection laws. The company also had to
address security vulnerabilities and improve user privacy controls.
Supply Chain Attack on SolarWinds
Scenario: The SolarWinds attack in 2020 affected multiple organizations, including government
agencies and private companies. Hackers compromised SolarWinds' software update system,
inserting malicious code that was then distributed to thousands of customers via a legitimate software
update.
Risk: A supply chain attack targets third-party suppliers or service providers to infiltrate organizations
indirectly. In this case, the malicious code allowed attackers to spy on targeted organizations, gaining
access to sensitive data and communications.
Impact: The attack compromised several high-profile U.S. government agencies and private firms,
leading to a significant national security incident. The financial costs of remediation and reputational
damage were enormous, and the breach raised awareness about the need for tighter security across
the supply chain.
Insider Threat at a Financial Firm
Scenario: In 2019, an employee at a financial firm stole sensitive client data and sold it on the dark
web. The employee had legitimate access to the information as part of their role but abused their
access for personal gain.
Risk: An insider threat arises when someone within an organization, such as an employee or
contractor, intentionally or unintentionally compromises security. In this case, the threat was
intentional.
Impact: The breach resulted in financial losses for clients, legal consequences for the employee, and
a tarnished reputation for the firm. It also highlighted the importance of implementing robust internal
security measures like monitoring, data access controls, and employee training.
Compromise of Smart Home Devices
Scenario: In 2020, a hacker accessed a family's smart home devices, including security cameras and
thermostats. The hacker gained access by exploiting weak passwords, causing distress for the family
as the hacker took control of the devices remotely and spied on their activities.
Risk: IoT (Internet of Things) devices, such as smart cameras, thermostats, and voice assistants, are
vulnerable to hacking if they are not properly secured. Often, these devices lack robust security
measures, making them easy targets for cybercriminals.
Impact: The family experienced a violation of privacy and had to replace or update the compromised
devices. This incident raised awareness about the need for stronger password protection and network
security for smart home devices.
Key Takeaways
• Phishing and ransomware attacks are common, particularly in sectors like finance and
healthcare.
• Data breaches can result in significant financial and reputational damage, especially for
companies that handle large amounts of user data.
• Supply chain attacks highlight the importance of securing not just internal systems, but also
third-party partners.
• Insider threats remain a significant challenge, emphasizing the need for internal security
measures.
• IoT vulnerabilities show that even personal devices in homes can be targeted, stressing the
need for vigilance across all digital platforms.
Class activity
• Define the term, digital (computer) security risks, and briefly describe the types of
cybercriminals.
• Describe various types of Internet and network attacks and explain ways to safeguard against
these attacks.
• Discuss techniques to prevent unauthorized computer access and use.
• Discuss the types of devices available that protect computers from system failure.
• Explain the ways that software manufacturers protect against software piracy.
• Discuss how encryption, digital signatures, and digital certificates work.
• Explain the options available for backing up computer resources.
• Identify safeguards against hardware theft, vandalism, and failure.
References:
1. Akamai Technologies (2023) 'Global Security Report', Credential Stuffing Analysis, 15(2), pp.
34-49.
2. Anderson, J. and Peters, M. (2022) 'Modern Disaster Recovery Solutions', Journal of
Information Security, 8(4), pp. 156-171.
3. Bertino, E. and Islam, N. (2021) 'Network Security Fundamentals', International Journal of
Computer Security, 12(3), pp. 89-104.
4. Cisco Systems (2024) 'Network Security Report', Firewall Implementation Guide, 10(1), pp. 45-
62.
5. Cloudflare (2024) 'DDoS Attack Trends', Annual Security Review, 7(1), pp. 12-28.
6. IBM Corporation (2024) 'Insider Threat Report', Security Intelligence Review, 9(2), pp. 78-93.
7. McAfee (2023) 'Malware Analysis Report', Digital Security Review, 11(4), pp. 167-182.
8. NIST (2023) 'Cryptography Standards', Special Publication Series, 6(3), pp. 90-105.
9. OWASP Foundation (2024) 'Web Application Security Guide', OWASP Top 10, 2024 Edition.
10. SANS Institute (2023) 'Security Audit Framework', Information Security Reading Room,
Technical Report Series.
11. Symantec Corporation (2023) 'Internet Security Threat Report', Annual Cybersecurity Review,
24(1).
12. Verizon (2024) 'Data Breach Investigations Report', Annual Security Analysis, 2024 Edition.
Chapter 5: Data and Knowledge Management
LEARNING OUTCOMES
After reading this Section of the guide, the learner should be able to:
5.1 Discuss ways that common challenges in managing data can be addressed using data
governance.
5.2 Identify and assess the advantages and disadvantages of relational databases.
5.3 Define Big Data and explain its basic characteristics.
5.4 Explain the elements necessary to successfully implement and maintain data warehouses.
5.5 Describe the benefits and challenges of implementing knowledge management systems in
organizations.
5.6 Understand the processes of querying a relational database, entity-relationship modeling, and
normalization and joins.
5.1 Introduction
In today's digital age, every click, transaction, and interaction generates data, making the ability to
manage and leverage this information crucial for modern organizations. Think of data management
like organizing a vast digital library - it's not just about storing books (or in this case, data), but making
sure you can find exactly what you need when you need it.
Organizations worldwide are investing heavily in data management - and for good reason. Just as a
well-organized library helps students find resources efficiently, effective data management helps
businesses make smarter decisions, serve customers better, and stay ahead of competitors. However,
this comes with its challenges: imagine trying to organize and make sense of a library that grows by
thousands of volumes every second!
Modern businesses are like learning organisms, constantly absorbing and processing information
through sophisticated database systems and knowledge management tools. These systems help
transform raw data into useful insights, much like how a skilled researcher synthesizes information
from multiple sources to form valuable conclusions.
Whether you're planning to work in technology, marketing, finance, or any other field, understanding
data management is no longer optional - it's as fundamental as knowing how to read and write. Poor
data management can be as problematic as a disorganized filing system, while good practices can give
organizations a powerful competitive edge.
This chapter will walk you through the essential concepts of data and knowledge management,
helping you understand how organizations capture, organize, and use information in today's digital
world. We'll explore database design, knowledge management systems, and how businesses turn vast
amounts of data into actionable insights.
Data management encompasses the comprehensive practices and procedures organizations employ
to handle their information assets. This involves not just storing data, but ensuring its accessibility,
accuracy, and security throughout its entire lifecycle. The impact of poor data management can be
severe and far-reaching, affecting everything from daily operations to strategic planning.
Consider a hospital's patient records system. Every aspect of patient care depends on accurate,
accessible data. When a patient arrives in the emergency room, medical staff need immediate access
to their medical history, allergies, and current medications. Any delay or inaccuracy in this data could
have life-threatening consequences. This exemplifies why data quality must be measured across
multiple dimensions:
Accuracy refers to the correctness of stored information. In financial systems, even a small decimal
point error could result in significant monetary discrepancies. Banks employ sophisticated validation
systems to ensure transaction amounts are recorded correctly across all systems.
Completeness ensures all necessary information is present. A customer order isn't truly complete
without shipping address, payment information, and product details. Missing any of these elements
could halt the entire fulfillment process.
Timeliness relates to how current the information is and how quickly it's available when needed. Stock
trading systems require microsecond-level timeliness, as even slight delays can result in significant
financial losses.
Consistency means that data remains uniform across different systems and locations. When a
customer updates their address, this change should reflect across all relevant databases and
applications simultaneously.
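Some of these quality dimensions can be checked automatically. As a sketch, the Python function below flags missing or empty required fields in a record, a simple, automatable test for the completeness dimension. The field names and patient record are hypothetical:

```python
# Required fields for a hypothetical hospital patient record.
REQUIRED_FIELDS = {"patient_id", "name", "allergies", "medications"}

def completeness_issues(record: dict) -> list:
    """Return the required fields that are missing or empty --
    a basic check for the 'completeness' data-quality dimension."""
    issues = []
    for field in sorted(REQUIRED_FIELDS):
        value = record.get(field)
        if value is None or value == "":
            issues.append(field)
    return issues

record = {"patient_id": "P-1029", "name": "T. Mokoena", "allergies": ""}
print(completeness_issues(record))  # ['allergies', 'medications']
```

Accuracy and consistency checks follow the same pattern: encode the rule (a value range, a cross-system comparison) and run it against every record.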
One of the primary challenges of data management is the sheer volume of data that organizations
must handle. Major corporations, like Google and Facebook, process petabytes of data daily,
necessitating robust data storage and processing solutions. This overwhelming volume requires
organizations to implement scalable technologies that can accommodate growth while ensuring
efficient data retrieval.
Another challenge is data scattering, which occurs when data is distributed across multiple systems
and locations. This fragmentation can hinder an organization’s ability to access and analyze
comprehensive datasets, thereby impacting decision-making processes.
Organizations also contend with multiple data sources, which may include internal records such as
customer databases and external sources such as social media. New data sources, such as Internet of
Things (IoT) devices, further complicate the data landscape by generating real-time data streams that
need immediate processing.
The concept of data rot refers to the degradation of storage media over time and the obsolescence of
formats and systems, posing a significant risk to data integrity. Furthermore, organizations must
establish robust data security protocols to safeguard data quality and integrity across global
operations.
For instance, a bank that loses critical customer data due to a lack of proper data governance policies
can face severe repercussions, including loss of customer trust and regulatory fines.
Data Governance
Data governance is defined as a formal approach to managing data across an organization, ensuring
its accuracy, security, and accessibility. Effective data governance involves the establishment of
policies, procedures, and standards that guide how data is managed and utilized. It ensures that data
is reliable and meets regulatory requirements.
For example, pharmaceutical companies must adhere to stringent data governance protocols to
comply with regulations such as the Health Insurance Portability and Accountability Act (HIPAA). By
implementing effective data governance practices, organizations can mitigate risks associated with
data mismanagement and enhance data quality.
Master data management is a process that spans all of an organization's business processes and
applications. It provides companies with the ability to store, maintain, exchange, and synchronize a
consistent, accurate, and timely "single version of the truth" for the company's master data.
Master data are a set of core data, such as customer, product, employee, vendor, geographic location,
and so on, that span the enterprise's information systems. It is important to distinguish between
master data and transactional data. Transactional data, which are generated and captured by
operational systems, describe the business's activities, or transactions. In contrast, master data are
applied to multiple transactions, and they are used to categorize, aggregate, and evaluate the
transactional data.
From the mid-1950s, when businesses first adopted computer applications, until the early 1970s,
organizations managed their data in a file management environment. This environment evolved
because organizations typically automated their functions one application at a time. Therefore, the
various automated systems developed independently from one another, without any overall planning.
Each application required its own data, which were organized in a data file.
A data file is a collection of logically related records. In a file management environment, each
application has a specific data file related to it. This file contains all of the data records the application
requires. Over time, organizations developed numerous applications, each with an associated,
application-specific data file.
For example, imagine that most of your information is stored in your university’s central database. In
addition, however, a club to which you belong maintains its own files, the athletics department has
separate files for student athletes, and your instructors maintain grade data on their personal
computers. It is easy for your name to be misspelled in one of these databases or files. Similarly, if you
move, then your address might be updated correctly in one database or file but not in
the others.
Using databases eliminates many problems that arose from previous methods of storing and accessing
data, such as file management systems. Databases are arranged so that one set of software
programs—the database management system—provides all users with access to all of the data.
Database systems also maximize the following:
Data security: Because data are “put in one place” in databases, there is a risk of losing a lot of data
at one time. Therefore, databases must have extremely high security measures in place to minimize
mistakes and deter attacks.
Data integrity: Data meet certain constraints; for example, there are no alphabetic characters in a
Social Security number field.
Data independence: Applications and data are independent of one another; that is, applications and
data are not linked to each other, so all applications are able to access the same data.
The data hierarchy refers to the organized structure of data within a database, ranging from the
smallest units of data, such as bits and bytes, to larger collections, such as databases. Understanding
this hierarchy is essential for effective data management.
At the base level, bits and bytes represent the fundamental building blocks of data. Moving up the
hierarchy, bytes group into fields (individual data elements), which form records (complete sets of
related data). Records are then grouped into files (related records), and multiple files comprise a
database, which is a comprehensive collection of related data organized for easy access and
management.
For example, in a hospital setting, each patient’s medical record can be broken down into fields like
patient ID, name, and medical history. These fields are grouped into records representing each
patient, and all patient records are stored within the hospital’s central database.
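The hierarchy in the hospital example can be mirrored in plain data structures. The sketch below (with hypothetical patient data) shows how fields group into a record, records into a file, and files into a database:

```python
# Data hierarchy mirrored in plain Python structures:
# bits/bytes -> fields -> records -> files -> database.

field = "P-1029"                                  # one field (a single data element)
record = {"patient_id": "P-1029",                 # fields grouped into one record
          "name": "T. Mokoena",
          "history": "asthma"}
patients_file = [record,                          # related records form a file
                 {"patient_id": "P-2044",
                  "name": "A. Naidoo",
                  "history": "none"}]
database = {"patients": patients_file,            # multiple files form a database
            "appointments": []}

print(len(database["patients"]))        # 2 records in the patients file
print(database["patients"][0]["name"])  # drill down: file -> record -> field
```

A real DBMS adds indexing, security, and concurrency on top of this layering, but the conceptual structure is the same.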
A database management system (DBMS) is a set of programs that provide users with tools to create
and manage a database. Managing a database refers to the processes of adding, deleting, accessing,
modifying, and analysing data that are stored in a database. An organization can access these data by
using query and reporting tools that are part of the DBMS or by utilizing application programs
specifically written to perform this function. DBMSs also provide the mechanisms for maintaining the
integrity of stored data, managing security and user access, and recovering information if the system
fails. Because databases and DBMSs are essential to all areas of business, they must be carefully
managed. There are a number of different database architectures, but we focus on the relational
database model because it is popular and easy to use. Other database models—for example, the
hierarchical and network models—are the responsibility of the MIS function and are not used by
organizational employees. Popular examples of relational databases are Microsoft Access and Oracle.
Most business data—especially accounting and financial data— traditionally were organized into
simple tables consisting of columns and rows. Tables enable people to compare information quickly
by row or column. Users can also retrieve items rather easily by locating the point of intersection of a
particular row and column.
The relational database model is based on the concept of two-dimensional tables. A relational
database generally is not one big table (usually called a flat file) that contains all of the records and
attributes. Such a design would entail far too much data redundancy. Instead, a relational database is
usually designed with a number of related tables. Each of these tables contains records (listed in rows)
and attributes (listed in columns).
To be valuable, a relational database must be organized so that users can retrieve, analyze, and
understand the data they need. A key to designing an effective database is the data model. A data
model is a diagram that represents entities in the database and their relationships. An entity is a
person, a place, a thing, or an event such as a customer, an employee, or a product about which an
organization maintains information. Entities can typically be identified in the user’s work environment.
A record generally describes an entity. An instance of an entity refers to each row in a relational table,
which is a specific, unique representation of the entity. For example, your university’s student
database contains an entity called “student.” An instance of the student entity would be a particular
student. Thus, you are an instance of the student entity in your university’s student database.
Each characteristic or quality of a particular entity is called an attribute. For example, if our entities
were a customer, an employee, and a product, entity attributes would include customer name,
employee number, and product colour.
Every record in the database must contain at least one field that uniquely identifies that record so that
it can be retrieved, updated, and sorted. This identifier field (or attribute) is called the primary key.
For example, a student record in a U.S. university would use a unique student number as its primary
key. (Note: In the past, your Social Security number served as the primary key for your student record.
However, for security reasons, this practice has been discontinued.)
In some cases, locating a particular record requires the use of secondary keys. A secondary key is
another field that has some identifying information but typically does not identify the record with
complete accuracy. For example, the student’s major might be a secondary key if a user wanted to
identify all of the students majoring in a particular field of study. It should not be the primary key,
however, because many students can have the same major. Therefore, it cannot uniquely identify an
individual student. A foreign key is a field (or group of fields) in one table that uniquely identifies a
row of another table. A foreign key is used to establish and enforce a link between two tables.
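The three kinds of keys can be illustrated with a small SQLite sketch; the table and column names here are hypothetical, not taken from any particular system:

```python
import sqlite3

# Hypothetical schema illustrating primary, secondary, and foreign keys.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Student_ID is the primary key: it uniquely identifies each student record.
conn.execute("""
    CREATE TABLE Student (
        Student_ID INTEGER PRIMARY KEY,
        Student_Name TEXT,
        Major TEXT          -- a possible secondary key: identifying, but not unique
    )
""")

# Student_ID in Enrollment is a foreign key: it links each enrollment
# back to exactly one row in the Student table.
conn.execute("""
    CREATE TABLE Enrollment (
        Enrollment_ID INTEGER PRIMARY KEY,
        Student_ID INTEGER,
        Course TEXT,
        FOREIGN KEY (Student_ID) REFERENCES Student (Student_ID)
    )
""")

conn.execute("INSERT INTO Student VALUES (1, 'John Jones', 'IT')")
conn.execute("INSERT INTO Enrollment VALUES (10, 1, 'Databases')")

# The foreign-key constraint rejects enrollments for students who do not exist.
try:
    conn.execute("INSERT INTO Enrollment VALUES (11, 999, 'Networks')")
except sqlite3.IntegrityError as e:
    print("Rejected:", e)
```

This is how the link between two tables is "established and enforced": the database itself refuses rows that would break the relationship.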
Organizations implement databases to efficiently and effectively manage their data. There are a
variety of operations that can be performed on databases. As we noted earlier in this chapter,
organizations must manage huge quantities of data. Such data consist of structured and unstructured
data and are called Big Data. Structured data is highly organized in fixed fields in a data repository such
as a relational database. Structured data must be defined in terms of field name and type (e.g.,
alphanumeric, numeric, and currency).
Unstructured data is data that does not reside in a traditional relational database. Examples of
unstructured data are e-mail messages, word processing documents, videos, images, audio files,
PowerPoint presentations, Facebook posts, tweets, snaps, ratings and recommendations, and Web
pages. Industry analysts estimate that 80 to 90 percent of the data in an organization is unstructured.
To manage Big Data, many organizations are using special types of databases, which we also discuss
in the next section. Because operational databases typically process data in real time (or near real
time), it is not practical to let users run analyses directly against them. After all, the data would
change while the user is examining them! As a result, data warehouses have been developed to give
users stable access to data for decision making.
Organizations today are collecting data from an increasing number of diverse sources at an
unprecedented rate. This includes data from events that were once not considered measurable, such
as a person’s location, engine temperatures, and even the stress levels on a bridge. Modern data
collection has evolved to capture almost everything for analysis.
According to IDC, more than a zettabyte of data is generated globally each year, and this amount is
increasing by 50% annually. In 2000, only 25% of the world’s stored information was in digital form,
while by 2019, more than 98% of stored data was digital.
Big Data systems excel because they handle vast amounts of data, improving over time by identifying
key patterns as more data is fed into the system. According to Gartner, Big Data is defined as diverse,
high-volume, and high-velocity information that requires innovative processing techniques to
support decision-making, uncover insights, and optimize business processes. The Big Data Institute
adds that Big Data is made up of structured, unstructured, and semi-structured data, which is
generated at a rapid pace and does not fit neatly into traditional relational databases. Big Data
typically comes from the following sources:
Traditional enterprise data: This consists of customer information from customer relationship
management (CRM) systems, transactional data from enterprise resource planning (ERP) systems,
web store transactions, operations data, and general ledger data.
Machine-generated or sensor data: Examples include data from smart meters, sensors in
smartphones, airplane engines, and industrial machinery.
Social data: This includes feedback from customers, microblog posts (such as those on Twitter), and
content from social media platforms such as Facebook, YouTube, and LinkedIn.
Images and video: Collected by billions of devices worldwide, including digital cameras, camera
phones, medical scanners, and security cameras.
For example, Facebook’s 2.4 billion users upload more than 350 million photos every day. Twitter
users send approximately 550 million tweets each day, and YouTube receives over 300 hours of video
uploads per minute from its 1.3 billion users.
Big Data can be identified by three main characteristics: volume, velocity, and variety. Volume refers
to the sheer scale of data being generated, such as the vast amount of data produced by sensors in
a single jet engine—up to 10 terabytes in just 30 minutes. Velocity describes the speed at which data
flows into organizations, which has increased dramatically, allowing companies to quickly analyse
customer interactions and offer real-time recommendations. Variety refers to the multiple forms that
data can take, ranging from structured financial records to unstructured content like satellite images,
audio streams, and social media posts.
Even if some forms of data may not seem immediately valuable, they can provide deep insights when
properly analysed. A good example is Google's decision to harness satellite imagery and street views;
although their potential wasn’t immediately recognized, they have since proven to be invaluable data
sources.
While Big Data offers immense value, it also presents several challenges:
Data from untrusted sources: Since Big Data can come from many diverse sources, including social
media and external websites, not all data is reliable. For example, tweets or user-generated content
may come from unverified or inaccurate sources.
Dirty data: Dirty data refers to incomplete, incorrect, or duplicate data. This can include typographical
errors, duplicate information from press releases or social media shares, or even incorrect data from
user input. These inaccuracies can distort the analysis and lead to incorrect conclusions.
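A minimal sketch of how dirty data might be cleansed before analysis, assuming hypothetical user-rating records:

```python
# Minimal data-cleansing sketch for the "dirty data" problem described above:
# drop exact duplicates and obviously invalid entries before analysis.
# The records and the 1-5 rating scale are hypothetical.

raw_records = [
    {"user": "alice", "rating": 5},
    {"user": "alice", "rating": 5},   # duplicate (e.g., a re-shared post)
    {"user": "bob",   "rating": 47},  # typographical error: out of range
    {"user": "carol", "rating": 4},
]

seen = set()
clean = []
for rec in raw_records:
    key = (rec["user"], rec["rating"])
    if key in seen:
        continue                      # drop exact duplicates
    if not 1 <= rec["rating"] <= 5:
        continue                      # drop out-of-range ratings
    seen.add(key)
    clean.append(rec)

print(len(clean))  # 2 valid, unique records remain
```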
Rapid changes in data: Data streams can change rapidly, which presents a challenge when trying to
maintain data quality. For example, a utility company analyzing smart-meter data may encounter
incomplete data in real-time, complicating predictions about power usage.
Organizations are now developing strategies to manage Big Data effectively and extract valuable
insights. Traditional relational databases are often insufficient for handling the complexities of Big
Data, which has led to the adoption of NoSQL databases. Unlike relational databases, which organize
data into rows and columns, NoSQL databases can handle structured and unstructured data without
requiring a rigid schema. This flexibility makes them particularly useful for handling Big Data.
An example of this is Hadoop, a collection of open-source programs that enables the storage and
processing of large datasets using massively parallel processing. MapReduce is a related
programming model that distributes large analyses across multiple servers and then collects and integrates the
results into a single report.
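The map-and-reduce idea can be sketched in a single process. Real MapReduce runs these phases across many servers; this simplified word-count example only shows the pattern, with hypothetical input documents:

```python
from collections import defaultdict

# Single-process sketch of the MapReduce pattern.
# In Hadoop, the map and reduce phases run on many servers in parallel;
# here both run locally to show the idea.

documents = [
    "big data needs parallel processing",
    "hadoop enables parallel processing of big data",
]

# Map phase: each document is turned into (key, value) pairs independently,
# which is why this step can be spread across many servers.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle phase: group all values by key.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        grouped[word].append(count)

# Reduce phase: combine the values for each key into a single result.
word_counts = {word: sum(counts) for word, counts in grouped.items()}

print(word_counts["big"])     # 2
print(word_counts["hadoop"])  # 1
```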
Another significant development is Google’s Cloud Spanner, a globally distributed database that
provides consistency and high availability across multiple regions. It offers organizations the benefits
of a traditional relational database combined with the scalability of NoSQL systems, making it a
powerful tool for managing Big Data at a global scale.
Organizations can extract significant value from Big Data by making it accessible to stakeholders,
conducting experiments, microsegmenting customers, creating new business models, and analysing
more data. Making Big Data available to the public, for example, has led to innovations in sectors
such as healthcare, urban planning, and environmental protection.
Microsegmentation is another application, where companies divide their customer base into smaller
groups based on specific characteristics or behaviours, allowing for more personalized marketing
strategies. This approach helps companies like Paytronix Systems create tailored loyalty programs
based on customer data from multiple sources.
Lastly, analysing all available data, rather than relying on random sampling, can lead to more accurate
insights. This method of processing entire datasets, known as real-time analytics, provides better
precision compared to traditional sampling techniques, where biases and errors can skew results.
Big Data is also applied within specific functional areas such as human resources. Companies like
Caesars Entertainment use Big Data to manage healthcare costs
by analysing employee health claims. Similarly, Catalyse uses Big Data in recruitment to better assess
job candidates based on online assessments that gather data points about their responses and
behaviours.
Big Data has also impacted hiring practices, where companies can now analyse data about how
candidates answer questions, rather than just what they answer, allowing for more accurate
predictions of job performance and fit.
1.5 Data Warehouses and Data Marts
In today’s fast-paced business environment, successful organizations are those that can respond
quickly and flexibly to changes in the market. The key to such agility is how effectively analysts and
managers use data to make informed decisions. A crucial part of this process is providing users with
access to the right corporate data so they can analyze it efficiently. For instance, if the manager of a
bookstore wanted to know the profit margin on used books in her store, she could retrieve this
information from her database using SQL or a query-by-example (QBE). However, determining the
trend in profit margins for used books over the past decade would require a more complex query,
highlighting the need for more sophisticated tools such as data warehouses and data marts.
Data warehouses and data marts are critical components of business analytics. They support decision-
making by providing a repository of historical data that is organized by subject. A data warehouse
stores large volumes of historical data, supporting decision-makers across the organization, while data
marts are smaller, department-specific versions of data warehouses. Due to the high costs of data
warehouses, they are primarily used by large companies. On the other hand, data marts, which are
quicker and cheaper to implement, are often used in specific business units or departments. Data
marts can be created in less than 90 days and are designed to meet the needs of a smaller group of
users. Data warehouses and data marts share several basic characteristics:
Organized by business dimension or subject: Data in warehouses and marts are organized by subjects
such as customer, product, and region, making it easier for users to query and analyze data.
Use of online analytical processing (OLAP): Unlike online transaction processing (OLTP) systems that
handle daily transactions, OLAP systems in data warehouses are designed to support analysis by end
users. This type of processing is critical for decision-making purposes.
Integrated data: Data warehouses integrate data from multiple sources into a unified format, creating
a comprehensive view of each business dimension. For example, customer data may be collected from
various systems and then integrated to form a single view of the customer.
Time-variant: Warehouses maintain historical data, making it possible to analyze trends over time.
Unlike transactional systems, which store only current data, a warehouse can store data for years,
enabling long-term analysis.
Nonvolatile: Once data is loaded into the warehouse, it cannot be updated by users. This ensures that
the data reflects a historical record and is used only for analysis.
The data warehouse environment includes the following major components:
Source systems: These systems provide the raw data that is stored in the warehouse. Source systems
can range from transactional databases to ERP systems, and they often include external sources such
as demographic data from third-party providers.
Data integration technologies: The process of extracting, transforming, and loading (ETL) data into the
warehouse is a key part of this environment. Data from different systems may require transformation
to ensure consistency.
Data storage architectures: Organizations must choose how to store their decision-support data,
whether through a centralized warehouse or multiple independent data marts. Some organizations
use a "hub-and-spoke" architecture, where a central warehouse stores data for multiple dependent
marts.
Metadata and data governance: Metadata, which describes the data in the warehouse, is critical for
both IT staff and users. Data governance ensures that the data warehouse meets the organization’s
needs in terms of quality and compliance.
Data Integration
Data integration is essential for creating a unified view of the data stored in a warehouse. The ETL
process ensures that data from different systems is standardized and transformed into a usable
format. This may involve changing formats, aggregating data, or cleansing the data to remove
duplicates.
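A simplified sketch of the ETL steps described above, with hypothetical source records and field names:

```python
# Simplified sketch of extract-transform-load (ETL).
# The source systems, records, and field names are hypothetical.

# Extract: raw data pulled from two different source systems,
# each with its own format.
crm_source = [{"name": "ACME CORP", "revenue": "1,500"}]
erp_source = [{"customer": "Acme Corp", "revenue": 2000}]

# Transform: standardize formats (names to title case, revenue to integers)
# so data from different systems becomes consistent.
def transform(record, name_key):
    return {
        "customer": record[name_key].title(),
        "revenue": int(str(record["revenue"]).replace(",", "")),
    }

staged = [transform(r, "name") for r in crm_source] + \
         [transform(r, "customer") for r in erp_source]

# Load: aggregate the cleansed records into the warehouse table,
# producing a single integrated view of each customer.
warehouse = {}
for row in staged:
    warehouse[row["customer"]] = warehouse.get(row["customer"], 0) + row["revenue"]

print(warehouse)  # {'Acme Corp': 3500}
```

Note how the transform step is what turns two inconsistent spellings of the same customer into one unified view.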
Organizations can choose from different architectures for storing data in the warehouse. Some
organizations use a centralized warehouse, while others may implement independent data marts for
specific departments. However, independent data marts can lead to inconsistencies in data,
prompting many companies to move toward integrated data warehouse solutions.
Maintaining metadata is essential for ensuring that data in the warehouse is well-documented and
usable. Data quality is another crucial aspect, as poor-quality data can lead to inaccurate analysis.
Organizations may use data-cleansing tools to improve the quality of their data before it is loaded into
the warehouse.
Some organizations are moving toward real-time data warehousing, where data is loaded into the
warehouse almost instantly after being captured by source systems. This allows for up-to-date
analysis, as seen in companies like Walmart, where sales data is available for analysis within minutes
of a transaction.
Governance
Data governance involves ensuring that the data warehouse is used effectively. Organizations often
create governance structures, such as senior-level committees, to align business and BI strategies,
prioritize projects, and allocate resources.
There are many types of users for data warehouses, ranging from IT developers to executives. Some
users are responsible for producing reports and analyses, while others use the data for decision-
making purposes. The key benefit of data warehouses is that they provide users with fast access to
consolidated, high-quality data that can improve decision-making and provide a competitive
advantage.
As we have noted throughout this text, data and information are vital organizational assets.
Knowledge is a vital asset as well. Successful managers have always valued and used intellectual
assets. These efforts were not systematic, however, and they did not ensure that knowledge was
shared and dispersed in a way that benefited the overall organization. Moreover, industry analysts
estimate that most of a company’s knowledge assets are not housed in relational databases. Instead,
they are dispersed in e-mail, word processing documents, spreadsheets, presentations on individual
computers, and in people’s heads. This arrangement makes it extremely difficult for companies to
access and integrate this knowledge. The result frequently is less-effective decision making.
Knowledge management (KM) is a process that helps organizations manipulate important knowledge
that comprises part of the organization’s memory, usually in an unstructured format. For an
organization to be successful, knowledge, as a form of capital, must exist in a format that can be
exchanged among persons. It must also be able to grow. Knowledge differs from data and information
in IT contexts. While data consists of raw facts and measurements, and information is processed data
that's timely and accurate, knowledge represents information that can be applied in context. It's
essentially information in action, also known as intellectual capital.
There are two main types of knowledge in organizations. Explicit knowledge is objective and
technical, including documented items like policies, manuals, reports, and strategies that can be easily
shared. In contrast, tacit knowledge is subjective and based on experience, encompassing things like
insights, expertise, skills, and organizational culture. Unlike explicit knowledge, tacit knowledge is
difficult to document and transfer.
Knowledge Management Systems (KMS) use modern technology like intranets, extranets, and
databases to organize and share knowledge within organizations. These systems help companies
preserve expertise, especially when facing challenges like employee turnover or downsizing. KMS
make best practices available throughout the organization, improving performance and customer
service while supporting better product development.
However, implementing KMS comes with several challenges. Organizations must create a culture
where employees willingly share their tacit knowledge, maintain and update the knowledge base
regularly, and commit necessary resources to the system. Success requires both technological
infrastructure and organizational commitment to knowledge sharing.
A functioning KMS follows a cycle that consists of six steps: create knowledge, capture knowledge,
refine knowledge, store knowledge, manage knowledge, and disseminate knowledge. The reason the
system is cyclical is that knowledge is dynamically refined over time. The knowledge in an effective
KMS is never finalized because the environment changes over time and knowledge must be updated
to reflect these changes.
Figure 5.2 Six steps of the KMS Cycle (Adapted from Rainer and Prince, 2022)
Relational databases consist of tables with rows and columns. Each row represents a record, and each
column represents an attribute (or field) of that record. Every record in a relational database must
have a unique identifier called a primary key, which allows it to be retrieved, updated, and sorted. A
foreign key is a field in one table that corresponds to the primary key in another table, creating
relationships between tables.
Query Languages
The most commonly performed database operation is searching for information. Structured query
language (SQL) is the most popular query language used for interacting with a database. SQL allows
people to perform complicated searches by using relatively simple statements or key words. Typical
key words are SELECT (to choose a desired attribute), FROM (to specify the table or tables to be used),
and WHERE (to specify conditions to apply in the query). To understand how SQL works, imagine that
a university wants to know the names of students who will graduate cum laude (but not magna or
summa cum laude) in May 2018. The university IT staff would query the student relational database
with an SQL statement such as the following:
SELECT Student_Name
FROM Student_Database
WHERE Grade_Point_Average >= 3.40 AND Grade_Point_Average < 3.60
The SQL query would return John Jones and Juan Rodriguez. Another way to find information in a
database is to use query by example (QBE). In QBE, the user fills out a grid or template—also known
as a form—to construct a sample or a description of the data desired. Users can construct a query
quickly and easily by using drag-and-drop features in a DBMS such as Microsoft Access. Conducting
queries in this manner is simpler than keying in SQL commands.
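The cum laude example can be run end-to-end with SQLite. The table contents and the GPA cut-offs used here (3.40 up to, but not including, 3.60) are illustrative assumptions:

```python
import sqlite3

# Running a cum laude query against a small sample table.
# The sample rows and GPA cut-offs are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Student_Database (
        Student_Name TEXT,
        Grade_Point_Average REAL
    )
""")
conn.executemany(
    "INSERT INTO Student_Database VALUES (?, ?)",
    [("John Jones", 3.45),
     ("Juan Rodriguez", 3.50),
     ("Mary Doe", 3.80),   # magna/summa range, excluded by the WHERE clause
     ("Sam Lee", 3.10)],   # below the cum laude range, also excluded
)

# SELECT chooses the attribute, FROM names the table,
# and WHERE applies the query conditions.
rows = conn.execute("""
    SELECT Student_Name
    FROM Student_Database
    WHERE Grade_Point_Average >= 3.40 AND Grade_Point_Average < 3.60
""").fetchall()

print([name for (name,) in rows])  # ['John Jones', 'Juan Rodriguez']
```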
Entity-Relationship (ER) Modeling
Designing a relational database involves Entity-Relationship (ER) Modeling, which helps visualize
entities (such as people, places, or things) and the relationships between them. In an ER diagram,
entities are shown as rectangles, relationships as diamonds, and attributes as lists of fields associated
with each entity. The primary key of each entity is underlined in the diagram.
Business rules are used to define relationships between entities. These rules describe the policies or
procedures that dictate how data is managed within the organization. For example, at a university, a
student might register for multiple classes, while a class may have multiple students. ER diagrams
help database designers ensure that all entities and relationships are properly captured and
understood.
Normalization
Normalization is a method used to optimize a database by reducing data redundancy and ensuring
data integrity. The goal is to break down large tables into smaller, related tables so that each piece of
data is stored only once. This process improves the efficiency and performance of the database.
In the normalization process, tables are progressively refined through several stages: First Normal
Form (1NF) eliminates repeating groups by ensuring that each field holds a single, atomic value;
Second Normal Form (2NF) ensures that non-key attributes are fully dependent on the primary key;
and Third Normal Form (3NF) removes transitive dependencies, meaning non-key attributes cannot
define other non-key attributes.
For instance, in a pizza shop database, the order number and customer information may be repeated
for each pizza in the order. By normalizing the data, separate tables for orders, customers, and pizzas
are created, reducing redundancy and making the database more manageable.
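The pizza shop example can be sketched before and after normalization; the table layouts and sample values are hypothetical:

```python
# "Before": unnormalized layout where customer details are repeated
# for every pizza on an order. (Values are hypothetical.)
flat = [
    {"order_no": 1, "customer": "Ann", "phone": "555-0101", "pizza": "Margherita"},
    {"order_no": 1, "customer": "Ann", "phone": "555-0101", "pizza": "Pepperoni"},
    {"order_no": 2, "customer": "Ben", "phone": "555-0202", "pizza": "Hawaiian"},
]

# "After": normalized layout where each fact is stored only once,
# in its own table, linked by keys.
customers = {"C1": {"name": "Ann", "phone": "555-0101"},
             "C2": {"name": "Ben", "phone": "555-0202"}}
orders = {1: {"customer_id": "C1"}, 2: {"customer_id": "C2"}}
order_pizzas = [(1, "Margherita"), (1, "Pepperoni"), (2, "Hawaiian")]

# Changing Ann's phone number now means updating one row,
# not one row per pizza she has ever ordered.
customers["C1"]["phone"] = "555-0999"

# The original flat view can still be reconstructed through the keys:
rebuilt = [{"order_no": o,
            "customer": customers[orders[o]["customer_id"]]["name"],
            "pizza": p}
           for o, p in order_pizzas]
print(len(rebuilt))  # 3 rows, same as the flat table
```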
Joins
The join operation allows users to combine records from multiple tables based on related fields. For
example, to display a student's name along with their course details, a join between the student and
course tables would be performed. Joins are crucial for retrieving meaningful information from
databases that store data in separate but related tables.
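The join operation can be demonstrated with two small hypothetical tables, matched on the related field:

```python
import sqlite3

# A join between hypothetical Student and Enrollment tables,
# combining a student's name with their course details.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (Student_ID INTEGER PRIMARY KEY, Student_Name TEXT)")
conn.execute("CREATE TABLE Enrollment (Student_ID INTEGER, Course_Name TEXT)")
conn.execute("INSERT INTO Student VALUES (1, 'John Jones')")
conn.execute("INSERT INTO Enrollment VALUES (1, 'Introduction to Computers')")

# The JOIN matches rows where the related field (Student_ID) is equal
# in both tables, producing one combined result row.
row = conn.execute("""
    SELECT s.Student_Name, e.Course_Name
    FROM Student s
    JOIN Enrollment e ON s.Student_ID = e.Student_ID
""").fetchone()
print(row)  # ('John Jones', 'Introduction to Computers')
```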
References
1. Cheng, K.C. et al. (2021) 'Management of Data Quality in Modern Organizations', Journal of
Data Management, 15(3), pp. 78-92.
2. Gartner Research (2024) 'Big Data Analytics Trends', Technology Research Series, 8(2), pp. 45-
60.
3. IDC (2024) 'Global Data Growth Analysis', Digital Storage Review, 12(1), pp. 23-38.
4. Mansfield-Devine, S. (2020) 'Data Security and Privacy Challenges', Computer Fraud &
Security, 2020(4), pp. 12-20.
5. Paytronix Systems (2024) 'Customer Data Analytics Report', Business Intelligence Quarterly,
6(1), pp. 34-49.
6. The Big Data Institute (2024) 'Understanding Big Data Architecture', Data Analytics Review,
11(3), pp. 112-126.
7. Wang, L. and Li, X. (2022) 'Data Encryption in Modern Computing', Journal of Information
Security, 7(2), pp. 67-82.
Chapter 6: Telecommunications & Networking
LEARNING OUTCOMES
After reading this Section of the guide, the learner should be able to:
2.1 Introduction
Computer networks are crucial in our personal lives, and there are three key aspects of network
computing to understand in organizations as well. First, in today’s organizations, computers operate
collaboratively, continuously
sharing data with one another. Second, this data exchange, enabled by telecommunications
technologies, offers significant advantages to companies. Third, this exchange can occur over any
distance and across networks of various sizes.
Without networks, your desktop computer would merely serve as another productivity tool, much
like a typewriter. However, networks enhance your computer’s capabilities, allowing access to
information from countless sources, thus increasing productivity for both you and your organization.
Regardless of the type of organization—whether for-profit or non-profit, large or small, global or
local—networks, especially the Internet, have transformed and will continue to reshape how we
conduct business.
Networks facilitate innovative business practices in areas such as marketing, supply chain
management, customer service, and human resources. Notably, the Internet and private intranets—
internal networks using Internet software and TCP/IP protocols—significantly impact our
professional and personal lives.
For all organizations, regardless of size, having a telecommunications and networking system is now
essential for survival, not just a competitive edge. Networked systems enhance organizational
flexibility, allowing adaptation to rapid business changes. They enable hardware, applications, and
data sharing within and between organizations. Additionally, networks facilitate collaboration among
geographically dispersed employees and teams, fostering teamwork, innovation, and efficient
interactions. They are also vital links between businesses, partners, and customers.
Clearly, networks are indispensable for modern businesses. Understanding networks is crucial
because if you run or work in a business, you cannot operate without them. Rapid communication
with customers, partners, suppliers, and colleagues is necessary. Until around 1990, businesses relied
on postal services or phone systems for communication. Today, the business pace is nearly real-time,
requiring the use of computers, email, messaging, the Internet, smartphones, and other mobile
devices, all connected through networks for effective global communication and collaboration.
Networking and the Internet are foundational to 21st-century commerce. A key goal of this guide is
to help you become an informed user of information systems, and knowledge of networking is vital
for modern business literacy. The Internet's global significance cannot be overstated; it’s often
described as the world's nervous system. Fast broadband access is increasingly crucial for success.
For example, in 2017, New York City sued Verizon for inadequate fibre-optic services, leading to a
2018 agreement to expand high-speed services. Similarly, in 2018, New York's Attorney General sued
Charter Communications (now Spectrum) for failing to deliver promised Internet speeds, resulting in
a $62.5 million payout to affected customers in 2019.
This chapter starts with an introduction to computer networks and the different types that exist.
Next, you’ll explore fundamental concepts related to networks. After that, the focus shifts to the
essentials of the Internet and the World Wide Web. Finally, you’ll look at the various network
applications that benefit both individuals and organizations, highlighting what networks enable you
to accomplish.
2.2 What Is a Computer Network?
A computer network is a system that links computers and other devices, such as printers, using
communication media to facilitate the transmission of data and information between them. Voice and
data networks are consistently becoming faster and more affordable, meaning their bandwidth is
increasing. Bandwidth refers to a network's transmission capacity, measured in bits per second. It can
vary from narrowband, which has lower capacity, to broadband, which offers higher capacity.
The telecommunications industry finds it challenging to consistently define "broadband." The Federal
Communications Commission (FCC) classifies broadband as a communication medium that offers a
download speed greater than 25 megabits per second (Mbps)—for example, when streaming a Netflix
movie—and an upload speed of 3 Mbps, which applies to data sent to an Internet server, such as a
Facebook post or YouTube video.
Interestingly, some FCC members propose raising the download speed threshold for broadband to
100 Mbps. Nevertheless, this definition is fluid and will likely adapt as transmission capacities improve
over time.
You may be familiar with common broadband connections like Digital Subscriber Line (DSL) and cable,
which are typically found in homes and dormitories. Both DSL and cable fall within the specified
transmission speed range, thus qualifying as broadband.
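The relationship between bandwidth and transfer time follows from simple arithmetic, remembering that bandwidth is measured in bits per second while file sizes are usually quoted in bytes. The file size below is an arbitrary example:

```python
# Bandwidth arithmetic: how long does a transfer take at broadband speed?
# Network speeds are in bits per second; file sizes are usually in bytes
# (8 bits per byte), so the units must be converted.

def transfer_seconds(file_size_megabytes, bandwidth_mbps):
    """Time to transfer a file, ignoring protocol overhead and congestion."""
    size_in_megabits = file_size_megabytes * 8
    return size_in_megabits / bandwidth_mbps

# A 250-megabyte file over the FCC's 25 Mbps broadband download threshold:
print(transfer_seconds(250, 25))  # 80.0 seconds

# The same file over the 3 Mbps upload threshold takes far longer:
print(round(transfer_seconds(250, 3), 1))  # 666.7 seconds
```

This asymmetry is why the FCC defines separate download and upload thresholds for broadband.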
Computer networks come in various sizes, from small personal networks to expansive global ones.
These include, from smallest to largest: personal area networks (PANs), local area networks (LANs),
metropolitan area networks (MANs), wide area networks (WANs), and the Internet, which is the
largest. PANs are short-range networks, usually covering just a few meters, designed for
communication between nearby devices and can be either wired or wireless. MANs are larger
networks that span a metropolitan area, falling between LANs and WANs in size. WANs generally cover
vast geographic areas, sometimes stretching globally and even reaching from Earth to Mars and
beyond.
Networks, regardless of their size, involve a trade-off among three key factors: speed, distance, and
cost. Organizations usually need to prioritize two out of these three objectives. For long-distance
coverage, they can achieve fast communication if they are willing to invest, or they can opt for lower-
cost communication at the expense of speed. Another possible combination is fast, cost-effective
communication with limited distance, which is the concept behind local area networks (LANs).
A local area network (LAN) connects two or more devices within a restricted geographical area,
typically within a single building, enabling all devices on the network to communicate with each other.
Most modern LANs utilize Ethernet technology (which will be discussed later in this chapter). For
example, an Ethernet LAN may consist of four computers, a server, and a printer, all connected
through a shared cable. Each device in the LAN is equipped with a network interface card (NIC) that
facilitates its physical connection to the LAN’s communication medium, which is commonly unshielded
twisted-pair wire (UTP).
Figure 6.1: Ethernet local area network (Adapted from Rainer and Prince, 2022)
While not mandatory, many local area networks (LANs) include a file server or network server. This
server usually holds various software and data for the network. Additionally, it contains the LAN's
network operating system, which oversees the server and handles the routing and management of
communications within the network.
WANs have substantial capacities and often integrate multiple channels, such as fibre-optic cables,
microwave links, and satellite connections. They also utilize routers, which are communication
processors that direct messages from a LAN to the Internet, across several interconnected LANs, or
across a WAN. The Internet itself is the prime example of a WAN.
Enterprise Networks
Modern organizations often operate several local area networks (LANs) and may also have multiple
wide area networks (WANs). These various networks are interconnected to create an enterprise
network. For example, Figure 6.2 illustrates a model of enterprise computing. In this model, the
enterprise network features a backbone network, which is a high-speed central network that links
multiple smaller networks, such as LANs and smaller WANs. The LANs that connect to the backbone
WAN are referred to as embedded LANs.
Figure 6.2: Enterprise network (Adapted from Rainer and Prince, 2022)
Unfortunately, traditional networks can be inflexible and unable to adapt quickly to growing business
networking demands. This rigidity arises because the functions of traditional networks are spread
across physical routers and devices (i.e., hardware). As a result, implementing changes requires
individually configuring each network device, and in some cases, this must be done manually.
Software-defined networking (SDN) is an emerging technology gaining traction to help organizations
manage their data flows across enterprise networks. In an SDN, software centrally controls the
decisions governing how network traffic is routed through devices. This software can dynamically
adjust data flows to align with business and application needs.
You can think of traditional networks like a city's road system in 1920. Data packets represent cars
navigating through the city, while traffic officers (physical network devices) manage each intersection,
directing traffic based on turn signals and the types of vehicles passing through. However, these
officers only oversee their specific intersections and lack awareness of overall traffic patterns or
volume across the city, making it challenging to manage traffic effectively during peak hours. When
issues arise, the city must communicate with each officer individually via radio.
In contrast, consider an SDN as the road system of a modern city. Here, traffic officers are replaced by
traffic lights and electronic vehicle counters connected to central monitoring and control software.
This system allows for instant, centralized traffic management. The control software can adjust traffic
patterns at different times of day (such as during rush hour) and continuously monitor traffic flow,
automatically changing the traffic lights to facilitate smoother movement through the city with
minimal disruption.
In this section, you'll explore the fundamental principles of network operation. You'll start with
wireline communication media, which allow computers in a network to send and receive data. The
section will wrap up with an examination of network protocols and various types of network
processing.
Today, computer networks use digital signals, which are distinct on-off pulses that represent bits (0s
and 1s). This allows digital signals to transmit information in binary form that computers can interpret.
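The idea that digital signals carry information as on-off pulses can be illustrated by converting text into the bits that would actually travel over a network (a sketch; the encoding shown is plain ASCII):

```python
# Illustration: text becomes binary digits (0s and 1s) before transmission.
def to_bits(text: str) -> str:
    # Each character is encoded here as one byte (8 bits) using ASCII.
    return " ".join(format(b, "08b") for b in text.encode("ascii"))

print(to_bits("Hi"))  # 01001000 01101001
```

Each group of eight 0s and 1s is one character; the on-off pulses on the wire correspond directly to these digits.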
The U.S. public telephone system, known as the "plain old telephone system" (POTS), was initially
designed as an analog network for transmitting voice signals through continuous waves. These analog
signals convey information by varying the amplitude and frequency of the waves. POTS requires dial-
up modems to convert signals between analog and digital formats, but such modems are becoming
rare in most developed regions.
Cable modems, which operate over coaxial cables like those used for cable TV, provide broadband
access to the Internet or corporate intranets. Their speeds can vary significantly, with most providers
offering download speeds between 1 and 6 million bits per second (Mbps) and upload speeds between
128 and 768 thousand bits per second (Kbps). Since cable modem services share bandwidth among
multiple local subscribers, heavy simultaneous usage by neighbours can lead to slower speeds.
DSL modems, on the other hand, use the same lines as voice telephones and dial-up modems. They
maintain a constant connection, allowing for immediate access to the Internet.
Transmitting data from one location to another requires a pathway or medium, known as a
communications channel. This channel comprises two types of media: cable (such as twisted-pair wire,
coaxial cable, or fibre-optic cable) and broadcast (including microwave, satellite, radio, or infrared).
Wireline media, or cable media, use physical wires or cables to carry data and information. Twisted-
pair and coaxial cables are constructed from copper, while fibre-optic cables are made from glass.
Alternatively, communication can occur through broadcast or wireless media. The cornerstone of
mobile communication in today’s fast-paced society is data transmission over electromagnetic
media—commonly referred to as the “airwaves.” In this section, you will explore the three types of
wireline channels.
Twisted-Pair Wire
Twisted-pair wire is the most common type of communications wiring and is utilized for nearly all
business telephone installations. As its name implies, it consists of pairs of copper wire twisted
together (see Figure 6.3). This type of wire is relatively inexpensive, readily available, and easy to
handle. However, it has notable drawbacks: it has slower data transmission speeds, is susceptible to
interference from other electrical sources, and can be easily intercepted by unauthorized individuals
seeking to access data.
Coaxial Cable
Coaxial cable consists of a central copper wire surrounded by insulation and shielding. It is significantly less prone to electrical interference
than twisted-pair wire and can transmit much larger amounts of data. This makes it a popular choice
for high-speed data transmission and television signals, which is why it's often associated with cable
TV. However, coaxial cable is more expensive and more challenging to handle than twisted-pair wire,
and it is also somewhat rigid.
Fiber Optics
Fiber-optic cable is made up of thousands of extremely thin glass filaments that carry information
using light pulses generated by lasers. The cable is encased in cladding, a protective layer that prevents
the light from escaping the fibres.
Fiber-optic cables are much smaller and lighter than traditional cable media. They can transmit
significantly more data and offer enhanced protection against interference and unauthorized tapping.
Typically, fiber-optic cable serves as the backbone of a network, while twisted-pair wire and coaxial
cable connect individual devices to this backbone. In 2016, the FASTER cable, a 5,600-mile undersea
fiber-optic connection between Japan and the United States, became operational, achieving data
transmission speeds of 60 terabits (trillions of bits) per second across the Pacific Ocean. By 2018,
organizations installed more fiber-optic cable than in any other year in the past two decades.
Network Protocols
Devices connected to a network need to access and share the network to send and receive data. These
devices are commonly referred to as network nodes. They collaborate by following a shared set of
rules and procedures, known as a protocol, which allows them to communicate effectively. The two
primary protocols are Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP).
Ethernet
A widely used LAN protocol is Ethernet. Many organizations utilize 100-gigabit Ethernet, which offers
data transmission speeds of 100 gigabits (100 billion bits) per second. In 2018, 400-gigabit Ethernet
was introduced.
Before data are sent over the Internet, they are broken down into small, fixed units called packets.
The technology used to fragment data into packets is known as packet switching. Each packet contains
information necessary for reaching its destination, including the sender's IP address, the recipient's IP
address, the total number of packets in the message, and the sequence number of that particular
packet. Each packet travels independently across the network and can take different routes. Once all
packets arrive at their destination, they are reassembled into the original message.
Packet-switching networks are known for their reliability and fault tolerance. For instance, if a network
path is congested or fails, packets can be dynamically rerouted. Additionally, if any packets are lost in
transit, only those specific packets need to be resent.
Organizations prefer packet switching primarily for its ability to ensure reliable end-to-end message
transmission, even over networks that may experience intermittent issues.
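The packet structure described above can be sketched in a few lines of code: each packet carries the sender's and recipient's addresses, the total packet count, and a sequence number, so the message can be reassembled even when packets arrive out of order (a toy model; real IP packets carry binary headers, not dictionaries):

```python
import random

# Split a message into packets, each tagged with routing and sequencing data.
def make_packets(message: str, src: str, dst: str, size: int = 4):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": src, "dst": dst, "total": len(chunks),
             "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    # Packets may arrive in any order; sequence numbers restore the message.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = make_packets("HELLO WORLD", "1.2.3.4", "5.6.7.8")
random.shuffle(packets)        # simulate packets taking different routes
print(reassemble(packets))     # HELLO WORLD
```

The shuffle stands in for packets taking different routes; because every packet is self-describing, the destination can always rebuild the original message.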
Packets utilize the TCP/IP protocol for data transport, functioning across four layers (see Figure 6.6).
The application layer allows client applications to access the other layers and defines the protocols for
data exchange, such as the Hypertext Transfer Protocol (HTTP), which outlines how messages are
formatted and interpreted by recipients. The transport layer provides communication and packet
services to the application layer, including TCP and other protocols. The Internet layer manages
addressing, routing, and packaging of data packets, with IP being one of its key protocols. Finally, the
network interface layer handles placing packets onto the network medium and receiving them from
various networking technologies.
Two computers can communicate using TCP/IP even if they have different hardware and software.
When data is sent from one computer to another, it passes down through all four layers, starting with
the application layer of the sending computer and proceeding to the network interface layer. Once
the data reach the receiving computer, they move back up through the layers.
TCP/IP allows users to transmit data over sometimes unreliable networks while ensuring that the data
arrives intact and uncorrupted. Its reliability and ability to support intranets and related functions
make TCP/IP a popular choice among business organizations.
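The journey of data down the sender's four layers and back up the receiver's can be pictured as each layer wrapping the data in its own header on the way down and removing it on the way up (a sketch; the bracketed labels are illustrative stand-ins for real protocol headers):

```python
# Layered encapsulation: each layer adds a header going down the stack
# and strips it going back up. Layer names follow the four TCP/IP layers.
LAYERS = ["application", "transport", "internet", "network-interface"]

def send(data: str) -> str:
    for layer in LAYERS:              # down the stack on the sender
        data = f"[{layer}]{data}"
    return data                       # what travels on the wire

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):    # up the stack on the receiver
        frame = frame.removeprefix(f"[{layer}]")
    return frame

wire = send("GET /index.html")
print(wire)
print(receive(wire))   # GET /index.html
```

Because each side only needs to understand the layer interfaces, two computers with entirely different hardware and software can still exchange the original data intact.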
Let’s consider an example of packet switching over the Internet. Figure 6.3 shows a message being
sent from New York City to Los Angeles via a packet-switching network. Notice that the packets,
represented in different colours, take various routes to reach their destination in Los Angeles, where
they are then reassembled into the complete message.
Figure 6.3: Packet switching (Adapted from Rainer and Prince, 2022)
Types of Network Processing
Organizations often utilize multiple computer systems throughout the company. Distributed
processing allocates processing tasks across two or more computers, allowing machines in different
locations to communicate via telecommunications links. A common form of distributed processing is
client/server processing, with a specific variant known as peer-to-peer processing.
Client/Server Computing
Client/server computing connects two or more computers in a setup where certain machines, called
servers, provide computing services to user PCs, referred to as clients. Typically, organizations carry
out most of their processing and data storage on powerful servers that can be accessed by less capable
client machines. Clients request applications, data, or processing from the server, which then fulfils
these requests.
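The request-and-fulfilment pattern described above can be demonstrated with Python's standard socket library (a minimal sketch on the local machine; the request string and reply format are invented for illustration):

```python
import socket
import threading

# Server side: listen for a client, read its request, and fulfil it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # port 0: the OS picks any free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()                         # wait for one client
    with conn:
        request = conn.recv(1024).decode()         # read the client's request
        conn.sendall(("SERVED: " + request).encode())  # fulfil the request

t = threading.Thread(target=serve)
t.start()

# Client side: connect to the server and request a resource.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"report.pdf")
response = client.recv(1024).decode()
client.close()
t.join()
srv.close()
print(response)   # SERVED: report.pdf
```

Here one script plays both roles, but the division of labour is the same as in a real network: the client asks, the server answers.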
This setup leads to the concepts of "fat" clients and "thin" clients. As mentioned in Technology Guide
1, fat clients have substantial storage and processing capabilities, allowing them to run local programs
(like Microsoft Office) even if the network goes down. Conversely, thin clients may lack local storage
and have limited processing power, relying on the network to run applications, making them less
useful when the network is unavailable.
Peer-to-Peer Processing
Peer-to-peer (P2P) processing is a type of distributed processing where each computer acts as both a
client and a server. Each machine can access files from all other computers, depending on assigned
security or integrity permissions.
There are three main types of peer-to-peer processing. The first type utilizes unused CPU power from
networked computers. An example of this is SETI@home ([Link]), an
open-source project that users can download for free.
The second type involves real-time, person-to-person collaboration, such as Microsoft SharePoint
Workspace ([Link]/en-us/sharepoint-workspace). This tool offers P2P
collaborative applications that use buddy lists for establishing connections and enabling real-time
collaboration.
The third category focuses on advanced search and file sharing, featuring natural language searches
across millions of peer systems. This allows users to find other users in addition to data and web pages.
BitTorrent ([Link]) is a notable example of this type. It is an open-source, free peer-to-
peer file-sharing application that simplifies sharing large files by breaking them into smaller pieces, or
"torrents." BitTorrent addresses common file-sharing challenges: (1) downloads slow down when
many users access a file simultaneously, and (2) some users download without sharing. By allowing
users to share small parts of a file simultaneously—a method known as swarming—BitTorrent
alleviates bottlenecks. Additionally, users must upload a file while downloading, preventing leeching.
Thus, popular content travels more efficiently across the network.
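The swarming idea can be pictured with a toy model: a file is split into pieces, different peers hold different pieces, and the downloader assembles the complete file from all of them (a sketch; real BitTorrent also verifies each piece with a hash):

```python
# Split a file into fixed-size pieces, indexed by their byte offset.
def split_into_pieces(data: bytes, piece_size: int):
    return {i: data[i:i + piece_size]
            for i in range(0, len(data), piece_size)}

file = b"the complete large file"
pieces = split_into_pieces(file, 6)

# Each peer holds only some pieces (the offsets chosen are illustrative).
peer_a = {k: v for k, v in pieces.items() if k in (0, 12)}
peer_b = {k: v for k, v in pieces.items() if k in (6, 18)}

# The downloader gathers pieces from every peer and reassembles in order.
collected = {**peer_a, **peer_b}
rebuilt = b"".join(collected[k] for k in sorted(collected))
print(rebuilt == file)   # True
```

Because no single peer must supply the whole file, the load spreads across the swarm, which is exactly why popular content moves faster rather than slower.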
The Internet, often referred to as "the Net," is a global wide area network (WAN) that links around 1
million organizational computer networks across more than 200 countries on every continent. Its
extensive reach has integrated it into the daily lives of approximately 5 billion people.
Many people mistakenly believe that most Internet traffic occurs wirelessly, but in reality, only about
1 percent of it is transmitted via satellites. So, what does the infrastructure of the Internet actually
look like?
The Internet is quite physical, comprising 300 underwater cables that stretch a total of 550,000 miles.
These cables vary in thickness, from the size of a garden hose to about three inches in diameter, and
they come ashore at cable landing points. From these points, the cables are buried underground and
routed to large data centers (discussed in Technology Guide 3). In the United States alone, there are
542 underground cables connecting 273 different locations, primarily along major roads and railways.
One of the most concentrated hubs of Internet connectivity is in Lower Manhattan, New York City.
As a network of networks, the Internet enables users to access data from other organizations and
facilitates seamless communication, collaboration, and information exchange worldwide, quickly and
affordably. This capability has made the Internet essential for modern businesses.
The Internet originated from an experimental project by the Advanced Research Projects Agency
(ARPA) within the U.S. Department of Defense, which began in 1969 as ARPAnet. Its goal was to
explore the feasibility of a WAN for sharing data, exchanging messages, and transferring files among
researchers, educators, military personnel, and government entities.
Today, Internet technologies are used both within and between organizations. An intranet is a
network that employs Internet protocols, allowing users to utilize familiar applications and work
habits. Intranets facilitate discovery, communication, and collaboration within an organization. In
contrast, an extranet connects portions of different organizations' intranets, enabling secure
communication between business partners over the Internet using virtual private networks (VPNs),
which are explained in Chapter 4. Extranets provide limited access to the intranets of participating
companies and support essential interorganizational communications. They are commonly used in
business-to-business (B2B) electronic commerce and supply chain management.
The Internet is not managed by any central authority; instead, the operational costs are shared among
hundreds of thousands of nodes, resulting in minimal expenses for any single organization.
Organizations only need to pay a small fee to register their names and must install their own hardware
and software to manage their internal networks. They are responsible for transferring any data or
information entering their network, regardless of its source, to the intended destination at no cost to
the senders, who in turn cover the telephone costs associated with using either the backbone or
standard phone lines.
ISPs connect with each other at network access points (NAPs), which serve as exchange points for
Internet traffic and determine how that traffic is routed. NAPs are crucial components of the Internet
backbone. Figure 6.4 illustrates the structure of the Internet, with white links representing the
backbone and brown dots indicating the NAPs where these links intersect.
Figure 6.4: Internet schematic (backbone in white) (Adapted from Rainer and Prince, 2022)
Connecting Through Other Means
Various efforts have been made to make Internet access more affordable, faster, and user-friendly.
For instance, Internet kiosks have been placed in public locations like libraries and airports—and even
in convenience stores in some countries—to provide access for individuals without their own
computers. Additionally, using smartphones and tablets to connect to the Internet has become
widespread, and fiber-to-the-home (FTTH) is experiencing rapid growth. FTTH involves directly
connecting fiber-optic cables to individual homes.
Every computer connected to the Internet has a unique identifier known as the Internet Protocol (IP)
address, which distinguishes it from other computers. An IP address is made up of four sets of numbers
separated by dots. For example, one computer might have the IP address [Link], which can
be entered into a browser's address bar to access a website.
There are currently two IP addressing systems in use. The first, IPv4, is the most prevalent and consists
of 32 bits, allowing for 2^32 potential IP addresses, or 4,294,967,296 unique addresses. The IP address
mentioned earlier ([Link]) is an example of an IPv4 address. When IPv4 was created, the
number of computers needing addresses was far fewer than today, which led to the development of
a new system, IPv6, as we have exhausted the available IPv4 addresses.
IPv6 addresses are composed of 128 bits, providing an enormous number of potential addresses—
2^128 distinct possibilities. This new system is being adopted to meet the growing demand for IP
addresses from an increasing array of devices, including smartphones and those part of the Internet
of Things.
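The difference in scale between the two systems follows directly from their bit widths, and Python's standard ipaddress module can parse both formats (the two addresses below are generic examples, not taken from the text):

```python
import ipaddress

# Address-space sizes follow directly from the bit widths.
print(2 ** 32)    # 4294967296 possible IPv4 addresses
print(2 ** 128)   # 340282366920938463463374607431768211456 IPv6 addresses

# The standard library recognizes both address formats:
v4 = ipaddress.ip_address("192.168.0.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)   # 4 6
```

Note the jump from four sets of decimal numbers in IPv4 to eight groups of hexadecimal digits in IPv6.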
IP addresses need to be unique so that computers can locate each other on the Internet. The Internet
Corporation for Assigned Names and Numbers (ICANN) ([Link]) manages these unique
addresses globally. Without this coordination, a unified Internet would not be possible.
Since numeric IP addresses can be hard to remember, computers also have names. ICANN authorizes
certain companies, known as registrars, to register these names, which come from a system called the
domain name system (DNS). Domain names consist of multiple components separated by dots, read
from right to left. For example, in the domain name [Link], the rightmost part
represents its top-level domain (TLD).
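Reading a domain name right to left can be shown in a few lines of code (the domain used is a made-up example):

```python
# Split a domain name into its dot-separated labels.
domain = "www.example.ac.za"
labels = domain.split(".")
print(labels[-1])   # za  <- the top-level domain (TLD)

# Reading right to left, each additional label narrows the name down:
for i in range(1, len(labels) + 1):
    print(".".join(labels[-i:]))
# za
# ac.za
# example.ac.za
# www.example.ac.za
```

Each line of output is one level of the DNS hierarchy, from the most general (the TLD) to the most specific (the full host name).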
The World Wide Web is built on universally accepted standards for storing, retrieving, formatting, and
displaying information through a client/server model. It accommodates various types of digital
content, including text, hypermedia, graphics, and sound, and features graphical user interfaces (GUIs)
for easy navigation.
The concept of hypertext is fundamental to the Web's structure. Hypertext refers to text displayed on
a device that contains hyperlinks—references to other text that can be accessed instantly or revealed
progressively for more detail. A hyperlink connects a hypertext file to another location or file, typically
activated by clicking on highlighted words or images or by touching the screen.
Organizations that want to share information online need to create a home page, which serves as a
welcome screen displaying basic information about the organization. Usually, the home page links to
additional pages, and all the pages belonging to a specific company or individual together form a
website. Most web pages include contact information for the organization or individual, and the
person managing a website is known as the webmaster (a gender-neutral term).
To visit a website, users must enter a uniform resource locator (URL), which specifies the address of a
particular resource on the Web. For instance, the URL for Microsoft is [Link]. The
"HTTP" in the URL stands for Hypertext Transfer Protocol, while the rest indicates the domain name
that identifies the web server hosting the site.
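Python's standard library can split a URL into the parts just described, the protocol (scheme) and the domain name, plus the path to the specific resource (the URL below is a generic example):

```python
from urllib.parse import urlparse

# Break a URL into its protocol, domain name, and resource path.
url = "http://www.example.com/products/index.html"
parts = urlparse(url)
print(parts.scheme)   # http
print(parts.netloc)   # www.example.com
print(parts.path)     # /products/index.html
```

The browser uses the scheme to choose a protocol, the domain name to locate the web server, and the path to request the specific page.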
Users primarily access the Web through software applications called browsers, which offer a graphical
interface that allows them to navigate the Web by pointing and clicking—often referred to as surfing.
Web browsers provide a consistent interface across different operating systems. As of July 2019,
Google Chrome was the most popular browser, followed by Apple Safari, Firefox, Microsoft Internet
Explorer, and Microsoft Edge.
Now that you understand the basics of networks and how to access them, an important question
arises: How do businesses leverage networks to enhance their operations? In the next four sections
of this chapter, we will examine four network applications: discovery, communication, collaboration,
and education. These applications represent just a small selection of the numerous network
applications currently available. Even if this list were comprehensive today, it would likely change
tomorrow as new applications are developed. Additionally, categorizing network applications can be
challenging due to overlaps; for instance, telecommuting involves both communication and
collaboration.
The Internet allows users to access and discover information stored in databases globally. By browsing
and searching the Web, users can utilize this discovery feature in various fields, including education,
government, entertainment, and commerce. While having access to such a wealth of information is
advantageous, it’s crucial to recognize that the quality of web content is not guaranteed. The Internet
is democratic, meaning anyone can publish information. Thus, the key principle for web users is
"Caution is advised!"
Consider the process of discovery in the 1960s: finding information typically required a trip to the
library to borrow a physical book. In contrast, today’s methods of information discovery have
transformed significantly. Notably:
• In the past, people had to physically go to libraries to find information; now, the Internet
brings information directly to users.
• Previously, only one person could access a book at a time; today, multiple users can access
the same information simultaneously.
• Accessing needed information could be challenging if a book was checked out; now,
information is broadly available to everyone at once.
• In the past, language barriers could complicate access to foreign texts; now, automated
translation tools are rapidly advancing.
However, the vast amount of information available on the Web, which doubles approximately every
year, can be overwhelming. This growing volume makes navigating the Web and finding specific
information increasingly difficult. As a result, many people rely on search engines, directories, and
portals.
For more comprehensive searches, metasearch engines can be utilized. These tools query multiple
search engines simultaneously and combine their results. Examples include Surf-wax, Metacrawler,
Mamma, KartOO, and Dogpile.
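One simple way a metasearch engine might merge the ranked lists returned by several engines is to score each result by its rank in every list and re-sort (a sketch under invented data; real metasearch engines use more sophisticated relevance scoring):

```python
# Merge several ranked result lists: earlier ranks earn higher scores,
# and results that appear in more than one list accumulate score.
def merge_results(*ranked_lists):
    scores = {}
    for results in ranked_lists:
        for rank, url in enumerate(results):
            scores[url] = scores.get(url, 0) + (len(results) - rank)
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["site1.example", "site2.example", "site3.example"]
engine_b = ["site2.example", "site4.example", "site1.example"]
print(merge_results(engine_a, engine_b))
# ['site2.example', 'site1.example', 'site4.example', 'site3.example']
```

A result ranked well by multiple engines rises to the top, which is the main benefit of querying several engines at once.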
Given the vast array of information online in multiple languages, accessing it often involves using
automatic translation tools. These translations are available for all major languages, and their quality
continues to improve.
Portals
Many organizations and their managers face information overload, as data is dispersed across
numerous documents, emails, and databases in various locations and systems. This fragmentation
makes it time-consuming for users to find relevant and accurate information, often requiring them to
navigate multiple systems.
One effective solution to this challenge is the use of portals. A portal serves as a web-based,
personalized gateway to information and knowledge, aggregating relevant data from different IT
systems and the Internet through advanced search and indexing methods. After this section, you'll be
able to identify four types of portals: commercial, affinity, corporate, and industrywide, each catering
to different audiences.
A commercial (public) portal is the most common type found on the Internet. It is designed for a wide
range of users and offers general content, including some real-time information (e.g., stock tickers).
Examples include Lycos and Microsoft Network.
On the other hand, an affinity portal provides a centralized entry point for a specific community of
shared interests, such as a hobby group or political organization. For instance, many universities have
affinity portals for their alumni. Examples of affinity portals include TechWeb and ZDNet.
2.7 Network Applications: Communication
The second primary category of network applications is communication. This encompasses various
communication technologies, such as email, call centers, chat rooms, and voice services. Additionally,
we will explore a noteworthy application of communication: telecommuting.
Electronic Mail
Email is the most widely used application on the Internet. Research indicates that nearly all companies
conduct business transactions via email, and a significant number link it directly to their revenue
generation. However, the volume of emails that managers receive can be overwhelming, potentially
decreasing productivity.
Voice Communication
Traditional telephone services have largely been replaced by Internet telephony, or Voice-over-
Internet Protocol (VoIP). This technology digitizes analog voice signals, breaks them into packets, and
transmits them over the Internet. For example, Skype offers various VoIP services for free, including
voice and video calls, instant messaging, and conference calls.
Unified Communications
Previously, organizational networks for voice, data, and video communication operated separately,
managed by the IT department. This fragmented approach led to increased costs and reduced
efficiency. Unified communications (UC) integrates all forms of communication—voice, voicemail, fax,
chat, email, and videoconferencing—onto a single platform. This integration allows for a streamlined
user experience; for instance, a voicemail can be read in an email inbox. UC facilitates seamless
collaboration across different locations, enabling users to easily find and communicate with each
other through various methods in real time.
Telecommuting
Knowledge workers are now part of the distributed workforce, or "digital nomads," able to work from
anywhere at any time, a practice known as telecommuting. These workers often do not have a
permanent office and may work from home, client locations, or other remote spaces. The rise of
telecommuting is fueled by globalization, long commutes, widespread broadband access, and
advanced computing devices.
Telecommuting offers several benefits, such as reduced stress for employees and improved work-life
balance, as well as greater productivity and employee retention for employers. However, it also comes
with drawbacks, including feelings of isolation for employees, potential loss of benefits, and challenges
in supervision and data security for employers. Research has shown that telecommuting workers may
receive fewer promotions due to less visibility with management, and they often face difficulties in
setting boundaries with family members regarding work time.
Collaboration
The third key category of network applications is collaboration. This involves multiple entities—such
as individuals, teams, groups, or organizations—working together to achieve specific tasks. The term
"workgroup" specifically describes two or more individuals collaborating to complete a task.
When group members are situated in different locations, they form a virtual team. These teams hold
virtual meetings, allowing them to "meet" electronically. Virtual collaboration, or e-collaboration,
involves using digital technologies to enable geographically dispersed individuals or organizations to
collaboratively plan, design, develop, manage, and research products, services, and innovations.
Employees often collaborate virtually, and some organizations extend this collaboration to customers,
suppliers, and business partners to enhance productivity and competitiveness.
Collaboration can occur synchronously, where all team members meet simultaneously, or
asynchronously, where members work together without needing to be online at the same time. Virtual
teams, especially those spread across the globe, typically collaborate asynchronously.
Although various software products support collaboration, many organizations feel overwhelmed by
the number of tools available. They prefer a centralized platform that allows them to track what has
been shared, with whom, and when, along with smarter tools that can anticipate their needs.
Some popular collaborative software includes Google Drive, Microsoft Office 365 Teams, Jive, Glip,
Slack, Atlassian, and Facebook Workplace, among others. These tools generally offer features for
online collaboration, group email, distributed databases, document management, workflow
capabilities, instant virtual meetings, application sharing, instant messaging, and tools for consensus
building and application development.
Two tools that incorporate analytics for better collaboration are IBM’s Verse, which combines email,
social media, calendars, and file sharing into one package aimed at enhancing productivity, and
Microsoft’s Delve, which uses analytics to present the most relevant information to each user.
For example, BNY Mellon, a multinational banking and financial services firm, utilizes its own
enterprise social networking tool, MySource Social, to share ideas and expertise. This tool integrates
with the company’s communication and collaboration systems, including email and instant messaging.
MySource Social serves as an intranet site where users can access business partner groups, blogs, and
special-interest groups. Over 90 percent of BNY Mellon’s 55,000 employees globally have accessed
the site, with 40 percent actively participating.
Crowdsourcing
Crowdsourcing is a form of collaboration where an organization outsources tasks to a large, undefined
group of people through an open call. This approach offers several advantages for organizations. First,
crowds can quickly and cost-effectively explore and often solve problems. Second, it allows access to
a broader talent pool beyond the organization’s employees. Third, by engaging with the crowd,
organizations gain direct insights into customer preferences. Finally, crowdsourcing fosters innovation
by tapping into a global network of ideas. Here are a few examples of crowdsourcing in action:
1. Crowdsourcing Help Desks: IT help desks on college campuses are vital since students rely heavily
on technology for their studies. At Indiana University, the IT help desk utilizes crowdsourcing to
manage the high volume of inquiries. Students and faculty post their IT issues on an online forum,
where fellow students and tech-savvy individuals can provide assistance.
2. Recruitment: Champlain College in Vermont launched the Champlain For Reel program, inviting
students to share their experiences at the college through YouTube videos. This channel serves to
attract prospective students and keeps alumni informed about campus and community events.
3. Scitable: This platform combines social networking with academic collaboration, allowing students,
professors, and researchers to discuss problems, find solutions, and share resources. Scitable is free
to use and encourages individuals to both seek help and assist others through crowdsourcing.
4. Procter & Gamble (P&G): P&G uses InnoCentive, a platform where researchers can post their
challenges and offer cash rewards to those who provide solutions.
5. SAP’s Idea Place: This initiative generates ideas for software improvements. Users can view and
categorize ideas, vote on them, and provide feedback. A team of experts reviews submissions to assess
their feasibility, giving more attention to the most popular ideas.
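The submit-vote-review workflow that platforms like SAP's Idea Place follow can be sketched as a small data structure. This is a minimal illustration only; the class and method names below are invented for the sketch and are not SAP's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    title: str
    author: str
    votes: int = 0
    feedback: list = field(default_factory=list)

class IdeaPlace:
    """Hypothetical idea-voting platform: users submit ideas,
    vote on them, and leave feedback; reviewers then look at
    the most popular submissions first."""

    def __init__(self):
        self.ideas = []

    def submit(self, title, author):
        idea = Idea(title, author)
        self.ideas.append(idea)
        return idea

    def vote(self, idea):
        idea.votes += 1

    def comment(self, idea, text):
        idea.feedback.append(text)

    def most_popular(self, n=3):
        # Expert reviewers give priority to the most-voted ideas.
        return sorted(self.ideas, key=lambda i: i.votes, reverse=True)[:n]
```

For example, after two users submit ideas and the crowd votes, `most_popular(1)` returns the idea the reviewers should examine first.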
Despite the numerous success stories associated with crowdsourcing, the approach also raises significant
questions and concerns.
Telepresence
Telepresence systems vary from high-end, on-premise setups to cloud-based solutions. High-end
systems are expensive, requiring dedicated rooms with large high-definition screens and advanced
audio capabilities to allow simultaneous communication without interference. These systems also
necessitate technical support for operation and maintenance, such as Cisco’s TelePresence system.
E-Learning and Distance Learning
E-learning and distance learning are distinct yet overlapping concepts. E-learning involves Web-
supported learning, which can enhance traditional classroom experiences or take place entirely in
virtual settings, where all coursework is done online. In this context, e-learning is a component of
distance learning (DL), which encompasses any educational scenario where instructors and students
do not meet in person. The Web facilitates a multimedia interactive environment for self-study,
making knowledge accessible anytime and anywhere, thus benefiting both formal education and
corporate training.
E-learning offers numerous advantages, such as providing up-to-date, high-quality content created by
experts, and allowing students to learn at their own pace and location. In corporate training settings,
e-learning can shorten learning time, enabling more individuals to be trained efficiently, which reduces
costs and the need for physical training spaces.
However, e-learning has its challenges. Students need to be proficient with computers, and they may
miss out on face-to-face interactions with instructors and peers. Additionally, evaluating students'
work can be difficult, as instructors cannot always verify who actually completed the assignments. Rather than
replacing traditional classrooms, e-learning complements them by utilizing new content and delivery
technologies. Platforms like Blackboard enhance conventional education in higher learning.
A recent development in distance learning is the emergence of massive open online courses (MOOCs),
which aim to democratize higher education. Their growth is driven by advancements in technology
and rising tuition costs at traditional universities. MOOCs are largely automated and feature
computer-graded assessments. However, they have yet to demonstrate effective teaching for the
large numbers of students who enroll, and they do not generate revenue for universities. MOOCs
attract a diverse student body, including high school students, retirees, faculty, and working
professionals, making it challenging to design courses that cater to everyone's needs. Additionally,
while initial enrollments in MOOCs may exceed 100,000, completion rates often drop below 10%.
Nonetheless, they provide opportunities for many around the world to gain valuable skills and secure
high-paying jobs without incurring tuition costs or obtaining degrees.
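The gap between enrollment and completion described above can be made concrete with a quick calculation. The figures are the illustrative ones from the text (100,000 enrollments, a 10% completion rate), not data for any specific course:

```python
# Illustrative MOOC completion arithmetic using the figures quoted above.
enrolled = 100_000          # "initial enrollments may exceed 100,000"
completion_rate = 0.10      # "completion rates often drop below 10%"

completers = round(enrolled * completion_rate)
dropouts = enrolled - completers

print(completers, dropouts)  # 10000 90000
```

Even at the optimistic 10% rate, roughly 90,000 of the 100,000 enrollees would not finish the course.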
As of 2018, the leading MOOC providers included Coursera (U.S.) with 23 million users and 2,329
courses, edX (U.S.) with 10 million users and 1,319 courses, XuetangX (China) with 6 million users and
380 courses, FutureLearn (U.K.) with 5.3 million users and 485 courses, and Udacity (U.S.) with 4
million users and 172 courses.
Virtual Universities
Virtual universities are online institutions where students attend classes via the Internet from home
or other locations. Many established universities now offer some form of online education.
Institutions like the University of Phoenix, California Virtual Campus, and the University of Maryland
provide a wide range of online courses and degrees globally. Other universities may offer limited
online options while incorporating innovative teaching techniques and multimedia support in
traditional settings.
Revision Questions
1. Discuss the four business decisions that companies must make when they acquire new
applications.
2. Enumerate the primary tasks and the importance of each of the six processes involved in the
systems development life cycle.
3. Describe alternative development methods and the tools that augment them.
6. Differentiate the various types of input and output technologies and their uses.
11. Identify a use case scenario for each of the four types of clouds.