Client/Server Software Testing
Hongmei Yang
Contents
Acknowledgement
Introduction
Summary
References
Introduction
The first part of this essay is an introduction to Client/Server architecture, which includes three
sections: What Is Client/Server Computing, Architectures for Client/Server Systems, and
Critical Issues Involved in Client/Server System Management.
Client/Server computing is a current reality for professional system developers and for
sophisticated departmental computing users. The section What Is Client/Server Computing
presents the definition and major characteristics of Client/Server computing. Netcentric (or
Internet) computing, as an evolution of the Client/Server model, has brought new technology to the
forefront. Hence, the major characteristics of, and differences between, Netcentric and traditional
Client/Server computing are also presented in this section.
Both traditional and Netcentric computing are tiered architectures. A brief introduction to three
popular architectures, namely the 2-tiered architecture, the modified 2-tiered architecture, and the
3-tiered architecture, is found in the section Architectures for Client/Server Systems.
The second part of this essay is about Client/Server software testing. There are four sections in this
part: Introduction to Client/Server Software Testing, Testing Plan for Client/Server Computing,
Client/Server Testing in Different Layers, and Special Concerns for Internet Computing—Security
Testing.
In the section Introduction to Client/Server Software Testing, we present some basic
characteristics of Client/Server software testing from different points of view.
Because of the differences between traditional and Client/Server software testing, a practical
testing plan based on application functionality is attached in section 2, Testing Plan for
Client/Server Software Testing. We also give some detailed explanation of the different test plans,
such as the system test plan, operational test plan, acceptance test plan, and regression test plan,
which are parts of a Client/Server testing plan.
As mentioned in Part I, a Client/Server system has several layers, which can be viewed
conceptually and physically. Viewed physically, the layers are client, server, middleware, and
network. In section 3 Client/Server Testing in Different Layers, specific concerns related to client,
server and network problems, testing techniques, testing tools and some activities are addressed
separately in Testing on the Client Side, Testing on the Server Side, and Network Testing.
For Internet-based Client/Server systems, security is one of the major concerns. Hence, this essay
also includes some security risks that need to be tested, in Part II, section 4, Special Concerns
for Internet Computing—Security Testing.
Netcentric computing, as an evolution of the Client/Server model, has brought new technology to
the forefront, especially in the areas of external presence and access, ease of distribution, and media
capabilities. Some of the new technologies are [3]:
b. Direct supplier-to-customer relationships. The external presence and access enabled by
connecting a business node to the Internet has opened up a series of opportunities to reach an
audience outside a company’s traditional internal users.
c. Richer documents. Netcentric technologies (such as HTML documents, plug-ins, and Java)
and standardization of media information formats enable support for complex documents,
applications, and even nondiscrete data types such as audio and video.
d. Application version checking and dynamic update. The configuration management of
traditional Client/Server applications, which tend to be stored on both the client and server
sides, is a major issue for many corporations. Netcentric computing can check and update
application versions dynamically.
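As a rough sketch of how such a dynamic version check might work (the file name, server URL, and update step below are hypothetical, not part of any particular Netcentric product):

    # Minimal sketch of dynamic application version checking; the local file name and
    # server URL are hypothetical placeholders for a real deployment.
    import urllib.request

    LOCAL_VERSION_FILE = "version.txt"                          # version shipped with the client
    SERVER_VERSION_URL = "http://example.com/app/version.txt"    # version published by the server

    def local_version():
        with open(LOCAL_VERSION_FILE) as f:
            return f.read().strip()

    def server_version():
        with urllib.request.urlopen(SERVER_VERSION_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        if local_version() != server_version():
            print("A newer version is published on the server; download and install it.")
        else:
            print("The installed application is up to date.")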
Both traditional Client/Server and Netcentric computing are tiered architectures. In both
cases, there is a distribution of presentation services, application code, and data across clients and
servers. In both cases, there is a networking protocol that is used for communication between
clients and servers. In both cases, they support a style of computing where processes on different
machines communicate using messages. In this style, the “client” delegates business functions or
other tasks (such as data manipulation logic) to one or more server processes. Server processes
respond to messages from clients.
A Client/Server system has several layers, which can be visualized in either a conceptual or a
physical manner. Viewed conceptually, the layers are presentation, process, and database. Viewed
physically, the layers are server, client, middleware, and network.
The 2-tiered architecture is also known as the client-centric model, which implements a “fat”
client. Nearly all of the processing happens on the client, and the client accesses the database
directly rather than through any middleware. In this model, all of the presentation logic and the
business logic are implemented as processes on the client.
The 2-tiered architecture is the simplest one to implement and, hence, the simplest one to test. It
is also the most stable form of Client/Server implementation, so most of the errors that testers find
are independent of the implementation itself. Direct access to the database also makes it simpler to
verify the test results.
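For example, a single test can drive the client-side data-access code and then confirm the result with its own query. The sketch below uses SQLite and an invented customers table purely for illustration; the real system would use its own DBMS and schema.

    # Sketch: verify a fat-client operation by querying the database directly.
    # SQLite and the "customers" table stand in for the real DBMS and schema.
    import sqlite3

    def client_saves_customer(conn, name, city):
        # Stands in for the 2-tiered client's own data-access code.
        conn.execute("INSERT INTO customers (name, city) VALUES (?, ?)", (name, city))
        conn.commit()

    def test_customer_is_persisted():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
        client_saves_customer(conn, "Acme Corp", "St. Louis")
        # The tester verifies the result directly in the database, not through the GUI.
        row = conn.execute("SELECT city FROM customers WHERE name = ?", ("Acme Corp",)).fetchone()
        assert row == ("St. Louis",)

    if __name__ == "__main__":
        test_customer_is_persisted()
        print("direct-verification test passed")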
The disadvantages of this model are limited scalability and difficult maintenance. Because it does
not partition the application logic well, changes require reinstallation of the software on all of the
client desktops.
Because maintaining the 2-tiered Client/Server architecture is such a nightmare, the business
logic is moved to the database side and implemented using triggers and stored procedures. This
kind of model is known as the modified 2-tiered architecture.
In terms of software testing, modified 2-tiered architecture is more complex than 2-tiered
architecture for the following reasons:
a. It is difficult to create a direct test of the business logic. Special tools are required to implement
and verify the tests.
b. It is possible to test the business logic from the GUI, but there is no way to determine the
number of procedures and/or triggers that fire and create intermediate results before the end
product is achieved.
c. Another complication is dynamic database queries. They are constructed by the application and
exist only when the program needs them. It is very difficult to be sure that the test generates a
query “correctly”, or as expected. Special utilities that show what is running in memory must
be used during the tests.
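One common workaround is to exercise the database-resident logic directly through the database interface instead of through the GUI. The sketch below shows the idea with a SQLite trigger and an invented orders/audit schema; a production system would use its own DBMS, stored procedures, and testing tools.

    # Sketch: testing business logic that lives in the database (here, a trigger) without
    # going through the GUI.  SQLite and the schema are illustrative only.
    import sqlite3

    SCHEMA = """
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE audit  (order_id INTEGER, note TEXT);
    CREATE TRIGGER large_order_audit AFTER INSERT ON orders
    WHEN NEW.amount > 1000
    BEGIN
        INSERT INTO audit (order_id, note) VALUES (NEW.id, 'large order');
    END;
    """

    def test_trigger_flags_large_orders():
        conn = sqlite3.connect(":memory:")
        conn.executescript(SCHEMA)
        conn.execute("INSERT INTO orders (amount) VALUES (?)", (250.0,))
        conn.execute("INSERT INTO orders (amount) VALUES (?)", (5000.0,))
        conn.commit()
        # Inspect the intermediate result the trigger produced -- invisible to a GUI-level test.
        assert conn.execute("SELECT note FROM audit").fetchall() == [("large order",)]

    if __name__ == "__main__":
        test_trigger_flags_large_orders()
        print("trigger-level business-logic test passed")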
For 3-tiered architecture, the application is divided into a presentation tier, a middle tier, and a data
tier. The middle tier is composed of one or more application servers distributed across one or more
physical machines. This architecture is also termed the “thin client—fat server” approach.
This model is very complicated to test because the business and/or data objects can be invoked
from many clients, and the objects can be partitioned across many servers. The same characteristics
that make the 3-tiered architecture desirable as a development and implementation framework also
make testing more complicated and tricky.
Hurwitz Consulting Group, Inc. has provided a framework for managing Client/Server systems
that identifies eight primary management issues [4]:
a. Performance
b. Problem
c. Software distribution
d. Configuration and administration
e. Data and storage
f. Operations
g. Security
h. License
Software testing for Client/Server systems (Desktop or Webtop) presents a new set of testing
problems, but it also includes the more traditional problems testers have always faced in the
mainframe world. Atre describes the special requirements of Client/Server testing [5]:
a. The client’s user interface
b. The client’s interface with the server
c. The server’s functionality
d. The network (the reliability and performance of the network)
We can view Client/Server software testing from several different perspectives.
In many instances, testing Client/Server software cannot be planned from the perspective of
traditional integrated testing activities because this view either is not applicable at all or is too
narrow, and other dimensions must be considered. The following are some specific considerations
needing to be addressed in a Client/Server testing plan.
• Must include consideration of the different hardware and software platforms on which the
system will be used.
• Must take into account network and database server performance issues with which mainframe
systems did not have to deal.
• Has to consider the replication of data and processes across networked servers.
In the test plan, we may address or construct several different kinds of testing:
a. The system test plan: System test scenarios are a set of test scripts, which reflect user behaviors
in a typical business situation. It’s very important to identify the business scenarios before
constructing the system test plan.
See attached CASE STUDY: The business scenarios for the MFS imaging system
b. The user acceptance test plan: The user acceptance test plan is very similar to the system test
plan. The major difference is direction. The user acceptance test is designed to demonstrate the
major system features to the user as opposed to finding new errors.
See attached CASE STUDY: Acceptance test specification for the MFS imaging system
c. The operational test plan: It guides single-user testing of the graphical user interface and of
the system functions. This plan should be constructed according to subsections A and B of
Section II in the testing plan template -- Client/Server test plan based on application
functionality. (See attached Appendix I)
d. The regression test plan: The regression test plan occurs at two levels. In Client/Server
development, regression testing happens between builds. Between system releases, regression
testing also occurs postproduction. Each new build/release must be tested for three aspects:
• To uncover errors introduced by the fix into previously correct function.
• To uncover previously reported errors that remain.
• To uncover errors in the new functionality.
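A minimal sketch of the build-to-build part of this process: re-run a set of recorded cases against the new build and compare the results with those captured from the last known-good build. The compute_discount function and the recorded cases are hypothetical stand-ins.

    # Sketch of a build-to-build regression check.  compute_discount() is a hypothetical
    # stand-in for a function being re-verified after a new build.
    def compute_discount(order_total):
        return round(order_total * 0.10, 2) if order_total >= 500 else 0.0

    # (input, expected output) pairs recorded from the previous, known-good build.
    RECORDED_CASES = [
        (100.0, 0.0),
        (499.99, 0.0),
        (500.0, 50.0),
        (1200.0, 120.0),
    ]

    def run_regression():
        failures = [(arg, expected, compute_discount(arg))
                    for arg, expected in RECORDED_CASES
                    if compute_discount(arg) != expected]
        for arg, expected, got in failures:
            print(f"REGRESSION: compute_discount({arg}) = {got}, expected {expected}")
        return not failures

    if __name__ == "__main__":
        print("regression suite passed" if run_regression() else "regression suite FAILED")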
3.1.1 The complexity of Graphical User Interface (GUI) testing is due to:
a. Cross-platform nature: The same GUI objects may be required to run transparently
(provide a consistent interface across platforms, with the cross-platform nature
unknown to the user) on different hardware and software platforms
b. Event-driven nature: GUI-based applications have increased testing requirements
because they are in an event-driven environment where user actions are events that
determine the application’s behavior. Because the number of available user actions is
very high, the number of logical paths in the supporting program code is also very high.
c. The mouse, as an alternate method of input, also raises some problems. It is necessary
to assure that the application handles both mouse input and keyboard input correctly
(a small sketch of this check follows this list).
d. GUI testing also requires testing for the existence of a file that provides supporting
data/information for text objects. The application must be sensitive to the existence, or
nonexistence, of such files.
e. In many cases, GUI testing also involves the testing of the function that allows end-
users to customize GUI objects. Many GUI development tools give the users the ability
to define their own GUI objects. The ability to do this requires the underlying
application to be able to recognize and process events related to these custom objects.
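The sketch below illustrates the mouse/keyboard concern from item c in unit-test form: both input paths are routed through one dispatcher, and the test checks that they drive the same action. The Form class and its event names are hypothetical; in practice a capture/playback tool would drive the real toolkit.

    # Sketch: confirming that mouse and keyboard input drive the same behavior.  The Form
    # class and its event names are hypothetical stand-ins for real GUI code.
    class Form:
        def __init__(self):
            self.saved = 0
            # Both the Save button click and the Ctrl+S key chord map to the same handler.
            self.bindings = {"click:save_button": self.save, "key:Ctrl+S": self.save}

        def save(self):
            self.saved += 1

        def dispatch(self, event):
            handler = self.bindings.get(event)
            if handler:
                handler()

    def test_mouse_and_keyboard_drive_same_action():
        form = Form()
        form.dispatch("click:save_button")   # mouse path
        form.dispatch("key:Ctrl+S")          # keyboard path
        assert form.saved == 2

    if __name__ == "__main__":
        test_mouse_and_keyboard_drive_same_action()
        print("mouse/keyboard equivalence check passed")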
3.1.2 GUI testing techniques: Many traditional software testing techniques can be used in GUI
testing.
a. Review techniques such as walkthroughs and inspections [8]. These human testing
procedures have been found to be very effective in the prevention and early correction
of errors. It has been documented that two-thirds of all of the errors in finished
information systems are the results of logic flaws rather than poor coding [9].
Preventive testing approaches, such as walkthroughs and inspections can eliminate the
majority of these analysis and design errors before they go through to the production
system.
b. Data validation techniques: Some of the most serious errors in software systems have
been the result of inadequate or missing input validation procedures. Software testing
has powerful data validation procedures in the form of the Black Box techniques of
Equivalence Partitioning, Boundary Analysis, and Error Guessing. These techniques are
also very useful in GUI testing (a small sketch follows this list).
c. Scenario testing: It is a system-level Black Box approach that also assures good White
Box logic-level coverage for Client/Server systems.
d. The decision logic table (DLT): DLT represents an external view of the functional
specification that can be used to supplement scenario testing from a logic-coverage
perspective. In DLTs, each logical condition in the specification becomes a control path
in the finished system. Each rule in the table describes a specific instance of a pathway
that must be implemented. Hence, test cases based on the rules in a DLT provide
adequate coverage of the module’s logic independent of its coded implementation.
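To make item d concrete, here is a minimal sketch of turning a small decision logic table into test cases; the conditions (large order, preferred customer) and the actions are invented purely for illustration.

    # Sketch: deriving test cases from a decision logic table.  Each rule (column) of the
    # table becomes one test case covering one control path.  The conditions and actions
    # are hypothetical.
    DLT_RULES = [
        # (total >= 500, preferred customer) -> expected action
        ((True,  True),  "free shipping + discount"),
        ((True,  False), "free shipping"),
        ((False, True),  "discount"),
        ((False, False), "no offer"),
    ]

    def offer_for(big_total, preferred):
        # Hypothetical implementation under test.
        if big_total and preferred:
            return "free shipping + discount"
        if big_total:
            return "free shipping"
        if preferred:
            return "discount"
        return "no offer"

    if __name__ == "__main__":
        for (big_total, preferred), expected in DLT_RULES:
            assert offer_for(big_total, preferred) == expected
        print("all DLT rules covered")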
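For the data-validation techniques in item b, here is a minimal sketch of equivalence partitioning and boundary analysis applied to one numeric input field; the field, its valid range of 1 to 100, and the validator are hypothetical.

    # Sketch: equivalence partitioning and boundary analysis for one input field.  The
    # field, its valid range (1..100), and validate_quantity() are hypothetical.
    def validate_quantity(value):
        """Return True if the quantity field would be accepted."""
        return isinstance(value, int) and 1 <= value <= 100

    # One representative per equivalence class plus the values on and around each boundary.
    CASES = [
        (0, False),    # just below the lower boundary
        (1, True),     # lower boundary
        (2, True),     # just above the lower boundary
        (50, True),    # representative of the valid partition
        (99, True),    # just below the upper boundary
        (100, True),   # upper boundary
        (101, False),  # just above the upper boundary
        (-5, False),   # representative of the invalid (negative) partition
    ]

    if __name__ == "__main__":
        for value, expected in CASES:
            assert validate_quantity(value) == expected, (value, expected)
        print("boundary and equivalence-class cases passed")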
Scripts can be designed to invoke several kinds of server tests: load tests, volume tests,
stress tests, performance tests, and data-recovery tests.
Client/Server systems must undergo two types of testing: single-user, function-based
testing and multiuser load testing.
Multiuser load testing is the best method to gauge Client/Server performance. It is
necessary in order to determine the adequacy of application server, database server, and
web server performance. Because a multiuser load test requires emulating a situation in
which multiple clients access a single server application, it is almost impossible to perform
without automation.
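A minimal sketch of such automation, with many simulated clients hitting one server endpoint concurrently while response times are collected; the URL and client count are assumptions, and a real load test would use a dedicated tool and far richer measurements.

    # Sketch: an automated multiuser load test.  The endpoint and the number of simulated
    # clients are hypothetical; real tests would use a dedicated load-testing tool.
    import concurrent.futures
    import time
    import urllib.request

    SERVER_URL = "http://localhost:8080/orders"   # assumed endpoint of the system under test
    CLIENTS = 50                                # simulated concurrent users

    def one_client(_):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(SERVER_URL, timeout=10) as resp:
                resp.read()
            ok = True
        except OSError:
            ok = False
        return ok, time.perf_counter() - start

    if __name__ == "__main__":
        with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
            results = list(pool.map(one_client, range(CLIENTS)))
        times = [t for ok, t in results if ok]
        failures = sum(1 for ok, _ in results if not ok)
        if times:
            print(f"{len(times)} ok, {failures} failed, "
                  f"avg {sum(times) / len(times):.3f}s, worst {max(times):.3f}s")
        else:
            print("all simulated clients failed to connect")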
According to Hamilton [10], performance problems are most often the result of the
client or server being configured inappropriately.
The best strategy for improving Client/Server performance is a three-step process [11]. First,
execute controlled performance tests that collect the data about volume, stress, and loading
tests. Second, analyze the collected data. Third, examine and tune the database queries and,
if necessary, provide temporary data storage on the client while the application is
executing.
SQL Inspector and ODBC Inspector are tools for testing the link between the client and the
server. These products monitor the database interface pipeline and collect information
about all database calls or a selected subset of them.
SQL Profiler is used for tuning database calls. It stores and displays statistics about SQL
commands embedded in Client/Server applications.
SQLEYE is an NT-based tool offered by Microsoft. It can track the information passed
between SQL Server and its clients. Client applications connect indirectly to SQL Server
through SQLEYE, which allows users to view the queries sent to SQL Server, the returned
results, row counts, messages, and errors.
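When such commercial monitors are not available, the same idea can be approximated in a test harness by wrapping the database interface so that every call is logged with its elapsed time. A minimal sketch using SQLite; the table and queries are illustrative only.

    # Sketch: a home-grown approximation of database-interface monitoring; every SQL call
    # is recorded with its elapsed time.  SQLite and the sample queries are illustrative.
    import sqlite3
    import time

    class LoggingConnection:
        def __init__(self, conn):
            self._conn = conn
            self.log = []                      # (sql, seconds) pairs collected during the test

        def execute(self, sql, params=()):
            start = time.perf_counter()
            cursor = self._conn.execute(sql, params)
            self.log.append((sql, time.perf_counter() - start))
            return cursor

    if __name__ == "__main__":
        conn = LoggingConnection(sqlite3.connect(":memory:"))
        conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, amount REAL)")
        conn.execute("INSERT INTO invoices (amount) VALUES (?)", (199.99,))
        conn.execute("SELECT * FROM invoices WHERE amount > ?", (100,))
        for sql, seconds in conn.log:
            print(f"{seconds * 1000:8.3f} ms  {sql}")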
Testing the network is beyond the scope of an individual Client/Server project, since the network
may serve more than a single Client/Server project. Thus, network testing falls into the domain of
the network management group. As Robert Buchanan [12] said: “If you haven’t tested a network
solution, it’s hard to say if it works. It may ‘work’. It may execute all commands, but it may
be too slow for your needs”.
Network testing typically addresses areas such as:
• Application response time measures
• Application functionality
• Throughput and performance measurement
• Configuration and sizing
• Stress testing and performance testing
• Reliability
For internet-based Client/Server systems, security testing for the web server is important. The
web server is your LAN’s window to the world and, conversely, is the world’s window to
your LAN.
The following excerpt is taken from the WWW Security FAQ [14]:
It’s a maxim in system security circles that buggy software opens up security holes. It’s a maxim in software
development circles that large, complex programs contain bugs. Unfortunately, web servers are large, complex
programs that can contain security holes. Furthermore, the open architecture of web servers allows arbitrary CGI
scripts to be executed on the server’s side of the connection in response to remote requests. Any CGI script
installed at your site may contain bugs, and every such bug is a potential security hole.
1. The primary risk is errors or misconfiguration on the web server side that would allow
remote users to:
• Steal confidential information
• Execute commands on the server host, thus allowing the users to modify the system
• Gain information about the server host that would allow them to break into the system
• Launch attacks that will bring the system down.
2. The secondary risk occurs on the browser side:
• Active content that crashes the browser, damages your system, breaches your
company’s privacy, or creates an annoyance.
• The misuse of personal information provided by the end user.
3. The tertiary risk is data interception during data transfer.
The above risks are also the focus of web server security testing. As a tester, it is your
responsibility to test whether the security provided by the server meets the user’s expectations
for network security.
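As a concrete, if very small, illustration of probing the primary (server-side) risks, the sketch below attempts a directory traversal and checks that a protected area is not served without credentials. The base URL and the /admin/ path are hypothetical, and real security testing goes far beyond these two checks.

    # Sketch: two simple server-side security probes.  The base URL and paths are
    # hypothetical; this is only a starting point, not a complete security test.
    import urllib.error
    import urllib.request

    BASE = "http://localhost:8080"              # assumed address of the web server under test

    def status_of(path):
        try:
            with urllib.request.urlopen(BASE + path, timeout=10) as resp:
                return resp.status, resp.read()
        except urllib.error.HTTPError as err:
            return err.code, b""
        except urllib.error.URLError:
            return None, b""                   # server unreachable

    if __name__ == "__main__":
        # 1. A traversal attempt must not expose files outside the document root.
        code, body = status_of("/../../etc/passwd")
        assert code != 200 and b"root:" not in body, "directory traversal appears possible"

        # 2. A protected area must demand authentication when no credentials are supplied.
        code, _ = status_of("/admin/")
        assert code in (None, 301, 302, 401, 403, 404), f"/admin/ answered {code} without credentials"

        print("basic server-side security probes passed")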
Summary:
The complexity of GUI (Graphical User Interface) testing is increased by certain
characteristics of GUIs, for instance their cross-platform nature, their event-driven nature, and
the mouse as an additional input method. Many traditional software testing techniques can be used in
GUI testing. Currently, a number of companies have begun producing structured
capture/playback tools that address the unique properties of GUIs.
Scripts can be designed to invoke several kinds of server tests: load tests, volume tests,
stress tests, performance tests, and data-recovery tests. These types of testing are nearly
impossible without automation. Some sophisticated tools for server-side testing have already
emerged on the market, such as LoadRunner/XL, SQL Inspector, SQL Profiler, and SQLEYE.
Network testing is a necessary but difficult series of tasks. Its difficulty is compounded by the
fact that Client/Server development may be targeted for an existing network or for one that is
yet to be installed. Proactive network management and proper capacity planning will be very
helpful. In addition, performance and stress testing can ease the network testing burden.
For internet-based Client/Server systems, security testing for the web server is important. The
web server is your LAN’s window to the world and, conversely, is the world’s window to your
LAN. As a tester, it is your responsibility to find weaknesses in the system’s security.
References:
1. Goodyear, Mark. Enterprise System Architectures. CRC Press LLC, 2000, p. 1-1.
2. Mosley, Daniel J. Client/Server Software Testing on the Desktop and the Web. Prentice Hall
PTR, Upper Saddle River, NJ, 2000, p. xv.
3. Goodyear, Mark. Enterprise System Architectures. CRC Press LLC, 2000, pp. 1-4.
4. Bourne, Kelly. SQL Process. DBMS, Vol. 8, No. 12, November 1995, p. 34(3).
5. Atre, Shaku. Client/Server Application Development Testing: A Special Report by Atre
Associates, Inc., 222 Grace Church Street, Port Chester, NY 10573-5155.
6. Binder, Robert A. Test Case Design for Object-Oriented Programming: The FREE Approach.
Robert Binder Systems Consulting, Inc., Chicago, 1992.
7. Mosley, Daniel J. Client/Server Software Testing on the Desktop and the Web. Prentice Hall
PTR, Upper Saddle River, NJ, 2000, pp. 72-74.
8. Mosley, Daniel J. The Handbook of MIS Application Software Testing: Methods, Techniques,
and Tools for Assuring Quality Through Testing. Prentice-Hall Yourdon Press, Englewood
Cliffs, NJ, 1993.
9. Myers, Glenford. Software Reliability. John Wiley & Sons, New York, 1977.
10. Hamilton, Dennis. Don’t Let Client/Server Performance Gotchas Getcha. Datamation, Vol.
40, No. 21, November 1, 1994, p. 39.
11. Mosley, Daniel J. Client/Server Software Testing on the Desktop and the Web. Prentice Hall
PTR, Upper Saddle River, NJ, 2000, p. 143.
12. Buchanan, Robert. Weird Science (Proactive Testing for Network Systems). LAN Magazine,
Vol. 9, No. 7, July 1994, pp. 115-119.
13. Nemzow, Marty. Keeping a Lid on Network Capacity. LAN Magazine, Vol. 9, No. 13,
December 1994, pp. 61-64.
14. The World Wide Web Security FAQ. World Wide Web Consortium (Massachusetts Institute of
Technology, Institut National de Recherche en Informatique et en Automatique, Keio
University). http://www.w3.org/consortium/legal/.
15. Mosley, Daniel J. Client/Server Software Testing on the Desktop and the Web. Prentice Hall
PTR, Upper Saddle River, NJ, 2000, p. 276.
Appendix I
* Source: Mosley, Daniel J. Client/Server Software Testing on the Desktop and the Web, Prentice Hall PTR, Upper
Saddle River, NJ, 2000, pp. 72-74.
Document ID
Document locator
I. Introduction
Scope
General Testing Objectives
Level of Tests
Types of Tests
Areas Not Being Tested
Supporting Documents
Methods
Test Case Design and Construction
Standards
II. Environment Requirements
Hardware
Software
Personnel
Test Execution
Test Failure
Configuration Management
Document Control
Gray Box
Decision Logic Tables
White Box
Basis Testing
Functional Verification Testing
Functional Testing Tools
X. Regression Testing
Regression Testing Objectives
Regression Testing Methods
Regression Testing Tools
Appendix II: CASE STUDY
• According to the Client/Server test plan template, this document belongs to the Business
Scenarios in the system test plan
To construct the Acceptance Test Specification for the MFS imaging system, we identify the
business scenarios first, then develop the acceptance tests based on these scenarios.
Usage scenarios:
a. Scanning: In the mailroom, students open the mail (and sort invoices in alphabetical order by
vendor name?), then scan the invoices into FDD.
b. Data entry: There are three different situations for data entry
1. The processors open the image file, and perform corresponding data entry.
2. Import data entry from library. In this situation,
-----how to attach the image file to the MFS record?
-----how to deal with the one-to-many relationship between records and one batch of
invoices?
3. Credit charge: it may have information for any number of people or departments, but we only
have one batch.
c. Imaging Research:
1. Grant invoice. About imaging Batch Reports (from library) and imaging direct invoice
voucher approval
2. To retrieve the image file for further verification:
• when the vendor claims that an invoice isn’t paid
• when the department claims that ordered products aren’t received.
• when the PO # on the invoice doesn’t match or there is no PO # at all, we have to
forward the image file to the department for further approval or to request more
information.
3. Verify the account number
4. For file tracking: keep live files online for one and a half years for access on any request, and
keep archived files for an additional five and a half years.
5. For internal or external audit: Retrieval of image files per any particular audit request.
f. Quality assurance:
1. Recheck the legibility of the image file. If we decide to rescan the file, we should
consider:
a. How do we find the original document? In effect, the question is: after we scan the
file, where, how, and for how long do we keep the hard copy?
b. Do the rescanned images belong to the same batch or to a different batch?
2. Linking test (is the image file linked to the records correctly?)
2.2: Under what conditions are invoices looked at outside the Comptroller’s Office?
a. Forward image files to the corresponding department for further approval or to request
more information. In this case, we should define how to forward the image file to the
department.
• when the department claims that the ordered products are not received
• when the PO # on the invoice doesn’t match or there is no PO # at all
b. Facility services office: this office should have permission to research and access the
image files related to its department.
c. Purchasing department: when the purchasing director requests item and product price
details, she needs to look at the image file.
d. Per any client’s request, send image files to the client for further verification. This could
be done by attaching a URL link or the image file itself to the email.
Appendix III: CASE STUDY
* According to the Client/Server test plan template, this document belongs to the system
test plan; it specifies the test scripts for the business scenarios.
4. Demonstrate how to enter data records into MFS for a corresponding imaged document stored
in the Image server.
5. Viewing images locally or remotely
• View an imaged document via the browser or via FDD client software with appropriate rights.
• Demonstrate the other viewing functions provided by Feith such as: magnifying, displaying
multiple pages in a window, navigating through page control display bars ...
• Quantify the image download time in order to estimate the performance
6: Archiving
• Assign a lifetime to an imaged document, after which it is automatically archived or deleted.
Eventually, we want to archive invoice images after 18 months. Hence, as an acceptance test, we
want to see a demonstration that has the functionality of assigning a lifetime to a document. After
a lifetime of 18 months, the document may be raised in an exception report for human action,
automatically archived and removed from the online storage, or some other action we specify.
• Archive images off line automatically
• Search for off-line archived files
• Retrieve off-line archived files
7: Administration:
• Set up accounts for different users and groups with different access rights.
• For each allowed right, demonstrate that the user can exercise that right both locally and
remotely.
• For each denied right, demonstrate that the user is denied that right.
8: Testing the capability of notifying users electronically if an image needs to be verified
outside the Comptroller’s Office
Two possibilities:
1. Export an image to TIFF, another graphic format, or a text file; attach it to an e-mail message
and send it out; receive it, extract it, and view it.
2. Place in an email message a link (a URL) to the imaged file
If we use a URL link, we should make sure only authorized users can view the images
through the attached URL.
9: Add additional imaged pages to an existing document.
10: Delete a logical document from a batch
(If we decide that images are not legible, we want to delete them from their batch and rescan
them.)
11: Add additional documents to an existing batch
(After rescanning, we may need to put the document back into its original batch)