SG 247584
Philip Monson
Albert Leigh
Dileep Kumar
Ilari Ahtiainen
Dennis Martins
Joe Reinwald
ibm.com/redbooks
International Technical Support Organization
March 2008
SG24-7584-00
Note: Before using this information and the product it supports, read the information in “Notices” on page ix.
Notices  ix
Trademarks  x
Preface  xi
The team that wrote this book  xi
Become a published author  xiv
Comments welcome  xiv
7.11 Implementation considerations  264
Appendix A. WebSphere Application Server V6.0.2 considerations  447
What is new in WebSphere Application Server V6.0.2  448
Java performance tuning guidelines  448
Silent installation considerations  449
Security considerations  451
Index  463
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
SAP and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.
AMD, AMD Opteron, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro
Devices, Inc.
CoolThreads, Enterprise JavaBeans, EJB, Java, Java HotSpot, JavaBeans, JDBC, JDK, JMX, JNI, JRE, JSP,
JVM, J2EE, J2SE, NetBeans, N1, OpenSolaris, Solaris, Sun, Sun BluePrints, Sun Fire, Sun Java, and all
Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or
both.
Active Directory, Expression, Microsoft, SQL Server, Windows, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
IBM® WebSphere® Application Server is a key building block of the IBM SOA reference
architecture, providing a Java™ 2 Platform, Enterprise Edition (J2EE™) offering for
assembling, deploying and managing applications. With standards-based messaging and
support for the latest Web services standards, WebSphere Application Server enables you to
reuse existing assets and helps you increase your return on existing investments.
This IBM Redbooks® publication highlights how WebSphere Application Server can be
optimized with one of the many operating systems it supports: the Solaris™ 10 Operating
System (OS).
The Solaris OS, developed by Sun™ Microsystems and now the foundation of the
OpenSolaris™ open source project, complements the IBM WebSphere application
environment by providing powerful, secure, and flexible infrastructure. Solaris is the leading
operating environment to host demanding enterprise applications, including WebSphere
Application Server, in today’s challenging business environments. Solaris 10 is the most
advanced and open UNIX®-based operating system that powers the most business-critical
enterprise systems.
Philip Monson is a Project Leader for the IBM ITSO. Phil has been with Lotus® / IBM for 17
years, joining the company when the early versions of Notes were rolled out for internal use.
He has served in management, technical, and consulting roles in the IT, Sales, and
Development organizations.
Albert Leigh is a Solution Architect in the Sun ISV Solution group at Sun Microsystems, Inc.
Albert has been with Sun for ten years. He provides technical consultation worldwide in the
development and deployment of customer solutions pertaining to enterprise Java application
infrastructures, virtualization, and system performance for IBM WebSphere products on
Solaris. He holds an M.S. degree in Computer Science from the University of Houston, Clear
Lake. Prior to joining Sun, Albert worked on software development projects at NASA Johnson
Space Center and has taught undergraduate courses at UHCL. He has written articles for
Sun Web sites and technical journals, and frequently blogs about WebSphere on Solaris at
http://blogs.sun.com/sunabl.
Dileep Kumar is a Staff Engineer in the ISV Engineering Group at Sun Microsystems, Inc. He
has over ten years of experience in the computer industry and now works on IBM WebSphere
products on Solaris. His area of expertise includes Java and J2EE based system design and
development, performance enhancement, and implementation. He holds an M.S. degree in
Engineering Management from Santa Clara University, Santa Clara. Prior to joining Sun,
Dileep worked on various software development projects at Netscape and various solution
provider companies in India. Read Dileep's professional blog at
http://blogs.sun.com/dkumar.
Ilari Ahtiainen is an IT Specialist who works closely with WebSphere Application Server and
IBM HTTP Server products for IBM Finland. He has over nine years of IT experience, ranging
from sales and support to designing and implementing complex J2EE infrastructures for
customers. His specialties are WebSphere Application Server installations, WebSphere
Application Server Plug-ins, and IHS (Web tier). He has a Bachelor of Science degree in
Software Engineering from EVTEK University of Applied Sciences.
Dennis Martins is an IT Specialist working from Integrated Technology Delivery SSO, Brazil.
He has 10 years of experience working in IT. He is an expert in IBM WebSphere Application
Server and is certified in WebSphere Application Server V5 and V6. He has been working
with WebSphere Application Server since V3.5 and mostly designs e-business solutions
focused on using the WebSphere product family. His areas of expertise include the
architecture, design, and development of J2EE applications, WebLogic Application Server,
Solaris, AIX®, and Windows® platforms. Dennis holds a degree in Computer Science, an MBA
in IT strategic management, and a specialization degree in software development with new
technologies from Pontifical Catholic University, Rio de Janeiro.
Joe Reinwald is a Course Developer and Instructor for WebSphere Education, a team within
the IBM Software Services for WebSphere organization. Over the past eight years, Joe has
developed courses for WebSphere Application Server Administration, WebSphere Portal
Administration, and WebSphere RFID Premises Server. Most recently, he has been the lead
technical developer for both the WebSphere Application Server V6 and V6.1 Problem
Determination courses. Joe has contributed to the development of several versions of the IBM
Certification Tests for both WebSphere Application Server and WebSphere Portal System
Administrators. In addition to development work, Joe has taught classes for WebSphere
Application Server Administration, WebSphere Portal Server Administration, and WebSphere
Application Server Problem Determination. Joe has delivered these classes to IBM
customers and Business Partners, and has also enabled other instructors to teach the
courses. Prior to joining IBM, Joe was a member of the Education Department at Transarc
Corporation, which was acquired by IBM in 1999. Joe holds a BS degree in Mathematics and
Physics from the University of Pittsburgh.
Mike Nelson, Global Account Executive IBM SW Partner Sales, Sun Microsystems
Daniel Edwin, Technical Account Manager - IBM, ISV Engineering, Sun Microsystems
Tim Fors, Install Architect, IBM Canada
Vinicius de Melo Patrão, Web Hosting Support Team Leader, ITD - Global Delivery, IBM Brazil
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Discover more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review book form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
In this chapter, we take a first look at WebSphere Application Server V6.1 and then,
throughout the remainder of this IBM Redbooks publication, show how it can be deployed and
optimized with one of the many operating systems it supports: the Solaris 10 Operating
System (OS).
The Solaris OS, developed by Sun Microsystems and the foundation of the OpenSolaris open
source project (see http://www.opensolaris.org), complements the IBM WebSphere
application environment by providing powerful and flexible infrastructure. The Solaris OS is
the leading operating environment to host demanding enterprise applications, including
WebSphere, in today’s challenging business environments. Solaris 10 is the most advanced
and open UNIX-based operating system that powers the most business-critical enterprise
systems. It delivers proven innovative features, security, performance, scalability, and
reliability, availability, and serviceability (RAS).
Based upon open standards, the Solaris OS is architected for distributed network computing
infrastructures, and has many features and functionalities to provide a highly reliable and
scalable foundation for business computing. Solaris is supported on over 900 different types
of x86 and SPARC systems from leading vendors, such as Sun, IBM, Fujitsu, HP, and Dell,
including AMD64 and Intel® 64-based systems. Sun guarantees application binary
compatibility for existing applications on forward releases, as well as source compatibility
between SPARC and x86 through the Solaris Application Guarantee Program (its terms and
conditions can be found at http://sun.com/solaris/guarantee).
The technology that powers WebSphere products is Java. Over the years, many software
vendors have collaborated on a set of server-side application programming technologies that
help build Web accessible, distributed, and platform-neutral applications. These technologies
are collectively branded as the Java 2 Platform, Enterprise Edition (J2EE) platform. This
contrasts with the Java 2 Standard Edition (J2SE™) platform, with which most clients are
familiar. J2SE supports the development of client-side applications with rich graphical user
interfaces (GUIs). The J2EE platform is built on top of the J2SE platform. J2EE consists of
application technologies for defining business logic and accessing enterprise resources, such
as databases, Enterprise Resource Planning (ERP) systems, messaging systems, e-mail
servers, and so forth.
The potential value of J2EE to clients is tremendous. Among the benefits of J2EE are:
An architecture-driven approach to application development helps reduce maintenance
costs and allows for construction of an information technology (IT) infrastructure that can
grow to accommodate new services.
Application development is focused on unique business requirements and rules, such as
security and transaction support. This improves productivity and shortens development
cycles.
Industry standard technologies allow clients to choose among platforms, development
tools, and middleware to power their applications.
Embedded support for Internet and Web technologies allows for a new breed of
applications that can bring services and content to a wider range of customers, suppliers,
and others, without creating the need for proprietary integration.
Another exciting opportunity for IT is Web services. Web services allow for the definition of
functions or services within an enterprise that can be accessed using industry standard
protocols that most businesses already use today, such as HTTP and XML. This allows for
easy integration of both intra- and inter-business applications that can lead to increased
productivity, expense reduction, and quicker time to market.
(Figure: WebSphere Application Server architecture overview. Clients reach applications through the Edge Components and IBM HTTP Server; the application server hosts J2EE, portlet, and SIP applications and connects through messaging queues and service providers to existing systems such as CICS, IMS, DB2, and SAP. Development tools include the Application Server Toolkit, Rational Application Developer, and Rational Web Developer.)
The application server is the key component of WebSphere Application Server, providing the
runtime environment for applications that conform to the J2EE 1.2, 1.3, and 1.4 specifications.
Clients access these applications through standard interfaces and APIs. The applications, in
turn, have access to a wide variety of external sources, such as existing systems, databases,
Web services, and messaging resources that can be used to process the client requests.
V6.1 extends the application server to allow it to run JSR 168 compliant portlets and Session
Initiation Protocol (SIP) applications written to the JSR 116 specification.
With the Base and Express packages, you are limited to single application server
environments. The Network Deployment package allows you to extend this environment to
include multiple application servers that are administered from a single point of control and
can be clustered to provide scalability and high availability environments.
WebSphere Application Server supports asynchronous messaging through the use of a JMS
provider and its related messaging system. WebSphere Application Server includes a fully
integrated JMS 1.1 provider called the default messaging provider. This messaging provider
complements and extends WebSphere MQ and application server. It is suitable for
messaging among application servers and for providing messaging capability between
WebSphere Application Server and an existing WebSphere MQ backbone.
WebSphere Application Server works with a Web server (such as the IBM HTTP Server) to
route requests from browsers to the applications that run in WebSphere Application Server.
Web server plug-ins are provided for installation with supported Web servers. The plug-ins
direct requests to the appropriate application server and perform workload management
among servers in a cluster.
WebSphere Application Server Network Deployment includes the Caching Proxy and Load
Balancer components of Edge Component for use in highly available, high volume
environments. Using these components can reduce Web server congestion, increase content
availability, and improve Web server performance.
Solaris can be deployed in diverse computing environments. Supported on over 900 Intel,
AMD™, and SPARC platforms, it can be used on systems ranging from a single-CPU mobile
or desktop computer to an enterprise-class server with hundreds of CPUs, yet exploit the price,
performance, scale, or availability features of each platform. Solaris is designed to run
efficiently on a single CPU desktop yet scale up to the largest Sun SPARC Enterprise Server
with 64 dual core CPUs. Solaris is built from a single code base, even though it is built for
different chip architectures, ensuring that system functionality is the same across different
platforms.
This compatibility is a key Solaris feature. Solaris provides a high level of compatibility that
lets customers leverage software investments over multiple generations of hardware and
Solaris versions, without “binary breaks” that render existing production binaries unusable.
Solaris provides an upwards-compatible application binary interface (ABI) engineered to
ensure that binaries compiled for older versions of the operating system continue to work on
later versions of that platform architecture. Sun provides a compatibility testing tool to detect
application use of nonstandard APIs not belonging to the ABI, and guarantees that
ABI-conforming applications will run on later versions of Solaris.
Compatibility is provided over time, so a binary compiled several years ago on an older
UltraSPARC machine and an older version of Solaris will work on the latest SPARC-family
processor and current Solaris, and over scale; applications running on a single-CPU SPARC
workstation or low-end server will work on a 144-CPU Sun Fire™ E25K. Permitting
customers to control when they upgrade their software portfolio, and minimizing the frequency
with which they have to upgrade their applications, is an important benefit for cost savings
and stability. Additionally, Solaris provides a common application programming interface (API)
for both x86 and SPARC, so moving an application between the two instruction set
architectures is typically no more than a re-compile. These considerations are directed
towards compiled programs in languages like C or C++. Applications in Java run in the Java
Virtual Machine, and are expected to run without change even across different processor
families.
Tip: From a Solaris console, issue the command man standards for further information
about standards in Solaris.
At the same time, Sun continues to provide industry-leading operating systems innovations in
Solaris. Solaris is designed in a modular fashion that permits innovation without invalidating
existing parts of the OS. Operating systems are a key component of system infrastructure.
Sun invests in research and development to invent new capabilities that add new value to the
Solaris OS and help solve customer problems. Just a few of the innovative features added to
Solaris 10 are:
Solaris Containers: A form of light-weight virtualization, known as Solaris Zones, that
creates private virtual environments without the impact seen in traditional virtual machine
implementations, permitting hundreds or even thousands of virtual environments on the
same Solaris instance.
DTrace: A breakthrough tool for real-time system performance and problem diagnosis,
which has made it possible for customers and Sun engineers to understand the dynamic
behavior of their systems, netting dramatic improvements in debugging and performance
improvement.
Solaris ZFS: A new file system that provides superior data integrity, simpler management,
capacity, and performance compared to traditional file systems.
Configurable privileges and role-based access control (RBAC): Lets the systems manager
control and audit access to system privileges, without the “all or nothing” privilege model
of traditional UNIX.
Predictive Self Healing: The Solaris Fault Management Architecture and Service
Management Facility work together to add robustness to Solaris environments by
providing the ability to proactively detect and mitigate hardware and software failures.
These features provide the reliability, availability, and serviceability needed for
mission-critical applications.
This is just a small subset of the innovations in Solaris, and Sun continues to do research to
create more functionality.
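For example, the following commands give a quick, minimal illustration of two of these features. The first lists the zones configured on a system, and the second is a DTrace one-liner that counts system calls by process name until you press Ctrl+C (both assume you are logged in with sufficient privileges):
# zoneadm list -cv
# dtrace -n 'syscall:::entry { @[execname] = count(); }'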
At the same time, Sun continues to preserve the ABI that provides upwards compatibility. This
makes Solaris the ideal robust, stable environment platform needed for production computing,
while providing the innovation and added value needed for current and future business needs.
WebSphere Application Server - Express is unique from the other packages in that it is
bundled with an application development tool. Although there are WebSphere Studio and
Rational® Developer products designed to support each WebSphere Application Server
package, normally they are ordered independently of the server. WebSphere Application
Server - Express includes the Rational Web Developer application development tool. It
provides a development environment geared toward Web developers and includes support for
most J2EE 1.4 features with the exception of Enterprise JavaBeans™ (EJB™) and J2EE
Connector Architecture (JCA) development environments. However, keep in mind that
WebSphere Application Server - Express V6 does contain full support for EJB and JCA, so
you can deploy applications that use these technologies.
This package includes two tools for application development and assembly:
The Application Server Toolkit, which has been expanded in V6.1 to include a full set of
development tools. The toolkit is suitable for J2EE 1.4 application development as well as
the assembly and deployment of J2EE applications. It also supports Java 5 development.
In addition, the toolkit provides tools for the development, assembly, and deployment of
JSR 116 SIP and JSR 168 portlet applications.
This package also includes a trial version of Rational Application Developer, which
supports the development, assembly, and deployment of J2EE 1.4 applications.
To avoid confusion with the Express package in this IBM Redbooks publication, we refer to
this as the Base package.
The addition of Edge components provides high performance and high availability features.
For example:
The Caching Proxy intercepts data requests from a client, retrieves the requested
information from the application servers, and delivers that content back to the client. It
stores cacheable content in a local cache before delivering it to the client. Subsequent
requests for the same content are served from the local cache, which is much faster and
reduces the network and application server load.
The Load Balancer provides horizontal scalability by dispatching HTTP requests among
several, identically configured Web server or application server nodes.
Packaging summary
Table 1-1 shows the features included with each WebSphere Application Server packaging
option available for Solaris.
Database: the Express package includes IBM DB2 Universal Database™ Express V8.2; the Base
package includes IBM DB2 Universal Database Express V8.2 (development use only); and the
Network Deployment package includes IBM DB2 UDB Enterprise Server Edition V8.2 for
WebSphere Application Server Network Deployment.
Note: Not all features are available on all platforms. See the System Requirements Web
page for each WebSphere Application Server package for more information.
Note: This packaging may be different (for example, without the SPARC DVD) when IBM
distributes its own Solaris media kit for x86-based IBM System x™ servers and
BladeCenter® servers.
You can also download Solaris DVD images at no charge from http://sun.com/solaris.
Sun also offers a kit with additional disks, including Sun Java™ System products. The Solaris
10 OS includes licenses to run some of these products. For example, Sun Java System
Directory Server is the industry’s most widely deployed, general-purpose, LDAP-based
directory server. Solaris 10 includes a license for 200,000 directory entries. For more details,
look at:
http://sun.com/software/solaris/what_you_get.jsp
IMS™: IMS V8 or V9
Sun Java System Directory Server Enterprise Edition: 6.0, 6.1, and 6.2
Your system should have the following minimum hardware requirements to support
WebSphere Application Server:
SPARC, AMD Opteron™ or Intel EM64T, or compatible processor at 1 GHz or faster.
1 GB of physical memory.
DVD-ROM drive.
The disk space requirements defined in 2.1.3, “Procedure for preparing the operating
system” on page 17.
You can get the up-to-date information about these Sun servers at http://sun.com/servers.
For more information, refer to Implementing Sun Solaris on IBM BladeCenter Servers, found
at:
http://www.redbooks.ibm.com/abstracts/redp4259.html
WebSphere Application Server is available with a 32-bit or 64-bit JVM™ on SPARC systems,
while a 64-bit JVM is available on x64 (AMD64 and Intel 64) systems.
Table 1-5 WebSphere Application Server and the Java platform versions (columns: WebSphere Application Server, J2EE, J2SE (JDK), Comment)
Note that the JDK update and version in each of these WebSphere Application Server
versions can be upgraded when IBM releases updates called “Fix Pack” and “Service
Release” (SR).
WebSphere Application Server for the Solaris OS is bundled with Sun’s JDK, with some
modifications made by IBM. The JDK also provides the Java Runtime
Environment (JRE™) that includes the Java Virtual Machine (JVM). The Sun JDK bundled
with WebSphere Application Server slightly differs from the standard Sun JDK that can be
publicly downloaded from http://java.sun.com. The differences are in the following areas:
Object Request Broker (ORB) libraries
XML Processing libraries
Security Framework
For example, executing the java -version command from the installed directory of
WebSphere Application Server v6.1 produces the following output:
Java(TM) 2 Runtime Environment, Standard Edition (IBM build 1.5.0_06-erdist-20060404 20060511)
Java HotSpot(TM) Server VM (build 1.5.0_06-erdist-20060404, mixed mode)
IBM Java ORB build orb50-20060511a (SR2)
XML build XSLT4J Java 2.7.4
XML build IBM JAXP 1.3.5
XML build XML4J 4.4.5
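You can reproduce this output with the JDK that is bundled under the installation root. A minimal example, assuming the default root-installed location of /opt/IBM/WebSphere/AppServer (substitute your own installation root if it differs):
bash# /opt/IBM/WebSphere/AppServer/java/bin/java -version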
Attention: The Solaris OS installation process is beyond the scope of this book; however,
detailed procedures are available at http://docs.sun.com/app/docs/coll/1236.1
The Solaris OS lets you install a subset of the full operating system, by specifying which
Solaris "Software Groups" to install. This makes it possible to reduce the disk space or time
needed to do an install. The Solaris software groups are collections of Solaris packages. The
current software groups in Solaris 10 today include:
Entire Distribution plus OEM support
Entire Distribution
Developer System Support
End User System Support
Core System Support
Reduced Networking Core System Support
“Reduced Networking Core System Support” contains the minimal number of packages and
requires the least amount of disk space while “Entire Distribution Plus OEM Support” contains
all available packages for the Solaris OS requiring the most amount of disk space. Detailed
descriptions of the Solaris Software Group and recommended space are available in the
publication Solaris 10 Installation Guide: Basic Installations, found at:
http://docs.sun.com/app/docs/doc/817-0544
Figure 2-1 on page 17 shows an example screen for choosing the necessary “Software
Groups” during the Solaris OS installation.
Figure 2-1 Selecting Solaris Software Group during the Solaris OS installation
Each software group includes support for different functions and hardware drivers:
For an initial Solaris installation, select “Entire Distribution” as the minimum Software Group
to support WebSphere Application Server V6.1.
If you have a Solaris system pre-installed with a Software Group smaller than “Entire
Distribution”, upgrade the Solaris installation to this group.
Once the Solaris OS has been properly installed and configured, you can boot up the Solaris
system and prepare the operating system to be ready for WebSphere Application Server
installation and configuration.
Attention: If you are using Solaris 10 8/07 or later, the Mozilla Firefox browser is
located in the default location /usr/bin/firefox.
If your Web browser is not in your command path, you should set the location of it using
the command that identifies the actual location of the browser. For example, if the Mozilla
package is in the /usr/sfw/bin/mozilla directory, use the following command:
BROWSER=/usr/sfw/bin/mozilla
export BROWSER
3. Optional: In the previous step, the browser requires that you run it in a graphical window. If
your Sun server does not have a graphical display terminal, you can either set your display
to a remote X display terminal or add X server capability to the Sun server that you are
working with using products such as VNC Server (http://www.realvnc.com).
To set your display to a remote X display terminal, run the following commands:
DISPLAY=your_X_display_terminal:ID
export DISPLAY
4. Stop any other WebSphere Application Server-related Java processes on the machine
where you are installing the product.
5. Stop any Web server process such as the IBM HTTP Server.
6. Provide adequate disk space:
The Network Deployment product requires the following disk space:
– 730 MB for the app_server_root directory before creating profiles
The installation root directory includes the core product files. This size does not include
space for profiles or applications. Profiles require 40 MB of temp space in addition to
the sizes shown. Profiles have the following space requirements:
• 30 MB for the Deployment manager profile
This size does not include space for Sample applications that you might install. The
size also does not include space for applications that you might deploy.
• 200 MB for an Application Server profile with the Sample applications
This size does not include space for applications that you might develop and install.
• 10 MB for an unfederated custom profile
This size does not include space for applications that you might develop and install.
The requirement does include space for the node agent. However, you must
federate a custom profile to create an operational managed node.
After federating a custom profile, the resulting managed node contains a functional
node agent only. Use the deployment manager to create server processes on the
managed node.
– 100 MB for the /tmp directory
The temporary directory is the working directory for the installation program.
If the /tmp directory does not have enough free space, the installation program stops
the installation and displays a message such as Prerequisite checking has failed.
Not enough space.
Attention: The installation wizard for each component displays required space on the
confirmation panel before you install the product files and selected features. The
installation wizard also warns you if you do not have enough space to install the product.
If you plan to migrate applications and the configuration from a previous version, verify that
the application objects have enough disk space. As a rough guideline, plan for space equal
to 110 percent of the size of the application objects:
For Version 4.0.x: The size of enterprise archive (EAR) files
For Version 5.0.x: The size of EAR files
7. IBM recommended parameters for the Solaris OS are shown in Example 2-1. These
kernel parameters have been part of WebSphere Application Server since Version 5 for
embedded messaging based on the WebSphere MQ product. With Service Integration
Bus (SIB) for Java messaging, WebSphere Application Server V6.1 departed from
WebSphere Application Server V5. SIB does not require modifications of these
recommended parameters in /etc/system except for rlim_fd_cur.
Current session: Set the necessary parameter(s) in your current shell session before you execute your
application, such as WebSphere Application Server. This works for the current session only
and does not persist for the next session. The relevant command is:
# prctl -n process.max-file-descriptor -r -v 1024 $$
System wide: Set the necessary parameter(s) system wide in /etc/project. You may not want to do this
because you are increasing the default value(s) for the entire system. The relevant command is:
# projmod -sK 'process.max-file-descriptor=(privileged,1024,deny)' system
Root user: Set the necessary parameter(s) for the root user in /etc/project. This applies to all processes
owned by the root user. The relevant command is:
# projmod -sK 'process.max-file-descriptor=(privileged,1024,deny)' user.root
Non-root user: Set the necessary parameter(s) for the WebSphere Application Server user, who runs the server
processes, in /etc/project. This is a best practice because you are changing the default
value(s) specifically for the WebSphere Application Server user, based upon the IBM
recommendation for the WebSphere processes. The relevant commands are:
# projadd user.wasuser1
# projmod -sK 'process.max-file-descriptor=(privileged,1024,deny)' user.wasuser1
If you use Solaris Containers, you can also set the kernel parameters at the zone
configuration level, as explained in “Resource control for Solaris Zones” on page 148.
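For illustration, the following zonecfg sketch sets a shared memory resource control for a whole zone. The zone name waszone1 and the 4 GB limit are illustrative only, and the zone.max-shm-memory control is available in Solaris 10 8/07 and later:
# zonecfg -z waszone1
zonecfg:waszone1> add rctl
zonecfg:waszone1:rctl> set name=zone.max-shm-memory
zonecfg:waszone1:rctl> add value (priv=privileged,limit=4294967296,action=deny)
zonecfg:waszone1:rctl> end
zonecfg:waszone1> exit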
8. Verify that the prerequisites and corequisites are at the required release levels.
Although the installation wizard checks for prerequisite operating system patches with the
prereqChecker application, you should review the prerequisites on the supported
hardware and software Web site at
http://www-1.ibm.com/support/docview.wss?rs=&uid=swg27006921 and ensure that your
system meets them prior to launching the installation wizard.
Refer to the documentation for non-IBM prerequisite and corequisite products to learn how
to migrate to their supported versions.
The above steps prepare the operating system for installing the product. After the operating
system is prepared, you can install the WebSphere Application Server product.
Because the IPC facilities are now controlled by resource controls, their configurations can be
modified while the system is running. Many applications that previously required system
tuning to function properly may now run without any tuning because of increased default
values and the automatic allocation of resources.
Most new values of the default kernel parameters are typically sufficient and a good baseline
to support WebSphere Application Server. In Solaris 10, many kernel values have been
increased to accommodate an application’s increased demand for more system resources.
For installation and verification tests, you can verify the values of these tunable parameters
(installed as “root” user) and achieve successful installation and execution of WebSphere
Application Server.
Prior to Solaris 10, the sysdef command was used to provide the IPC Module related
settings:
bash-# sysdef -i
However, when this command is executed on a Solaris 10 system, the output shows that these
modules do not have system-wide limits. Example 2-2 shows a portion of the output from the
sysdef -i command on Solaris 10.
To obtain the IPC and other settings for the current shell environment where the WebSphere
Application Server is to be installed, use the following command:
bash-3.00# prctl $$
The prctl command with the $$ argument (the process ID of the current shell) lists all the
System V IPC and file descriptor settings for the current shell that will be applied to any
process started within that shell. To make any changes, change the settings, close the
current shell, and log back in for the new settings to take effect.
You can use the id command to get the current project id of the root user:
bash-# id -p
uid=0(root) gid=0(root) projid=1(user.root)
Table 2-2 lists the comparison between the IBM recommended parameters and the new
tunable kernel parameters, such as SYS V IPC, in the Solaris 10 for WebSphere Application
Server installation. It shows the differences between the original /etc/system file and the
Solaris 10 /etc/project file. It also lists the new default values and obsoleted parameters.
As you go through the IBM recommended /etc/system settings in Solaris 10, ignore the
obsoleted parameters; thus, you only need to modify the following parameters:
project.max-shm-memory
project.max-shm-ids
project.max-sem-ids
process.max-sem-nsems
process.max-sem-ops
process.max-file-descriptor
If a parameter's current value is less than the recommended threshold, update the current
value with the recommended value. By using the projmod command, the settings are stored in
the /etc/project file. The commands to modify the resource control parameters for the root
user are shown in Example 2-3.
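Example 2-3 is not reproduced here; as a sketch, it consists of projmod commands of the following form for the user.root project. The limit values shown below are illustrative placeholders only; substitute the values recommended by IBM for your configuration:
# projmod -sK 'project.max-shm-memory=(privileged,4gb,deny)' user.root
# projmod -sK 'project.max-shm-ids=(privileged,1024,deny)' user.root
# projmod -sK 'project.max-sem-ids=(privileged,1024,deny)' user.root
# projmod -sK 'process.max-sem-nsems=(privileged,512,deny)' user.root
# projmod -sK 'process.max-sem-ops=(privileged,512,deny)' user.root
# projmod -sK 'process.max-file-descriptor=(privileged,1024,deny)' user.root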
You can now examine the new tunable parameter settings in the /etc/project file. Example 2-4
shows the new parameter settings for the root user. These project setting modifications
should be performed on each zone where WebSphere is being installed. See 5.1.3,
“Solaris Containers” on page 118 for a discussion of Solaris zones.
You can also make these changes using the prctl command, but the settings will not persist
through a system reboot:
bash-3.00# prctl -n project.max-shm-memory -r -v 4gb -i project 1
In Solaris 10, you can enable logging using rctladm so the system will notify you when you
are running out of these resources. For example, if a user wants to be notified by the system
when it is running out of process.max-file-descriptor, then the user can issue the following
command from the shell prompt:
bash-3.00# rctladm -e syslog process.max-file-descriptor
If the system happens to run out of file descriptors, it will be reported in the
/var/adm/messages file, as shown in Example 2-5.
This can also be achieved by directly updating the /etc/rctladm.conf file. To be notified
about other resources, enable syslog for each resource control in the same way as for the file
descriptor, and then examine the /var/adm/messages file for the notifications.
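To check which resource controls currently have the syslog action enabled, you can list them with rctladm; for example (filtering the listing for the file descriptor control):
# rctladm | grep max-file-descriptor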
You can set or get limitations on the system resources available to the current shell and its
descendents (for example, file-descriptors limits) using the ulimit command.
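For example, the first command below displays the current file descriptor limit for the shell, and the second raises it to 1024, matching the IBM recommendation used earlier in this chapter:
# ulimit -n
# ulimit -n 1024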
Network interfaces can also be multiplexed as multiple logical devices by running the
following command (substitute your IP address and netmask):
# ifconfig e1000g1:1 plumb ip_address netmask net_mask up
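For example, with a hypothetical address and netmask (use values from your own network plan):
# ifconfig e1000g1:1 plumb 192.168.10.21 netmask 255.255.255.0 up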
Example 2-7 Solaris package query for WebSphere Application Server V6.1
bash# pkginfo -l WSBAA61 WSBAA61LI
PKGINST: WSBAA61
NAME: IBM WebSphere Application Server
CATEGORY: application
ARCH: sparc
VERSION: 6.1.0.0.DSP=6.1.0.0
BASEDIR: /opt/IBM/WebSphere/AppServer
VENDOR: IBM
DESC: IBM WebSphere Application Server V6.1
STATUS: completely installed
PKGINST: WSBAA61LI
NAME: LAP Component
CATEGORY: application
ARCH: sparc
VERSION: 6.1.0.0.DSP=6.1.0.0
BASEDIR: /opt/IBM/WebSphere/AppServer
VENDOR:
DESC: LAP
STATUS: completely installed
Before you can begin installation, preparation tasks must be performed on the underlying
Solaris system: create the necessary user ID, group ID, and home directory. Example 2-8
shows how to create the home directory, add a group
and user ID, project entry, and file descriptor limit per IBM recommendation for the
WebSphere Application Server user.
Example 2-8 Preparing a non-root user for WebSphere Application Server installation
# mkdir -p /ex1/home/wasuser1
# groupadd wasgrp1
# useradd -c "WAS User" -d "/ex1/home/wasuser1" \
-g wasgrp1 -s /usr/bin/bash wasuser1
# chown -R wasuser1:wasgrp1 /ex1/home/wasuser1
# passwd wasuser1
Important: For the non-root user to be able to perform successful profile creation in the
root installed environment, you must first grant write permission of files and directories to
this user. Refer to:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.nd.doc/info/ae/ae/tpro_nonrootpro.html
Example 2-9 Preparing a non-root user for WebSphere Application Server execution
# projadd user.wasuser1
# projmod -sK 'process.max-file-descriptor=(privileged,1024,deny)' user.wasuser1
Example 2-10 Preparing the root user for WebSphere Application Server execution
# projmod -sK 'process.max-file-descriptor=(privileged,1024,deny)' user.root
The user also must have full read and write privileges in the WebSphere Application Server
profile and log directories. Detailed instructions on how to limit the runtime privileges are
provided in 5.6, “Process Rights Management” on page 157.
2.4 Default product locations when the root user installs the product
The root user is capable of registering shared products and installing into system-owned
directories. The following default directories shown in Table 2-3 are system-owned directories.
These file paths are default locations. You can install the product and other components in
any directory where you have write access. You can create profiles in any valid directory
where you have write access. Multiple installations of WebSphere Application Server Network
Deployment products or components, of course, require multiple locations.
Table 2-3 Default product file system locations for the root installer (columns: Variable, Description, File system location)
Note: The cip_uid variable is the CIP unique ID generated during creation of the build
definition file. You can override the generated value in the Build definition wizard. Use a
unique value to allow multiple CIPs to install on the system.
Table 2-4 Default product file system locations for the non-root installer (columns: Variable, Description, File system location)
Note: The cip_uid variable is the CIP unique ID generated during creation of the build
definition file. You can override the generated value in the Build definition wizard. Use a
unique value to allow multiple CIPs to install on the system.
was_root: /opt/IBM/WebSphere/AppServer
For more information about the planning and designing of a WebSphere Application Server
V6 installation, go to:
http://www.redbooks.ibm.com/abstracts/redp4305.html?Open
Tip: If you have problems starting launchpad, verify that a browser is installed on your
system and that its location has been exported. Locate your browser binary, export its
location, and start over. For example:
bash# export BROWSER=/usr/sfw/bin/mozilla
– Alternatively, you can start the installation wizard directly with the install command:
bash# ./WAS/install
Tip: If you have problems starting the installation wizard due to a Java Runtime Environment
(JRE) error, you can force the installation wizard to use its own JRE by using the following
command:
./WAS/install -is:javahome /PATH_TO_MEDIA/WAS/java/jre
6. The Installation wizard starts and you should see the Welcome window of the installation
wizard, as shown in Figure 3-3 on page 35.
7. Click Next, and the window shown in Figure 3-4 should appear.
8. Read and accept the IBM and third-party license agreement by selecting the I accept
both the IBM and the non-IBM terms radio button shown in Figure 3-4 and then click
Next to continue the installation.
The installation wizard checks for a previous installation at the same product level. The
wizard looks for an existing Version 6.1 installation. If a previous installation is detected,
the wizard shows an Existing installation panel, where you can:
– Add features to the existing installation.
– Perform a new installation to another directory.
– Perform an upgrade of a trial installation or Express installation to the full product.
The following steps assume that you do not have an existing installation that needs to be
upgraded or updated with additional features. For more information about upgrading or
migration issues, refer to the WebSphere Application Server V6 Migration Guide,
SG24-6369, and the WebSphere Application Server Information Center.
10.Click Next without the Install the sample applications check box selected (Figure 3-6 on
page 37).
Tip: For better performance, we recommend that you do not install Sample applications in
your production environment. By omitting the Samples, you can improve application server
startup time by 60 percent and save 15 percent of disk space on each application server
installation. You can also save up to 30 percent of processing footprint (based on a
maximum heap size of 256 MB).
11.Specify the destination of the installation root directory to the field using the information in
Table 3-1 on page 32 (Figure 3-7 on page 38). Click Next.
Attention:
Leaving the destination of the installation root field empty prevents you from continuing.
Do not use symbolic links as the destination directory because they are not supported.
Spaces are not supported in the name of the destination installation directory in the
Sun Solaris environment.
The installer checks for the required space before calling the installation wizard. If you do
not have enough disk space, cancel the installation program and acquire more disk space.
The disk space required for the base WebSphere Application Server product is:
– 930 MB for the destination of installation root directory
– 100 MB for the /tmp directory
If both of these directories are in the same file system, the total amount of free space
needed in that file system is 1030 MB. For full disk space requirements, refer to 2.1.3,
“Procedure for preparing the operating system” on page 17 for more information.
12.To create a stand-alone application server profile, choose the Application Server
environment, as shown in Figure 3-8 and click Next.
Attention: If you enable security at this point, you will get a file based user repository. If
you are planning to use another type of user repository, you should enable administrative
security after the installation or profile creation is complete. See 8.1.1, “Enabling security”
on page 269 for more information about enabling administrative security.
Tip: In environments where you plan to have multiple stand-alone application servers, the
security policy of each application server profile is independent of the others. Changes to
the security policies in one application server are not synchronized with other profiles.
Remember the user name and password you type here. You cannot log onto the
application server’s administrative console without it. You cannot stop the Application
Server from the command line or use the Application Server at all unless you have the user
name and password.
The Installation wizard creates the uninstaller program and then displays a progress
window that shows which components are being installed. This may take a little time. At
the end of the installation, the wizard displays the Installation completion window
(Figure 3-11 on page 41).
15.Click the Exit link on the launchpad’s left hand side as shown in Figure 3-12 on page 42 to
exit the launchpad application.
You do not need to install the Application Client unless an application that you are
deploying was designed to run as a client application.
16.Verify the installation by examining the completion window and the log.txt file located in
the <was_root>/logs/install directory.
You should see the INSTCONFSUCCESS entry at the end of the log.txt file after a successful
installation.
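A quick way to check for this entry from the command line, assuming the default installation root (substitute your own <was_root> if it differs), is:
bash# grep INSTCONFSUCCESS /opt/IBM/WebSphere/AppServer/logs/install/log.txt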
You can also select the Launch the First steps console check box in the installation
wizard’s completion window shown in Figure 3-11 on page 41 before clicking the Finish
button. For additional details about using the First steps console, refer to 10.3.1,
“Installation problem determination” on page 418.
If problems occur, consult the following applicable logs in Table 3-2 on page 43. You can also
refer to WebSphere Application Server V6.1 installation problem determination, REDP-4305,
found at:
http://www.redbooks.ibm.com/abstracts/redp4305.html?Open
This procedure results in the installation wizard installing WebSphere Application Server into
the installation root directory. The installation wizard creates a profile named AppSrv01 that
provides the runtime environment for the server1 application server. Further configuration is
not necessary at this time. However, you can create additional stand-alone application
servers with the Profile Management tool. For more information, refer to 4.2.2, “Creating
profiles with Profile Management Tool” on page 60.
Instead of displaying an installation wizard interface, the silent installation causes the
installation program to read all of your responses from a file that you provide. To specify
non-default options during the installation, you must use a customized response file. To install
silently, you must accept the license agreement in the agreement option.
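As a sketch, a silent installation is typically started from the product media with the response file passed to the installer through the -options flag; the response file path shown here is illustrative:
bash# ./WAS/install -options /tmp/my_response_file.txt -silent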
If problems occur, consult the applicable logs in Table 3-2 on page 43. Correct the problem
and restart the installation from Step 6.
The uninstaller program removes registry entries, uninstalls the product, and removes all
related features. The uninstaller program does not remove log files in the installation root
directory.
The following procedure uninstalls the IBM WebSphere Application Server product from
Solaris:
1. Log in to your Solaris system as root.
2. Stop all application server processes in all profiles on the machine, for example:
bash# cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
bash# ./stopServer.sh server1
Attention: Remember, if you have enabled administrative security on the profile, you have
to add two additional parameters to the ./stopServer.sh command:
bash# ./stopServer.sh server1 -username <admin_username> -password <admin_password>
3. After all application server processes are stopped, change the directory to the installation
root’s uninstall directory, for example:
bash# cd /opt/IBM/WebSphere/AppServer/uninstall
4. Issue the uninstall command:
bash# ./uninstall
Tip: You can also issue the uninstall command with a silent parameter to use the wizard
without the graphical user interface:
bash# ./uninstall -silent (This is the default behavior, and it removes the product and
all profiles.)
OR
bash# ./uninstall -silent -OPT removeProfilesOnUninstall="false" (Leaves all
created profiles intact during the uninstallation process.)
6. Click the Next button in the Welcome window shown in Figure 3-13.
7. Click the Remove all profiles check box (Figure 3-14 on page 47) if you want to do a
complete uninstallation and click Next.
Note: When using the wizard, the window allows you to choose whether or not the
uninstaller deletes all profiles before it deletes the core product files. By default, all profiles
will be deleted, but this option can be deselected on this window (Figure 3-14).
Attention: You can always click the Cancel button to cancel the uninstallation. Be aware
that this is the last chance to cancel the uninstallation. After you click the Next button
shown in Figure 3-15, the uninstallation of WebSphere Application Server starts.
9. After the uninstallation is over, you see the uninstallation completion window (Figure 3-16
on page 49).
If you plan to reinstall, manually uninstall any remaining product files. Remove all artifacts of
the product so you can install it into the same installation root directory.
Figure 3-17 Uninstallation wizard: Uninstallation failure due to existing WebSphere Application Server
processes
Click Cancel in Figure 3-17 and then click Yes in Figure 3-18 to exit the uninstallation
wizard.
After you have exited the uninstallation wizard, stop your currently running Application
Server instances and restart the uninstallation from Step 3.
All packages in WebSphere Application Server V6.1 continue to offer graphical and silent
installation options and also use common logging and tracing for consistency across
packages.
The Version 6.1 installer provides a streamlined installation process that makes it faster and
easier to set up a complete environment. A profile can now be created directly by the installer,
rather than requiring you to launch the Profile Management tool after installation. You even
have the option to create a deployment manager and an already-federated node in a single
step during the installation.
Note: See the WebSphere Application Server V6 Migration Guide, SG24-6369 for
information about migration from previous versions.
A silent installation uses the Installation wizard to install the product in silent mode, without
the graphical user interface. Instead of displaying a wizard interface, the silent installation
causes the installation program to read all of your responses from a file that you provide.
<was_home>/logs/manageprofiles/<profile_name>_create.log
Traces all events that occur during the creation of the named profile. Created when using the Profile Management tool or the manageprofiles command. Indicators: INSTCONFFAIL (total profile creation failure), INSTCONFSUCCESS (successful profile creation), INSTCONFPARTIALSUCCESS (profile creation errors occurred but the profile is still functional; additional information identifies the errors).
<was_home>/logs/manageprofiles/<profile_name>_delete.log
Traces all events that occur during the deletion of the named profile. Created when using the Profile Management tool or the manageprofiles command. Indicators: INSTCONFFAIL (total profile deletion failure), INSTCONFSUCCESS (successful profile deletion), INSTCONFPARTIALSUCCESS (profile deletion errors occurred but the profile is still functional; additional information identifies the errors).
<was_home>/logs/pmt.log
Logs all profile creation events that occur when using the Profile Management tool. Indicators: INSTCONFFAIL (total profile creation failure), INSTCONFSUCCESS (successful profile creation), INSTCONFPARTIALSUCCESS (profile creation errors occurred but the profile is still functional; additional information identifies the errors).
WebSphere Application Server V6.1 files are divided into two categories:
Product files: Product files include the application binaries needed to run the Application
Server.
User files: User files contain information used by the Application Server, such as defined
variables, resources, and log files.
The WebSphere Application Server installation process simply lays down a set of core
product files required for the runtime processes. After installation, you need to create one or
more profiles that define the runtime to have a functional system. The core product files are
shared among the runtime components defined by these profiles.
With the Network Deployment package, you have the option of defining multiple application
servers with central management capabilities, as summarized in Figure 4-1 on page 56. The
administration domain is the cell, consisting of one or more nodes. Each node contains one or
more application servers and a node agent that provides an administration point managed by
the deployment manager.
The deployment manager can be located on the same machine as one or more of the
application servers. This would be a common topology for single machine development and
testing environments. In most production topologies, we recommend that the deployment
manager be placed on a separate dedicated machine.
The basis for this runtime environment starts with the deployment manager that provides the
administration interface for the cell. As you would expect, the deployment manager is defined
by a deployment manager profile.
(Figure content summarized: each node in the cell has a node agent, and the application servers within the nodes are created via the administrative console.)
Note: You can also create stand-alone application servers with the Network Deployment
package, though you would most likely do so with the intent of federating that server into a
cell for central management.
Types of profiles
We mentioned the types of profiles available for defining the runtime. In the following sections,
we take a closer look at these profiles.
Deployment manager profile
In a Network Deployment environment, you should create one deployment manager profile.
This gives you:
A cell for the administrative domain
A node for the deployment manager
A deployment manager with an administrative console
No application servers
Custom profile
A custom profile is an empty node, intended for federation to a deployment manager. This
type of profile is used when you are building a distributed server environment. Use a custom
profile in the following way:
1. Create a deployment manager profile.
2. Create one custom profile on each node on which you will run application servers.
3. Federate each custom profile to the deployment manager, either during the custom profile
creation process or later by using the addNode command.
4. Create new application servers and clusters on the nodes from the administrative console.
Cell profile
The cell profile, new in V6.1, allows you to quickly set up a distributed server environment on
a single system.
A cell profile is actually a combination of two profiles: a deployment manager profile and an
application server profile. The application server profile is federated to the cell. The
deployment manager and application server reside on the same system. This type of profile
lets you get a quick start with a distributed server environment and is especially useful for test
environments that typically have all nodes on one test system.
In addition to the traditional directories under the <was_home> directory (bin, config,
installedApps, and so on), you now have a profiles directory containing a subdirectory for
each profile that you create using the default home location. The directory structure for
each profile resembles the primary structure; in other words, there are bin, config,
installedApps, and other directories required for a unique runtime under each profile.
However, profiles can be stored in any folder, so we suggest storing them in a more
accessible structure (by default, the profile directories are nested at least six levels deep).
We refer to the root of each profile directory (by default <was_home>/profiles/profile_name)
as <profile_home>.
Example 4-2 assumes that a profile named itsoAppServer01 exists.
Why do we emphasize this point? If you enter commands while in the <was_home>/bin
directory, they are executed against the runtime defined by the default profile. The default
profile is determined by the following items:
The profile was defined as the default profile when you created it. The last profile specified
as the default takes precedence. You can also use the manageprofiles command to
specify the default profile.
If you have not specified the default profile, it will be the first profile you created.
To make sure command-line operations are executed for the correct runtime, you need to do
one of two things:
Specify the -profileName option when using a command and execute the command from
the <was_home>/bin directory.
Execute the command from its <profile_home>/bin directory.
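For example, assuming the default installation root used elsewhere in this chapter and the itsoAppServer01 profile from Example 4-2 (an illustrative sketch, with server1 being the default server in that profile), either of the following invocations starts the server against that profile:
bash# /opt/IBM/WebSphere/AppServer/bin/startServer.sh server1 -profileName itsoAppServer01
bash# /opt/IBM/WebSphere/AppServer/profiles/itsoAppServer01/bin/startServer.sh server1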
An application server profile is created during the installation of Express and Base. With
Network Deployment, you have the option of creating a profile of any type, including an
application server profile.
Create an application server profile on that system. Since you have an application server
automatically with the Base and Express installation, you only need to do this if you want an
additional stand-alone server environment.
Method 1
This method assumes that you do not have a stand-alone application server to federate, but
instead will create application servers from the deployment manager. This gives you a little
more control over the characteristics of the application servers during creation, including the
server name (all application servers created with the application server profile are named
server1). You can also create an application server, customize it, and then use it as a
template for future application servers you create. If you are using clustering, you can create
the cluster and its application servers as one administrative process.
When you create an application server this way, you do not automatically get the sample
applications, but can install them later if you want.
Method 2
This method assumes you will federate an application server profile to the cell. With the
application server profile, you have an existing application server (server1) and might have
applications installed, including the sample applications and any user applications you have
installed. Do the following steps:
1. Install Network Deployment on the server. If this is a multiple machine install (deployment
manager on one and application servers on one or more separate machines), install the
product on each machine.
2. Create a deployment manager profile on the deployment manager machine and start the
deployment manager.
3. Create an application server profile on the application server machine and start the
application server.
4. Open the deployment manager’s administrative console and add the node defined by the
application server profile to the cell.
5. Federating the node deletes the application server's own cell and adds the node to the
deployment manager cell. If you want to keep applications that have been installed on the
server, be sure to specify this when you federate the node by selecting the Include
applications check box in the administrative console. Alternatively, use the -includeapps
option with the addNode command if you are federating the node from the command line,
as shown in the example after this list.
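The following is a minimal sketch of federating from the command line, assuming the default installation root and the default application server profile name (AppSrv01); replace the deployment manager host name and SOAP port with your own values:
bash# cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
bash# ./addNode.sh <dmgrhost> <dmgr_soap_port> -includeapps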
The new node agent is started automatically by the federation process, but you need to start
the application server manually.
Important: Note that the Profile Management Tool is not available on 64-bit platforms. On
these platforms, manageprofiles.sh can be used instead. For details about the manageprofiles
command, see 4.3.1, “Using the manageprofiles command” on page 90.
The Profile Management Tool replaces the Profile Creation Tool from previous versions. It is
based on the Eclipse Rich Client Platform, as opposed to the previous Profile Creation Tool,
which was based on InstallShield Multi Platform.
The Profile Management Tool provides multiple profile templates, including the new cell
template, which has the ability to create a cell in a single step. It allows you to select a variety
of features during the profile creation, including the ability to enable administrative security
out of the box. It also offers enhanced port conflict resolution functionality and allows you to
optionally create a Web server definition at profile creation time.
The first steps are the same, regardless of the type of profile you will create. You can start the
Profile Management Tool in the following way:
1. Use the platform-specific command in the <was_home>/bin/ProfileManagement directory.
For example, for Solaris, use pmt.sh.
2. Use the check box presented at the end of the installation wizard (directly after installation)
to launch the Profile Management Tool.
3. When you start the wizard, the first window you see is the Welcome window. Click Next to
select the type of profile you will create, as shown in Figure 4-2.
The rest of the wizard varies, depending on the type of profile you are creating. The steps to
create each type of profile are discussed more in the following sections.
At the end of the Profile Management Tool, you have the opportunity to start the First steps
interface. This interface helps you start the deployment manager or application server and
has other useful links, such as opening the administrative console, migration help, starting the
Profile Management Tool, and installation verification.
Each profile you create has its own First steps program located here:
<profile_home>/firststeps/firststeps.sh
If you choose not to start the First steps program at the completion of the wizard, you can
start it later from this location.
You will always have two options when using the Profile Management Tool to create a profile.
The “Typical” path will determine a set of default values to use for most settings without giving
you the option to modify them. The “Advanced” path lets you specify values for each option.
Attention: The Profile Management Tool is mostly used for creating profiles and cannot
perform functions like delete, listProfiles, or getName. Only the manageprofiles command
can perform these functions. For more details, refer to 4.3, “Managing profiles” on page 90
The following summarizes the default settings of the Typical path and the choices available on the Advanced path when creating a deployment manager profile:
Typical: The administrative console is deployed by default.
Advanced: You can choose whether to deploy the administrative console. We recommend that you do so.
Typical: The profile name is Dmgrxx by default, where xx is 01 for the first deployment manager profile and increments for each one created. The profile is stored in <was_home>/profiles/Dmgrxx.
Advanced: You can specify the profile name and its location.
Typical: The cell name is <hostname>Cellxx, the node name is <hostname>CellManagerxx, and the host name is pre-filled with your system’s host name.
Advanced: You can specify the node, host, and cell names.
Administrative security: You can enable administrative security (yes or no). If you select yes, you will be asked to specify a user name and password that will be given administrative authority.
Typical: TCP/IP ports will default to a set of ports not used by any profiles in this WebSphere installation instance.
Advanced: You can use the recommended ports (unique to the installation), use the basic defaults, or select port numbers manually.
Figure 4-3 Creating a deployment manager profile: Enter name and location
Figure 4-4 Creating a deployment manager profile: Enter cell, host, and node names
Click Next.
Click Next.
Note: This user does not have to be defined on the OS. Remember the user name and
password you type here. You cannot log on to the application server administrative console
without them.
Important: You might want to note the following ports for later use:
SOAP connector port: If you use the addNode command to federate a node to this
deployment manager, you need to know this port number. This is also the port you
connect to when using the wsadmin administration scripting interface.
Administrative console port: You need to know this port in order to access the
administrative console. When you turn on security, you need to know the Administrative
console secure port.
Tip: In the same manner, you can use the startManager command to start the
deployment manager, but the user name and password are not required.
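For example, a minimal sketch assuming the default deployment manager profile name (Dmgr01) and installation root:
bash# cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin
bash# ./startManager.sh
If administrative security is enabled, stopping the deployment manager later requires the credentials, for example:
bash# ./stopManager.sh -username <admin_username> -password <admin_password>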
Table 4-3 shows a summary of the options involved in the process of creating an application
server profile.
Typical: The administrative console and default application are deployed by default. The sample applications are not deployed.
Advanced: You have the option to deploy the administrative console (recommended), the default application, and the sample applications (if installed).
Typical: The profile name is AppSrvxx by default, where xx is 01 for the first application server profile and increments for each one created. The profile is stored in <was_home>/profiles/AppSrvxx.
Advanced: You can specify the profile name and its location.
Typical: The profile is not the default profile.
Advanced: You can choose whether to make this the default profile. (Commands run without specifying a profile will be run against the default profile.)
Typical: The application server is built using the default application server template.
Advanced: You can choose the default template, or a development template that is optimized for development purposes.
Typical: The node name is <host>Nodexx. The host name is pre-filled in with your system’s DNS host name.
Advanced: You can specify the node name and host name.
Administrative security: You can enable administrative security (yes or no). If you select yes, you will be asked to specify a user name and password that will be given administrative authority.
Typical: TCP/IP ports will default to a set of ports not used by any profiles in this WebSphere installation instance.
Advanced: You can use the recommended ports (unique to the installation), use the basic defaults, or select port numbers manually.
Typical: Does not create a Web server definition.
Advanced: Allows you to define an external Web server to the configuration.
This section takes you through the steps of creating the application server profile:
1. Start the Profile Management Tool. Click Next on the Welcome page.
2. Select the Application server profile option. Click Next.
3. Select the kind of creation process you want to run: Typical or Advanced.
– If Typical is selected, then you will only see one more option (to enable security).
– If Advanced is selected, you will continue with the next step.
4. Select whether you want to deploy the administrative console and the default application.
If you have installed the sample applications (optional during WebSphere Application
Server installation), then you can opt to deploy these as well.
5. Enter a unique name for the profile or accept the default. The profile name will become the
directory name for the profile files. See Figure 4-9 on page 71.
Select the check box if you want this profile to be the default profile for receiving commands.
If the application server will be used primarily for development purposes, select the option
to create it from the development template.
Click Next.
Figure 4-10 Creating an application server profile: Enter host and node names
Note: If you are planning to create multiple stand-alone application servers for
federation later to the same cell, make sure you select a unique node name for each
application server.
7. Choose whether to enable administrative security. If you enable security here, you will be
asked for a user ID and password that will be added to a file-based user registry with the
Administrative role. Click Next.
8. The wizard will present a list of TCP/IP ports for use by the application server, as shown in
Figure 4-11 on page 73. If you already have existing profiles on the system (within this
installation), this will be taken into account when the wizard selects the port assignments,
but you should verify that these ports will be unique on the system.
Important: You might want to note the following ports for later use:
SOAP connector port: If you use the addNode command to federate a node to this
deployment manager, you need to know this port number. This is also the port you
connect to when using the wsadmin administration scripting interface.
Administrative console port: You need to know this port in order to access the
administrative console. When you turn on security, you need to know the
Administrative console secure port.
Figure 4-12 Creating an application server profile: Creating a Web server definition
10.Review the options you have chosen and click Create to create the profile. See
Figure 4-13 on page 75.
This final window indicates the success or failure of the profile creation.
If you have errors, check the log at:
<was_home>/logs/manageprofiles/<profile_name>_create.log
Note that you will have to click Finish on the window to unlock the log.
You will also find logs for individual actions stored in:
<profile_home>/logs
Note: The administrative console port 9061 was selected during profile creation with the
Profile Management Tool (see Figure 4-11 on page 73).
Click the Log in button. If you did not enable security, you do not have to enter a user
name. If you choose to enter a name, it can be any name. It is used to track changes you
make from the console. If you enabled administrative security, enter the user ID and
password you specified.
5. Display the configuration from the console, as shown in Figure 4-15. You should be able to
see the following items from the administrative console:
a. Application servers
Select Servers → Application servers. You should see server1. To see the
configuration of this server, click the name in the list.
Note: Although you cannot display the cell and node from the administrative console,
they do exist. You will see this later as you begin to configure resources and choose a
scope. You can also see them in the <profile_home> /config directory structure.
6. Stop the application server. You can do this from the First steps menu, or better yet, use
the stopServer command:
#>cd <profile_home>/bin
#>./stopServer.sh server1 -username <username> -password <password>
The following summarizes the Typical defaults and Advanced options when creating a cell profile:
Typical: The administrative console and default application are deployed by default. The sample applications are not deployed.
Advanced: You have the option to deploy the administrative console (recommended), the default application, and the sample applications (if installed).
Typical: The profile name for the deployment manager is Dmgrxx by default, where xx is 01 for the first deployment manager profile and increments for each one created. The profile is stored in <was_home>/profiles/Dmgrxx.
Advanced: You can specify the profile name and its location.
Typical: The profile name for the federated application server and node is AppSrvxx by default, where xx is 01 for the first application server profile and increments for each one created. The profile is stored in <was_home>/profiles/AppSrvxx.
Advanced: You can specify the profile name and its location.
Typical: Neither profile is made the default profile.
Advanced: You can choose to make the deployment manager profile the default profile.
Typical: The cell name is <host>Cellxx. The node name for the deployment manager is <host>CellManagerxx. The node name for the application server is <host>Nodexx. The host name is pre-filled in with your system’s DNS host name.
Advanced: You can specify the cell name, the host name, and the profile names for both profiles.
Administrative security: You can enable administrative security (yes or no). If you select yes, you will be asked to specify a user name and password that will be given administrative authority.
Typical: TCP/IP ports will default to a set of ports not used by any profiles in this WebSphere installation instance.
Advanced: You can use the recommended ports for each profile (unique to the installation), use the basic defaults, or select port numbers manually.
Typical: Does not create a Web server definition.
Advanced: Allows you to define an external Web server to the configuration.
As you create the profile, you will have the option to federate the node to a cell during the
wizard, or to simply create the profile for later federation. Before you can federate the custom
profile to a cell, you will need to have a running deployment manager.
Table 4-5 shows a summary of the options you have during profile creation for a custom
node.
Typical: The profile name is Customxx. The profile is stored in <was_home>/profiles/Customxx. By default, it is not considered the default profile.
Advanced: You can specify the profile name and location. You can also specify if you want this to be the default profile.
Typical: The node name is <host>Nodexx. The host name is pre-filled in with your system’s DNS host name.
Advanced: You can specify the node name and host name.
Federation: You can opt to federate the node later, or during the profile creation process. If you want to do it now, you have to specify the deployment manager host and SOAP port (by default, localhost:8879). If security is enabled on the deployment manager, you will need to specify a user ID and password.
Typical: TCP/IP ports will default to a set of ports not used by any profiles in this WebSphere installation instance.
Advanced: You can use the recommended ports for each profile (unique to the installation), use the basic defaults, or select port numbers manually.
Figure 4-18 Creating a custom profile: Enter host and node names
Note: If you choose to federate now, make sure the deployment manager is started.
6. Review the options you have chosen, as shown in Figure 4-20 on page 83.
Note: You only have to do this if you created a custom profile and chose not to federate it
at the time. This requires that you have a deployment manager profile and that the
deployment manager is up and running.
A custom profile is used to define a node that can be added to a cell. To federate the node to
the cell, do the following:
1. Start the deployment manager.
2. Open a command window on the system where you created the custom profile for the new
node. Switch to the <profile_home>/bin directory (for example, cd
/RBProfiles/cstmProfiles/CstmPrf01/bin).
3. Run the addNode command. Here you need the host name of the deployment manager
and the SOAP connector address (see Figure 4-4 on page 64 and Figure 4-6 on page 66):
./addNode.sh <dmgrhost> <dmgr_soap_port>
4. Open the deployment manager administrative console and view the node and node agent:
– Select System Administration → Nodes. You should see the new node.
– Select System Administration → Node agents. You should see the new node agent
and its status. It should be started. If not, check the status from a command window on
the custom node system:
cd <profile_home>/bin
./serverStatus.sh -all
If you find that it is not started, start it with this command:
cd <profile_home>/bin
./startNode.sh
6. Select a template to use as a basis for the new application server configuration. See
Figure 4-23.
The DeveloperServer and default templates have been created for you. The default
template is used to create a typical server for production.
New in V6.1: The DeveloperServer template is used to create a server optimized for
development. It turns off PMI and sets the JVM into a mode that disables class
verification and allows it to start up faster through the -Xquickstart option. Note that
it does not enable the “developmentMode" configuration property (run in development
mode setting on the application server window). If you would like to set this to speed up
the application server startup, you will need to configure it after server creation using
the administrative console.
You can also create templates based on existing servers. If you have not previously set up
a template based on an existing application server, select the default template. Click Next.
8. The last window summarizes your choices. See Figure 4-25. Click Finish to create the
server.
9. In the messages box, click Save to save the changes to the master configuration.
10.Start the application server from the administrative console.
– Select Servers → Application Servers.
– Check the box to the left of the server and click Start.
Click OK.
The federation process stops the application server. It creates a new node agent for the
node, and adds the node to the cell. The application server becomes a managed server in
the cell. It then starts the node agent, but not the server.
8. You can now display the new node, node agent, and application server from the console.
You can also start the server from the console.
In turn, an entry for the new node is added to the nodes directory for each node in the cell
with a serverindex.xml entry for the new node.
The Profile Management Tool uses the simple File-Based user registry. If you use Local OS,
LDAP, or custom registry for security, you can disable security and use the Administrative
console or scripts to enable security after completing the installation.
LTPA will serve as the default authentication mechanism. An internal server identification for
interprocess communications will be automatically generated. An administrative user and
password will be added to the file-based user registry, which will exist under the profile root for
each profile, for example <profile_home>/config/cells/<cellname>/fileRegistry.xml.
You have already seen how profiles are created with the Profile Management Tool. At the
heart of this wizard is the manageprofiles command. This command provides you with the
means to do normal maintenance activities for profiles. For example, you can call this
command to create profiles natively or silently, list profiles, delete profiles, validate the profile
registry, and other functions.
Syntax
Use the following syntax for the manageprofiles command:
manageprofiles.sh -mode -arguments
-augment Augments the given profile using the given profile template.
-validateRegistry Validates the profile registry and returns a list of profiles that are not valid.
-validateAndUpdateRegistry Validates the profile registry and lists the non-valid profiles that it purges.
The following two examples show the results of the manageprofiles -<mode> -help and
manageprofiles -listProfiles modes:
Enter manageprofiles -<mode> -help for detailed help on each mode. See Example 4-3
for an example of the manageprofiles -create -help command.
Enter manageprofiles -listProfiles to see a list of the profiles in the registry. The
following is a sample output of -listProfiles:
# <was_home>/bin/manageprofiles.sh -listProfiles
[itsoDmgr01, itsoAppSrv01]
Profile templates: The profiles are created based on templates supplied with the product.
These templates are located in <was_home>/profileTemplates. Each template consists of
a set of files that provide the initial settings for the profile and a list of actions to perform
after the profile is created. Currently, there is no provision for modifying these templates for
your use, or for creating new templates. When you create a profile using manageprofiles,
you will need to specify one of the following templates:
default (for application server profiles)
dmgr (for deployment manager profiles)
managed (for custom profiles)
cell (for cell profiles)
Example 4-4 shows the commands used to create an application server named
itsoServer1 on node itsoNode1 in cell itsoCell1 on host app1 from the command line.
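Example 4-4 itself is not reproduced here; the following is an illustrative sketch of such a command, assuming the default template location and installation root (the parameter values simply follow the names mentioned above):
bash# cd /opt/IBM/WebSphere/AppServer/bin
bash# ./manageprofiles.sh -create -profileName itsoAppSrv01 \
      -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/default \
      -nodeName itsoNode1 -cellName itsoCell1 -hostName app1 -serverName itsoServer1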
At the completion of the command, the profile will be removed from the profile registry, and
the runtime components will be removed from the <profile_home> directory with the
exception of the log files.
If you have errors while deleting the profile, check the following log:
<was_home>/logs/manageprofiles/<profile_name>_delete.log
For example, in Example 4-5, you can see the use of the manageprofiles command to delete
the profile named itsoNode1.
As you can see in Example 4-5, all seems to have gone well. But, as an additional step to
ensure the registry was properly updated, you can list the profiles to ensure the profile is gone
from the registry and validate the registry.
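A brief sketch of that sequence (Example 4-5 is not reproduced here; the profile name follows the text above):
bash# cd /opt/IBM/WebSphere/AppServer/bin
bash# ./manageprofiles.sh -delete -profileName itsoNode1
bash# ./manageprofiles.sh -listProfiles
bash# ./manageprofiles.sh -validateAndUpdateRegistry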
Note: If there are problems during the deletion, you can manually delete the profile. For
more information about this topic, refer to WebSphere Application Server V6.1 installation
problem determination, REDP-4305.
Using Installation Factory GUI (ifgui), users can create CIPs/IIPs in Connected mode or
Disconnected mode. Connected mode is used when all input is available on the local system,
so that users can browse and select. Disconnected mode is used when input is not available
on the local system, such as when users try to create a build definition file for Linux on
Independent software vendors, customers, and even other IBM products can all benefit from
an easier way to quickly install and update WebSphere Application Server.
The Installation Factory feature provides an automated method for creating custom
installation packages or Customized Installation Packages for WebSphere Application Server.
The WebSphere Application Server installation that is created is the same robust installer
normally used, supporting the same silent install options as well. The custom installer can
also be used to update an existing installation of WebSphere Application Server, providing an
easier alternative to installing multiple fix packs.
The above features provide the capability to create a custom install package that can
replicate an existing installation of WebSphere Application Server.
Note: A number of new features have been introduced for the Installation Factory in
WebSphere Application Server V6.1. The first big change is the ability to do cross-platform
generation of Customized Installation Packages. It also introduces 64-bit operating system
support.
CIPs can be created based on product. A WebSphere Application Server CIP can include
WebSphere Application Server fix packs, SDK fix packs, WebSphere Application Server
interim fixes, profile customizations, and additional user files. A Feature Pack for Web
Services CIP can include Feature Pack for Web Services fix packs, Feature Pack for Web
Services interim fixes, profile customizations, and additional user files. Users cannot include a
WebSphere Application Server fix pack in the Feature Pack for Web Services CIP. A CIP is a
vertical bundle of a single product, while an IIP is a horizontal bundle of multiple products.
(Figure not reproduced: an automated install with a customized installation package bundles WebSphere Application Server V6.1, fix packs such as V6.1.0.2, interim fixes, and the Java SDK 1.5 SR2, and can be used either for a new installation from scratch or to update existing installations.)
Scripts
The capability to include scripts with a Custom Installation Package can provide a very
flexible mechanism for configuring an installation. The Installation Factory supports a number
of different script formats, including Java class files, JACL, Jython, Ant, and batch and shell
scripts. Scripts of different formats can be included in the same Custom Installation Package,
and associated with either the installation or uninstallation option.
Installation scripts are run after the installation is complete, while uninstallation scripts are run
before the uninstallation takes place.
8. Leave the default for the Build definition file name and CIP build directory path, as shown
in Figure 4-32. Click Next.
9. Browse to the WebSphere Application Server Version 6.1 installation image location, as
shown in Figure 4-33. Click Next.
Note: This image is the installable image from a CD or download location that can
be used to install WebSphere Application Server. It is not the image that has already been
installed on a system.
10.Use the defaults in the Feature Selection window, as shown in Figure 4-34. Click Next.
11.Enter the paths to WebSphere Application Server Fix Pack 6.1.0.11 and SDK fix pack
6.1.0.11, as shown in Figure 4-35. Click Next.
12.Use the defaults in the Installation and Uninstallation Scripts window. Click Next.
13.Use the defaults in the Profile Customization window. Click Next.
14.Use the defaults in the Additional Files window. Click Next.
15.In the Authorship window, input the Organization and Description of the CIP. Click Next.
16.Click the Save build definition file and generate customized installation package
radio button. Click the Estimated Size and Available Space button to check the available
space, as shown in Figure 4-36 on page 105. Click Finish.
Figure 4-36 Customized Installation Factory: Preview
When running the command-line option to generate a custom installation package, logging
can be enabled using the -logFile and -logLevel options. By default, the Installation Factory
will log information to the logs directory in the Installation Factory home directory. By
increasing the level, additional information can be captured about a problem with the
generation of a Customized Installation Package.
These two options are discussed in more detail here:
-logFile log_file_path_name
Identifies the log file. The default value is working_directory/logs/log.txt.
-logLevel log_level
Sets the level of logging. The default value is INFO and the valid values are:
– ALL
– CONFIG
– INFO
– WARNING
– SEVERE
– OFF (Turns off logging)
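As an illustration only, the following sketch assumes the Installation Factory command-line interface (ifcli.sh) and its -buildDef option; the installation path and build definition file name are placeholders and should be checked against your Installation Factory version:
bash# cd /opt/IBM/InstallationFactory/bin
bash# ./ifcli.sh -buildDef /export/cip/was61.buildDef.xml -logLevel ALL -logFile /tmp/ifcli.log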
Installation logs
When using a custom install package to install WebSphere Application Server, errors will be
logged to the standard logs/install directory. It is the same as a standard WebSphere
Application Server installation. The easiest method to determine if an installation was created
from a custom installation package is to look in the <WAS_HOME>/CIP directory.
4.5 Uninstalling
This section describes how to uninstall WebSphere Application Server Network Deployment.
The uninstaller program removes registry entries, uninstalls the product, and removes all
related features. The uninstaller program does not remove log files in the installation root
directory.
The uninstaller program removes all profiles, including all of the configuration data and
applications in each profile. Before you start the uninstall procedure, back up the config folder,
the installableApps folder, and the installedApps folder of each profile, if necessary. Back up
all applications that are not stored in another location.
5. Stop the deployment manager dmgr process with the stopManager command.
Stop all dmgr processes that are running on the machine. For example, issue the following
command from the /opt/IBM/WebSphere/AppServer/profiles/dmgr_profile/bin directory:
./stopManager.sh -user user_ID -password password
6. Optional: Back up configuration files and log files to refer to later, if necessary.
The uninstaller program does not remove log files in the installation root directory. The
uninstaller program removes all profiles and all of the data in all profiles.
Back up the config folder and the logs folder of each profile to refer to it later, if necessary.
You cannot reuse profiles so there is no need to back up an entire profile.
7. Uninstall the product.
After running the uninstall command, the directory structure has only a few remaining
directories. The logs directory is one of the few directories with files.
8. Review the <was_home>/logs/uninstlog.txt file.
The <was_home>/logs/uninstlog.txt file records file system or other unusual errors. Look
for the INSTCONFSUCCESS indicator of success in the log:
Uninstall, com.ibm.ws.install.ni.ismp.actions.
ISMPLogSuccessMessageAction, msg1,
INSTCONFSUCCESS
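For example, assuming the default installation root, a quick check is:
bash# grep INSTCONFSUCCESS /opt/IBM/WebSphere/AppServer/logs/uninstlog.txt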
9. Manually remove the log files and installation root directory before reinstalling.
The uninstaller program leaves some log files, including the
<was_home>/logs/uninstlog.txt file.
Manually remove all artifacts of the product so that you can reinstall into the same
installation root directory. If you do not plan to reinstall, you do not need to manually
remove these artifacts.
You can disable security in the administrative console before uninstalling the product. Then
the uninstaller program can stop all server processes. Select Security → Global security
and clear the check box for enabling global security in the administrative console.
4.5.2 Uninstalling Network Deployment
This section describes uninstalling the WebSphere Application Server Network Deployment
product.
The uninstaller program removes registry entries, uninstalls the product, and removes all
related features. The uninstaller program does not remove log files in the installation root
directory.
The uninstaller program removes all profiles, including all of the configuration data and
applications in each profile. Before you start the uninstall procedure, back up the config
folder, the installableApps folder, and the installedApps folder of each profile. Back up all
applications that are not stored in another location.
This procedure uninstalls the WebSphere Application Server Network Deployment product.
1. Log on as root or as a non-root user who belongs to the administrator group.
2. Run the uninstaller program for the Web server plug-ins for WebSphere Application
Server.
If a Web server is configured to run with the Application Server, uninstall the plug-ins to
remove the configuration from the Web server.
3. Stop the deployment manager dmgr process with the stopManager command. Stop all
dmgr processes that are running on the machine. For example, issue this command from
the /opt/IBM/WebSphere/AppServer/profiles/dmgr_profile/bin directory:
./stopManager.sh -user user_ID -password password
4. Stop the nodeagent process with the stopNode command.
Stop the nodeagent process that might be running on the machine. For example, issue the
following command from the
/opt/IBM/WebSphere/AppServer/profiles/app_server_profile/bin directory of a federated
node on a Linux machine to stop the nodeagent process:
./stopNode.sh
If servers are running and security is enabled, use the following command:
./stopNode.sh -user user_ID -password password
5. Stop each running Application Server with the stopServer command.
If security is disabled, the uninstaller program can stop all WebSphere Application Server
processes automatically. If servers are running and security is enabled, the uninstaller
program cannot shut down the servers and the uninstall procedure fails. Manually stop all
servers before uninstalling.
Stop all server processes in all profiles on the machine. For example, issue the following
command from the /opt/IBM/WebSphere/AppServer/profiles/app_server_profile/bin
directory to stop the server1 process in the profile:
./stopServer.sh server1
If servers are running and security is enabled, use the following commands:
./stopServer.sh server1 -user user_ID -password password
6. Optional: Back up configuration files and log files to refer to them later, if necessary.
The uninstaller program does not remove log files in the installation root directory. The
uninstaller program removes all profiles and all of the data in all profiles.
Back up the config folder and the logs folder of each profile to refer to later, if necessary.
You cannot reuse profiles so there is no need to back up an entire profile.
7. Issue the uninstall command, which is located in the <was_home>/uninstall directory.
The command file is named uninstall.
The uninstaller wizard begins and displays the Welcome window displayed in Figure 4-37.
8. Click Next to begin uninstalling the product. The uninstaller wizard displays a confirmation
window that lists a summary of the product and features that you are uninstalling, as shown
in Figure 4-38.
9. Click Next to continue uninstalling the product.
The uninstaller deletes all profiles before it deletes the core product files, as shown in
Figure 4-39. Before using the uninstaller, back up any profile data that you intend to
preserve. The uninstaller does not retain data files or configuration data that is in the
profiles.
After uninstalling profiles, the uninstaller program deletes the core product files in
component order.
10.Click Finish to close the wizard after the wizard removes the product, as shown in
Figure 4-40.
11.Remove any configuration entries in the managed node that describe a deleted
deployment manager.
A common topology is to install the core product files on multiple machines. One machine
has the deployment manager and other machines have managed nodes created from
custom profiles or federated application server profiles. If you delete a Network
Deployment installation where you created an application server profile or a custom profile
and federated the node into a deployment manager cell in another installation, you must
remove the configuration from the deployment manager.
The official statement of support for a node configuration problem in the managed node is
that you use the backupConfig command after the initial installation. Use the command
again whenever you make significant changes to the configuration that you must save.
With a valid backup of the configuration, you can always use the restoreConfig command
to get back to a previously existing state in the configuration.
You can also use the following command to remove the node when the deployment
manager is not running. Issue the command from the
<was_home>/profiles/managed_node_profile/bin directory on the machine with the
managed node:
./removeNode.sh -force
If you must manually clean up the configuration on the managed node, you can attempt
the following unsupported procedure:
a. Rename the cell_name directory for the node to the original name if the current name
is not the original name.
Go to the <was_home>/profiles/node_profile_name/config/cells/ directory. Rename the
cell_name directory to the original name.
b. Delete the dmgr_node_name directory if it exists.
Go to the
<was_home>/profiles/node_profile_name/config/cells/original_cell_name/nodes
directory to look for the dmgr_node_name directory that you must delete.
c. Edit the setupCmdLine.sh file and change the cell name to the original cell name.
The file is in the <was_home>/profiles/node_profile_name/bin directory. Change the
value of the WAS_CELL variable to the original cell name.
12.Remove any configuration entries in the deployment manager that describe a deleted
managed node.
Open the administrative console of the deployment manager and select System
administration → Nodes → node_name → Remove node.
If the administrative console cannot successfully remove the node, run the following
command with the deployment manager running:
<was_home>/bin/cleanupNode.sh node_name
The official statement of support for a node configuration problem in the deployment
manager is that you use the backupConfig command after the initial installation. Use the
command again whenever you make significant changes to the configuration that you
must save. With a valid backup of the configuration, you can always use the
restoreConfig command to get back to a previously existing state in the configuration.
If you must manually clean up the configuration, you can attempt the following
unsupported procedure:
a. Within the nodes directory of the deployment manager, remove the configuration
directory for the node that you deleted.
Go to the <was_home>/profiles/dmgr_profile_name/config/cells/cell_name/nodes
directory to find the deleted_node_name file.
b. Within the buses directory of the deployment manager, remove the configuration
directory for the node that you deleted.
Go to the <was_home>/profiles/dmgr_profile_name/config/cells/cell_name/buses
directory to find the deleted_node_name file.
c. Edit the coregroup.xml file in each subdirectory of the coregroups directory of the
deployment manager. Look for elements of type coreGroupServers. Remove any
coreGroupServers elements that have a reference to the node that you deleted.
Go to the
<was_home>/profiles/dmgr_profile_name/config/cells/cell_name/coregroups/deleted_
node_name directory to find the file.
d. Edit the nodegroup.xml file in each subdirectory of the nodegroups directory of the
deployment manager. Look for elements of type members. Remove any members
elements that have a reference to the node that you deleted.
Go to the
<was_home>/profiles/dmgr_profile_name/config/cells/cell_name/nodegroups/deleted_
node_name directory to find the file.
After running the uninstall command, the directory structure has only a few remaining
directories. The logs directory is one of the few directories with files.
The uninstaller program leaves some log files, including the <was_home>/logs/uninstlog.txt
file.
Manually remove the log files and installation root directory so that you can reinstall into the
same installation root directory. If you do not plan to reinstall, you do not need to manually
remove these artifacts.
5
(Figure not reproduced: combinations of IHS, WebSphere Application Server, and DB2 deployed on a single operating system instance versus multiple operating system instances.)
For more information about Sun virtualization technologies, refer to Sun Server Virtualization
Technology, found at:
http://sun.com/blueprints/0807/820-3023.html
With the Sun Fire Capacity on Demand (COD) program, you can purchase these systems by
paying only for the system with CPU and memory resources that you need. As your business
demand grows, your resource requirement will grow. COD enables you to instantly activate
those spare resources and utilize them only for the duration that you need. For more
information, see http://sun.com/datacenter/cod.
(Figures not reproduced: Hard Partitions (Dynamic System Domains) divide the hardware into separate hosts, each with its own resources and Solaris instance hosting components such as IHS, WebSphere Application Server, and DB2; Logical Domains similarly present multiple hosts, Host1 through Host3.)
In addition, a Solaris container has the ability to provide support for alternate operating
environments:
Solaris 8 Migration Assistant, an optional product offering, allows an existing Solaris 8
system image to be rehosted within a Solaris Container running on Solaris 10; the
environment within the container will present applications with the same execution
environment as the original Solaris 8 system, and will be administered internally as a
Solaris 8 OS image. This is discussed in greater detail in Appendix B, “Sun Solaris 8
Migration Assistant” on page 453.
Solaris Containers for Linux Applications, included with Solaris 10 8/07 or later updates,
makes it possible to host existing Linux applications on Solaris when running on an Intel or
AMD server.
(Figure 5-4, not reproduced: a physical host running Solaris 10 with the Global Zone (192.168.1.100) and three local zones: web_zone1 (192.168.1.201) running Sun JS Web Server, app_zone1 (192.168.1.202) running WebSphere Application Server V6.1 FP11, and db_zone1 (192.168.1.203) running DB2 V9.x. Each zone has its own host name, locale, and time zone, its own zone console and zoneadmd on the virtual platform, and shares resource pools on the physical platform.)
For more information, see The Sun BluePrints Guide to Solaris Containers, found at:
http://sun.com/blueprints/1006/820-0001.html
Since Sun xVM is not generally available as of this writing, it will not be covered here; see Sun's
introductory article at http://sun.com/featured-articles/2007-1005/feature for more
information.
(Figure not reproduced: virtual machines Host1, Host2, and Host3 running on top of the Sun Solaris OS.)
Some of the requirements listed in this table, such as energy efficiency, may not necessarily
be part of virtualization, but you gain those as benefits provided by the design of the
underlying hardware.
(Table not reproduced: requirements mapped to the corresponding virtualization technology and platform.)
WebSphere Application Server configuration and installation in DSD or LDoms are exactly the
same as a stand-alone Solaris system. The configuration of DSD and LDoms must be done
as part of the preparation steps to get the Solaris environment ready for WebSphere
Application Server installation. These tasks are usually performed by Solaris system
administrators or Sun service personnel. It is transparent to the WebSphere Application
Server installation and configuration process.
Solaris Containers are also configured by the Solaris administrator. This task is performed
from the default Solaris super-user environment on a physical server, referred to as the
Global Zone. We discuss the details related to Containers in 5.2, “WebSphere in Solaris
Containers” on page 121; it is critical for the Solaris system administrator to work together
with the WebSphere Application Server administrator to create a configuration that best uses
the full benefits of Solaris for your business requirements.
It is also possible to use Solaris Containers on systems that are also running DSDs or
LDOMs; this combination can provide even greater flexibility and security.
Figure 5-4 on page 119 shows three Solaris Containers, each with its own system identity
and space, based on a single Solaris instance sharing underlying system resources.
There are many ways Solaris Resource Management controls can be imposed on Solaris:
An individual zone with its own specified resources
Multiple zones sharing specified resources
An individual process
A project for various processes (for example, Application Services processes that include
WebSphere Application Server and DB2)
To get started with Solaris Containers, refer to the Solaris Containers How To Guide, found at:
http://sun.com/software/solaris/howtoguides/containersLowRes.jsp
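As an illustrative sketch only (the zone name and network values follow Figure 5-4; the zone path is a placeholder), a basic local zone can be configured, installed, and booted from the Global Zone as follows:
global# zonecfg -z app_zone1
zonecfg:app_zone1> create
zonecfg:app_zone1> set zonepath=/zones/app_zone1
zonecfg:app_zone1> add net
zonecfg:app_zone1:net> set address=192.168.1.202
zonecfg:app_zone1:net> set physical=e1000g1
zonecfg:app_zone1:net> end
zonecfg:app_zone1> commit
zonecfg:app_zone1> exit
global# zoneadm -z app_zone1 install
global# zoneadm -z app_zone1 boot
global# zlogin -C app_zone1
The zlogin -C command attaches to the zone console so that you can answer the initial system identification prompts.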
Figure 5-6 on page 123 shows resource controls in effect for local zones for IHS, WebSphere
Application Server, and DB2 deployments. The Web zone for IHS has a resource pool having
4 CPUs and 4 GB of RAM, while the zones for WebSphere Application Server and DB2 share
another resource pool with 24 CPUs and 16 GB of RAM.
(Figure 5-6, not reproduced: the same three-zone configuration as Figure 5-4, with the Web zone bound to a resource pool of 4 CPUs and 4 GB of RAM, the WebSphere Application Server and DB2 zones sharing a second resource pool of 24 CPUs and 16 GB of RAM, and the remaining resources in the default pool.)
The advantage of using resource control is that no zone or business application can become
a resource hog causing resource starvation of other services on the system. The resource
control provides another dimension to contain your deployed services so that they can safely
coexist with other services on the same physical system.
You can also build a Solaris Container as a plain zone and add resource controls at a later time.
We discuss further details about how you can accomplish this task in “Resource control for
Solaris Zones” on page 148.
Tip: To collaborate with others on this topic, you can join the OpenSolaris community for
zones at http://www.opensolaris.org/os/community/zones.
Figure 5-7 Consolidation through virtualization of many systems with unused capacity onto a Solaris 10
system with zones
Another reason, which really is a benefit you gain with Solaris Zones, is redundancy.
You can deploy WebSphere Application Server in one or more zones on a system. While this
system is active for service, you can replicate these zones on another physical system. These
replica zones are installed and configured, but not booted; thus, they are dormant, as shown
in Figure 5-8 on page 125. If Host-A goes out of service for any reason, you can bring up your
WebSphere Application Server zones on Host-B immediately. Since these redundant zones
are replicas, they have identical host identities between the active and the dormant zones.
Booting of zones takes just several seconds. Once the zones boot up and your WebSphere
Application Server services start up, the WebSphere Application Server servers are
immediately back in service.
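As an illustrative sketch (the zone and profile names follow the figures in this chapter; paths may differ in your environment), bringing a dormant replica zone into service and starting WebSphere Application Server in it could look like this:
hostB# zoneadm -z Z2 boot
hostB# zlogin Z2 /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startServer.sh server1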
Tip: For a fully dynamic and automated deployment architecture, you can use this strategy
along with WebSphere eXtended Deployment (XD), Tivoli Intelligent Orchestrator, Tivoli
Provisioning Manager, or Sun N1™ System Manager and the N1 Service Provisioning System.
(Figure 5-8, not reproduced: Physical Servers A and B have identical zones, where each zone on Host-B is a replica of the corresponding active zone on Host-A; the replicas are configured and installed but not booted (dormant). If the active server goes out of service, the replica zones are activated. A subsequent figure, also not reproduced, shows a typical server farm with edge/Web servers, WebSphere Application Servers, and DB servers, each tier having Dev, Test, QA, Stage, and Prod instances.)
With virtualization capabilities, you may take an approach to consolidate the servers based on
their life cycle phase, as shown in Figure 5-10. For example, you can group the development
servers that span the Web, App, and Data tiers into one set of virtual systems. You may create
these virtual systems and bring them online only as you need them.
(Figure 5-10, not reproduced: consolidation by life cycle phase, grouping the Dev, QA, Stage, and Prod instances of the Web, application, and DB server tiers into virtual systems that can be brought online as needed.)
Tip: By virtualizing your application environments, you gain the flexibility to bring them up
when you need them and down when they are not in use. A great example would be
Test/QA environments where you may not need to have dedicated physical systems at all
times.
The key benefits of going with a virtualized system, especially Solaris Zones, are:
Increased system utilization
Virtually no performance overhead when using virtualization with zones
Reduced number of systems required (for example, by virtualizing test systems in zones,
they can be booted up only as needed)
Dynamic application provisioning (for example, booting up the hosted services on demand,
which ties in nicely with WebSphere XD deployments)
Increased agility (for example, services can be redeployed and servers can be
re-purposed easily and quickly)
Reduced total operational costs (for example, reduced need for real estate, power,
cooling, and system administration)
To realize these benefits, we now need to look into the considerations needed to configure
WebSphere Application Server in Solaris Zones.
Figure 5-12 The key difference between Sparse Root and Whole Root Zone file system structures
Some enterprises set specific policies favoring one type of Solaris Zone over the other. In
either case, Solaris Zones provide secure and separated operating environments for multiple
WebSphere Application Server installations on a physical system. Users of each WebSphere
Application Server node think that it is deployed in a dedicated system. If the WebSphere
Application Server application environment in a zone ever gets compromised, only that zone
gets affected. The remaining servers on the system can continue to operate safely.
In the next two sections, we discuss how WebSphere Application Server binaries can be
structured as an independent installation or globally shared installation. You can apply either
Sparse Root Zones or Whole Root Zones, depending on your needs.
Figure: Independent WebSphere Application Server binary installation in each zone (Z1: WAS V6.1
Fix Pack 9 with Dmgr and AppSrv01 profiles; Z2: WAS V6.1 Fix Pack 15 with AppSrv01 and
AppSrv02 profiles; Z3: WAS V6.0.2 Fix Pack 25 with an AppSrv01 profile)
Application Server. It also eases the Fix Pack maintenance by needing to apply the Fix Pack
only once, that is, in the base installation in the Global Zone. Detailed steps for deploying a
shared WebSphere Application Server installation are discussed in 5.3.4, “Scenario 4: Share
the WebSphere Application Server installation with zones from the Global Zone” on
page 137, and steps for Fix Pack maintenance are discussed in Chapter 6, “Management and
maintenance of WebSphere Application Server on the Solaris Operating System” on
page 169.
Figure 5-14 Shared WebSphere Application Server binary installation among zones
Zone cloning
The Solaris Zone cloning capability became available as of Solaris 10 11/06.
Zone cloning is a faster way to create new local zones in Solaris 10. It is more than 30% faster
than creating zones from scratch and lets you reproduce any customization or software
installation you have done without further manual effort. The source zone must still be created
and configured before it can be cloned.
Cloning is a powerful feature that you can leverage to reduce the time and effort needed to
deploy your WebSphere environment. Figure 5-15 on page 132 is a graphical representation of the
following steps, demonstrating how quickly you can deploy multiple versions of WebSphere
Application Server with different levels of fix packs to provide business agility:
1. You can create a zone, either Whole Root Zone or Sparse Root Zone, as needed. Keep
this as your template.
2. Clone the template zone to create a WebSphere Application Server template zone. Install
the WebSphere Application Server V6.1 base binaries.
You can also move a cloned zone to another system, which is described in “Migrating a Zone
from one system to another” on page 132.
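For illustration, a minimal cloning sequence looks like the following sketch (the zone names
waszone_tpl and waszone2 and the configuration file path are assumptions; the template zone
must be halted, and the exported configuration should be edited to give the new zone its own
zonepath and IP address before the zone is created):
global# zonecfg -z waszone_tpl export -f /export/zones/waszone2.cfg
...edit waszone2.cfg to change the zonepath and net address...
global# zonecfg -z waszone2 -f /export/zones/waszone2.cfg
global# zoneadm -z waszone2 clone waszone_tpl
global# zoneadm -z waszone2 boot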
When you migrate a zone to another system, you need to take several items into
consideration, such as differences in patch levels, packages, network interfaces, and so
on. Migrating a zone is a fast and reliable method.
We discuss how WebSphere Application Server zone migration can be accomplished in 6.4,
“Rehosting of an existing WebSphere Application Server environment” on page 195.
Sub-capacity licensing
Sub-capacity licensing lets you license an eligible software program for use on less than the
full processor core capacity of your machine, when the software program is used with one or
more supported virtualization technologies. Without sub-capacity licensing, customers are
required to obtain Processor Value Unit license entitlements for all the processor cores in the
server, regardless of how the software is deployed.
The IBM sub-capacity offering leverages Processor Value Units (PVUs) to provide the
licensing granularity our customers need to leverage multi-core chip technologies and
advanced virtualization technologies.
IBM Software Group has made selected middleware programs available for sub-capacity
licensing on selected processor technologies and selected virtualization technologies (for
example, Solaris Containers and Dynamic System Domains). Lists of the supported products
and technologies can be found at:
http://www.ibm.com/software/lotus/passportadvantage/subcaplicensing.html
5.3 Configuration scenarios for WebSphere Application Server
in zones
In this section, we describe the potential configuration scenarios and the procedures to
accomplish the successful installation and configuration of WebSphere Application Server
V6.1 ND in Solaris 10 and zones.
Sample scenarios
Here are some sample scenarios:
1. Install and configure WebSphere Application Server in a Global Zone (a conventional way
of installation on Solaris).
2. Install and configure WebSphere Application Server in a Whole Root Zone (an
independent WebSphere Application Server installation in a local zone).
3. Install and configure WebSphere Application Server in a Sparse Root Zone (an
independent WebSphere Application Server installation in a local zone).
4. Share a WebSphere Application Server installation in zones from the Global Zone.
5. Install IHS in a Non-Global Zone to front end WebSphere Application Server.
Tip: To make the zone creation and configuration process more efficient, you can use a
tool called the Zone Manager.
The Zone Manager is an open source sub-project of OpenSolaris. Its purpose is to simplify
the creation and management of Solaris zones. With it, you can create a zone configured
the way you want with a single command-line invocation. There are many real world use
cases and examples of using the Zone Manager at:
http://thezonemanager.com
To install and configure a WebSphere Application Server instance using the Zone
Manager, you need to decide upon the desired configuration attributes of the zone. Then
use the corresponding Zone Manager arguments to create your WebSphere Application
Server instance.
There are many Zone Manager configuration properties. However, for this example, we will
use only the following configuration properties. To see the full set of properties, run the
Zone Manager script with the -h flag or see the documentation at the open source project
site at:
http://opensolaris.org/os/project/zonemgr/
More specific examples and updates, such as for WebSphere Application Server
deployment in a zone using the Zone Manager, can be found at:
http://thezonemanager.com
Example 5-2 on page 137 shows the command needed to accomplish this installation.
Example 5-2 Configuring a Sparse Root Zone
global#zonecfg -z yourzone
zonecfg:yourzone>create
zonecfg:yourzone>set zonepath=/export/zones/yourzone
zonecfg:yourzone>set autoboot=true
zonecfg:yourzone>add net
zonecfg:yourzone:net>set physical='zone_network_interface'
zonecfg:yourzone:net>set address='zone_ip_address'
zonecfg:yourzone:net>end
zonecfg:yourzone>verify
zonecfg:yourzone>commit
zonecfg:yourzone>exit
global# zoneadm list -cv
global# zoneadm -z yourzone install
...it takes several minutes...
global# zoneadm -z yourzone boot
...goes through to configure the Solaris Zone’s environment...
global# zlogin -C yourzone
yourzone console login: userid
Password:
.......Enter User ID and Password ....
#
4. Install the WebSphere Application Server in the desired location, as described in 4.1,
“Installing WebSphere Application Server V6.1 Network Deployment” on page 52.
global# zoneadm -z waszone1 install
global# zoneadm -z waszone1 boot
Note: At the end of the zone installation, you will receive a message indicating that the
installation of two WebSphere packages was skipped. You can ignore this message, as it
will not affect your operation.
3. Log into the zone to create the WebSphere Application Server profile:
global# zlogin -C waszone1
.......Enter User ID and Password ....
bash# mkdir -p /opt3/IBMSW/WASprofiles/properties
bash# export DISPLAY=your_workstation_display:N
bash# cd /opt/IBM/WebSphere/AppServer/bin/ProfileManagement
bash# ./pmt.sh
Important: Prior to invoking the PMT tool, you must create the properties directory in the
same path as you defined for WS_WSPROFILE_DEFAULT_PROFILE_HOME in
Example 5-3 on page 138. If you do not create this directory, the PMT tool will fail in this
shared installation environment.
For more information about creating a profile with a graphical user interface, refer to 4.2.2,
“Creating profiles with Profile Management Tool” on page 60. That topic has all the steps
needed to create a profile.
Once the installation is completed, you can run the WebSphere Application Server
installation’s First steps application to verify your newly configured environment and start the
server. You can also start the server manually from the command line, as shown in
Example 5-5.
Example 5-5 Starting the Application Server from the command line
bash# cd /opt3/IBMSW/WASprofiles/profiles/AppSrv01/bin/
bash# ./startServer.sh server1
To verify that your server is up and running, you can go to the WebSphere Application Server
admin console through your Web browser (assuming you use the default port 9060) using the
following address:
http://<appserver_host>:9060/admin
To create additional zones, repeat the steps in this section, starting with “Non-Global Zone
preparations” on page 138.
By setting ITP_LOC, ejbdeploy.sh will have write permission to your local profile's
deploytool/itp/configuration directory. Information about the WebSphere Application
Server maintenance update for this environment is addressed in 6.3.3, “Installing a Fix
Pack to the Global Zone” on page 182.
If you create a Whole Root Zone to install IHS, the installation procedure is straightforward, as
described in the IBM IHS installation documentation, because a Whole Root Zone, by design,
has its own /usr directory. The root user of that zone has full read and write privileges in /usr
within the zone to create the required symbolic links.
If you create a Sparse Root Zone to install IHS, the root user of the zone has no write
privileges in the /usr directory; thus, the IHS installer cannot create the symbolic links for the
GSKit binary and shared library files. The best known solution today is to first install GSKit in
the Global Zone to create the symbolic links in /usr before installing IHS or the WebSphere
plug-in in the Sparse Root Zone. This is accomplished by running gskit.sh from within the IHS
or WebSphere plug-in installation image.
The maintenance level of GSKit in use is determined by what is installed in the Sparse Root
Zone, since no real code exists in /usr, allowing you to control the level of GSKit on a zone by
zone basis. If you need more assistance or have any issues, you can open a PMR with IBM
Technical Support.
Detailed procedure
Here are the highlights of the IHS configuration in the Sparse Root Zone.
Global Zone
Install the GSKit in Global Zone as follows:
1. Log in to Global Zone as the root user.
2. Locate your WebSphere Application Server installation media, or mount the CD-ROM or
DVD-ROM to your system.
3. Go to the GSKit directory and issue the command:
global# ./gskit.sh
The script installs the GSKit on your Solaris machine. The GSKit is needed by IHS for SSL
purposes. It installs library files and binary files into the /opt/ibm/gsk7/lib and
/opt/ibm/gsk7/bin directories. It also creates symbolic links in the /usr/lib and /usr/bin
directories, as shown in Example 5-6 on page 140.
4. As the installation proceeds, you will see information about the propagation of the
installation to other zones.
5. When the installation is over, you can proceed with the installation of IHS to the Sparse
Root Zone.
5. If you installed IHS in silent mode, you do not get any messages about whether the
installation was successful or unsuccessful. After the installation is complete, you can
verify that it succeeded by looking at the log.txt file in the <ihs-root>/logs/install directory.
Look for the INSTCONFSUCCESS indicator at the end of the log.txt file.
You can also verify the installation by running the verifyinstallver.sh script in the
<ihs-root>/bin directory.
From here, you can continue to configure the WebSphere plug-in within IHS to front-end
WebSphere Application Server as needed.
Attention: If you install the IHS plug-in for the “WebSphere Application Server machine
(local)” scenario in a shared WebSphere Application Server installation from the Global
Zone, as described in “Shared WebSphere Application Server binary installation for zones”
on page 130, you will get an error message that there is no write privilege. The IHS plug-in
installer is not aware of your custom profile directory, but rather it is looking for the profile
directory in the ${was.install.root} directory (for example,
/opt/IBM/WebSphere/AppServer/profiles). This directory is in the Global Zone and you are
running the IHS plug-in installer in a local zone.
A workaround solution is to select Web server machine (remote) even though your IHS
plug-in installation scenario has a local app server in the zone.
The resource management capabilities in Solaris have evolved from simple processor
partitioning and binding to sophisticated virtualized operating environments. In Solaris 2.6,
Sun introduced the concept of isolating process execution to a specific processor or a group
of processors known as processor sets. In Solaris 9, Sun introduced the Solaris Resource
Manager, which provides finer granularity of resource controls, including Fair-Share
Scheduler (FSS). In Solaris 10, Sun added more capabilities, such as Dynamic Resource
Pools, which are an integral part of Solaris Containers (Zone + Resource Management). As of
the Solaris 10 08/07 release, zone configuration has more capabilities for binding a zone to a
set of dedicated CPUs, capping memory (physical or swap), and selecting FSS as the default
scheduler.
In 2.1.4, “Tunable parameters in Solaris 10” on page 20, we discuss Solaris tuning
parameters in relation to resource controls for project and process, which are the minimum
resource allocations that you need to set as a baseline requirement in compliance with the
IBM recommended parameters for WebSphere Application Server. Detailed information can
be found at:
http://docs.sun.com/app/docs/doc/817-1592/6mhahuoh2
Figure 5-17 shows a system where multiple processes are processing different workloads
without resource management. When “App 3” starts using more system resources, such as
more CPU or more memory because the application continues to leak memory, it begins to
cause resource starvation for the other processes on the system. On the right side of the
figure, the system operates more steadily when resource boundaries are imposed on these
processes for fair utilization. When a process attempts to utilize system resources, it cannot
go beyond the permitted threshold.
Solaris processor sets
Processor sets are a Solaris partitioning mechanism that allows one or more processes to be
assigned to a specific group of processors.
We provide you with some examples here to demonstrate how a processor set can be applied
in your WebSphere Application Server deployment environment.
First, you must determine the processor’s availability on your system. Example 5-7 shows the
psrinfo command, which queries the status of all available processors on your Solaris
system. A processor can be in one of several conditions, including on-line, off-line, or no-intr.
The psrinfo query shows the available processors with their status (on-line) and the psrset
-p command shows that no processor sets exist on the system. In Example 5-8, we create a
new processor set with processors 16 and 17. Each processor set is automatically given a
unique identifier number. In this case, the ID is 1.
Example 5-10 shows how to bind a process, such as a WebSphere Application Server Java
process, to a processor set. You can bind one or more processes to a processor set.
If you query the processor status, as shown in Example 5-11, you will find that processors 16
and 17 are dedicated to this process and will no longer handle system I/O or network
interrupts until they are removed from the processor set and returned to the system.
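As a brief sketch of the commands behind these examples (processor IDs 16 and 17 and a
WebSphere Java process ID of 4321 are assumed here):
global# psrinfo
...lists all processors and their status...
global# psrset -c 16 17
...creates a new processor set containing processors 16 and 17; the system assigns the set ID...
global# psrset -b 1 4321
...binds process 4321 to processor set 1...
global# psrset
...displays the existing processor sets and their assigned processors...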
If the WebSphere Application Server process needs two additional processors, you can add
them to the processor set, as shown in Example 5-12.
Example 5-12 Add two more processors (18 and 19) to processor set 1
# psrset -a 1 18 19
processor 18: was not assigned, now 1
processor 19: was not assigned, now 1
Attention: As you add more processors to a processor set, the processes bound to this
processor set will dynamically be able to recognize and use the added processors. The
reverse is true for removing processors.
Example 5-13 Remove two processors (16 and 17) from processor set 1
# psrset -r 16 17
processor 16: was 1, now not assigned
processor 17: was 1, now not assigned
Example 5-14 shows how to delete the processor set. Any processes bound to this processor
set will automatically begin to utilize the available processors on the system.
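For reference, deleting the processor set is a single command (assuming the set ID is 1, as in
the earlier examples):
global# psrset -d 1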
A processor set created using the method shown in this section does not persist after a
system reboot. Alternatively, you can use resource pools or zone configuration for persistent
sets, which we discuss in the following sections.
Resource pools
Resource pools provide you with the ability to work with processor sets and Solaris
scheduling classes, as discussed in 5.4.2, “Resource control mechanisms” on page 144.
When defining processor sets through the resource pool facility, you no longer have to use the
physical ID of the processors, as shown in “Solaris processor sets” on page 145. You just
have to define how many processors you wish to allocate. You can also choose a specific
scheduler, such as Fair Share or Fixed Priority scheduler (FSS or FX), in conjunction with the
processor sets for the resource pool.
Example 5-15 demonstrates how you can apply the resource pools to manage the processor
set for your WebSphere Application Server application process.
1. Configure a processor set, called was_pset1, with between two and four processors.
2. Create a resource pool called was_pool1.
3. Make the processor set a part of this resource pool.
Example 5-15 Creating a processor set and associating it with a created resource pool
global# pooladm -e
global# pooladm -s
global# poolcfg -c 'create pset was_pset1 (uint pset.min = 2; uint pset.max = 4)'
global# poolcfg -c 'create pool was_pool1'
global# poolcfg -c 'associate pool was_pool1 (pset was_pset1)'
Tip: To verify the resource, you can query your pool configuration with the following
command:
global# poolcfg -dc info
5. Once you have this resource pool created, you can apply it to various purposes:
– You can bind one or more processes to utilize this pool (see the sketch following this list).
– You can associate the pool with a project that owns a group of processes.
– You can associate the pool with a zone (shown in step 7).
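As a brief sketch of the first case, you can bind a running WebSphere Application Server
process to the pool and verify the binding as follows (the WAS_PID variable holding the
process ID is an assumption of this sketch):
global# poolbind -p was_pool1 ${WAS_PID}
global# poolbind -q ${WAS_PID}
...reports the pool that the process is currently bound to...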
More information is available at:
http://docs.sun.com/app/docs/doc/817-1592/rmconfig-3
Tip: If you no longer wish to associate the process to this pool, you simply bind it to the
system’s default pool:
global# poolbind -p pool_default ${WAS_PID}
7. You can also tie this resource pool to a zone. Assume you have already created a zone
called was_zone1. You can now assign the resource pool to this zone, as shown in
Example 5-16.
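In essence, the assignment is a single zonecfg property change; a minimal sketch (using the
pool and zone names from this section) looks like this:
global# zonecfg -z was_zone1
zonecfg:was_zone1> set pool=was_pool1
zonecfg:was_zone1> verify
zonecfg:was_zone1> commit
zonecfg:was_zone1> exit
The persistent pool binding takes effect the next time the zone boots; to apply it to a running
zone immediately, you can use poolbind -p was_pool1 -i zoneid was_zone1.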
Tip: If you no longer wish to associate the zone to this pool, you need to change the zone
configuration as follows:
global# zonecfg -z was_zone1
zonecfg:was_zone1> set pool=""
zonecfg:was_zone1> verify
zonecfg:was_zone1> commit
zonecfg:was_zone1> exit
Example 5-17 Setting dedicated CPUs to a zone
global# zonecfg -z yourzone
zonecfg:yourzone>add dedicated-cpu
zonecfg:yourzone:dedicated-cpu>set ncpus=2
zonecfg:yourzone:dedicated-cpu>set importance=2
zonecfg:yourzone:dedicated-cpu>end
zonecfg:yourzone>verify
zonecfg:yourzone>commit
zonecfg:yourzone>exit
Example 5-18 shows how you can cap physical and virtual memory to a certain limit.
Example 5-18 Setting capped memory to a zone (physical to 1 GB and virtual to 1 GB)
global# zonecfg -z yourzone
zonecfg:yourzone> add capped-memory
zonecfg:yourzone:capped-memory> set physical=1g
zonecfg:yourzone:capped-memory> set swap=1g
zonecfg:yourzone:capped-memory> end
zonecfg:yourzone> exit
Example 5-19 shows how to select the Fair-Share Scheduler (FSS) with 10 CPU shares for a
zone. FSS is a Solaris scheduling class, described in 5.4.2, “Resource control
mechanisms” on page 144, that allocates and distributes the available CPU resources among
workloads based on their importance. The CPU shares define that importance: the more
shares a workload has, the higher its importance. In this case, the zone's processes
will have 10 shares of the system's CPU.
Example 5-19 Selecting the Fair-Share Scheduler and setting CPU shares for a zone
global# zonecfg -z yourzone
zonecfg:yourzone> set scheduling-class=FSS
zonecfg:yourzone> set cpu-shares=10
zonecfg:yourzone> set max-lwps=800
zonecfg:yourzone> commit
zonecfg:yourzone> exit
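After the zone boots with these settings, you can verify the controls from the Global Zone; a
brief sketch (zone name as above):
global# prctl -n zone.cpu-shares -i zone yourzone
global# prctl -n zone.max-lwps -i zone yourzone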
More information about the zonecfg command’s new features is available at:
http://docs.sun.com/app/docs/doc/817-1592/6mhahuooo
Note: These are just a few examples of Solaris Resource Management capabilities. There
are many other possibilities; by applying different combinations of this technology, you can
better manage your workloads and meet your business service level agreements (SLAs).
Figure 5-19 Solaris SMF core components
Service states
SMF defines seven services states as follows:
1. Online: An enabled service that has all of its dependencies satisfied. The service has
started successfully.
2. Offline: An enabled service that has some dependencies that are not satisfied. Use svcs
-x or svcs -l to determine which dependency or dependencies are not satisfied.
3. Disabled: A service that has been turned off.
4. Maintenance: A service that has either failed a method (returned a non-zero exit status),
started and stopped too quickly, exceeded a method time out, or had insufficient
credentials to run the method. Use svcs -x, svcs -l, or look in the service log file to
determine why the service transitioned into the maintenance state.
5. Legacy_run: The state of a legacy System V Release 4 (SVR4) rc script. These scripts
are run as root by svc.startd when the system transitions between run levels.
6. Degraded: A service method has exited with the SMF_DEGRADE exit status.
7. Initialized: svc.startd cannot access the repository. This happens normally at zone boot
time while the sysidtool and manifest_import services are running.
SMF repository
The SMF repository is a copy of all of the services and their associated properties. The
repository is currently kept on disk in /etc/svc/repository.db (persistent properties) and
/etc/svc/volatile (transient service properties). Users can access the repository using the
commands svcs(1), svcprop(1), and svccfg(1m). Applications can use the public APIs in
libscf(3C).
SMF daemons
svc.startd performs most of the functions formerly performed by init. Rather than being
driven by a set of scripts in a directory, svc.startd manages the service dependencies in the
repository. svc.startd stops and starts services based on changes in each of the service
dependencies. svc.startd also provides a common view of service states.
Service manifest
A service manifest is a complete description of the service. This includes the method used to
start, stop, and refresh the service as well as more advanced properties, such as
dependencies, authorizations, resource control, and service specific properties.
Manifests are written in XML and generally placed in /var/svc/manifest. Manifests are
compiled into a binary format and placed into the repository. This can be done manually by
running svccfg import or automatically at boot time by the manifest_import service.
Example 5-20 The Service Manifest for WebSphere Application Server 6.1 was61.xml
<?xml version=’1.0’?>
<!DOCTYPE service_bundle SYSTEM ‘/usr/share/lib/xml/dtd/service_bundle.dtd.1’>
<service_bundle type=’manifest’ name=’export’>
<service name=’application/was61’ type=’service’ version=’1’>
<instance name=’was_server1’ enabled=’false’>
<dependency name=’network’ grouping=’require_all’ restart_on=’error’ type=’service’>
<service_fmri value=’svc:/milestone/network:default’/>
</dependency>
<dependency name=’filesystem-local’ grouping=’require_all’ restart_on=’none’
type=’service’>
<service_fmri value=’svc:/system/filesystem/local:default’/>
</dependency>
<dependency name=’autofs’ grouping=’optional_all’ restart_on=’error’ type=’service’>
<service_fmri value=’svc:/system/filesystem/autofs:default’/>
</dependency>
<!-- “svcadm enable was61” to start WebSphere Application Server using a custom-defined
method -->
<exec_method name=’start’ type=’method’ exec=’/lib/svc/method/svc-was61 start’
timeout_seconds=’60’>
<method_context/>
</exec_method>
<!-- “svcadm disable was61” to stop WebSphere Application Server using a custom-defined
method -->
<exec_method name=’stop’ type=’method’ exec=’/lib/svc/method/svc-was61 stop’
timeout_seconds=’60’>
<method_context/>
</exec_method>
<!-- “svcadm refresh was61” to bounce WebSphere Application Server using a custom-defined
method -->
<exec_method name=’refresh’ type=’method’ exec=’/lib/svc/method/svc-was61 refresh’
timeout_seconds=’60’>
<method_context/>
</exec_method>
</instance>
<stability value=’Evolving’/>
<template>
<common_name>
<loctext xml:lang=’C’>WebSphere Application Server v6.1</loctext>
</common_name>
<documentation>
<doc_link name=’ibm.com’
uri=’http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp’/>
</documentation>
</template>
</service>
</service_bundle>
You can also add other dependencies to the service manifest that the WebSphere Application
Server service relies upon, for example, another stand-alone application that your application
requires to be co-located on the same system, or a JMS provider (such as WebSphere MQ)
that must be started before the WebSphere Application Server services.
See the service_bundle(5) man page for more information about the contents of a service
manifest.
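For example, a dependency on a WebSphere MQ queue manager that is itself managed by
SMF could be expressed with an additional dependency element similar to the following
sketch (the service FMRI svc:/application/wmq:default is hypothetical and must match the
actual service defined on your system):
<!-- Hypothetical dependency: require the WebSphere MQ service to be online first -->
<dependency name='wmq' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/application/wmq:default'/>
</dependency>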
Note: If you have multiple instances of the application server, you can have separate
manifests or a single manifest for all instances. Separate manifests are convenient if you
have multiple administrators making changes. In that case, the recommended file name
would be /var/svc/manifest/application/was61-<instance>.xml. For example, you can use a
name like /var/svc/manifest/application/was61-was_server1.xml.
The next step is to provide a set of methods that start, stop, and optionally refresh the
application server. These are derived from the legacy SVR4 rc scripts with very few
modifications needed.
Example 5-21 is an example of a single method used for both starting and stopping the
application server. By convention, the methods are in /lib/svc/method. In a sparse root
Non-Global Zone, this would not be writable; therefore, you must choose another location.
See the smf_method(5) man page for more information about the execution environment for
SMF methods.
. /lib/svc/share/smf_include.sh
#
# Replace this with the location of your WebSphere Application Server profile
WAS_DIR="/opt/WASProfiles/profiles/AppSrv01"
#
# Replace this server name if it is different
SERVER_NAME="server1"
#
WAS_BIN="${WAS_DIR}/bin"
START_WAS="${WAS_BIN}/startServer.sh"
STOP_WAS="${WAS_BIN}/stopServer.sh"
case $1 in
'start')
        if [ -x "$START_WAS" ]; then
                $START_WAS $SERVER_NAME
        fi
        ;;
'stop')
        if [ -x "$STOP_WAS" ]; then
                $STOP_WAS $SERVER_NAME
        fi
        ;;
'restart' | 'refresh')
        if [ -x "$STOP_WAS" -a -x "$START_WAS" ]; then
                $STOP_WAS $SERVER_NAME
                $START_WAS $SERVER_NAME
        fi
        ;;
*)
        echo "Usage: $0 { start | stop | restart }"
        exit 1
        ;;
esac
exit $SMF_EXIT_OK
#
#-------- End of Script --------
Import the service manifest into the SMF repository, and enable and verify the service as
follows:
# svccfg import /var/svc/manifest/application/was61.xml
# svcadm enable was61
# svcs -l was61
fmri svc:/application/was61:was_server1
name WebSphere Application Server v6.1
enabled true
state online
next_state none
state_time Thu Jun 14 18:23:17 2007
logfile /var/svc/log/application-was61:was_server1.log
restarter svc:/system/svc/restarter:default
contract_id 1819
dependency require_all/error svc:/milestone/network:default (online)
dependency require_all/none svc:/system/filesystem/local:default (online)
dependency optional_all/error svc:/system/filesystem/autofs:default (online)
Once the service is enabled, WebSphere Application Server should start if the configuration
is valid. The log file is located at /var/svc/log/application-was61:was_server1.log.
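If the service does not come online, the SMF commands mentioned earlier help you diagnose
the problem, for example:
# svcs -x was61
...explains why the service is offline or in maintenance and which dependencies are not satisfied...
# svcadm clear was61
...clears the maintenance state so the restarter can bring the service back online after you
have corrected the underlying problem...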
5.5.3 Running WebSphere Application Server as a non-root user
One important feature of SMF is the ability to start a service as a non-privileged user. This
provides greater security and is a recommended practice. To run WebSphere Application
Server as a regular user, perform a non-root installation of WebSphere Application Server
and change the exec_method tags in the manifest to include a method_context specification.
Example 5-22 is a complete sample manifest for starting WebSphere Application Server as
user wasuser1.
Example 5-22 The Service Manifest for WebSphere Application Server V6.1 was61.xml with non-root
user
<?xml version=’1.0’?>
<!DOCTYPE service_bundle SYSTEM ‘/usr/share/lib/xml/dtd/service_bundle.dtd.1’>
<service_bundle type=’manifest’ name=’export’>
<service name=’application/was61’ type=’service’ version=’1’>
<instance name=’was_server1’ enabled=’false’>
<dependency name=’network’ grouping=’require_all’ restart_on=’error’ type=’service’>
<service_fmri value=’svc:/milestone/network:default’/>
</dependency>
<dependency name=’filesystem-local’ grouping=’require_all’ restart_on=’none’
type=’service’>
<service_fmri value=’svc:/system/filesystem/local:default’/>
</dependency>
<dependency name=’autofs’ grouping=’optional_all’ restart_on=’error’ type=’service’>
<service_fmri value=’svc:/system/filesystem/autofs:default’/>
</dependency>
<!-- “svcadm enable was61” to start WebSphere Application Server using a custom-defined
method -->
<exec_method name=’start’ type=’method’ exec=’/home1/wasuser1/SMF/svc-was61 start’
timeout_seconds=’60’>
<method_context working_directory=’/home1/wasuser1’>
<method_credential user=’wasuser1’ group=’wasgrp1’
privileges=’proc_fork,proc_exec’ />
</method_context>
</exec_method>
<!-- “svcadm disable was61” to stop WebSphere Application Server using a custom-defined
method -->
<exec_method name=’stop’ type=’method’ exec=’/home1/wasuser1/SMF/svc-was61 stop’
timeout_seconds=’60’>
<method_context working_directory=’/home1/wasuser1’>
<method_credential user=’wasuser1’ group=’wasgrp1’
privileges=’proc_fork,proc_exec’ />
</method_context>
</exec_method>
<!-- “svcadm refresh was61” to bounce WebSphere Application Server using a custom-defined
method -->
<exec_method name=’refresh’ type=’method’ exec=’/home1/wasuser1/SMF/svc-was61
restart’ timeout_seconds=’60’>
<method_context working_directory=’/home1/wasuser1’>
<method_credential user=’wasuser1’ group=’wasgrp1’
privileges=’proc_fork,proc_exec’ />
</method_context>
</exec_method>
</instance>
<stability value=’Evolving’/>
<template>
<common_name>
<loctext xml:lang=’C’>WebSphere Application Server v6.1</loctext>
You can find more information about this topic in Restricting Service Administration in the
Solaris 10 Operating System by Brunette, found at:
http://sun.com/blueprints/0605/819-2887.pdf
Tip: For collaboration with others, you can join the OpenSolaris community for SMF at:
http://www.opensolaris.org/os/community/smf
Before:
<instance name='was_server1' enabled='false'>
After:
<instance name='was_server1' enabled='true'>
If you have already created the WebSphere Application Server service with enabled='false',
run the following command to enable the was61 service instance:
# svcadm enable was61
You can verify the status of the service to make sure that enabled is true and the state is
online as follows:
# svcs -l was61
fmri svc:/application/was61:was_server1
name WebSphere Application Server v6.1
enabled true
state online
next_state none
state_time Mon Jan 24 13:31:25 2008
logfile /var/svc/log/application-was61:was_server1.log
restarter svc:/system/svc/restarter:default
contract_id 2748
dependency require_all/error svc:/milestone/network:default (online)
dependency require_all/none svc:/system/filesystem/local:default (online)
dependency optional_all/error svc:/system/filesystem/autofs:default (online)
5.5.5 Removing a service from SMF
To remove the service, stop it and delete it from the repository. If your manifest is in
/var/svc/manifest, then it should be removed as well or it will be re-imported upon the next
boot:
# svcadm disable was61
# svccfg delete was61
# rm -i /var/svc/manifest/application/was61.xml
Solaris 10 introduces a set of fine grained privileges that can be granted to processes
needing to perform tasks normally reserved for user ID 0. It is even possible to configure
Solaris such that the root user is a role (a non-login account) and has restricted capabilities.
Currently, Solaris 10 features more than 60 privileges. By default, Solaris grants a regular
user the following privileges:
file_link_any: Can create a link to an object not owned by the user
proc_fork: Can create additional processes (subject to resource management limits)
proc_exec: Can use the exec(2) system call to execute a different program than is
currently executing
proc_info: Can see the status of any process other than its own children
proc_session: Allows a process to see other processes outside of its own session
These privileges were previously granted to any process; thus in Solaris 10, they are part of
the basic set that all processes are granted by default. Additional privileges may be added or
taken away as necessary. The defaults for a specific user can be specified in the extended
user attributes database, /etc/user_attr. The defaults for a specific application can be
specified as an execution profile in /etc/security/exec_attr.
Consider the example in Example 5-23. You can use the ppriv command to examine the
effective process privileges of the current shell session denoted as $$ in Solaris.
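For a regular user, this query typically produces output similar to the following (the process ID
and shell shown here are only illustrative):
% ppriv $$
1234: -bash
flags = <none>
E: basic
I: basic
P: basic
L: all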
% cat /etc/shadow
cat: cannot open /etc/shadow: Permission denied
If we were to grant the requesting processes file_dac_read (which bypasses discretionary file
access control for reading), the process could read the shadow password file (and any other
file on the system). While still a powerful privilege, the requesting process would not be able
to do other privileged operations, such as changing process execution classes, bypassing file
write permissions, or interacting with processes that it does not own.
% ppriv $$
9118: -bash
flags = <none>
E: basic,file_dac_read
I: basic,file_dac_read
P: basic,file_dac_read
L: all
Debugging individual privileges for a network service can be rather tedious, although there
are some general guidelines that can help.
Most start methods will require at the very least proc_fork and proc_exec. They may require
other privileges from the default basic set. If the service wants to create a listener on a port
less than 1024, it will also require net_privaddr. The start and refresh methods will typically
require proc_fork, proc_exec, proc_session and proc_info.
File permissions (write for log files, execute for scripts) can also be a problem. Fortunately,
these are not a problem with a non-root installation of the application server.
To determine what privileges are actually required by a method, you can look at the
privdebug.pl Perl script available at the OpenSolaris Security Community Web site at
http://opensolaris.org/os/community/security/file. This Perl script was developed by
Glenn Brunette of Sun Microsystems. It produces a list of all of the privileges a process
requests, along with their status (granted or failed). Run the service as root under
privdebug.pl to get a list of all of the privileges that are needed by each method.
In Example 5-26 on page 159, the WebSphere Application Server is installed and owned by
the user wasuser1 in the Non-Global Zone named waszone8. You can use privdebug.pl to
determine which privileges are really required to start and stop the WebSphere Application
Server.
Example 5-26 Using the privdebug.pl script to examine the start method
# ./privdebug.pl -f -v -e "zlogin -l wasuser1 waszone8 /lib/svc/method/svc-was61
start"
STAT TIMESTAMP PPID PID UID PRIV CMD
USED 15790348592172000 5344 5345 0 proc_fork zlogin
USED 15790348656904000 5347 5348 100 proc_exec startServer.sh
USED 15790348661423100 5347 5348 100 proc_fork startServer.sh
USED 15790348675359800 5347 5348 100 proc_fork startServer.sh
USED 15790348677628000 5347 5348 100 proc_fork startServer.sh
USED 15790348682510900 5347 5348 100 proc_fork startServer.sh
USED 15790348691073400 5347 5348 100 proc_fork startServer.sh
USED 15790348697784300 5347 5348 100 proc_fork startServer.sh
USED 15790348650569000 5346 5347 100 proc_exec svc-was61
USED 15790348655554700 5346 5347 100 proc_fork startServer.sh
USED 15790348670542100 5347 5348 100 proc_fork startServer.sh
USED 15790348692779800 5348 5355 100 proc_exec startServer.sh
USED 15790348699352300 5348 5356 100 proc_exec startServer.sh
USED 15790348596052000 5345 5346 0 proc_exec zlogin
USED 15790348605130200 5345 5346 0 sys_audit su
USED 15790348605139200 5345 5346 0 sys_audit su
USED 15790348668087400 5347 5348 100 proc_fork startServer.sh
USED 15790348703717100 5347 5348 100 proc_fork startServer.sh
USED 15790348622866300 5345 5346 0 sys_audit su
USED 15790348622876000 5345 5346 0 sys_audit su
USED 15790348622955200 5345 5346 0 sys_audit su
USED 15790348622962000 5345 5346 0 sys_audit su
USED 15790348625927600 5345 5346 0 proc_taskid su
USED 15790348628039600 5345 5346 0 proc_setid su
USED 15790348628399600 5345 5346 0 proc_setid su
USED 15790348628420800 5345 5346 0 proc_setid su
USED 15790348628429500 5345 5346 0 proc_setid su
USED 15790348629763000 5345 5346 100 proc_exec su
USED 15790348642003500 5345 5346 100 proc_exec bash
USED 15790348649028000 5345 5346 100 proc_fork svc-was61
Notice in this example that the start method only requires proc_fork and proc_exec. The
privilege sys_net_config was also used in this example, but an examination of the
privileges(5) man page shows that it is not needed. In fact, it silently fails without impact
later in the application execution.
Similarly, running the stop method produces the output in Example 5-27.
/home1/wasuser1/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/stopServer.
log contract_event
ADMU0128I Starting tool with the AppSrv01 profile
contract_event
ADMU3100I server1 Reading configuration for server
contract_event
ADMU3201I Server stop request issued. Waiting for stop
status. contract_event
ADMU4000I Server server1 stop completed.
contract_event
Since our service manifest, as discussed in Example 5-20 on page 152, maps the refresh
method to a service restart (stop followed by start), the refresh method requires the same
privileges: proc_fork and proc_exec.
These method credentials can be added to our existing SMF manifest in the method_context
for each of the three exec_methods. These entries would now look like the ones shown in
Example 5-28.
Example 5-28 Modified SMF for start and stop methods with non-root user and restricted privileges
<exec_method name='start' type='method'
exec='/lib/svc/method/svc-was61 start' timeout_seconds='60'>
<method_context working_directory='/home1/wasuser1'>
<method_credential user='wasuser1'
group='wasgrp1' privileges='proc_fork,proc_exec' />
</method_context>
</exec_method>
</exec_method>
</exec_method>
# ppriv 5945
5945: /home1/wasuser1/IBM/WebSphere/AppServer/java/bin/java -XX:MaxPermSize=
flags = <none>
E: basic,!file_link_any,!proc_info,!proc_session
I: basic,!file_link_any,!proc_info,!proc_session
P: basic,!file_link_any,!proc_info,!proc_session
L: basic,contract_event,contract_observer,file_chown,file_chown_self,file_dac_execute,file_dac_read,file_dac_search,file_dac_write,file_owner,file_setid,ipc_dac_read,ipc_dac_write,ipc_owner,net_bindmlp,net_icmpaccess,net_mac_aware,net_privaddr,proc_audit,proc_chroot,proc_owner,proc_setid,proc_taskid,sys_acct,sys_admin,sys_audit,sys_mount,sys_nfs,sys_resource
Not only is WebSphere Application Server running as a non-root user, it is also running with
fewer privileges than a regular user. If the service becomes compromised, the attacker can
only use fork and exec. And if the service is running in a Non-Global Zone, the max-lwps
resource control will limit the impact of the compromise.
This technique can be applied to more complicated services, including server-side applets or
scripts that require additional privileges and would normally run as root.
For complete details on Process Rights Management, refer to Limiting Service Privileges in
the Solaris 10 Operating System by Brunette, found at:
http://sun.com/blueprints/0505/819-2680.pdf
ZFS is designed to provide:
Data integrity that far exceeds the integrity and reliability provided by other file systems, by
using end-to-end checksums on both data and metadata contents. ZFS automatically
detects and recovers from media errors, cable problems, phantom or misdirected writes,
driver errors, and other sources of disk subsystem error. ZFS also ensures that on-disk
contents are always self-consistent, eliminating the need for the time-consuming "fsck"
(file system check) traditionally needed after a system outage. ZFS does this by using
"copy on write" algorithms that always write new data to a different location on disk.
Related changes either succeed or fail to be written to disk as a whole. This ensures that
the on-disk copy of data is consistent with itself.
Scaling for today's massive applications and data requirements, without the use of
complex, brittle, and expensive volume managers. ZFS does this by providing pooled
storage with 128 bits of data addressability for limitless logical scale.
Advanced capabilities like point-in-time snapshots, clones, on-disk encryption, and
replication. For example, ZFS permits instantaneous data snapshots, which can be used
to preserve the entire contents of a file system, whether they represent a particular
software build's contents, or the contents of a stock market data repository at the moment
the market closes. Snapshots are space-efficient, as only new, altered contents need to be
written to different disk locations.
Simpler administration, making it possible to create data resources, mirrors, and stripes
without learning obscure command options and flags. In particular, ZFS removes
administrator burden by providing self-tuning, removing onerous and error-prone activities
such as setting stripe sizes or calculating space needed for inodes. ZFS automatically
tunes itself to best match the data characteristics for which it is being used. It is easy for
an administrator to set quotas or space reservations, to disable or enable compression,
and other common tasks.
High performance. Current file systems are slow in numerous common situations that ZFS
addresses by eliminating linear-time file system creation, fixed block sizes, excessive
locking, and naive prefetch.
These features make ZFS one of the most advanced file systems, bringing new levels of
reliability and protection for critical data assets. ZFS is included with Solaris at no additional
cost, and it has proven so popular that it has been ported from OpenSolaris to other operating
systems, such as Linux and Mac OS X 10.5.
ZFS provides a pooled storage model that no longer requires the user to deal with the
intricacies of storage management, such as volumes, partitions (or slices), format, mounting,
and underutilization. You can easily build RAID volumes and file systems without needing
expensive volume management software. In this section, we provide an introduction to ZFS
and show how it can be applied to your WebSphere Application Server deployment.
To create a file system on Solaris, you would typically use the format command to select a
disk and create partitions (slices), and then run mkfs. ZFS eliminates all those steps and
simplifies the process with the zpool and zfs commands. The following example creates a
ZFS storage pool named pool1 on the device c0t1d0, providing 68 GB:
global# zpool list
no pools available
global# zpool create pool1 c0t1d0
global# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
pool1 68G 88K 68.0G 0% ONLINE -
global# zpool status
pool: pool1
state: ONLINE
scrub: none requested
config:
If you need to make this volume bigger, you can simply concatenate one or more additional
devices to pool1, as shown in the following example. You will find that the pool size has grown
from 68 GB to 136 GB after adding another device (c0t2d0) with 68 GB:
global# zpool add pool1 c0t2d0
global# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
pool1 136G 91K 136G 0% ONLINE -
global# zpool status
pool: pool1
state: ONLINE
scrub: none requested
config:
errors: No known data errors
If you want to delete this pool and create another pool, called mpool1, a two-way mirrored
volume with c0t1d0 and c0t2d0, do the following (mpool1 has 2x68 GB on disks in the zfs
pool and is mirrored; thus, the total pool size is 68 GB.):
global# zpool destroy pool1
global# zpool list
no pools available
global# zpool create mpool1 mirror c0t1d0 c0t2d0
global# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mpool1 68G 89K 68.0G 0% ONLINE -
global# zpool status
pool: mpool1
state: ONLINE
scrub: none requested
config:
Now, you can turn mpool1 from a two-way mirrored volume to a three-way mirrored volume,
as shown in the following example:
global# zpool attach mpool1 c0t1d0 c0t3d0
global# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mpool1 68G 90K 68.0G 0% ONLINE -
global# zpool status
pool: mpool1
state: ONLINE
scrub: resilver completed with 0 errors on Mon Oct 29 16:39:13 2007
config:
Furthermore, you can create ZFS file systems within the pool we just created:
global# zfs create mpool1/IBMSW1
global# zfs create mpool1/IBMSW2
You can check the file systems with standard Solaris commands, such as df:
global# df
/ (/dev/dsk/c0t0d0s0 ):129268622 blocks 8041716 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483614 files
/proc (proc ): 0 blocks 16301 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/etc/svc/volatile (swap ):17003904 blocks 1166846 files
/system/object (objfs ): 0 blocks 2147483494 files
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap2.so.1):129268622 blocks 8041716
files
/dev/fd (fd ): 0 blocks 0 files
/tmp (swap ):17003904 blocks 1166846 files
/var/run (swap ):17003904 blocks 1166846 files
/mpool1 (mpool1 ):140377815 blocks 140377815 files
/mpool1/IBMSW1 (mpool1/IBMSW1 ):140377815 blocks 140377815 files
/mpool1/IBMSW2 (mpool1/IBMSW2 ):140377815 blocks 140377815 files
You can also change the mount point to something more convenient, such as /opt2:
global# zfs set mountpoint=/opt2 mpool1
global# df
/ (/dev/dsk/c0t0d0s0 ):129268624 blocks 8041717 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483614 files
/proc (proc ): 0 blocks 16301 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/etc/svc/volatile (swap ):17003648 blocks 1166846 files
/system/object (objfs ): 0 blocks 2147483494 files
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap2.so.1):129268624 blocks 8041717
files
/dev/fd (fd ): 0 blocks 0 files
/tmp (swap ):17003648 blocks 1166846 files
/var/run (swap ):17003648 blocks 1166846 files
/opt2 (mpool1 ):140377801 blocks 140377801 files
/opt2/IBMSW1 (mpool1/IBMSW1 ):140377801 blocks 140377801 files
/opt2/IBMSW2 (mpool1/IBMSW2 ):140377801 blocks 140377801 files
global# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mpool1 156K 66.9G 27.5K /opt2
mpool1/IBMSW1 24.5K 66.9G 24.5K /opt2/IBMSW1
mpool1/IBMSW2 24.5K 66.9G 24.5K /opt2/IBMSW2
Now, this file system is ready for IBM WebSphere deployment. You can monitor the
performance of ZFS as follows:
global# zpool iostat -v 5
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
mpool1 158K 68.0G 0 1 104 2.47K
mirror 158K 68.0G 0 1 104 2.47K
c0t1d0 - - 0 0 417 4.31K
c0t3d0 - - 0 0 106 3.83K
---------- ----- ----- ----- ----- ----- -----
The advantages of using ZFS for WebSphere Application Server deployment are:
Provisioning of WebSphere Application Server and Solaris Containers with higher
reliability
Delegation of file system management to a Container
Easier migration of Solaris Containers from one host to another
Automatic compression of large log files within the ZFS environment (see the sketch
following this list)
Simplified volume and file system management
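A brief sketch of two of these capabilities, compressing a file system intended for WebSphere
logs and delegating it to a zone (the file system name waslogs and the zone name waszone1
are assumptions):
global# zfs create mpool1/waslogs
global# zfs set compression=on mpool1/waslogs
global# zfs snapshot mpool1/waslogs@before-fixpack
...takes a point-in-time snapshot, for example, before applying maintenance...
global# zonecfg -z waszone1
zonecfg:waszone1> add dataset
zonecfg:waszone1:dataset> set name=mpool1/waslogs
zonecfg:waszone1:dataset> end
zonecfg:waszone1> commit
zonecfg:waszone1> exit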
Tip: For collaboration with others, you can join the OpenSolaris community for ZFS at
http://www.opensolaris.org/os/community/zfs.
This topic discusses the update strategy of your WebSphere Application Server environment,
and how to install and uninstall a maintenance package to your WebSphere Application
Server installation.
Note: The following information in this section can be accessed online at:
http://www-1.ibm.com/support/docview.wss?rs=180&uid=swg27009276
WebSphere Application Server Support delivers this strategy to provide a clear upgrade path
that reduces the risk of installing collections of Application Server fixes.
Requests for published collections of fixes are delivered in a timely manner and tested
together without new features or behavioral changes.
WebSphere Application Server V6.1 will provide fix packs containing cumulative fixes that will
be updated regularly. This provides a consistent maintenance approach that you can follow as
you manage your products.
Installing preventative maintenance as soon as it becomes available will save you time. As
long as you test appropriately, actively applying preventative maintenance can avoid problems
that could result in a service call.
Delivering updates
Fix Pack deliveries will be timely, approximately every 12 weeks. The date of the next planned
Fix Pack will be published at
http://www.ibm.com/software/webservers/appserv/was/support at least 30 days prior to
availability.
Each Fix Pack delivery can consist of multiple fix packs for the following components:
Application Server
Application Client
Web server plug-ins
IBM HTTP Server
Java SDK
Note:
The preceding component fix packs are all tested together at the same level.
In some cases, maintenance might not be delivered for every component for each new
Fix Pack level (for example, there was no maintenance delivered for IBM HTTP Server
for Fix Pack 1 (6.1.0.1)).
IBM supports users who might not install all fix packs delivered at each Fix Pack level
(for example, installing Fix Pack 5 (6.1.0.5) for the Application Server but not installing
Fix Pack 5 for IBM HTTP Server).
There is a single list of all defects for each release that is updated for each new Fix Pack.
A Fix Pack can be installed on top of any previously installed Fix Pack (for example, 6.1.0.5
could be applied to 6.1.0.1 or 6.1.0.2).
Note: The version numbers (6.1, 6.1.0.1, 6.1.0.2, and so on) used throughout this
document illustrate a typical maintenance path used to provide solutions to our
customers. They do not reflect actual or intended deliverables.
Table 6-1 shows the types of solutions that are provided for WebSphere Application Server.
Maintenance planning
Choose the appropriate update solution and verify the inclusion of the needed individual fixes
in the Fix Pack:
Review the Recommended fixes and the Fix list for Version 6.1 to help you plan your
updates (http://www.ibm.com/software/webservers/appserv/was/support).
In order to ensure the quality of fix packs, individual fixes that become available late in the
development cycle may not be included in a Fix Pack.
The historyInfo utility lists the history of installed and uninstalled fixes. For more details,
refer to 6.3.1, “Installing Update Installer to the Global Zone” on page 176.
The current recommended fixes for WebSphere Application Server V6.1 are available at:
http://www.ibm.com/support/docview.wss?rs=180&uid=swg27004980#ver61
With feature packs, you can selectively take advantage of new standards and features while
maintaining a more stable internal release cycle. Feature packs are offered either as generally
available releases or as open alpha, beta, or technology preview releases. WebSphere Feature
Packs are available at:
Packs are available at:
https://www14.software.ibm.com/iwm/web/cc/earlyprograms/websphere.shtml
You should also check the SunSolve Web site on a regular basis, as well as the IBM
recommended fixes Web site, for new required patches. Solaris patches are available for download at:
http://sunsolve.sun.com
You can also subscribe to Sun Alert for weekly summary reports at:
http://sunsolve.sun.com/show.do?target=salert-notice
You can determine your system’s current patch level with the following command:
bash# showrev -p
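For example, to check whether one specific patch is installed (the patch ID shown is only a
placeholder):
bash# showrev -p | grep 118833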
There are different levels of patches available through the Solaris patching mechanism that
you need to be aware of:
General patches
Kernel Update patches
Some Solaris patches, known as the JavaSE Cluster Patches for Solaris, are targeted
specifically at the Java platform. These patches affect the Java Virtual Machine used by
WebSphere Application Server and its related software libraries. They are available for download at:
https://sunsolve.sun.com/show.do?target=patches/JavaSE
Solaris packages may apply to the entire system, or to individual zones. The patch and
package tools are aware of the scope and visibility of each package and it is possible for the
Global Zone administrator to manage the software in all zones, subject to the following
package and patch attributes:
The SUNW_PKG_ALLZONES package parameter is a binary flag that defines the scope
of a package, specifying the type of zone in which the package can be installed. If set to
true, the package can be installed and patched only from the Global Zone. If false, it can
be patched individually in a Non-Global Zone.
The SUNW_PKG_HOLLOW parameter defines the visibility of a patch, specifying whether
it must be installed and identical in all zones, and typically is for kernel software shared by
all zones. If true, then the patch must be installed from the Global Zone, and is propagated
implicitly to the Non-Global Zones so their patch inventories can maintain the proper patch
dependencies. If SUNW_PKG_HOLLOW is true, then SUNW_PKG_ALLZONES is also
true.
Finally, SUNW_PKG_THISZONE is a binary flag that, if true, indicates that the patch
applies only to the currently running zone (where the patchadd command is running). This
is typically used for application software installed in a zone.
Solaris uses these attributes to drive the behavior of the patchadd utility: If patchadd is issued
from the Global Zone, the patch is added to the Global Zone and the Global Zone's patch
database is updated to record the software inventory. Then the patch is added to each
Non-Global Zone, and each zone's patch database is updated. This makes it possible to
patch all the zones on the system with a single administrator command. For the sake of
efficiency, it is a best practice to boot all the zones before applying a patch. If a zone is not
booted, then it is booted once into single user mode to check dependencies (and then
halted), and again to install the patch packages (and then halted). If the zone is already
running, then this extra activity is skipped.
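As a minimal sketch, the scope attributes of a package can be inspected with pkgparam, and a
patch can be applied to all zones from the Global Zone with patchadd; the package name and
patch ID below are placeholders:
global# pkgparam -v SUNWcsr SUNW_PKG_ALLZONES SUNW_PKG_HOLLOW
global# patchadd /var/tmp/123456-01
Because patchadd is issued from the Global Zone, the patch is applied there first and is then
propagated to each Non-Global Zone as described above.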
6.3 Maintenance of WebSphere Application Server V6.1 in
zones
We introduce the deployment concepts and strategies for installation and configuration of
WebSphere Application Server V6.1 in Solaris 10 Zones in 5.1.3, “Solaris Containers” on
page 118. In this section, we describe how to manage the maintenance of such an
environment. First, we describe how to install and uninstall an Update Installer. Then, we
describe how to install and uninstall a Fix Pack in this environment.
We look at the special case of shared binary installation among multiple zones, as this is not
documented in the IBM InfoCenter for WebSphere Application Server V6.1. The shared
binaries reside in the Global Zone while the profile directories reside in the local zones. We
demonstrate how to apply a Fix Pack to the WebSphere Application Server binaries in the
Global Zone and how this update is automatically propagated to the respective local zones.
This deployment strategy improves the manageability of your WebSphere Application Server
environment because fixes need to be applied only once, to the centrally located installation
base in the Global Zone.
Before applying a new Fix Pack, we examine and record the current installed version of
WebSphere Application Server with the versionInfo.sh command in both the Global Zone
and local zones. We see the output of this command in Example 6-1.
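A minimal sketch of the invocation, assuming the default installation path:
bash# cd /opt/IBM/WebSphere/AppServer/bin
bash# ./versionInfo.sh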
--------------------------------------------------------------------------------
IBM WebSphere Application Server Product Installation Status Report
--------------------------------------------------------------------------------
Installation
--------------------------------------------------------------------------------
Product Directory /opt/IBM/WebSphere/AppServer
Version Directory /opt/IBM/WebSphere/AppServer/properties/version
DTD Directory /opt/IBM/WebSphere/AppServer/properties/version/dtd
Log Directory /opt/IBM/WebSphere/AppServer/logs
Backup Directory
/opt/IBM/WebSphere/AppServer/properties/version/nif/backup
TMP Directory /var/tmp
Product List
--------------------------------------------------------------------------------
ND installed
Installed Product
--------------------------------------------------------------------------------
Name IBM WebSphere Application Server - ND
Version 6.1.0.0
ID ND
Build Level b0620.14
Build Date 5/16/06
--------------------------------------------------------------------------------
End Installation Status Report
--------------------------------------------------------------------------------
3. You will see the Update Installer installation welcome window (Figure 6-1).
4. Click Next.
5. Read the license agreement carefully, select the I accept the terms in the license agreement
radio button, and click Next (Figure 6-2 on page 177).
Figure 6-2 License agreement window
The installer checks that the system meets the installation requirements (Figure 6-3).
6. Click Next.
In this window (Figure 6-4), you can choose the installation directory for Update Installer.
We recommend that you use the default installation directory.
Attention: Leaving the Directory path field empty prevents you from continuing.
Remember that you cannot specify a directory that already contains the Update Installer as
the Directory path. If you have a previous version of the Update Installer, you must uninstall it
before installing the new version.
7. Fill in the Directory path field or leave it as the default and click Next.
8. Review the installation information (Figure 6-5 on page 179). If it is correct, click Next. If it
needs corrections, click Back, make the corrections, and proceed.
Figure 6-5 Installation summary window
The installer program creates an uninstaller first and then it installs the Update Installer
package. After the successful completion of the installation, you should see the Installation
Complete window (Figure 6-6).
You can launch the Update Installer program right after its installation completes by selecting
the Launch IBM update installer for WebSphere Software on exit check box, or later from
the command line, as shown in Example 6-3.
Important: You must stop all WebSphere Application Server related Java processes
before using the Update Installer because these active processes can interfere with
product updates that it performs.
This procedure installs the Update Installer for WebSphere Software on your system. For
details about installing maintenance packages into your WebSphere Application Server
environment, refer to 6.3.3, “Installing a Fix Pack to the Global Zone” on page 182.
3. The uninstall command starts the uninstallation wizard of Update Installer (Figure 6-7 on
page 181). To continue, click Next.
Figure 6-7 Update Installer uninstallation welcome window
4. The uninstallation wizard shows the uninstallation summary (Figure 6-8). If you want to
cancel uninstallation, click Cancel. To begin the uninstallation, click Next.
5. After uninstallation completes, the uninstallation wizard shows the Uninstallation
Complete window (Figure 6-9). Click Finish to exit the uninstallation wizard.
This completes the uninstallation of the Update Installer for WebSphere Software from your
system.
(Figure placeholder: the WebSphere Application Server V6.1 binaries and the Fix Pack reside in the Global Zone and are shared by Local Zones Z1 (Dmgr and AppSrv01 profiles), Z2 (AppSrv01 and AppSrv02 profiles), and Z3 (AppSrv01 profile).)
Figure 6-10 Installing a Fix Pack to the shared WebSphere Application Server binaries propagates to local zones
The Fix Pack installation process is the same as on any other platform; the significant
difference is the automatic propagation of the Fix Pack to each inheriting local zone through
the WebSphere Application Server base binary installation directory, which is mounted as a
loop-back file system (LOFS):
1. Download the WebSphere Application Server Fix Pack from the IBM WebSphere
Application Server Support Web site.
2. Put the Fix Pack into the WebSphere Application Server Update Installer's maintenance
directory. In our case, it is /opt/IBM/WebSphere/UpdateInstaller/maintenance.
3. Go to the Update Installer directory and issue the command shown in Example 6-5.
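Example 6-5 is not reproduced here; a minimal sketch of the launch, assuming the default
Update Installer location used in this chapter:
bash# cd /opt/IBM/WebSphere/UpdateInstaller
bash# ./update.sh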
4. After issuing the command, you see the IBM Update Installer for WebSphere software
welcome window (Figure 6-11).
Attention: Make sure that all WebSphere and related Java processes are stopped before
continuing with the Fix Pack installation. Otherwise, the installation may fail.
5. Click Next.
6. Type or browse to the path of the product binaries to be updated (Figure 6-12 on
page 185). Typically, the field already contains the right path to the product binaries, but if
your product is in another path, you can go to the location by using the Browse button or
type the correct path into the field. After you have verified the path, click Next.
Figure 6-12 Product selection to be updated
Attention: You cannot leave the Directory path field empty because it prevents you from
continuing the installation of the Fix Pack.
7. Select the maintenance operation. Click the Install maintenance package radio button
and click Next, as shown in Figure 6-13.
8. Enter the path to the directory where the maintenance package is available for installation
(Figure 6-14). You can type the directory or click Browse to select the path to the
maintenance package. When you have entered the path, click Next.
Attention: You cannot leave the Directory path field empty because it prevents you from
continuing the installation of the Fix Pack.
9. The next window shows you the available maintenance packages for installation in the
given path. Click the check box if it is not already checked (Figure 6-15) and click Next.
10.Review the Installation Summary window for correctness. Figure 6-16 shows the
maintenance package to be installed and the product to be updated. If something needs to
be corrected, click the Back button; otherwise, click the Next button to begin the
installation of the maintenance package.
Important: Before you begin the update installation, make sure you have your application
server profiles backed up. We recommend that you back up your profiles or archive the
whole profiles directory for fast roll back.
11.After the installation is completed, you see the Installation Complete window (Figure 6-17
on page 189). You can click the Relaunch button if you are planning to install or uninstall
another maintenance package. If you are not going to do any more maintenance
installations, click the Finish button to exit the installation program.
Figure 6-17 Installation Complete window
When you have finished your Fix Pack installation, log in to a local zone and verify that the
installation has propagated to the local zone. Use the commands in Example 6-6 to verify the
installation propagation.
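Example 6-6 is not reproduced here; a minimal sketch of the verification, assuming a local
zone named Z1 and the default installation path:
global# zlogin Z1
Z1# cd /opt/IBM/WebSphere/AppServer/bin
Z1# ./versionInfo.sh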
In this example case, the propagation took place and the profile in the local zone was
updated. Observe the differences between Example 6-1 on page 175 and Example 6-7. In
the latter example, the differences are bolded.
--------------------------------------------------------------------------------
IBM WebSphere Application Server Product Installation Status Report
--------------------------------------------------------------------------------
Installation
--------------------------------------------------------------------------------
Product Directory /opt/IBM/WebSphere/AppServer
Version Directory /opt/IBM/WebSphere/AppServer/properties/version
DTD Directory /opt/IBM/WebSphere/AppServer/properties/version/dtd
Log Directory /opt/IBM/WebSphere/AppServer/logs
Backup Directory
/opt/IBM/WebSphere/AppServer/properties/version/nif/backup
TMP Directory /var/tmp
Product List
--------------------------------------------------------------------------------
ND installed
Installed Product
--------------------------------------------------------------------------------
Name IBM WebSphere Application Server - ND
Version 6.1.0.11
ID ND
Build Level cf110734.37
Build Date 8/31/07
--------------------------------------------------------------------------------
End Installation Status Report
--------------------------------------------------------------------------------
This process installs the maintenance package to WebSphere Application Server V6.1. If you
need to uninstall the maintenance package, Fix Pack, cumulative fix, or interim fix, see 6.3.4,
“Uninstalling a Fix Pack from the Global Zone” on page 190 for more information about the
uninstallation of the maintenance pack.
3. The update command starts the IBM Update Installer for WebSphere software. You should
see the welcome window (Figure 6-18 on page 191).
Figure 6-18 Update Installer welcome window
4. Click Next.
5. Type the path of the product binaries to be updated (Figure 6-19). Typically, this field
already has the right path to the product binaries, but if your product resides in another
path, you can go to the correct location using the Browse button or type the correct path in
the field. After you have verified the right path, click Next.
Attention: You cannot leave the Directory path field empty because it prevents you from
continuing the installation of the Fix Pack.
6. Select the Uninstall maintenance package radio button (Figure 6-20 on page 193) and
click Next.
Figure 6-20 Maintenance operation selection
7. Select the maintenance package to be uninstalled and click Next (Figure 6-21).
8. Review the Uninstallation Summary window (Figure 6-22). If you want to cancel the
uninstallation, click the Cancel button. To continue with the uninstallation, click Next.
9. If you are planning to install or uninstall another maintenance package, click Relaunch in
the installation completion window (Figure 6-23). Click Finish to exit the installation
wizard.
This section described how to install and uninstall the Update Installer program, which is
needed to update the WebSphere Application Server product, and the procedures for
installing and uninstalling maintenance packages with the Update Installer to keep the
WebSphere Application Server installation up to date.
To plan accordingly, you can begin to categorize the various needs for the rehost and
consider the best approaches to preserve the current environment and roll it out identically in
the new environment. First, we clarify that this is not a migration project; thus, the underlying
system infrastructure is to remain on the same vendor's platforms. In our case, the system
infrastructure is Solaris-based systems hosting the WebSphere environment. As a
guideline, you can define the types of rehost that are required as follows:
From a physical system to a physical system
From a physical system to a virtual system (for example, Dynamic System Domains to
Containers)
From a logical system to a virtual system (for example, LDoms to Containers)
From a physical system to a logical system (for example, Dynamic System Domains to
LDoms)
From a virtual system to a logical system (for example, Containers to LDoms)
From a virtual system to a virtual system on different hosts (for example, Containers to
Containers)
You can use this approach in any type of rehost situation. It requires some preparation
work to create the Custom Installation Package (CIP) and so on, but once you have this
package prepared, you have a very flexible solution.
We discuss these two approaches to relocating your WebSphere Application Server
environment as part of the rehosting requirements in detail in the next two sections.
A Customized Installation Package (CIP) created with the Installation Factory can contain a
previously exported configuration archive of an existing WebSphere Application Server profile
located in a custom defined location along with WebSphere Application Server installation
binaries, maintenance packages, user files and folders, and scripts.
Consideration: If you are intending to clone a WebSphere Application Server profile that
uses messaging, you must also include a script to configure the service integration bus
(SIB). The original SIB from the original profile is not portable and therefore it is not
included in the configuration archive file.
Creating a Custom Installation Package of a pre-existing, stand-alone profile
We give an example of this task here; its topology is as follows:
The WebSphere Application Server profile is located in a custom defined location, the Fix
Pack level is 6.1.0.11, and one additional “business application” named ItsoTest is to be
deployed to the destination system. The business application uses a custom Web host name
to serve Web content. We create an installation package from this scenario and install it on
another system:
1. Create an archive file of your current profile on the source system, as shown in
Example 6-9.
Example 6-9 Creating a profile archive for CIP using the command line
source_host# cd /opt/IBM/WebSphere/AppServer/bin/
source_host# ./wsadmin.sh -username <username> -password <password> -conntype NONE
WASX7357I: By request, this scripting client is not connected to any server
process. Certain configuration and application operations will be available in
local mode.
WASX7029I: For help, enter: "$Help help"
wsadmin>$AdminTask exportWasprofile { -archive /tmp/itsoTest.car }
wsadmin>exit
2. Go to the directory where the Installation Factory is located and issue the command
ifgui.sh, as shown in Example 6-10.
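Example 6-10 is not reproduced here; a minimal sketch, assuming the Installation Factory
was unpacked under /opt/IBM/InstallationFactory (the path is a placeholder):
source_host# cd /opt/IBM/InstallationFactory/bin
source_host# ./ifgui.sh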
3. You will see the IBM Installation Factory dialog window, as shown in Figure 6-24. Click the
Create New Customized Installation Package icon.
4. Choose WebSphere Application Server and Network Deployment edition, as shown in
Figure 6-25. Click the Finish button.
5. Click the Connected mode radio button and choose the appropriate platform for your
environment. In this example, we select the Sun Solaris Sun Sparc 32 bit platform, as
shown in Figure 6-26. Click the Next button.
6. Change the Identifier field to, for example, com.itso.test and keep the default value for the
Version field, as shown in Figure 6-27. Click Next.
7. Keep the Build definition directory path field as default and edit the CIP build directory path
field, as shown in Figure 6-28. Click Next.
8. Type or browse the path containing the WebSphere Application Server installation image
and click Next (Figure 6-29).
9. Select the Core product files check box and click Next (Figure 6-30).
10.Browse your Fix Pack, SDK Fix Pack, and interim fixes you want to include in the
installation package and click Next (Figure 6-31 on page 205).
Figure 6-31 Build Definition Wizard: Maintenance Packages
11.Optional: Here you can choose the scripts that you want to run after the installation. In this
example, we skip this step and go to the next window by clicking Next (Figure 6-32).
12.Choose the Stand-alone Application Server from the Profile types menu, click the Allow
creation of new profiles using the customizations check box, and after that, click the
Add Configuration Archive button, as shown in Figure 6-33, “Build Definition Wizard:
Profile Customization main window” on page 206.
13.From the Add a Configuration Archive (CAR) file window shown in Figure 6-34 on
page 207, type or browse the whole path to the configuration archive file, leave the
Unrecoverable error radio button selected, and click OK.
Figure 6-34 Add a Configuration Archive window
14.Click the Add Enterprise Archive button shown in Figure 6-33, “Build Definition Wizard:
Profile Customization main window” on page 206.
15.Browse an EAR file to be included and type in the Application display name. Leave the
Unrecoverable error radio button selected and click OK, as shown in Figure 6-35.
16.Click Next (Figure 6-36).
17.Optional: You can add additional files or folders in the Additional Files window shown in
Figure 6-37. In this example, we skip the Additional Files window by clicking Next.
18.Optional: You can fill in the Organization field and Description text area of the Authorship
panel with your organization’s details, as shown in Figure 6-38. Click Next to continue.
19.Click the Save build definition file and generate customized installation package
radio button. Optionally, you can click Estimate Size and Available Space to learn about
the required free space and installation package size information. Review your installation
package information and click Finish, as shown in Figure 6-39.
Package creation takes some time. After it completes, click OK in the Successful Build
window (Figure 6-40). Select File → Exit in the IBM Installation Factory window (Figure 6-24
on page 198) to close the Installation Factory application.
As a result of this process, the Installation Factory creates a Customized Installation Package
containing a pre-existing stand-alone Application Server profile, two fix packs, and one
additional customized business application.
2. After transferring the installation package to the destination system, unpack the tar file and
start the installation wizard by issuing the install command that is included in the installation
package, as shown in Example 6-12.
Example 6-12 Unpack the installation package and start the installation
dest_host#mkdir -p /tmp/install
dest_host#cd /tmp/install
dest_host#tar xvf /<path>/<to>/itsoTest_inst.tar
---output omitted---
x ./WAS/panels/coexistencePanel.xml, 12268 bytes, 24 tape blocks
x ./WAS/readme, 0 bytes, 0 tape blocks
x ./WAS/readme/wasstyle_nlv.css, 3685 bytes, 8 tape blocks
x ./WAS/readme/readme_en.html, 7701 bytes, 16 tape blocks
x ./WAS/responsefile.nd.txt, 47704 bytes, 94 tape blocks
dest_host#./WAS/install
InstallShield Wizard
3. From the Installation wizard welcome window, you can optionally see information about
the custom installation package by clicking the About this custom installation package
button. To continue with the installation wizard, click Next in the Welcome window
(Figure 6-41).
4. Click to accept the license agreement and click Next, as shown in Figure 6-42 on
page 215.
After accepting the licensing terms, the installation verifies your system. The system must
meet the prerequisites for the installation to continue. If you receive an error message
indicating that your system does not meet the prerequisites, cancel the installation, make
the required corrections, and restart the installation.
Figure 6-42 CIP installation wizard: License agreement window
6. Select the Install customization files contained in this installation check box and click
Next, as shown in Figure 6-44.
7. Choose the installation directory and click Next. In this example, we choose the default
installation directory, as shown in Figure 6-45 on page 217.
Figure 6-45 CIP installation wizard: Installation root definition
Attention:
Leaving the installation root directory field empty prevents you from continuing.
Do not use symbolic links as the destination directory because they are not supported.
Spaces are not supported in the name of the destination installation directory on Solaris.
8. Choose Application Server from the Environments menu and click Next, as shown in
Figure 6-46.
9. Optional: Enable administrative security, as shown in Figure 6-47 on page 219, and click
Next.
Figure 6-47 CIP installation wizard: Enable administrative security
Attention: If the source system WebSphere Application Server profile has security
enabled, the profile’s original security configuration will override the values given here.
10.Review the summary shown in Figure 6-48. If you have to change something, click Back;
otherwise, click Next.
The Installation wizard creates the uninstaller program and then displays the progress
window that shows which components are being installed. This may take a little time. At the
end of the installation, the wizard displays the Installation completion window.
Optional: You can verify the installation by launching the First steps console; select the
Launch the First steps console check box before clicking the Finish button in the
completion window, as shown in Figure 6-49 on page 221.
Figure 6-49 CIP installation wizard: Installation completed
--------------------------------------------------------------------------------
IBM WebSphere Application Server Product Installation Status Report
--------------------------------------------------------------------------------
Installation
--------------------------------------------------------------------------------
Product Directory /opt/IBM/WebSphere/AppServer
Version Directory /opt/IBM/WebSphere/AppServer/properties/version
DTD Directory /opt/IBM/WebSphere/AppServer/properties/version/dtd
Log Directory /opt/IBM/WebSphere/AppServer/logs
Backup Directory
/opt/IBM/WebSphere/AppServer/properties/version/nif/backup
TMP Directory /var/tmp
Product List
--------------------------------------------------------------------------------
ND installed
Installed Product
--------------------------------------------------------------------------------
Name IBM WebSphere Application Server - ND
Version 6.1.0.11
ID ND
Build Level cf110734.37
Build Date 8/31/07
--------------------------------------------------------------------------------
End Installation Status Report
--------------------------------------------------------------------------------
As a result, Fix Pack 11 was installed with the product files. Next, we verify that the custom
VirtualHost configuration is in place, as shown in Figure 6-50. Then we verify that the custom
VirtualHost configuration in the new profile is set, as shown in Figure 6-51 on page 223.
Figure 6-51 Custom VirtualHost configuration propagated in installation
In general, you can consider two different cases for relocating a WebSphere Application
Server environment. One case is where WebSphere Application Server is installed
independently in each local zone, as shown in Figure 6-52. The other case is where
WebSphere Application Server installation is shared by multiple containers from a Global
Zone location, as shown in Figure 6-53.
(Figure placeholder: Host-A and Host-B each have a Global Zone. On Host-A, Local Zone Z1 contains WebSphere Application Server V6.1 at Fix Pack 9 with Dmgr and AppSrv01 profiles, Local Zone Z2 contains V6.1 at Fix Pack 15 with AppSrv01 and AppSrv02 profiles, and Local Zone Z3 contains V6.0.2 at Fix Pack 25 with an AppSrv01 profile. Zone Z3, including its own WebSphere Application Server installation, is moved to Host-B.)
Figure 6-52 Migrating a Solaris Container from Host-A to Host-B where WebSphere Application Server is installed within
the zone
(Figure placeholder: on Host-A, the Global Zone holds the shared WebSphere Application Server V6.1 binaries at Fix Pack 11, while Local Zones Z1 (Dmgr, AppSrv01), Z2 (AppSrv01, AppSrv02), and Z3 (AppSrv01) hold only profiles. Zone Z3 is moved to Host-B, whose Global Zone also holds WebSphere Application Server V6.1 at Fix Pack 11.)
Figure 6-53 Migrating a Solaris Container from Host-A to Host-B where the WebSphere Application Server installation is
shared
In the shared WebSphere Application Server installation case, you need to consider
additional items, because the profile directory in each container is located in a custom defined
location, as discussed in 5.3.4, “Scenario 4: Share the WebSphere Application Server
installation with zones from the Global Zone” on page 137.
You must first prepare the destination system's Global Zone with the WebSphere
Application Server binary installation at the same Fix Pack level as the source system before
you can migrate a zone. You must also make sure that wasprofile.properties is configured to
point to the same custom path to the profile directory as on the source system. Detailed steps
are described in 5.3.4, “Scenario 4: Share the WebSphere Application Server installation with
zones from the Global Zone” on page 137.
Consider another scenario where you copy the WebSphere Application Server installation
directories from the source system to the destination system:
If the WebSphere Application Server binaries were installed as the root user on the source
system, you will not have the package information entry in the Solaris package registry on
the destination system, as shown in Example 2-7 on page 26.
If the WebSphere Application Server binaries were installed with non-root privileges on the
source system, you can copy the binaries as is to the destination system.
If you wish to copy the WebSphere Application Server binaries installed from the source to
the destination system, we recommend using CIP, as discussed in 6.4.1, “Relocating
WebSphere Application Server environment using CIP” on page 196.
4. Detach the zone from the source system and create an archive file of the zone with the
commands shown in Example 6-14.
Example 6-14 Detaching and packing the zone on the source system for the transfer
source_host# zoneadm -z yourzone detach
source_host# cd /export/zones/yourzone
source_host# pax -w@f /tmp/yourzone.pax -p e *
Note: You can use any archive utility to archive your zone directory structure. In this
example, we use pax. Run man pax on Solaris for more information.
5. Transfer the archive file yourzone.pax from the source system to the destination system.
6. Log in to the Global Zone of the destination system as root.
7. Prepare the directory path for the new zone to be created on the destination system and
unpack the zone archive file, as shown in Example 6-15.
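Example 6-15 is not reproduced here; a minimal sketch of the preparation and unpacking,
assuming the zone path and archive file used in Example 6-14:
dest_host# mkdir -p /export/zones/yourzone
dest_host# cd /export/zones/yourzone
dest_host# pax -r@f /tmp/yourzone.pax -p e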
8. On the destination system, you must first create a new zone. This is just a placeholder to
which you attach the zone files transferred from the source system, as shown in
Example 6-16.
Attention: Zone names are unique within a host, while host names are unique within a
network. If you wish to keep the zone name the same on the source and destination
systems, you may do so. As we pointed out at the beginning of this section, if you keep
the host identity the same when you move a zone, you do not have to make any additional
modifications to the WebSphere Application Server environment.
Example 6-16 Creating a new zone and attaching moved files to that zone
dest_host# zonecfg -z yourzone
zonecfg:yourzone> create -a /export/zones/yourzone
zonecfg:yourzone> exit
dest_host# zoneadm -z yourzone attach
9. Now you can boot and log in to this new zone, as shown in Example 6-17, and start the
WebSphere Application Server process(es).
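Example 6-17 is not reproduced here; a minimal sketch, assuming the same zone name and
a default profile location (the profile path and server name are placeholders):
dest_host# zoneadm -z yourzone boot
dest_host# zlogin yourzone
yourzone# /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startServer.sh server1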
Shared WebSphere Application Server binary installation
For a shared WebSphere Application Server installation environment, you need to perform a
few extra steps to ensure that the shared structure from the source system is preserved in the
destination system's environment:
1. If the destination system does not have WebSphere Application Server installed, you must
first lay out the WebSphere Application Server binaries in the Global Zone of the
destination system. You must put the WebSphere Application Server installation in exactly
the same location as on the source system; otherwise, you must reconfigure the zone to
inherit the correct location of the WebSphere Application Server installation. You do not
need to create any profiles at this point. It is imperative that you install the same
WebSphere Application Server version, at the same Fix Pack level, and with the same
directory structure as on the source system.
Tip: Instead of having to reinstall the WebSphere Application Server binary installation on
the destination system, you would be better off using CIP, as described in 6.4.1,
“Relocating WebSphere Application Server environment using CIP” on page 196, to
preserve the WebSphere Application Server binary installation with no profiles from the
source system. Then, use it to install on the destination system so that you can easily
ensure that you receive the matching WebSphere Application Server version along with its
Fix Pack level.
Important: You should keep the Solaris version and its patch level the same between the
source and the destination systems for an easier and safer move. Otherwise, you will have
to do a lot of manual work to ensure that your application environment functions properly. If
you cannot resolve the discrepancies in the OS level or patches, you may experience
server failures in your WebSphere Application Server environment.
The example provided here is merely a guideline; be aware that there can be many other
differences between the systems involved. It is your responsibility to perform due diligence to
ensure that you understand all the differences and know how to accommodate them.
Tip: On a Solaris system, there are commands like prtconf, prtdiag, ifconfig, uname, and
so on. You can use them to examine your hardware configuration. The Solaris OS release
information is stored in /etc/release. You can determine the patch level with the showrev -p
command.
Example 6-18 Creating a zone and attaching files into it with -F flag
dest_host# zonecfg -z yourzone
zonecfg:yourzone> create -a /export/zones/yourzone
zonecfg:yourzone> exit
dest_host# zoneadm -z yourzone attach -F
4. If the network device is different on the destination system, you will have to reconfigure it,
as shown in Example 6-19, so that the zone can boot successfully.
Important: Keeping the same IP address of the zone when you move it from the source to
destination system will allow you to keep your existing profiles in the zone the way they are
configured. If you change the IP address of the zone while moving, you must either
reconfigure your application server profiles or create a new profile, as shown in 4.2.2,
“Creating profiles with Profile Management Tool” on page 60, reconfigure the server, and
redeploy your business applications.
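The opening lines of Example 6-19 fall on a page that is not reproduced here; a hedged
sketch of the network reconfiguration that leads into the closing lines below (the device name
e1000g0 and the address are placeholders, not values from the original example):
dest_host# zonecfg -z yourzone
zonecfg:yourzone> remove net
zonecfg:yourzone> add net
zonecfg:yourzone:net> set physical=e1000g0
zonecfg:yourzone:net> set address=192.168.1.10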
zonecfg:yourzone:net> end
zonecfg:yourzone> exit
5. As shown in Example 6-17 on page 226, you can boot and log in to the zone and start your
WebSphere Application Server processes.
Tip: There are other possibilities to move containers more efficiently on Solaris, such as
using ZFS. More information about this topic is available at:
http://sun.com/solaris/howtoguides/moving_containers.jsp
For a stand-alone node, run the backupConfig utility at the node level. For a Network
Deployment cell, run the backupConfig utility at the deployment manager level, because the
deployment manager contains the master repository. This is why you should not run the
backupConfig command at the node level of a cell.
The restoreConfig command restores the configuration of your stand-alone node or cell
from the zipped file that was created using the backupConfig command.
The WebSphere Application Server commands backupConfig and restoreConfig affect only
the configuration of the WebSphere Application Server environment. We recommend using
standard system backup tools to back up the remaining portion of the WebSphere Application
Server installation or the whole environment.
Example 6-20 shows an example of backupConfig command usage.
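Example 6-20 is not reproduced here; a minimal sketch of typical usage, assuming the
default profile path and a stand-alone profile named AppSrv01:
bash# cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
bash# ./backupConfig.sh /tmp/AppSrv01_backup.zip -nostop
The -nostop option skips the default behavior of stopping all servers on the node before the
backup is taken.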
Parameters that can be used with the backupConfig command are shown in Table 6-2.
6.5.2 Restoring a profile
To restore a profile backup created previously with the backupConfig command, use the
restoreConfig command. If the configuration you are restoring already exists, the existing
directory is renamed config.old (then config.old_1, and so on) before the restore begins. The
command restores the entire contents of the <profile_home>/config directory. By default, all
servers on the node are stopped before the configuration is restored, so that a node
synchronization does not occur during the restoration.
Executing restoreConfig from the <was_home>/bin directory without the -profileName
parameter will restore the default profile.
Executing restoreConfig from the <profile_home>/bin directory without the -profileName
parameter will restore that profile.
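A minimal sketch of restoring the backup created in the previous sketch (the file and profile
names are the same assumed values):
bash# cd /opt/IBM/WebSphere/AppServer/bin
bash# ./restoreConfig.sh /tmp/AppSrv01_backup.zip -profileName AppSrv01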
Table 6-3 shows the parameters to use with the restoreConfig command.
An exported archive is a zip file of the config directory with host-specific information removed.
The recommended extension for such files is .car. The exported archive can be a complete
configuration or a subset of the configuration. Importing the archive creates the
configurations defined in the archive.
The target configuration of an archive export or import can be a specific server or an entire
profile. To use an archive, you would:
1. Export a WebSphere Application Server configuration. This creates a zip file with the
configuration.
2. Unzip the files for browsing or update for use on other systems. For example, you might
need to update resource references.
3. Send the configuration to the new system. An import can work with the zip file or with the
expanded format.
4. Import the archive. The import process requires that you identify the object in the
configuration you want to import and the target object in the existing configuration. The
target can be the same object type as the archive or its parent:
– If you import a server archive to a server configuration, the configurations are merged.
– If you import a server archive to a node, the server is added to the node.
Server archives
The following command in wsadmin can be used to create an archive of a server:
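The command itself falls on a page that is not reproduced here; a hedged sketch using the
exportServer administrative task (the node and server names are placeholders):
wsadmin>$AdminTask exportServer {-archive /tmp/server1.car -nodeName node01 -serverName server1}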
This process removes applications from the server that you specify, and breaks the
relationship between the server that you specify and its core group, cluster, or bus
membership. If you export a single server of a cluster, the relationship to the cluster is
eliminated.
When you use the importServer command, you select a configuration object in the archive
as the source and select a configuration object on the system as the target. The target object
can match the source object or its parent. If the source and target are the same, the
configurations are merged.
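Correspondingly, a hedged sketch of importing that archive into an existing node; the
parameter names and values shown are assumptions for illustration only:
wsadmin>$AdminTask importServer {-archive /tmp/server1.car -nodeInArchive node01 -serverInArchive server1 -nodeName node02 -serverName server2}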
Profile archives
You can create a configuration archive (CAR) file containing the configuration of a
stand-alone application server profile for later restoration. A CAR file can be used to clone the
original profile to another machine or system. CAR files can be bundled in a customized
installation package for use with the Installation Factory feature. For more information about
using the Installation Factory, refer to the Information Center.
You can only create an archive of an unfederated profile (stand-alone application server
profile). Refer to 6.4.1, “Relocating WebSphere Application Server environment using CIP” on
page 196 for more information about using the CAR file for creating a Customized Installation
Package with the Installation Factory.
Chapter 7
Note that scaling the application server environment does not help if your application has an
unscalable design. For Web application and EJB development best practices, refer to the
white paper WebSphere Application Server Development Best Practices for Performance and
Scalability, found at:
http://www.ibm.com/software/webservers/appserv/ws_bestpractices.pdf
7.1.1 Scalability
Scalability defines how easily a site can expand. Web sites must be able to expand,
sometimes with little warning, to support an increased load. The increased load may come
from many sources:
New markets
Normal growth
Extreme peaks
An application and infrastructure that is architected for good scalability makes site growth
possible and easy.
Most often, one achieves scalability by adding hardware resources to improve throughput. A
more complex configuration, employing additional hardware, should allow one to service a
higher client load than that provided by the simple basic configuration. Ideally, it should be
possible to service any given load simply by adding additional servers or machines (or
upgrading existing resources).
However, adding new systems or processing power does not always provide a linear increase
in throughput. For example, doubling the number of processors in your system will not
necessarily result in twice the processing capacity. Adding an additional horizontal server in
the Application Server tier will not necessarily result in twice the request serving capacity.
Adding resources also has an impact on resource management and request distribution.
While the impact and corresponding degradation may be small, remember that adding n
additional machines does not always result in n times the throughput.
Also, you should not simply add hardware without doing some investigation and possible
software tuning first to identify potential bottlenecks in your application or any other
performance-related software configurations. Adding more hardware may not necessarily
improve the performance if the software is badly designed or not tuned correctly. Once the
software optimization has been done, then the hardware resources should be considered the
next step for improving performance. See 14.2, “Scalability”, of IBM WebSphere V6
Planning and Design Handbook, SG24-6446 for more information about this topic.
There are two different ways to improve performance when adding hardware: vertical scaling
and horizontal scaling. See “Scalability” on page 246 for more information about these
concepts.
Performance
Performance involves minimizing the response time for a given request or transaction.
The response time is the time from the moment the user initiates a request at the browser to
the moment the resulting HTML page returns to the browser. At one time, there was an
unwritten rule of the Internet known as the “8 second rule.” This rule stated that any page that
did not respond within eight seconds would be abandoned. Many enterprises still use this as
the response time benchmark threshold for Web applications.
Throughput
Throughput, while related to performance, more precisely defines the number of concurrent
transactions that can be accommodated.
For example, if an application can handle 10 customer requests simultaneously and each
request takes one second to process, this site has a potential throughput of 10 requests per
second.
The proposed configuration should ensure that each machine or server in the configuration
processes a fair share of the overall client load that is being processed by the system as a
whole. In other words, it is not efficient to have one machine overloaded while another
machine is mostly idle. If all machines have roughly the same capacity (for example, CPU
power), each should process a roughly equal share of the load. Otherwise, there likely needs
to be a provision for a workload to be distributed in proportion to the processing power
available on each machine.
Furthermore, if the total load changes over time, the system should automatically adapt itself;
for example, all machines may use 50% of their capacity, or all machines may use 100% of
their capacity. It should not be the case that one machine runs at 100% of its capacity while
the rest run at 15% of theirs.
In this chapter, we discuss both Web server load balancing using WebSphere Edge
Components and WebSphere workload management techniques.
Failover
The proposition to have multiple servers (potentially on multiple independent machines)
naturally leads to the potential for the system to provide failover. That is, if any one machine or
server in the system were to fail for any reason, the system should continue to operate with
the remaining servers. The load balancing property should ensure that the client load gets
redistributed to the remaining servers, each of which will take on a proportionately slightly
higher percentage of the total load. Of course, such an arrangement assumes that the system
is designed with some degree of overcapacity, so that the remaining servers are indeed
sufficient to process the total expected client load.
Ideally, the failover aspect should be totally transparent to clients of the system. When a
server fails, any client that is currently interacting with that server should be automatically
redirected to one of the remaining servers, without any interruption of service and without
requiring any special action on the part of that client. In practice, however, most failover
solutions may not be completely transparent. For example, a client that is currently in the
middle of an operation when a server fails may receive an error from that operation, and may
be required to retry (at which point the client would be connected to another, still available
server). Or the client may observe a pause or delay in processing, before the processing of its
requests resumes automatically with a different server. The important point in failover is that
each client, and the set of clients as a whole, is able to eventually continue to take advantage
of the system and receive service, even if some of the servers fail and become unavailable.
Conversely, when a previously failed server is repaired and again becomes available, the
system may transparently start using that server again to process a portion of the total client
load.
The failover aspect is also sometimes called fault tolerance, in that it allows the system to
survive a variety of failures or faults. It should be noted, however, that failover is only one
technique in the much broader field of fault tolerance, and that no such technique can make a
system 100 percent safe against every possible failure. The goal is to greatly minimize the
probability of system failure, but keep in mind that the possibility of system failure cannot be
completely eliminated.
Note that in the context of discussions on failover, the term server most often refers to a
physical machine (which is typically the type of component that fails). However, we will see
that WebSphere Application Server also allows for the possibility of one server process on a
given machine to fail independently, while other processes on that same machine continue to
operate normally.
HAManager
WebSphere Application Server V6 introduces a new concept for advanced failover and thus
higher availability, called the High Availability Manager (HAManager). The HAManager
enhances the availability of WebSphere Application Server singleton services like transaction
services or JMS messaging services. It runs as a service within each application server
process and monitors the health of WebSphere Application Server clusters. In the event of a
server failure, the HAManager will fail over the singleton service and recover any in-flight
transactions. Refer to Chapter 9, “WebSphere HAManager”, of WebSphere Application
Server V6 Scalability and Performance Handbook, SG24-6392 for details.
7.1.4 Maintainability
Maintainability is the ability to keep the system running before, during, and after scheduled
maintenance. When considering maintainability in performance and scalability, remember
that maintenance periodically needs to be performed on hardware components, operating
systems, and software products in addition to the application components.
While maintainability is somewhat related to availability, there are specific issues that need to
be considered when deploying a topology that is maintainable. In fact, some maintainability
factors are at cross purposes to availability. For example, ease of maintainability would dictate
that one should minimize the number of application server instances in order to facilitate
online software upgrades. Taken to the extreme, this would result in a single application
server instance, which of course would not provide a high availability solution. In many cases,
it is also possible that a single application server instance would not provide the required
throughput or performance.
Mixed configuration
In some configurations, it may be possible to mix multiple versions of a server or application,
so as to provide for staged deployment and a smooth upgrade of the overall system from one
software or hardware version to another. Coupled with the ability to make dynamic changes to
the configuration, this property may be used to effect upgrades without any interruption of
service.
In WebSphere Application Server V6, there are two methods for sharing of sessions between
multiple application server processes (cluster members). One method is to persist the
session to a database. An alternate approach is to use memory-to-memory session
replication functionality, which was added to WebSphere Application Server V5 and is
implemented using WebSphere internal messaging. The memory-to-memory replication
(sometimes also referred to as “in-memory replication”) eliminates a single point of failure
found in the session database (if the database itself has not been made highly available using
clustering software).
Tip: Refer to A Practical Guide to DB2 UDB Data Replication V8, SG24-6828 for details
about DB2 replication.
SSL
Setting up SSL communication causes extra HTTP requests and responses between the
machines, and every SSL message is encrypted on one side and decrypted on the other.
SSL at the front end or Web tier is a common implementation. This allows for secured
purchases, secure viewing of bank records, and so on. SSL handshaking, however, is
expensive from a performance perspective. The Web server is responsible for encrypting data
sent to the user, and decrypting the request from the user. If a site will be operated using SSL
more often than not and the server is operating at close to maximum CPU, then using some
form of SSL accelerator hardware in the server, or even an external device that offloads all
SSL traffic prior to communication with the Web server, may help improve performance.
Note: WebSphere Application Server security is out of the scope of this book. However,
an entire book is dedicated to this topic: WebSphere Application Server V6: Security
Handbook, SG24-6316.
As shown in Figure 7-1, a load balancing mechanism called IP spraying can be used to
intercept the HTTP requests and redirect them to the appropriate machine on the cluster,
providing scalability, load balancing, and failover.
(Figure placeholder: Web client requests pass through an IP sprayer and a caching proxy, which distribute the HTTP requests across multiple HTTP servers, each running the Web server plug-in.)
7.2.2 Workload management with Web server plug-in
This is a short introduction to plug-in workload management, where servlet requests are
distributed to the Web container in clustered application servers, as shown in Figure 7-2. This
configuration is also referred to as the servlet clustering architecture.
Figure 7-2 Servlet requests distributed by the HTTP server plug-in across the Web containers of clustered application servers
Clustering application servers that host Web containers automatically enables plug-in workload management for the application servers and the servlets they host. In the simplest case, the cluster is configured on a single machine, where the Web server process also runs.
Routing of servlet requests occurs between the Web server plug-in and the clustered application servers using HTTP or HTTPS. This routing is based purely on the weights associated with the cluster members. If all cluster members have identical weights, the plug-in sends an equal number of requests to all members of the cluster, assuming no session affinity. If the weights are scaled in the range from zero to 20, the plug-in routes requests to the cluster members with higher weight values more often. A rule of thumb formula for determining the routing preference is:
percentage of requests routed to a member = weight of that member / (weight 1 + weight 2 + ... + weight n)
where n is the number of cluster members in the cluster. The Web server plug-in routes requests around cluster members that are not available.
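For example, if a cluster has two members with weights 2 and 8 and there is no session affinity, the plug-in sends roughly 2 / (2 + 8) = 20% of the requests to the first member and 80% to the second.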
A cluster is a set of application servers that are managed together and participate in workload
management. Application servers participating in a cluster can be on the same node or on
different nodes. A Network Deployment cell can contain no clusters or many clusters, depending on the administrative needs of the cell.
When creating a cluster, it is possible to select an existing application server as the template
for the cluster without adding that application server into the new cluster (the chosen
application server is used only as a template, and is not affected in any way by the cluster
creation). All other cluster members are then created based on the configuration of the first
cluster member.
Cluster members can be added to a cluster either during cluster creation or afterwards. During cluster creation, one existing application server can be added to the cluster, or one or more new application servers can be created and added to it. Additional members can also be added to an existing cluster at any later time.
Depending on the capacity of your systems, you can define different weights for the various
cluster members.
It may be a good idea to create the cluster with a single member first, adjust the member's
configuration, and then add the other members. This process guarantees that all cluster
members are created with the same settings. Cluster members are required to have identical
application components, but they can be sized differently in terms of weight, heap size, and
other environmental factors. You must be careful, though, not to change anything that might result in different application behavior on each cluster member. This flexibility allows large enterprise machines to belong to a cluster that also contains smaller machines, such as Intel-based Windows servers.
Starting or stopping the cluster automatically starts or stops all cluster members, and
changes to the application are propagated to all application servers in the cluster.
Figure 7-3 on page 245 shows an example of a possible configuration that includes server
clusters. Server cluster 1 has two cluster members on node B only. Server cluster 2, which is
completely independent of server cluster 1, has two cluster members on node A and three
cluster members on node B. Finally, node A also contains a free-standing application server
that is not a member of any cluster.
Figure 7-3 Two independent server clusters spread across Node A and Node B, plus a free-standing application server on Node A
Clusters and cluster members provide the necessary support for workload management,
failover, and scalability. For details, refer to Chapter 6, “Plug-in workload management and
failover”, of WebSphere Application Server V6 Scalability and Performance Handbook,
SG24-6392.
Distributing workloads
The ability to route a request to any server in a group of clustered application servers allows the servers to share work, improving the throughput of client requests. Requests can be evenly distributed to servers to prevent workload imbalances in which one or more servers sit idle or nearly idle while others are overburdened. This load balancing activity is a benefit of workload management. Using weighted definitions of cluster members allows nodes with different hardware resources to participate in a cluster. A higher weight indicates that the member can handle a larger share of the work, so workload management sends proportionally more requests to that member.
Failover
With several cluster members available to handle requests, it is more likely that failures will
not negatively affect throughput and reliability. With cluster members distributed to various
nodes, an entire machine can fail without any application downtime. Requests can be routed
to other nodes if one node fails. Clustering also allows for maintenance of nodes without
stopping application functionality.
Figure 7-4 Vertical scaling: Cluster 1 with two members, each containing a Web container and an EJB container, on Node A
We recommend that you avoid using “rules of thumb” when determining the number of
cluster members needed for a given machine. The only way to determine what is correct
for your environment and application(s) is to tune a single instance of an application server
for throughput and performance, then add it to a cluster, and incrementally add additional
cluster members. Test performance and throughput as each member is added to the
cluster. Always monitor memory usage when you are configuring a vertical scaling
topology and do not exceed the available physical memory on a machine.
In general, 85% (or more) utilization of the CPU on a large server shows that there is little,
if any, performance benefit to be realized from adding additional cluster members.
In horizontal scaling, shown in Figure 7-5 on page 247, cluster members are created on
multiple physical machines. This allows a single WebSphere application to run on several
machines while still presenting a single system image, making the most effective use of
the resources of a distributed computing environment. Horizontal scaling is especially
effective in environments that contain many smaller, less powerful machines. Client
requests that overwhelm a single machine can be distributed over several machines in the
system. Failover is another benefit of horizontal scaling. If a machine becomes
unavailable, its workload can be routed to other machines containing cluster members.
Figure 7-5 Horizontal scaling: Cluster 1 members distributed across Node A and Node B
Horizontal scaling can handle application server process failures and hardware failures (or
maintenance) without significant interruption to client service.
Note: WebSphere Application Server V5.0 and higher supports horizontal clustering
across different platforms and operating systems. Horizontal cloning on different
platforms was not supported in WebSphere Application Server V4.
WebSphere Application Server applications can combine horizontal and vertical scaling to
reap the benefits of both scaling techniques, as shown in Figure 7-6.
Figure 7-6 Combined vertical and horizontal scaling: multiple Cluster 1 members on each of Node A and Node B
The EJB method permissions, Web resource security constraints, and security roles defined
in an enterprise application are used to protect EJBs and servlets in the application server
cluster. Refer to WebSphere Application Server V6: Security Handbook, SG24-6316 for more
information.
Configuring the Web container in a separate application server from the Enterprise JavaBean
container (an EJB container handles requests for both session and entity beans) enables
distribution of EJB requests between the EJB container clusters, as seen in Figure 7-7.
Figure 7-7 EJB workload management: Web containers and Java clients distribute EJB requests across clustered application servers hosting EJB containers
In this configuration, EJB client requests are routed to available EJB containers based on the
workload management EJB selection policy (Server-weighted round robin routing or Prefer
local).
Important: Although it is possible to split the Web container and EJB container, it is not
recommended because of the negative performance impact. In addition, application
maintenance also becomes more complex when the application runs in different
application servers.
The EJB clients can be servlets operating within a Web container, stand-alone Java programs
using RMI/IIOP, or other EJBs.
Criteria that influence the selection of a topology include availability, maintainability, and session state.
For detailed information about topologies, their advantages and disadvantages, required
software, as well as topology selection criteria, refer to the IBM WebSphere V6 Planning and
Design Handbook, SG24-6446.
Note: For each of the Network Deployment topologies, a decision needs to be made
regarding the placement of the Deployment Manager and master cell repository. The
Deployment Manager can be located either on a dedicated machine, or on the same
machine as one of its nodes. It is, however, considered a best practice to place the
Deployment Manager on a separate machine. For more information about possible
configurations, refer to 9.3, “Cell topologies”, in IBM WebSphere V6 Planning and Design
Handbook, SG24-6446.
We start by discussing scalability strategies for WebSphere Application Server that can help ensure high availability, balance the load, and remove bottlenecks.
One strategy is to distribute the load among the most appropriate resources, using workload management techniques such as vertical and horizontal scaling, as described in 7.2.3, “Workload management using WebSphere clustering” on page 243. WebSphere Application Server can benefit from both vertical and horizontal scaling, and the HTTP servers can be scaled horizontally in a clustered configuration. The use of these techniques is represented in Figure 7-9 on page 251.
We describe various topologies where different techniques are applied on a set of distributed
machines, in order to provide a reliable and efficient processing environment.
Each application server runs in its own JVM process. To allow a failover from one application
server to another without logging out users, we need to share the session data between
multiple processes. There are two ways of doing this in WebSphere Application Server
Network Deployment:
Memory-to-memory session replication
This method employs Data Replication Service (DRS) to provide replication of session
data between the process memory of different application server JVMs. DRS is included
with WebSphere Application Server and is automatically started when the JVM of a
clustered (and properly configured) application server starts.
Database persistence
Session data is stored in a database shared by all application servers.
Figure 7-8 Vertical scaling topology: Internet clients pass through a protocol firewall to a Web server node running the plug-in, which routes requests through a domain firewall to an application server node hosting a cluster of three application servers; a Deployment Manager node, application data, existing applications and data, and LDAP directory and security services reside in the back end
Figure 7-8 shows a vertical scaling example that includes a cluster with three cluster
members. In this case, the Web server plug-in routes the requests according to the availability of the application servers. Load balancing is performed at the Web server plug-in level
based on a round-robin algorithm and with consideration of the session state. Failover is also
possible as long as there are active application servers (JVMs) on the system.
Vertical scaling can be combined with other topologies to boost performance, throughput, and
availability.
Figure 7-9 Horizontal scaling topology: a Web server with the plug-in on Server A distributes requests through the domain firewall to application server cluster members on Server B and Server C, with application data, existing applications and data, and LDAP directory and security services in the back end
The Web server plug-in distributes requests to the cluster members on each node and
performs load balancing and failover. If the Web server (Server A) goes down, then the
WebContainer Inbound Chain of Server B or C could be utilized (limited throughput) while
Server A or the Web server on Server A is repaired.
Be aware that this configuration introduces a single point of failure; when the HTTP server is
out of service, your entire application is inaccessible from the outside network (internal users
could still access the application server(s) using the WebContainer Inbound Chain). You can eliminate this SPOF by adding a backup Web server.
If any component in the Application Server Node 1 (hardware or software) fails, the
Application Server Node 2 can still serve requests from the Web Server Node and vice versa.
The Load Balancer, part of the WebSphere Edge Components, can be configured to create a
cluster of Web servers and add it to a cluster of application servers. This is shown in 7.7,
“Topology with IP sprayer front end” on page 252.
The Dispatcher component of Load Balancer, which is part of the WebSphere Edge
Components, is an IP sprayer that performs intelligent load balancing among Web servers
based on server availability and workload capacity as the main selection criteria to distribute
the requests. Refer to Chapter 4, “Introduction to WebSphere Edge Components”, of
WebSphere Application Server V6 Scalability and Performance Handbook, SG24-6392, for
more details.
Figure 7-10 illustrates a horizontal scaling configuration that uses an IP sprayer on the Load
Balancer Node to distribute requests between Web servers on multiple machines.
Figure 7-10 Topology with an IP sprayer front end: a primary Load Balancer node (with a backup Load Balancer node connected in cascade) sprays requests to two Web server nodes, whose plug-ins distribute them to application server nodes hosting a cluster; a Deployment Manager node (Server G), application data, existing applications and data, and LDAP directory and security services complete the topology
The Load Balancer Node sprays Web client requests to the Web servers. The Load Balancer
is configured in cascade. The primary Load Balancer communicates with its backup through a heartbeat to perform failover, if needed, and thus eliminates the Load Balancer Node as a single point of failure.
Both Web servers perform load balancing and failover between the application servers
(cluster members) through the Web server plug-in.
If any component on Server C, D, E, or F fails, the other ones can still continue receiving
requests.
7.7.1 An example of IP Sprayer implementation
In this section, we demonstrate how the Dispatcher component of the WebSphere Edge Components, also known as an IP sprayer, can be configured on Solaris. The example we use here is with Solaris Logical Domains (LDoms), as detailed in 5.1.2, “Logical Domains” on page 117. This procedure can be applied to any Solaris system, including systems with Dynamic Domains.
One of the test cases we performed demonstrates that the Load Balancer’s Dispatcher component can distribute workloads between two Solaris 10 systems that are deployed in LDoms, as shown in Figure 7-11. The Dispatcher can be configured to work with servers that use standard protocols such as HTTP. Thus, in our example, we show how the Dispatcher can be configured to communicate with two WebSphere Application Server nodes.
An Edge LDom (9.43.86.181) hosts the Dispatcher for www.itso.com (non-forwarding address 9.43.86.179). It distributes requests to WebSphere Application Server Node 1 (9.43.86.185) in an LDom on the T2000a system (app3, 9.43.86.182) and to Node 3 (9.43.86.186) in an LDom on the T2000b system (app4, 9.43.86.183).
Figure 7-11 An example Edge’s Dispatcher distributing workloads to back-end WebSphere Application
Server nodes
2. Set up the Dispatcher and its components, namely the server, executor, manager, and
advisor, as shown in Example 7-2.
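As a rough sketch only (the complete listing is in Example 7-2), the Dispatcher configuration for the cluster and server addresses shown in Figure 7-11 might resemble the following dscontrol sequence; verify the exact syntax against the Edge Components Load Balancer documentation for your version:
# start the Load Balancer server process, then the executor
dsserver
dscontrol executor start
# define the cluster (sprayed) address and the HTTP port
dscontrol cluster add www.itso.com
dscontrol port add www.itso.com:80
# add the two back-end WebSphere Application Server nodes
dscontrol server add www.itso.com:80:9.43.86.185
dscontrol server add www.itso.com:80:9.43.86.186
# start the manager and the HTTP advisor so that weights reflect server health
dscontrol manager start
dscontrol advisor start http 80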
Example 7-3 Add the non-forwarding address for the Load Balancer
global# ifconfig lo0:1 plumb 9.43.86.179 netmask 255.255.252.0 up
7.8 Topology with redundancy of several components
The idea behind having as much redundancy of components as possible is to eliminate, or at least minimize, single points of failure (SPOF). Most of the components allow some kind of redundancy, such as a backup Load Balancer node for the primary Load Balancer node, or clustered Web servers or application servers, and so on. Some other components, such as
the Deployment Manager, do not support any automatic backup or failover. Figure 7-12 on
page 256 illustrates a topology with redundancy for several components, including:
Two Load Balancers
The one on Server A is the primary (active) Load Balancer. It is synchronized, through a
heartbeat, with a backup Load Balancer (in standby status) on another machine, Server B.
Two Web servers
Both of them receive requests from the Load Balancer and share the requests that come
from the Internet. Each one is installed on a different machine.
An application server cluster
The cluster implements vertical and horizontal scaling.
Eight cluster members
Two on each Application Server Node.
Four Application Server Nodes
Each one hosting two application servers. The nodes on Server E are independent
installations. The nodes on Server F are profiles of a single installation.
Two database servers
Using an HA (high availability) software product. This means that one copy of the database is active and the other is a replica that takes over if the active copy fails.
Two LDAP servers
Using an HA (high availability) software product. This means that one copy of the directory is active and the other is a replica that takes over if the active copy fails.
Figure 7-12 Topology with redundancy of several components: primary and backup Load Balancers connected in cascade, two Web servers, four application server nodes hosting an eight-member cluster on Servers E and F, and redundant database and LDAP (directory and security) servers behind the domain firewall
Important: For details about using Sun Cluster and WebSphere, refer to WebSphere
Application Server Network Deployment V6: High Availability Solutions, SG24-6688.
In this kind of topology, dynamic content is generated on the WebSphere Application Server nodes, while static content, such as HTML, CSS, and images, is served by the Web servers. The details of various Web servers supported by
WebSphere Application Server can be found at:
http://www-1.ibm.com/support/docview.wss?rs=180&uid=swg27006921
Sun Java System Web Server (SJSWS) is a highly scalable and secure Web server that can
front end the WebSphere Application Server environment in different ways. In addition to front
ending WebSphere Application Server, there are a number of other benefits that SJSWS can
provide. We discuss some example scenarios here to help you take advantage of numerous
features of SJSWS. The details of various features and documentation of the Web server can
be found at:
http://docs.sun.com/app/docs/coll/1653.1
7.9.2 Configuring the WebSphere plug-in for Sun Java System Web Server
We described the concept of workload management using the Web server plug-in in 7.2.2,
“Workload management with Web server plug-in” on page 243. The procedure is defined in
the IBM InfoCenter at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.nd.d
oc/info/ae/ae/twsv_plugin.html
Host-A
WAS Container
WAS or LDOM 1
Node 1
Container
WAS
WAS or LDOM 2
Node 2
SJWS
SJWS
Reverse
Reverse
Proxy Host-B
Proxy Container
WAS
WAS or LDOM 3
Node 3
Figure 7-13 Topology with Sun Java System Web Server as reverse proxy server
Details of the configuration steps for reverse proxy can be found here:
http://docs.sun.com/app/docs/doc/820-1061/6ncopp92h
When you follow the steps mentioned in the above documentation to configure the proxy
server with the Sun Java System Web Server admin console, your steps will resemble the
windows shown in Figure 7-14 to Figure 7-17 on page 260.
Figure 7-14 Sun Java System Web Server Proxy configuration Step 1
After clicking OK in Figure 7-14 on page 258, your configuration is created and will be
validated during deployment, as shown in Figure 7-15, Figure 7-16, and Figure 7-17 on
page 260.
Figure 7-15 Sun Java System Web Server Proxy configuration Step 2
Figure 7-16 Sun Java System Web Server Proxy configuration Step 3
In addition to the basic configuration, there are parameters that can be set to instruct SJSWS to perform special processing:
Sticky Cookie: Cookie name that, when present in a response, will cause subsequent
requests to stick to that origin server.
Sticky URI Parameter: The URI parameter name to inspect for route information. When
the URI parameter is present in a request URI and its value contains a colon (:) followed
by a route ID, the request will "stick" to the origin server identified by that route ID.
Route Header: The HTTP request header name used to communicate route IDs to origin
servers.
Route Cookie: Cookie name generated by the server when it encounters a sticky cookie in
a response. The route cookie stores a route ID that enables the server to direct
subsequent requests back to the same origin server.
The best part of this capability is that you do not need any additional configuration on the client or server (WebSphere Application Server) side. All the configuration is done on the SJSWS system, which listens on the SSL port to receive the HTTPS requests and in turn sends plain HTTP requests to WebSphere Application Server. As mentioned before, servers such as the Sun SPARC Enterprise T2000 and T5220 have built-in cryptographic modules (NCP) that SJSWS can be configured to use for onboard SSL processing. This can also reduce your need for specialized hardware and simplify your deployment environment. Details about configuring NCP with Sun Java System Web Server
can be found here:
http://www.sun.com/blueprints/0306/819-5782.pdf
You can get more detailed information about the XD product in Optimizing Operations with
WebSphere Extended Deployment V6.1, SG24-7422.
The characteristic of these static environments is that they do not make the best use of the overall capacity and of the configuration in terms of numbers of servers. Additionally, these environments cannot quickly respond to unexpected changes in workload. For example, if an application has a dramatic increase in load, there may be insufficient capacity in the servers set aside for that application to meet the demand, while sufficient spare capacity may exist in servers running other applications but cannot be used.
By using the dynamic operations features of Operations Optimization, you can change the
way a typical WebSphere environment is configured today to one that has the following
features:
Improves the utilization of available resources, such as CPU and memory.
Classifies and monitors the workload.
Provides a business-centric view of the workload and how it is performing.
Can respond in real time to changes in the workload mix (without human intervention if so
desired), using business guidelines that the organization specified.
User-defined policies based on business requirements specify performance goals for each
application. WebSphere Extended Deployment dynamically allocates resources to each
application aiming to meet these performance goals.
Optimization of the computing resources that you already own might allow you to run more
applications on the machines that you already have in place.
One of the key elements of WebSphere XD’s Operations Optimization V6.1 is the On Demand
Router (ODR). The ODR is an intelligent proxy that acts as the entry point for traffic coming
into an Extended Deployment cell, performing request prioritization, flow control, and dynamic workload management for HTTP requests and SOAP over HTTP requests.
We discuss briefly how to apply this ODR along with Sun virtualization technologies in
Chapter 5, “Configuration of WebSphere Application Server in an advanced Solaris 10
Environment” on page 115 to build out an advanced WebSphere deployment topology.
The ODR is a component that logically replaces and extends the functionality of the
WebSphere Web server plug-in. However, the ODR can and often does work in concert with
existing Web servers that use the WebSphere Web server plug-in.
The ODR provides the standard functionality of an HTTP/1.0 and HTTP/1.1 compliant proxy
server with the following added on demand features:
Request classification and prioritization
Request queuing
Request routing
Dynamic load balancing
HTTP session affinity
SSL ID affinity
WebSphere Partitioning Facility partition affinity
Important: Because the ODR is the entry point into your application server environment, it
is extremely important that you make the ODR highly available by using multiple ODRs.
There can be multiple ODRs in a topology. Each request goes through only one ODR, but for
environments with more than one ODR, the initial request could be routed to any one of them.
The ODR is aware of its environment because of a component called the on demand
configuration service (ODC). The ODC collects information about all of the WebSphere
Extended Deployment and WebSphere Network Deployment application servers and
applications that are deployed in the cell. It also gathers information about the defined
non-WebSphere middleware servers and middleware applications. This information is used
by the ODR to dynamically route HTTP requests to both WebSphere and non-WebSphere
servers. The ODC dynamically configures the routing rules at runtime.
Figure 7-18 shows an example (logical diagram) of how the XD ODR can be implemented in
the topologies we have previously discussed in this chapter.
In this topology, Edge components direct requests to two Sun Java System Web Server instances (SJWS1 and SJWS2) configured with the XD plug-in; the plug-ins forward requests to On Demand Routers, which distribute them to WebSphere Application Server Nodes 1 through 4, each running in its own Solaris container or LDom on Host-A and Host-B.
Figure 7-18 IBM WebSphere Edge Component, Sun Java System Web Server, and IBM WebSphere
XD ODR
With Solaris zones, you can deploy WebSphere Application Server nodes in separate virtual operating environments that are isolated at the process level. With the WebSphere Application Server HA clustering capability, you can configure these nodes to handle the workload in the cluster. However, if the cluster members deployed in zones end up being hosted on a single physical system, the hardware becomes a single point of failure. Similarly, LDoms can provide process-level isolation as well as different OS instances and patch levels, but the underlying hardware is still a single point of failure. Similar considerations apply when choosing the Web server, plug-ins, or WebSphere XD ODR placement. As shown in Figure 7-18 on page 263, when two WebSphere Application Server instances are connected to different ODRs, they can provide some level of redundancy to prevent system downtime in case of a hardware failure.
Higher availability at the hardware level can be achieved with Sun servers that provide Dynamic System Domains, or by implementing Sun Cluster technology. Additionally, you can use Solaris Resource Management capabilities to ensure that collocated services on a system coexist well while maximizing system utilization. You can also use the Solaris least privilege security mechanism to limit application process privileges in case of malicious activity.
Security is established at two levels. The first level is administrative security. Administrative
security applies to all applications running in the environment and determines whether
security is used at all, the type of registry against which authentication takes place, and other
values, many of which act as defaults.
The second level is application security. Application security, which can vary with each
application, determines the requirements specific to the application. In some cases, these
values can override global defaults. Application security includes settings such as
mechanisms for authenticating users and authorization requirements.
Administrative security
Administrative security determines whether security is used at all, the type of registry against
which authentication takes place, and other values, many of which act as defaults. Proper
planning is required because incorrectly enabling administrative security can lock you out of
the administrative console or cause the server to end abnormally.
Administrative security can be thought of as a "big switch" that activates a wide variety of
security settings for WebSphere Application Server. Values for these settings can be
specified, but they will not take effect until administrative security is activated. The settings
include the authentication of users, the use of Secure Sockets Layer (SSL), and the choice of
user account repository. In particular, application security, including authentication and
role-based authorization, is not enforced unless administrative security is active. As of
WebSphere Application Server V6.1, administrative security can be enabled during product
installation.
Administrative security represents the security configuration that is effective for the entire
security domain. A security domain consists of all of the servers that are configured with the
same user registry realm name. In some cases, the realm can be the machine name of a
local operating system registry. In this case, all of the application servers must reside on the
same physical machine. In other cases, the realm can be the machine name of a stand-alone
Lightweight Directory Access Protocol (LDAP) registry.
The basic requirement for a security domain is that the access ID that is returned by the
registry or repository from one server within the security domain is the same access ID as
that returned from the registry or repository on any other server within the same security
domain. The access ID is the unique identification of a user and is used during authorization
to determine if access is permitted to the resource.
The administrative security configuration applies to every server within the security domain.
What administrative security protects
The configuration of administrative security for a security domain involves configuring the
following technologies:
Authentication of HTTP clients
Authentication of IIOP clients
Administrative console security
Naming security
Use of SSL transports
Role-based authorization checks of servlets, enterprise beans, and Mbeans
Propagation of identities (RunAs)
The common user registry
The authentication mechanism
Other security information that defines the behavior of a security domain includes:
– The authentication protocol (Remote Method Invocation over the Internet Inter-ORB
Protocol (RMI/IIOP) security)
– Other miscellaneous attributes
Note: A Kerberos keytab configuration file contains a list of keys that are analogous to user passwords. It is important for hosts to protect their Kerberos keytab files by storing them on the local disk and making them readable only by authorized users.
This procedure applies only to the ordinary UNIX file system. If your site uses access control lists, secure the files by using that mechanism. Any site-specific requirements can affect which owner, group, and corresponding privileges should be used.
Do the following:
1. Go to the install_root directory and change the ownership of the configuration and properties directories to the user who logs onto the system for WebSphere Application Server primary administrative tasks. Run the following command:
chown -R logon_name directory_name
where logon_name is a specified user or group and directory_name is the name of the directory that contains the files.
We recommend that you assign ownership of the files that contain password information to
the user who runs the application server. If more than one user runs the application server,
provide permission to the group in which the users are assigned in the user registry.
2. Set up the permissions by running the following command:
chmod -R 770 directory_name
3. Go to the app_server_root/profiles/profile_name/properties directory and set the file
permissions. Set the access permissions for the following files as it pertains to your
security guidelines:
– TraceSettings.properties
– client.policy
– client_types.xml
– sas.client.props
– sas.stdclient.properties
– sas.tools.properties
– soap.client.props
– wsadmin.properties
– wsjaas_client.conf
For example, you might issue the command chmod 770 file_name, where file_name is the
name of the file listed previously in the install_root/profiles/profile_name/properties directory.
These files contain sensitive information, such as passwords.
4. Create a group for WebSphere Application Server and put the users who perform full or
partial WebSphere Application Server administrative tasks in that group.
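Putting these steps together on Solaris, a hardening sequence might look like the following sketch; the group name (wasgrp), user name (wasadm), and profile path are examples only and must be adapted to your installation:
# create an administrative group and add the WebSphere administrative user to it
groupadd wasgrp
usermod -G wasgrp wasadm
# give the administrative user and group ownership of the configuration and properties directories
chown -R wasadm:wasgrp /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config
chown -R wasadm:wasgrp /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/properties
# restrict the directories to the owner and group
chmod -R 770 /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config
chmod -R 770 /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/properties
# tighten the property files that contain sensitive information
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/properties
chmod 770 soap.client.props sas.client.props wsadmin.properties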
Tip: If you want to use WebSphere MQ as a Java Messaging Service (JMS) provider,
restrict access to the /var/mqm directories and log files used. Give write access to the user
ID mqm or members of the mqm user group only.
Important: After securing your environment, only the users with permission can access
the files. Failure to adequately secure these files can lead to a breach of security in your
WebSphere Application Server applications.
It is helpful to understand security from an infrastructure perspective so that you know the
advantages of different authentication mechanisms, user registries, authentication protocols,
and so on. Picking the right security components to meet your needs is a part of configuring
security.
You can configure security in WebSphere Application Server using the following steps:
1. Start the WebSphere Application Server administrative console.
If security is currently disabled, you are only prompted for a user ID. Log in with any user
ID. However, if security is currently enabled, you are prompted for both a user ID and a
password. Log in with a predefined administrative user ID and password.
2. Select Security → Secure administration, applications, and infrastructure. Use the
Security Configuration Wizard to configure security, or do it manually. The configuration
order is not important.
3. Configure the user account repository. On the Secure administration, applications, and
infrastructure window, you can configure user account repositories, such as federated
repositories, local operating system, stand-alone Lightweight Directory Access Protocol
(LDAP) registry, and stand-alone custom registry.
Tip: You can choose to specify either a server ID and password for interoperability or
enable a WebSphere Application Server V6.1 installation to automatically generate an
internal server ID.
For more information about automatically generating server IDs, see IBM WebSphere
Application Server V6.1 Security Handbook SG24-6316.
One of the details common to all user registries or repositories is the Primary
administrative user name. This ID is a member of the chosen repository, but also has
special privileges in WebSphere Application Server. The privileges for this ID and the
privileges that are associated with the administrative role ID are the same. The Primary
administrative user name can access all of the protected administrative methods.
In stand-alone LDAP registries, verify that the Primary administrative user name is a
member of the repository and not just the LDAP administrative role ID. The entry must be
searchable.
Important: SAS is supported only between Version 6.0.x and previous version servers that
have been federated in a Version 6.1 cell.
Attention: IBM no longer ships or supports the Secure Authentication Service (SAS) IIOP
security protocol. We recommend that you use the Common Secure Interoperability
Version 2 (CSIv2) protocol.
7. Modify or create a default Secure Sockets Layer (SSL) configuration. This action
protects the integrity of the messages sent across the Internet. The product provides a
single location where you can specify SSL configurations that the various WebSphere
Application Server features that use SSL can utilize, including the LDAP registry, Web
container, and the authentication protocol (CSIv2 and SAS). For more information, see
8.4.1, “Configuration of the KSSL module on Solaris” on page 303. After you modify a
configuration or create a new configuration, specify it in the SSL configurations window. To
get to the SSL configurations window, complete the following steps:
a. Select Security → SSL certificate and key management.
b. Under Configuration settings, select Manage endpoint security configurations →
configuration_name.
c. Under Related items, click SSL configurations.
You can also edit the DefaultSSLConfig file or create a new SSL configuration with a new
alias name. If you create a new alias name for your new keystore and truststore files,
change every location that references the DefaultSSLConfig SSL configuration alias. The
following list specifies the locations of where the SSL configuration repertoire aliases are
used in the WebSphere Application Server configuration.
For any transports that use the new network input/output channel chains, including HTTP
and Java Message Service (JMS), you can modify the SSL configuration repertoire
aliases in the following locations for each server:
– Select Security → Secure administration, applications, and infrastructure. Under
RMI/IIOP security, click CSIv2 inbound transport.
– Select Security → Secure administration, applications, and infrastructure. Under
RMI/IIOP security, click CSIv2 outbound transport.
For the SOAP Java Management Extensions (JMX™) administrative transports, you can
modify the SSL configurations repertoire aliases by selecting Servers → Application
servers → server_name. Under Server infrastructure, select Administration →
Administration services. Under Additional properties, select JMX connectors →
SOAPConnector. Under Additional properties, click Custom properties. If you want to
point the sslConfig property to a new alias, click New and type sslConfig in the name field,
and its value in the Value field.
For the Lightweight Directory Access Protocol (LDAP) SSL transport, you can modify the
SSL configuration repertoire aliases by selecting Security → Secure administration,
applications, and infrastructure. Under the User account repository, click the Available
realm definitions drop-down menus and select Standalone LDAP registry.
8. Select Security → Secure administration, applications, and infrastructure to
configure the rest of the security settings and enable security.
9. Validate the completed security configuration by clicking OK or Apply. If problems occur,
they display at the top of the console page in red type.
10.If there are no validation problems, click Save to save the settings to a file that the server
uses when it restarts. Saving writes the settings to the configuration repository.
Important: If you do not click Apply or OK in the Secure administration, applications, and
infrastructure window before you click Save, your changes are not written to the repository.
The server must be restarted for any changes to take effect when you start the
administrative console.
Because Java 2 security is relatively new, many existing or even new applications might not
be prepared for the very fine-grain access control programming model that Java 2 security is
capable of enforcing. Administrators need to understand the possible consequences of
enabling Java 2 security if applications are not prepared for Java 2 security. Java 2 security
places some new requirements on application developers and administrators.
If your applications or third-party libraries are not ready, having Java 2 security enabled
causes problems. You can identify these problems as Java 2 security
AccessControlExceptions in the system log or trace files. If you are not sure about the Java 2
security readiness of your applications, disable Java 2 security initially to get your application
installed and verify that it is working properly.
For the details of default permissions granted to applications in the product, refer to the
following policy files:
app_server_root/java/jre/lib/security/java.policy
The file represents the default permissions that are granted to all classes. The policy of
this file applies to all the processes launched by the Java Virtual Machine in the
WebSphere Application Server.
app_server_root/properties/server.policy
This file defines the policy for the WebSphere Application Server classes. At present, all the server processes on the same installation share the same server.policy file. However, you can configure this file so that each server process can have a separate server.policy file. Define the policy file as the value of the java.security.policy Java system property. For details of how to define Java system properties, refer to the Process definition section of the Manage application servers topic.
The server.policy file is not a configuration file managed by the repository and the file
replication service. Changes to this file are local and do not get replicated to other
machines. Use the server.policy file to define Java 2 security policy for server resources.
profile_root/config/cells/cell_name/nodes/node_name/app.policy
Use the app.policy file (per node) or the was.policy file (per enterprise application) to
define Java 2 security policy for enterprise application resources.
The policy embodied by the policy files cannot be made more restrictive because the product
might not have the necessary Java 2 security doPrivileged APIs in place. The restrictive
policy is the default policy. You can grant additional permissions, but you cannot make the
default more restrictive because AccessControlException exceptions are generated from
within WebSphere Application Server. The product does not support a more restrictive policy
than the default that is defined in the policy files.
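As an illustration of the syntax only (the permissions shown are examples, not product defaults), an additional grant entry in an app.policy or was.policy file might look like the following, using the standard Java policy format and the ${application} code base symbol:
grant codeBase "file:${application}" {
  // allow all code in the enterprise application to read system properties
  permission java.util.PropertyPermission "*", "read";
  // allow the application to open outbound socket connections
  permission java.net.SocketPermission "*", "connect";
};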
Several policy files are used to define the security policy for the Java process. These policy
files are static (code base is defined in the policy file) and in the default policy format provided
by the IBM Developer Kit, Java Technology Edition. For enterprise application resources and
utility libraries, WebSphere Application Server provides dynamic policy support. The code
base is dynamically calculated based on deployment information and permissions are
granted based on template policy files during runtime.
Attention: Syntax errors in the policy files cause the application server process to fail, so
edit these policy files carefully. It is recommended that you use the policytool in
<was_home>/java/bin to edit policy files.
With WebSphere Application Server, a user registry or a repository, such as virtual member
manager, authenticates a user and retrieves information about users and groups to perform
security-related functions, including authentication and authorization.
Although WebSphere Application Server supports different types of user registries, only one
user registry can be active. This active registry is shared by all of the product server
processes.
After configuring the registry or repository, you must specify it as the active repository.
Through the administrative console, you can select an available realm definition for the registry or repository from the User account repository section of the Secure administration, applications, and infrastructure window. After selecting the registry or repository, first click Set as current, and then click Apply.
Important: The sample provided is intended to familiarize you with this feature. Do not use
this sample in an actual production environment. You can find the sample file in
Appendix C, “Additional material” on page 457.
5. Add your custom registry class name to the class path. We recommend that you add the
Java Archive (JAR) file that contains your custom user registry implementation to the
app_server_root/classes directory.
6. Optional: Select the Ignore case for authorization option for the authorization to perform
a case insensitive check. Enabling this option is necessary only when your user registry is
case insensitive and does not provide a consistent case when queried for users and
groups.
7. Click Apply if you have any other additional properties to enter for the registry
initialization.
8. Optional: Enter additional properties to initialize your implementation:
a. Select Custom properties → New.
b. Enter the property name and value.
For the sample, enter the two properties shown in Table 8-1. It is assumed that the
users.props file and the groups.props file are in the customer_sample directory under
the product installation directory. You can place these properties in any directory that
you choose and reference their locations through custom properties. However, make
sure that the directory has the appropriate access permissions.
The Description, Required, and Validation Expression fields are not used and can remain blank.
The WebSphere Application Server Version 4 based custom user registry is migrated to the custom user registry based on the com.ibm.websphere.security.UserRegistry interface.
c. Click Apply.
d. Repeat this step to add other additional properties.
9. Select Security → Secure administration, applications, and infrastructure.
10.Under User account repository, click the Available realm definitions drop-down menu,
select Standalone custom registry, and click Configure.
11.Select either the Automatically generated server identity or Server identity that is
stored in the repository option. If you select the Server identity that is stored in the
repository option, enter the following information:
– Server user ID or administrative user: Specify the short name of the account that is
chosen in the second step.
– Server user password: Specify the password of the account that is chosen in the
second step.
12.Click OK and complete the required steps to turn on security.
Attention: The following code samples are split for illustrative purposes only.
5. Obtain the list of users from the registry with getUsers(String,int):
public Result getUsers(String pattern, int limit)
throws CustomRegistryException,
RemoteException;
The getUsers method returns the list of users from the registry. The names of users depend on the pattern parameter, and the number of users is limited by the limit parameter. In a registry that has many users, getting all the users is not practical, so the limit parameter is introduced to restrict the number of users retrieved from the registry. A limit of zero (0) returns all the users that match the pattern and might cause problems for large registries. Use this limit with care. (A minimal sketch of such an implementation appears after this list of methods.)
The custom registry implementations are expected to support at least the wildcard search
(*). For example, a pattern of asterisk (*) returns all the users and a pattern of (a*) returns
the users starting with a.
The return parameter is an object with a com.ibm.websphere.security.Result type. This
object contains two attributes: a java.util.List and a java.lang.boolean attribute. The list
contains the users that are returned and the Boolean flag indicates if more users are
available in the user registry for the search pattern. This Boolean flag is used to indicate to
the client whether more users are available in the registry.
In the FileRegistrySample.java sample file, the getUsers method retrieves the required
number of users from the user registry and sets them as a list in the Result object. To
discover if more users are presented than requested, the sample gets one more user than
requested and if it finds the additional user, it sets the Boolean flag to true. For pattern
matching, the match method in the RegExpSample class is used, which supports wildcard
characters, such as the asterisk (*) and the question mark (?).
This method is called by the administrative console to add users to roles in the various
map-users-to-roles windows. The administrative console uses the Boolean set in the
Result object to indicate that more entries matching the pattern are available in the user
registry.
This method takes the pattern and limit parameters and returns a Result object, which consists of the list and a flag that indicates whether more entries exist. When returning the list, use the Result.setList(List) method to set the list in the Result object. If more
entries exist than requested in the limit parameter, set the Boolean attribute to true in the
result object, using the Result.setHasMore method. The default for the Boolean attribute in
the result object is false.
6. Obtain the display name of a user with getUserDisplayName(String):
public String getUserDisplayName(String userSecurityName)
throws EntryNotFoundException,
CustomRegistryException,
RemoteException;
The getUserDisplayName method returns a display name for a user, if one exists. The
display name is an optional string that describes the user that you can set in some
registries. This descriptive name is for the user and does not have to be unique in the
registry.
If you do not need display names in your registry, return null or an empty string for this
method.
The getGroups method returns the list of groups from the user registry. The names of
groups depend on the pattern parameter. The number of groups is limited by the limit
parameter. In a registry that has many groups, getting all the groups is not practical. So,
the limit parameter is introduced to limit the number of groups retrieved from the user
registry. A limit of zero (0) implies returning all the groups that match the pattern and can
cause problems for large user registries. Use this limit with care. The custom registry
implementations are expected to support at least the wildcard search (*). For example, a pattern of asterisk (*) returns all the groups and a pattern of (a*) returns the groups starting with a.
The return parameter is an object of the com.ibm.websphere.security.Result type. This
object contains the java.util.List and java.lang.boolean attributes. The list contains the
groups that are returned and the Boolean flag indicates whether more groups are
available in the user registry for the pattern searched. This Boolean flag is used to indicate
to the client if more groups are available in the registry.
In the FileRegistrySample.java sample file, the getGroups method retrieves the required number of groups from the user registry and sets them as a list in the Result object. To discover if more groups are present than requested, the sample gets one more group than requested and, if it finds the additional group, sets the Boolean flag to true. For
pattern matching, the match method in the RegExpSample class is used, which supports
the asterisk (*) and question mark (?) characters.
This method is called by the administrative console to add groups to roles in the various
map-groups-to-roles windows. The administrative console uses the Boolean set in the
Result object to indicate that more entries matching the pattern are available in the user
registry.
The return is a Result object, which consists of the list and a flag that indicates whether
more entries exist. Use the Result.setList(List) method to set the list in the Result object. If
more entries exist than requested in the limit parameter, set the Boolean attribute to true in
the Result object using the Result.setHasMore method. The default for the Boolean
attribute in the Result object is false.
11.Obtain the display name of a group with getGroupDisplayName(String):
public String getGroupDisplayName(String groupSecurityName)
throws EntryNotFoundException,
CustomRegistryException,
RemoteException;
The getGroupDisplayName method returns a display name for a group if one exists. The
display name is an optional string that describes the group that you can set in some user
registries. This name is a descriptive name for the group and does not have to be unique
in the registry. If you do not need to have display names for groups in your registry, return
null or an empty string for this method.
In the FileRegistrySample.java sample file, this method returns the display name of the
group whose name matches the group name that is provided. If the display name does not
exist, this method returns an empty string.
The product can call this method to present the display names in the administrative
console or through the command line using the wsadmin tool. This method is used for
display purposes only.
12.Obtain the unique ID of a group with getUniqueGroupId(String):
public String getUniqueGroupId(String groupSecurityName)
throws EntryNotFoundException,
CustomRegistryException,
RemoteException;
Because a group to which the user belongs can be part of the role in the users and groups-to-role mapping, this method is called to check whether any of the groups that this user belongs to are mapped to that role.
17.Retrieve users from a specified group with getUsersForGroup(String,int):
public Result getUsersForGroup(String groupSecurityName, int limit)
throws NotImplementedException,
EntryNotFoundException,
CustomRegistryException,
RemoteException;
This method retrieves users from the specified group. The number of users returned is
limited by the limit parameter. A limit of zero (0) indicates returning all of the users in that
group. This method is not directly called by the WebSphere Application Server security
component. However, this method can be called by other components. In rare situations, if
you are working with a user registry where getting all the users from any of your groups is
not practical, you can create the NotImplementedException exception for the particular
groups. In this case, if the process choreographer is installed, verify that staff assignments are not modeled using these particular groups. If no concern exists about
returning the users from groups in the user registry, we recommend that you do not create
the NotImplemented exception when implementing this method.
The return parameter is an object with a com.ibm.websphere.security.Result type. This
object contains the java.util.List and java.lang.boolean attributes. The list contains the
users that are returned and the Boolean flag, which indicates whether more users are
available in the user registry for the search pattern. This Boolean flag indicates to the client whether more users are available in the user registry.
In the example, this method gets one user more than the requested number of users for a
group, if the limit parameter is not set to zero (0). If the method succeeds in getting one
more user, the Boolean flag is set to true.
This method can create the NotImplementedException exception in situations where it is
not practical to get the requested set of users. However, create this exception in rare
situations, as other components can be affected. It returns a Result object, which consists
of the list and a flag that indicates whether more entries exist. When the list is returned,
use the Result.setList(List) method to set the list in the Result object. If more entries exist than requested in the limit parameter, set the Boolean attribute to true in the Result object using the Result.setHasMore method. The default for the Boolean attribute in the Result
object is false.
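The sketch below illustrates the "fetch one more than requested" technique that the description above attributes to the example: if at least one extra user can be found, the hasMore flag is set and only the requested number of users is returned. The membersOf helper is a hypothetical lookup used for illustration only.

import java.util.ArrayList;
import java.util.List;
import com.ibm.websphere.security.Result;

public class UsersForGroupSketch {

    public Result getUsersForGroup(String groupSecurityName, int limit) {
        List<String> members = membersOf(groupSecurityName); // hypothetical lookup
        Result result = new Result();
        if (limit > 0 && members.size() > limit) {
            // At least one more user than requested exists, so report that more
            // are available and return only the requested number.
            result.setHasMore();
            members = new ArrayList<String>(members.subList(0, limit));
        }
        result.setList(members);
        return result;
    }

    // Hypothetical helper; a real registry would query its backing user store here.
    private List<String> membersOf(String groupSecurityName) {
        return new ArrayList<String>();
    }
}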
18.Implement the createCredential(String) method:
public com.ibm.websphere.security.cred.WSCredential createCredential(String
userSecurityName)
throws NotImplementedException,
EntryNotFoundException,
CustomRegistryException,
RemoteException;
In this release of the product, the createCredential method is not called. You can return null.
In the example, a null value is returned.
Tip: You can find the FileRegistrySample.java file and the properties files users.props and
groups.props in Appendix C, “Additional material” on page 457.
WebSphere Application Server uses Java Secure Sockets Extension (JSSE) as the SSL
implementation for secure connections. JSSE is part of the Java 2 Standard Edition (J2SE)
specification and is included in the IBM implementation of the Java Runtime Environment (JRE).
JSSE handles the handshake negotiation and protection capabilities that are provided by SSL
to ensure secure connectivity exists across most protocols. JSSE relies on X.509
certificate-based asymmetric key pairs for secure connection protection and some data
encryption. Key pairs effectively encrypt session-based secret keys that encrypt larger blocks
of data. The SSL implementation manages the X.509 certificates.
SSL configurations
An SSL configuration comprises a set of configuration attributes that you can associate with
an endpoint or set of endpoints in the WebSphere Application Server topology. The SSL
configuration enables you to create an SSLContext object, which is the fundamental JSSE
object that the server uses to obtain SSL socket factories. You can manage the following
configuration attributes:
An alias for the SSLContext object
A handshake protocol version
A keystore reference
A truststore reference
A key manager
One or more trust managers
A security level selection of a cipher suite grouping or a specific cipher suite list
A certificate alias choice for client and server connections
To understand the specifics of each SSL configuration attribute, see 8.3.1, “Secure Sockets
Layer configurations” on page 286.
The current release provides improved capabilities for managing SSL configurations and
more flexibility when you select SSL configurations. In this release, you can select from the
following approaches.
Programmatic selection
You can set an SSL configuration on the running thread prior to an outbound connection.
WebSphere Application Server ensures that most system protocols, including Internet
Inter-ORB Protocol (IIOP), Java Message Service (JMS), Hyper Text Transfer Protocol
(HTTP), and Lightweight Directory Access Protocol (LDAP), accept the configuration.
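The following sketch outlines what the programmatic approach can look like with the com.ibm.websphere.ssl.JSSEHelper API that is named later in this section. The SSL configuration alias MyOutboundSSLConfig is hypothetical, and the method names getInstance, getProperties, and setSSLPropertiesOnThread reflect our reading of the product API; verify the exact signatures against the WebSphere Application Server documentation before relying on them.

import java.util.Properties;
import com.ibm.websphere.ssl.JSSEHelper;

public class ProgrammaticSSLSelectionSketch {

    public void callWithSelectedSSLConfig() throws Exception {
        JSSEHelper helper = JSSEHelper.getInstance();

        // Look up the properties of a named SSL configuration
        // ("MyOutboundSSLConfig" is a hypothetical alias).
        Properties sslProps = helper.getProperties("MyOutboundSSLConfig");

        // Associate the configuration with the running thread; subsequent outbound
        // connections made on this thread (IIOP, JMS, HTTP, LDAP) pick it up.
        helper.setSSLPropertiesOnThread(sslProps);
        try {
            // ... make the outbound connection here ...
        } finally {
            // Clear the thread association when the call completes.
            helper.setSSLPropertiesOnThread(null);
        }
    }
}

Because this selection has the highest precedence, clearing the thread in a finally block prevents the configuration from leaking into unrelated connections on the same thread.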
Dynamic selection
You can associate an SSL configuration dynamically with a specific target host, port, or
outbound protocol by using predefined selection criteria. When it establishes the connection, WebSphere Application Server checks to see whether the target host and port match predefined criteria that include the domain portion of the host. Additionally, you can
predefine the protocol for a specific outbound SSL configuration and certificate alias
selection.
Direct selection
You can select an SSL configuration by using a specific alias, as in past releases. This
method of selection is maintained for backwards compatibility because many applications and
processes rely on alias references.
Scope selection
You can associate an SSL configuration and its certificate alias, which is located in the
keystore associated with that SSL configuration, with a WebSphere Application Server
management scope. This approach is recommended to manage SSL configurations centrally.
You can manage endpoints more efficiently because they are located in one topology view of
the cell. The inheritance relationship between scopes reduces the number of SSL
configuration assignments that you must set.
Each time you associate an SSL configuration with a cell scope, the node scope within the
cell automatically inherits the configuration properties. However, when you assign an SSL
configuration to a node, the node configuration overrides the configuration that the node
inherits from the cell. Similarly, all of the application servers for a node automatically inherit
the SSL configuration for that node unless you override these assignments. Unless you
override a specific configuration, the topology relies on the rules of inheritance from the cell
level down to the endpoint level for each application server.
The topology view displays an inbound tree and outbound tree. You can make different SSL
configuration selections for each side of the SSL connection, based on what that server connects to as an outbound connection and what connects to the server as an inbound connection.
The runtime uses an order of precedence for determining which SSL configuration to choose
because you have many ways to select SSL configurations. Consider the following order of
precedence when you select a configuration approach:
1. Programmatic selection. If an application sets an SSL configuration on the running thread
using the com.ibm.websphere.ssl.JSSEHelper application programming interface (API),
the SSL configuration is guaranteed the highest precedence.
2. Dynamic selection criteria for outbound host and port or protocol.
3. Direct selection.
4. Scope selection. Scope inheritance guarantees that the endpoint that you select is
associated with an SSL configuration and is inherited by every scope beneath it that does
not override this selection.
All of the nodes put their signer certificates in this common truststore (trust.p12). After you
federate a node, the default SSL configuration is automatically modified to point to the
common truststore, which is located in the cell directory. The node can now communicate
with all other servers in the cell.
All default SSL configurations contain a keystore with the name suffix DefaultKeyStore and a
truststore with the name suffix DefaultTrustStore. These default suffixes instruct the
WebSphere Application Server runtime to add the signer of the personal certificate to the
common truststore. If a keystore name does not end with DefaultKeyStore, the keystore
signer certificates are not added to the common truststore when you federate the server. You
can change the default SSL configuration, but you must ensure that the correct trust is
established for administrative connections, among others.
For more information, refer to IBM WebSphere Application Server V6.1 Security Handbook,
SG24-6316.
Attention: It is unsafe to trust a signer exchange prompt without verifying the SHA digest.
An unverified prompt can originate from a browser when a certificate is not trusted.
You can run a retrieveSigners administrative script from a client prior to making
connections to servers. To download signers, no administrative authority is required. To
upload signers, you must have Administrator role authority. The script downloads all of the
signers from a specified server truststore into the specified client truststore and can be
called to download only a specific alias from a truststore. You can also call the script to
upload signers to server truststores. When you select the CellDefaultTrustStore truststore
as the specified server truststore and common truststore for a cell, all of the signers for
that cell are downloaded to the specified client truststore, which is typically
ClientDefaultTrustStore.
You can physically distribute to clients the trust.p12 common truststore that is located in
the cell directory of the configuration repository. When doing this distribution, however, you
must ensure that the correct password has been specified in the ssl.client.props client SSL
configuration file. The default password for this truststore is WebAS. Change the default
password prior to distribution. Physical distribution is not as effective as the previous
options. When changes are made to the personal certificates on the server, automated
exchange can fail.
Make dynamic changes to the SSL configuration during off-peak hours to reduce the
possibility of timing-related problems and to prevent the possibility of the server starting
again. If you enable the runtime to accept dynamic changes, then change the SSL
configuration and save the security.xml file. Your changes take effect when the new
security.xml file reaches each node.
With built-in certificate management, you can replace a self-signed certificate along with all of the signer certificates scattered across many truststores. You can also retrieve a signer from a remote port by connecting to the remote SSL host and port and intercepting the signer during the handshake. The certificate is first validated according to the certificate SHA digest, and then the
administrator must accept the validated certificate before it can be placed into a truststore.
When you make a certificate request, you can send it to a certificate authority (CA). When the
certificate is returned, you can accept it within the administrative console.
When you create an SSL configuration, you can set the following SSL connection attributes:
Keystore
Default client certificate for outbound connections
Default server certificate for inbound connections
Truststore
Key manager for selecting a certificate
Trust manager or managers for establishing trust during the handshake
Handshaking protocol
Ciphers for negotiating the handshake
Client authentication support and requirements
You can manage an SSL configuration using any of the following methods:
Central management selection
Direct reference selection
Dynamic outbound connection selection
Programmatic selection
Using the administrative console, you can manage all of the SSL configurations for
WebSphere Application Server. From the administrative console, select Security → SSL
certificates and key management → Manage endpoint security configurations →
Inbound or Outbound → SSL_configuration. You can view an SSL configuration at the
level it was created and in the inherited scope below that point in the topology. If you want the
entire cell to view an SSL configuration, you must create the configuration at the cell level in
the topology.
SSL configuration in the security.xml file
The attributes defining an SSL configuration repertoire entry for a specific management
scope are stored in the security.xml file. The scope determines the point at which other levels
in the cell topology can see the configuration, as shown in Example 8-1.
The SSL configuration attributes from Example 8-1 are described in Table 8-2, which lists each security.xml attribute with its description, its default value, and the associated SSL property.
8.3.3 Secure Sockets Layer node, application server, and cluster isolation
Secure Sockets Layer (SSL) enables you to ensure that any client that attempts to connect to
a server during the handshake first performs server authentication. Using SSL configurations
at the node, application server, and cluster scopes, you can isolate communication between
servers that should not be allowed to communicate with each other over secure ports.
Authenticating only the server side of a connection is not adequate protection when you need to isolate a server. Any client can obtain a signer certificate for the server and add it to its trust
store. SSL client authentication must also be enabled between servers so that the server can
control its connections by deciding which client certificates it can trust.
Isolation also requires that you use centrally managed SSL configurations for all or most
endpoints in the cell. Centrally managed configurations can be scoped, unlike direct or endpoint configuration selection, and they enable you to create SSL configurations, key stores,
and trust stores at a particular scope. Because of the inheritance hierarchy of WebSphere
Application Server cells, if you select only the properties that you need for an SSL
configuration, only these properties are defined at your selected scope or lower. For example,
if you configure at the node scope, your configuration applies to the application server and
individual endpoint scopes below the node scope.
When you configure the key stores, which contain cryptographic keys, you must work at the
same scope at which you define the SSL configuration and not at a higher scope. For
example, if you create a key store that contains a certificate whose host name is part of the
distinguished name (DN), then store that keystore in the node directory of the configuration
repository. If you decide to create a certificate for the application server, then store that
keystore on the application server in the application server directory.
When you configure the trust stores, which control trust decisions on the server, you must
consider how much you want to isolate the application servers. You cannot isolate the
application servers from the node agents or the deployment manager. However, you can
configure the SOAP connector endpoints with the same personal certificate or to share trust.
Naming persistence requires IIOP connections when they pass through the deployment
manager. Because application servers always connect to the node agents when the server
starts, the IIOP protocol requires that WebSphere Application Server establish trust between
the application servers and the node agents.
You can modify the default configuration so that each node has its own trust store, and every
application server on the node trusts only the node agent that uses the same personal
certificate. You must also add the signer to the node trust store so that WebSphere
Application Server can establish trust with the deployment manager. To isolate the node,
ensure that the following conditions are met:
The deployment manager must initiate connections to any process.
The node agent must initiate connections to the deployment manager and its own
application servers.
The application servers must initiate connections to the application servers on the same node, to their own node agent, and to the deployment manager.
As shown in Figure 8-1, Node Agent A contains a key.p12 keystore and a trust.p12 trust store
at the node level of the configuration repository for node A.
Figure 8-1 Default node-level key.p12 and trust.p12 files for Node Agent A, Node Agent B, their servers, and the deployment manager (the trust.p12 files contain the nodeA, nodeB, and cell signers)
When you associate an SSL configuration with this keystore and truststore, you break the link
with the cell-scoped trust store. To isolate the node completely, repeat this process for each
node in the cell. WebSphere Application Server SSL configurations override the cell scope
and use the node scope instead so that each process at this scope uses the SSL
configuration and certificate alias that you selected at this scope. You establish proper
administrative trust by ensuring that nodeA signer is in the common trust store and the cell
signer is in the nodeA trust store. The same logic applies to node B as well.
If you configure outbound SSL configurations dynamically, you can accommodate these
conditions. When you define a specific outbound protocol, target host, and port for each
different SSL configuration, you can override the scoped configuration.
Figure 8-2 Server-level key.p12 and trust.p12 files: each application server (nodeAserver1, nodeAserver2, nodeBserver1, nodeBserver2) has its own key, and each trust.p12 contains only the server, node, and cell signers that the process needs
The dynamic configuration enables server1 on Node A to communicate with server1 on Node B only over IIOP. The dynamic outbound rule is IIOP,nodeBhostname,*.
Figure 8-3 on page 295 shows a sample cluster configuration where cluster 1 contains a
key.p12 with its own self-signed certificate, and a trust.p12 that is located in the
config/cells/<cellname>/clusters/<clustername> directory.
Figure 8-3 Cluster-level keystore and truststore configuration: cluster1 and cluster2 each have their own key.p12, the deployment manager uses the cell key, and the trust.p12 files contain the cluster1, cluster2, node, and cell signers
In the example, cluster1 might contain Web applications, and cluster2 might contain EJB
applications. Considering the various protocols, you decide to enable IIOP traffic between the
two clusters. Your job is to define a dynamic outbound SSL configuration at the cluster1 scope
with the following properties:
IIOP,nodeAhostname,9403|IIOP,nodeAhostname,9404|IIOP,nodeBhostname,9403|IIOP,nodeBhostname,9404
You have to create another SSL configuration at the cluster1 scope that contains a new
trust.p12 file with the cluster2 signer. Consequently, outbound IIOP requests go either to
nodeAhostname ports 9403 and 9404 or to nodeBhostname ports 9403 and 9404. The IIOP
SSL port numbers on these two application server processes in cluster2 identify the ports.
As you review Figure 8-3, note the following features of the cluster isolation configuration:
The trust.p12 for cluster1 contains signers that allow communications with itself (cluster1
signer), between both node agents (nodeAsigner and nodeBsigner), and with the
deployment manager (cell signer).
The trust.p12 for cluster2 contains signers that allow communications with itself (cluster2
signer), between both node agents (nodeAsigner and nodeBsigner), and with the
deployment manager (cell signer).
Node agent A and Node agent B can communicate with themselves, the deployment
manager, and both clusters.
Also, you must enable SSL client authentication for SSL to enforce the isolation requirements
on both sides of a connection. Without mutual SSL client authentication, a client can easily
obtain a signer for the server programmatically and thus bypass the aim of isolation. With SSL
client authentication, the server would require the client's signer for the connection to
succeed. For HTTP/S protocol, the client is typically a browser, a Web Service, or a URL
connection. For the IIOP/S protocol, the client is typically another application server or a Java
client. WebSphere Application Server must know the clients to determine if SSL client
authentication enablement is possible. Any applications that are available through a public
protocol must not enable SSL client authentication because the client may fail to obtain a
certificate to authenticate to the server.
Important: It is beyond the scope of this chapter to describe all the factors you must
consider to accomplish complete isolation. For more information about this topic, refer to
IBM WebSphere Application Server V6.1 Security Handbook, SG24-6316.
WebSphere Application Server certificate management requires that you define the keystores
in your WebSphere Application Server configuration. With iKeyman, you need access to the
keystore file only. A keystore file that is created in iKeyman, together with its personal certificates and signers, can be read into the WebSphere Application Server configuration by using the createKeyStore command.
The majority of certificate management functions are the same between WebSphere
Application Server and iKeyman, especially for personal certificates and signer certificates.
However, certificate requests are special. The underlying behavior is different in the two
certificate management schemes. Because of this different behavior, when a certificate
request is generated from iKeyman, the process must be completed in iKeyman. For example,
a certificate that is generated by a certificate request that originated in iKeyman must be
received in iKeyman as well.
The same is true for WebSphere Application Server. For example, when a certificate is
generated from a certificate request that originated in WebSphere Application Server, the
certificate must be received in WebSphere Application Server.
You can perform the certificate operations shown in Table 8-3 on page 297 using iKeyman.
Table 8-3 iKeyman certificate operations, listed by type of certificate, function, and description
To improve performance for LDAP searches, the default filters for Sun Java System Directory
Server are defined such that when you search for a user, the result contains all the relevant
information about the user (user ID, groups, and so on). As a result, the product does not call
the LDAP server multiple times. This definition is possible only in these directory types, which
support searches where the complete user information is obtained.
Figure 8-4 Configuring LDAP with Sun Java System Directory Server
3. Use the WebSphere Application Server administrative console to set up the information
that is needed to use Sun Java System Directory Server.
4. Based on the information from the previous steps, you can specify the following values on
the LDAP settings window (Figure 8-5 on page 301):
– Primary administrative user name
Specify the name of a user with administrative privileges that is defined in the registry.
This user name is used to access the administrative console or used by wsadmin. You
can either enter the complete distinguished name (DN) of the user or the short name of
the user, as defined by the user filter in the Advanced LDAP settings window.
– Type
Specify Sun Java System Directory Server. The type of LDAP server determines the
default filters that are used by WebSphere Application Server.
– Host
Specify the fully qualified host name of the machine that is running Sun Java System
Directory Server.
– Port
Specify the LDAP server port number. The host name and the port number represent
the realm for this LDAP server in the WebSphere Application Server cell. So, if servers
in different cells are communicating with each other using Lightweight Third Party
Authentication (LTPA) tokens, these realms must match exactly in all the cells.
– SSL Settings
Optional: Select the SSL Enabled option if you want to use Secure Sockets Layer
communications with the LDAP server.
If you select the SSL Enabled option, you can select either the Centrally managed or
the Use specific SSL alias option.
KSSL is the Solaris kernel SSL proxy and is responsible for the server-side SSL protocol. It works as an SSL proxy server, listening on the SSL port and providing a clear-text proxy port to the application server. It manages keys and certificates, performs the SSL handshake with clients, and maintains the SSL session state information. The SSL handshake is handled asynchronously, without involving the application server.
On UltraSPARC T1 and T2 based systems, you can use the KSSL module to take advantage
of all the features provided by the OS and the Solaris Cryptographic Framework.
Solaris provides a tool called pktool to manage certificates and keys. It can manage certificates and keys in multiple keystores, including PKCS#11 tokens (that is, the Cryptographic Framework), Netscape Security Services (NSS) tokens, and the standard file-based keystore for OpenSSL. In this example, we use the PKCS#11 based keystore so that we take advantage of the Cryptographic Framework. pktool also provides additional capability to manage the Certificate Revocation List (CRL). For details about using pktool, issue the man pktool command.
pktool’s token subcommand can be used to list all the available PKCS#11 tokens, which can
be used to generate the certificate later. An example of this subcommand is shown in
Example 8-2.
We are going to use the token named Sun Software PKCS#11 softtoken, as shown in
Example 8-3, to generate the certificate.
Before we configure the KSSL proxy, we need to disable the metaslot feature in the
cryptographic framework, as shown in Example 8-5.
Now we need to create a pin file in which we enter the PIN in plain text format, as shown in Example 8-6. Set appropriately restrictive access rights on the pin file to secure it.
Once this is done, we have everything ready to create the proxy. For this task, we need to go to the WebSphere administrative console, create another HTTP port, and bind it to the default_host virtual host, as shown in Example 8-7. Take note of the port just created, and then stop the application server.
At this point, we need to start the application server so that it is ready to accept client
requests.
In Example 8-7, a proxy SSL port has been created using port 443, which will receive the client's SSL request and forward it to the plain-text HTTP port running on 9080. For demonstration purposes, we have used the default HTTP port; replace it with the new port that was created as described previously.
The KSSL proxy services can be enabled and disabled by using the commands shown in
Example 8-8 on page 305.
Example 8-8 svcadm disable and enable commands
svcadm disable /network/ssl/proxy    # disable the KSSL proxy
svcadm enable /network/ssl/proxy     # enable the KSSL proxy
svcs | grep kssl                     # list the KSSL proxy services
In case of any issues, this log file can be checked for error messages before taking corrective
actions.
To check connectivity on the newly created SSL port, 443, do either of the following tasks:
Run openssl s_client -connect ws07:443.
Open a browser and go to https://<hostname>:443/<application-url>.
If everything is configured correctly, then the user will get a response from the server, and,
based on the method, the user will be prompted to accept the certificate presented by the
server.
Use the kstat command to verify that the Solaris 10 kernel is indeed doing the SSL
processing for the application. This command lists the different kernel statistics by default, but
the output can be made more specific by using different command options. In this case, we
can use the kstat command, as shown in Example 8-9.
WebSphere Application Server enables you to use multiple trust association interceptors. The
Application Server uses the first interceptor that can handle the request.
8.5.2 Trust association settings
Use this page to enable trust association, which integrates application server security and
third-party security servers. More specifically, a reverse proxy server can act as a front-end
authentication server while the product applies its own authorization policy onto the resulting
credentials passed by the proxy server.
When security is enabled and any of these properties change, go to the Secure
administration, applications, and infrastructure window and click Apply to validate the
changes.
The following steps are required when setting up security for the first time. Ensure that
Lightweight Third Party Authentication (LTPA) is the active authentication mechanism:
1. From the WebSphere Application Server console, select Security → Secure
administration, applications, and infrastructure.
2. Ensure that the Active authentication mechanism field is set to Lightweight Third Party
Authentication (LTPA). If not, set it and save your changes.
Lightweight Third Party Authentication (LTPA) is the default authentication mechanism for WebSphere Application Server. You can configure LTPA prior to configuring single sign-on (SSO) by selecting Security → Secure administration, applications, and infrastructure → Authentication mechanisms and expiration. Although you can use Simple WebSphere Authentication Mechanism (SWAM) by selecting the Use SWAM-no authenticated communication between servers option on the Authentication mechanisms and expiration window, single sign-on (SSO) requires LTPA as the configured authentication mechanism:
1. From the administrative console for WebSphere Application Server, select Security →
Secure administration, applications, and infrastructure.
2. Under Web security, click Trust association.
3. Select the Enable trust association option.
4. Under Additional properties, click the Interceptors link.
5. Click com.ibm.ws.security.web.WebSealTrustAssociationInterceptor to use the
WebSEAL interceptor. This interceptor is the default.
6. Under Additional properties, click Custom Properties.
7. Click New to enter the property name and value pairs. Ensure the parameters shown in
Table 8-4 are set.
com.ibm.websphere.security.webseal.loginId: The format of the user name is the short name representation. This property is mandatory. If the property is not set in WebSphere Application Server, the TAI initialization fails.
8. Click OK.
9. Save the configuration and log out.
10.Restart WebSphere Application Server.
Note: Enabling Web security single sign-on (SSO) is optional when you configure the
TAMTrustAssociationInterceptorPlus.
Although you can use Simple WebSphere Authentication Mechanism (SWAM) by selecting
the Use SWAM-no authenticated communication between servers option on the
Authentication mechanisms and expiration window, single sign-on (SSO) requires LTPA as
the configured authentication mechanism:
1. From the administrative console for WebSphere Application Server, select Security →
Secure administration, applications, and infrastructure.
2. Under Web security, click Trust association.
com.ibm.websphere.security.webseal.checkViaHeader: You can configure the TAI so that the via header is ignored when validating trust for a request. Set this property to false if none of the hosts in the via header need to be trusted. When set to false, you do not need to set the trusted host names and host ports properties. The only mandatory property to check when checkViaHeader is false is com.ibm.websphere.security.webseal.loginId. The default value of the checkViaHeader property is false. When using the Tivoli Access Manager plug-in for Web servers, set this property to false.
Note: The via header is part of the standard HTTP header that records the server names that the request passed through.
com.ibm.websphere.security.webseal.loginId: The format of the user name is the short name representation. This property is mandatory. If it is not set in WebSphere Application Server, the TAI initialization fails.
To better understand Sun Java Virtual Machine (JVM) and Solaris performance, we suggest that you read this chapter in conjunction with the following Sun documentation:
http://java.sun.com/performance
http://java.sun.com/performance/reference/whitepapers/5.0_performance.html
http://java.sun.com/j2se/reference/whitepapers/memorymanagement_whitepaper.pdf
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
http://docs.sun.com/app/docs/doc/816-5166
http://docs.sun.com/app/docs/doc/817-6223
The following topics are discussed, highlighting the differences and the added value that Solaris 10 offers to any enterprise WebSphere deployment:
Performance management challenges
System performance monitoring and tuning
JVM performance monitoring and tuning
WebSphere Application Server performance monitoring and tuning
Performance benchmarks
9.2.1 Challenges
Performance problems can be encountered almost anywhere. The problem can be network or hardware related, back-end system related, or located in the various layers of the product software stack, or, quite often, it can be an application design issue.
Understanding the flow used to diagnose a problem helps to establish the monitoring that
should be in place for your site to detect and correct performance problems. The first
dimension is the "user view,” the black box view of your Web site. This is an external
perspective of how the overall Web site is performing from a user’s point of view and identifies
how long the response time is for a user. From this black box perspective, it is important to
understand the load and response time on your site. To monitor at this level, many industry
monitoring tools allow you to inject and monitor synthetic transactions, helping you identify
when your Web site experiences a problem.
The second dimension is to understand the basic health of all the systems and the network that make
up a user request. This is the "external" view, which typically leverages tools and utilities
provided with the systems and applications running. In this stage, it is of fundamental
importance to understand the health of every system involved, including Web servers,
application servers, databases, back-end systems, and so on. If any of the systems has a
problem, it may have a rippling effect and cause something like the "servlet is slow" problem.
This dimension corresponds to the "what resource is constrained" portion of the problem
diagnosis. To monitor at this level, WebSphere provides PMI instrumentation and the Tivoli
Performance Viewer as a starting point. There are also several industry tools built using PMI
instrumentation that provide 24x7 monitoring capabilities. In addition to the capabilities built
into WebSphere, there are numerous visual monitoring tools that are included with the JVM,
which comes with WebSphere Application Server on the Solaris platform. Solaris 10 also
offers many tools to monitor and make appropriate decisions to tune the system for optimum
performance.
System Management
– Processor Statistics
– Memory
– Network
– I/O Subsystem
Real-time Tracing
– DTrace
– Sun Studio Collector/Analyzer
JVM
– jvmstat
– jmap
– jstack
– jhat
– And many other tools.
The third dimension is the application view, which looks at the application code that satisfies the user request: the specific servlets that access session beans, entity CMP beans, a specific database, and so on. This dimension typically comes into play when you need an in-depth, internal understanding of who is using a resource. Typically at this stage, some type of timing trace through the application, or thread analysis under load, is used to isolate areas of the application and particular interactions with back-end systems or databases that are especially slow under load. WebSphere provides the Request Metrics technology as a starting point. In many cases, you start using many of the development tools provided, such as IBM Rational Application Developer, Sun Studio, and NetBeans™.
9.2.3 Differences
The WebSphere Application Server software is available on a number of different platforms,
including Solaris 10. Although WebSphere is designed to operate similarly on all platforms,
there are always platform specific differences that cannot be abstracted away, and it is very
important that administrators make an effort to understand these platform specific
differences. The performance of any application server depends on its core processing
engines, which are the JVM and the OS. Once the administrator understands these
differences and knows the capability of a particular JVM and OS, the administrator will be
able to manage system performance effectively. Each OS has its own tunable parameters to
perform optimally with different types of applications. Similarly, each JVM implementation has
its own tunable parameters. As said earlier, the JVM plays an important role in the
performance of application servers; therefore, it is important to recognize these differences
regarding WebSphere on Solaris 10.
Processors have been traditionally designed to provide a high degree of parallelism at the
instruction level using a single hardware thread of execution. Such complex processors have
been able to reach extremely high processing frequencies. However, these high clock rates
have been achieved only with the unwanted side effects of additional power usage and
decreased efficiency through excess generation of heat. What is more, the issue of memory
latency causes the otherwise fast single-threaded processors to waste time in an idle state
while they wait for memory to become available. Finally, given the type of demanding
business workload requirements, applications have not been able to reap the benefits of
parallel instructions; rather, they have had to rely on parallel threads of execution.
Sun offers various hardware technologies that may be suited to one deployment scenario but
may not be appropriate for another deployment situation. Understanding the system behavior
when choosing the right hardware saves a lot of deployment and support calls later in the
deployment cycle.
Another important thing to note about WebSphere on Solaris is that it is bundled with a Sun JDK in which some components of Sun's JDK implementation have been changed. This requires additional caution when managing the performance of the system on Solaris 10 and other Solaris platforms. The tuning parameters and behavior of the Sun JDK may differ from those of the IBM JDK, even though technically they are implementations of the Java Virtual Machine at the same specification level. In addition to the JVM specification level, there
are additional tools and technologies that may be available with a particular implementation
and that can be used to manage the performance.
As explained in 1.7.2, “The Java technology for WebSphere on Solaris” on page 12, the Sun
JVM we use here has had changes made to the ORB/XML and Security component. The
details of the changes can be found in the following files in WAS_INSTALL_DIR/java/:
README_FIRST
ibmorbguide.htm
securityguide.ref.htm
Refer to these files in order to understand the differences and plan accordingly.
Performance management for any system would include the following items:
Staying up to date with patches for your OS, for example, Solaris 10
System monitoring tool
– Collecting data
– Analyzing
– Taking corrective actions
Tuning
Different organizations have different policies regarding updated software components from
the software and OS vendors. We recommend that the JDK and the Solaris OS be updated with a
Solaris patch cluster. While updating with patches solves some problems, it may bring in
additional complexity. Therefore, you need to plan carefully before updating the system.
Some of the implications would be as follows:
Analyzing the need for a specific patch in your environment
Addressing Sun notifications for Solaris 10, JDK, and WebSphere Application Server
alerts or the IBM Fix Pack guidelines for WebSphere Application Server
Addressing the need for change in your environment
Solaris delivers patches in different ways to address different needs. Details can be found at:
http://sunsolve.sun.com/show.do?target=patchpage
It is essential that your environment functions correctly, and you must meet the criteria
outlined on that Web site. Any deviation from it may result in an unsupported configuration as
well as other issues, including performance problems.
For example, the system requirements for WebSphere Application Server V6.1 on Solaris
SPARC can be found at:
http://www.ibm.com/support/docview.wss?rs=180&uid=swg27007662
The same applies to the JVM version as well, which can be found at:
http://www.ibm.com/support/docview.wss?rs=180&uid=swg27005002
The following Web site has information about keeping the Java SE patches up-to-date for
Solaris:
http://sunsolve.sun.com/pub-cgi/show.pl?target=patches/JavaSE
Figure 9-1 The Sun Web site for Solaris updates and patches
9.3.2 Solaris system monitoring
In this section, we discuss how you can monitor your Solaris OS. We recommend using the
commands described in this section to monitor the system’s behavior. We provide the
definition of each command along with the recommended command line option(s), if
applicable. We also provide a sample output for each command and explain what you should
be paying attention to in the output.
Note: The command definitions are from the Solaris online manual, which we provide a
link to at the end of each command section.
Each command is followed by its manual section information inside the parentheses. For
example, vmstat(1M) means it is from the System Administration Commands section 1M.
vmstat(1M)
The vmstat command reports virtual memory statistics regarding kernel thread, virtual
memory, disk, trap, and CPU activity.
On multi-processor (MP) systems, vmstat averages the number of CPUs into the output.
This command can be used to get monitoring data about the following items:
– Kernel Threads
– Memory
– Paging
– Disk
– CPU
Example 9-1 shows a sample vmstat output. The last row depicts a system that has about 83% of its capacity in use (that is, 17% idle), of which 64% is user time and 19% is kernel time. The kernel threads run queue is at 40, which means that 40 processes are ready in the run queue. Looking at the page and disk columns, we do not see any paging activity or disk contention for input/output. It is also clear that enough swap and free memory are available. Therefore, the load on this system can be increased to achieve higher resource utilization.
If the last column, marked “id” (idle CPU), falls into the range of zero to less than 10%, the system has reached the saturation point and needs to be tuned to lower CPU utilization.
In cases where the run queue is very high (the column kthr “r”), it may mean that there are
too many threads currently in the system and they are not getting CPU time when they are
ready for execution. These conditions can be eliminated by reducing different thread pool
sizes within WebSphere Application Server. The optimal thread pool settings are crucial to
the system’s stability and performance.
For more details on the mpstat command, see the documentation at:
http://docs.sun.com/app/docs/doc/816-5166/mpstat-1m
intrstat(1M)
The intrstat utility iteratively examines the interrupt distribution over the CPU cores.
The intrstat utility gathers and displays runtime interrupt statistics. The output is a table
of device names and CPU IDs, where each row of the table denotes a device, and each
column of the table denotes a CPU. Each cell in the table contains both the raw number of
interrupts for the given device on the given CPU, and the percentage of absolute time
spent in that device's interrupt handler on that CPU.
The device name is given in the form of {name}#{instance}. The name is the normalized
driver name, and typically corresponds to the name of the module implementing the driver
(refer to the ddi_driver_name command (9F) for more information). Many Sun-delivered
drivers have their own manual pages (refer to the intro command (7) for more
information).
If the standard output is to a terminal, the table contains as many columns of data as can
fit within the terminal width. If the standard output is not a terminal, the table contains at
most four columns of data. By default, data is gathered and displayed for all CPUs. If the
data cannot fit in a single table, it is printed across multiple tables. The set of CPUs for
which data is displayed can be optionally specified with the -c or -C option.
By default, intrstat displays data once per second and runs indefinitely. Both of these
behaviors can be optionally controlled with the interval and count parameters, respectively.
Example 9-3 shows an example intrstat output.
As we noted in the previous example, there was a problem with the system, and we were looking for a way to determine the root cause. So, on the same system, we ran intrstat for the same duration as mpstat. The data from this tool (the row in bold) shows that CPU
ID 27 is receiving all the network interrupts for the device marked with id e1000g#0. This
means that on this system, WebSphere Application Server has been configured to use the
e1000g#0 device for its network communications and all interrupts generated due to
network activities are going to CPU ID 27, which does not have any more room to
accommodate any additional load. The system needs to be tuned and additional network
cards need to be used. We may do link aggregation here or add another instance of
WebSphere Application Server so that the load gets distributed. Another strategy would be
to do some tuning to spread the interrupt processing over multiple CPU cores by using the
following tunable parameters in /etc/system:
set ip:ip_soft_rings_cnt = 8
set ddi_msix_alloc_limit = 8
For more details on the intrstat command, see the documentation at:
http://docs.sun.com/app/docs/doc/816-5166/intrstat-1m
prstat(1M)
The prstat utility iteratively examines all active processes on the system and reports
statistics based on the selected output mode and sort order. prstat provides options to
examine only processes matching specified PIDs, UIDs, zone IDs, CPU IDs, and
processor set IDs.
For more details on the iostat command, see the documentation at:
http://docs.sun.com/app/docs/doc/816-5166/iostat-1m
netstat(1M)
The netstat command is used to get different network statistics. It displays the contents of
certain network-related data structures in various formats, depending on the options you
select. Based on command options, you can get various data points from this command to
see how the network is behaving and based on that take appropriate actions.
For more details on the netstat command, see the documentation at:
http://docs.sun.com/app/docs/doc/816-5166/netstat-1m
corestat
The corestat command is a tool that monitors the utilization of UltraSPARC T1 and T2
cores. The corestat command reports aggregate core usage based on the instructions
executed by a core (that is, by the available Virtual Processors sharing the same core).
As we discussed earlier in the system performance management section, there are tools
that existed before the Chip Multi Threading (CMT) processor came into existence. Some
of the advances brought additional complexity, so to understand the characteristics of
these systems, we need to look at the collective data from hardware counters and system
monitoring tools (that is, vmstat and mpstat). It is essential to understand the CMT
processor and core usages in the right context to be able to make the proper decision for
performance management and related tasks.
When Sun first started shipping the CMT based systems, called UltraSPARC T1, some of
the Sun engineers had written the corestat utility to monitor the status of the processor
cores. The corestat utility is based on hardware performance counters and is very helpful
in understanding the system behavior under load. UltraSPARC T2 is the second
generation CMT processor, which is the most current product at this writing.
For T1 based processor details, go to:
http://www.solarisinternals.com/wiki/index.php/CMT_Utilization
For more details, go to the following blogs. These blogs also have the corestat utility,
which can be downloaded free of charge and be used as is in making performance and
capacity planning decisions:
– For UltraSPARC T1 based system:
http://blogs.sun.com/travi/entry/ultrasparc_t1_utilization_explained
– For UltraSPARC T2 based system:
http://blogs.sun.com/travi/entry/corestat_for_ultrasparc_t2
– The corestat tool details:
http://blogs.sun.com/travi/entry/corestat_core_utilization_reporting_tool
Sun’s UltraSPARC T1 processor is a very radical approach to multicore processing. It is often described as a “system on a chip” and provides very high scalability and throughput. More details about T1 can be found at:
http://sun.com/processors/UltraSPARC-T1/
Due to this radical design shift, we need to understand some of the principal changes behind the processing on these systems. The traditional system monitoring tools may not be adequate to describe the complete details of the system’s utilization. This processor combines chip multiprocessing with chip multithreading. The thread scheduler and the OS treat each hardware thread as a different CPU, which results in 32 processors on T1 based servers and 64 processors on T2 based servers, respectively. The conventional idle state and the idle state in the context of these systems mean different things, so paying attention to this idle state will help resolve performance issues. Utilization of a core differs from utilization of the system, and looking at both provides the guidelines for tuning, or even for increasing the load on the system.
As a result, the meaning of an idle thread has changed, which requires a different understanding of processor behavior. The blogs previously referenced can help you understand these differences and make the right choices for performance tuning. The tool can be downloaded from:
– For UltraSPARC T1 based systems:
http://blogs.sun.com/roller/resources/travi/corestat.v.1.1.tar.gz
– For UltraSPARC T2 based systems:
http://blogs.sun.com/travi/resource/corestat.v.1.2.2.tar.gz
plockstat(1M)
The plockstat utility gathers and displays user-level locking statistics. By default,
plockstat monitors all lock contention events, gathers frequency and timing data about
those events, and displays the data in decreasing frequency order, so that the most
common events appear first.
The plockstat utility gathers data until the specified command completes or the process
specified with the -p option completes.
The plockstat utility relies on DTrace to instrument a running process or a command it
invokes to trace events of interest. This imposes a small but measurable performance
impact on the processes being observed. Users must have the dtrace_proc privilege and
have permission to observe a particular process with plockstat. For more information,
see 9.3.3, “Dynamic Tracing (DTrace)” on page 330.
For each PID operand, the kill utility will perform actions equivalent to the kill(2)
function called with the following arguments:
– The value of the PID operand will be used as the PID argument.
– The sig argument is the value specified by the -s option, the -signal_name option, or
the -signal_number option, or, if none of these options is specified, by SIGTERM.
The signaled process must belong to the current user unless the user is the super-user.
For example, to generate a thread dump on Solaris 10 system, get the process ID of the
WebSphere Application Server and then issue the following command:
kill -3 <PID_OF_WAS_PROCESS>
Alternatively, it can be done as follows:
kill -SIGQUIT <PID_OF_WAS_PROCESS>
This will send the thread dump to the <WAS_LOG_DIR>/native_stdout.log file.
pmap(1)
The pmap command displays information about the address space of a process.
Example 9-6 shows a sample output of the pmap command.
Example 9-7 shows a sample pstack output on a WebSphere Application Server process.
Example 9-7 Sample pstack output of a WebSphere Application Server process
f8005874 * sun/reflect/NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+87 (line 39)
f8005874 * sun/reflect/DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 (line 25)
f8005d3c * java/lang/reflect/Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+111 (line 585)
f8005874 * com/ibm/wsspi/bootstrap/WSLauncher.launchMain([Ljava/lang/String;)V+341 (line 183)
f8005764 * com/ibm/wsspi/bootstrap/WSLauncher.main([Ljava/lang/String;)V+17 (line 90)
f8005764 * com/ibm/wsspi/bootstrap/WSLauncher.run(Ljava/lang/Object;)Ljava/lang/Object;+22 (line 72)
f8005d3c * org/eclipse/core/internal/runtime/PlatformActivator$1.run(Ljava/lang/Object;)Ljava/lang/Object;+219 (line 78)
f8005d3c * org/eclipse/core/runtime/internal/adaptor/EclipseAppLauncher.runApplication(Ljava/lang/Object;)Ljava/lang/Object;+103 (line 92)
f8005874 * org/eclipse/core/runtime/internal/adaptor/EclipseAppLauncher.start(Ljava/lang/Object;)Ljava/lang/Object;+29 (line 68)
f8005874 * org/eclipse/core/runtime/adaptor/EclipseStarter.run(Ljava/lang/Object;)Ljava/lang/Object;+135 (line 400)
f8005874 * org/eclipse/core/runtime/adaptor/EclipseStarter.run([Ljava/lang/String;Ljava/lang/Runnable;)Ljava/lang/Object;+60 (line 177)
pldd(1)
The pldd command lists the dynamic libraries linked into each process, including shared
objects explicitly attached using dlopen(3DL). This command is very useful when you want
to discover which libraries have been loaded by WebSphere Application Server, and when
you are switching between different technology implementations, such as switching from
aio to nio for WebContainer.
Example 9-8 shows a sample pldd output.
priocntl(1M)
The priocntl command displays or sets scheduling parameters of the specified
process(es). It can also be used to display the current configuration information for the
system's process scheduler or execute a command with specified scheduling parameters.
Processes fall into distinct classes with a separate scheduling policy applied to each class.
The currently supported process classes are Real-Time, Time-Sharing, and Interactive
classes. The characteristics of these classes and the class-specific options available are
described in the "Usage” section of the Solaris manual page for priocntl(1), which can be
found at http://docs.sun.com/app/docs/doc/816-0210/6m6nb7mi6?a=view. With the
appropriate permissions, the priocntl command can change the class and other
scheduling parameters associated with a running process.
Solaris introduced the Fair Share (FSS) and Fixed Priority (FX) process schedulers. When running
only the application server on a system, it is good to run it under the FX scheduler so that
the WebSphere Application Server process gets a fixed priority. When you run your
environment or workload with and without this setting, you will notice a difference in the
involuntary context switches (icsw) column of the mpstat output. When this option is enabled, the
icsw number will be low, which results in a performance gain. In the lab environment, an
improvement of about 10% was observed for some applications. If the application can take
advantage of fixed priority, using this scheduler will enhance performance.
For example, the scheduler for a WebSphere Application Server process can be changed
as follows:
/usr/bin/priocntl -s -c FX -m 59 -p 59 -i pid <PID_OF_WAS_PROCESS>
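To verify the change, the current scheduling class and parameters of the process can be displayed with a command along these lines (PID placeholder shown for illustration):
/usr/bin/priocntl -d -i pid <PID_OF_WAS_PROCESS>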
ps(1)
The ps command displays information about processes. Normally, only those processes
that are running with your effective user ID and are attached to a controlling terminal (see
termio(7I)) are shown. Additional categories of processes can be added to the display
using various options. In particular, the -a option allows you to include processes that are
not owned by you (that do not have your user ID), and the -x option allows you to include
processes without controlling terminals. When you specify both -a and -x, you get
processes owned by anyone, with or without a controlling terminal. The -r option restricts
the list of processes printed to running and runnable processes.
The ps command displays, in tabular form, the process ID under PID, the controlling
terminal (if any) under TT, the CPU time used by the process so far, including both user
and system time, under TIME, the state of the process, under S, and finally, an indication
of the COMMAND that is running.
For example, this command can be run to discover the actual command that was invoked to
start the WebSphere Application Server process:
/usr/ucb/ps -auwx <PID_OF_WAS_PROCESS>
This is a very handy command to discover some of the JVM arguments as well as the
class path, as shown in Example 9-9.
Example 9-10 Recommended ndd tuning for Sun SPARC Enterprise T5220 server
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_xmit_hiwat 131072
ndd -set /dev/tcp tcp_recv_hiwat 131072
ndd -set /dev/tcp tcp_naglim_def 1
You can get more information about the ndd command on your Solaris system by typing
man ndd.
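The current value of a parameter can also be queried by omitting the -set option; for example, the following command (shown for illustration) displays the current listen queue limit:
ndd /dev/tcp tcp_conn_req_max_q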
DTrace can dynamically instrument the operating system kernel and user processes to record
data at locations of interest, called probes. A probe is an event or activity to which DTrace can
bind a request to perform a set of actions, like recording a stack trace, a time stamp, or the
argument to a function. Probes are like programmable sensors scattered all over your Solaris
system in interesting places. DTrace probes come from a set of kernel modules called
providers, each of which performs a particular kind of instrumentation to create probes.
Having said that, it is still important to make sure that the scripts are well written. It is a best
practice to check the probe effect of your scripts before running them in production.
See Chapter 38, “Performance Considerations”, in Solaris Dynamic Tracing Guide, found
at:
http://docs.sun.com/app/docs/doc/817-6223
DTrace introduces its own scripting language called D. D-scripts are used to interact with the
Solaris Dynamic Tracing facility. Also, D-scripts are the portable way of developing standard
scripts for collecting data from a live running system.
Figure 9-2 on page 332 explains the basic workings of the DTrace system. DTrace is a Solaris
kernel feature. The user space consumers like dtrace and plockstat interact with the DTrace
system using the libdtrace interface. Typically, users interact with the DTrace system using
D-scripts. There are also C and Java APIs to interact with the DTrace subsystem.
By default, DTrace can only be run as the root user. Non-root users need some extra
privileges to run DTrace. These are easily configured by updating the /etc/user_attr file with
the necessary privileges. There are three privileges that are specific to DTrace:
dtrace_proc: Permits users to use the PID provider for process-level tracing
dtrace_user: Permits users to use the profile and syscall providers to look at what their
processes are doing in the kernel
dtrace_kernel: Permits users to access almost all DTrace probes, except the capability to
look into other users’ processes
For example, you can allow the user named “joe” to have these privileges:
joe::::defaultpriv=basic,dtrace_proc,dtrace_user,dtrace_kernel
Once you have done this edit, the user needs to relogin to get the privileges.
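As an illustration, after logging in again the user can confirm that the privileges are in effect by examining the privilege sets of the current shell:
ppriv $$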
Figure 9-2 DTrace framework: consumers such as intrstat, dtrace, lockstat, and plockstat run in user
space and use the libdtrace interface to communicate with the DTrace framework in the Solaris kernel
Introduction to D-scripts
First, we start with a brief discussion of the construction of a D-script. A D-script consists of a
probe description, a predicate, and actions, as shown below:
probe description
/predicate/
{
actions
}
As you saw earlier, probes are events in DTrace. In a D-script, you subscribe to a set of one
or more events and set up callbacks in the action section. The predicate is a condition that
controls whether the actions are executed.
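Consider, as a sketch, the following simple script, shown here as a dtrace one-liner; it assumes the application server runs as a process whose executable name is java:
# dtrace -n 'syscall::open:entry /execname == "java"/ { trace(copyinstr(arg0)); }'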
Here the probe description is the syscall::open:entry, which describes the open system
call. The predicate is /execname == "java"/. This checks to see if the executable that is
calling the open system call is the Java process. The trace statement is the action that
executes every time Java calls the open system call. D-scripts allow you to look at the
arguments. In this case, the first argument of the open system call is the name of the file that
is being opened. So this simple script will print out the file name that is opened by any Java
process in the system, which includes those that are currently running and those that will be
started after the script is run.
Probe description
Now let us look a little deeper. A probe is described with four tuples: the provider, module,
function, and name.
Provider: Specifies the instrumentation method to be used. For example, the syscall
provider is used to monitor system calls while the io provider is used to monitor the disk
I/O.
Module and function: Describes the module and function you want to observe.
Name: Typically represents the location in the function. For example, use the entry for
name to set when you enter the function.
Note that wild cards like * and ? can be used. Blank fields are interpreted as wildcards.
Table 9-1 shows a few examples.
syscall::open*:entry Entry into any system call that starts with open (open and open64)
Predicate
A predicate can be any D expression. The action is executed only when the predicate
evaluates to true. A few examples are given in Table 9-2.
pid == 1976             True if the PID of the process that caused the probe to fire is 1976
ppid != 0 && arg0 == 0  True if the parent process ID is not 0 and the first argument is 0
Note: Predicates and action statements are optional. If the predicate is missing, then the
action is always executed. If the action is missing, then the name of the probe that fired is
printed.
The previous lines mean that the script that follows needs to be interpreted using
dtrace. D uses C-style comments.
*/
pid2401:libc::entry
{}
The following steps can be used to build a D-script using the PID provider:
1. The same script can be run from the command line using the -n option of dtrace:
# dtrace -n pid2401:libc::entry
You can use this command or script by replacing 2401 with the PID of the process in which
you are interested, such as the WebSphere Application Server Java process ID. As you can
see, hard-coding the PID makes the script inflexible.
2. If you modify the script to take the process ID as a parameter, your script will look like:
#!/usr/sbin/dtrace -s
pid$1:libc::entry
{
}
3. Now you can provide the process ID as an argument to your script. Note that dtrace will
produce an error and not execute if you provide more than one argument to this script.
You may have noticed that the output from this script went by too fast to be useful. The D
language has a construct called an aggregation to collect all the details in memory
and print out a summary. Aggregations allow you to collect tables of information in
memory. Aggregations have the following construction:
@name[table index(es)] = aggregate_function()
For example:
@count_table[probefunc] = count() ;
This aggregation will collect information into a table with the name of the function
(probefunc is a built-in variable that has the name of the function). The aggregation
function count() keeps track of the number of times the function was called. The other
popular aggregation functions are average, min, max, and sum.
4. The following example adds aggregates in the program we are developing to show a table
of summary of user functions:
#!/usr/sbin/dtrace -s
pid$1:::entry
{
@count_table[probefunc]=count();
}
This script will collect information into a table and will continue to run until you press ^c
(Ctrl-C). Once you stop the script, DTrace will print out the table of information. Notice that
you do not need to write any code to print the table; DTrace automatically does this for you.
You may have noticed that the probes were created dynamically when running this script.
The probes are disabled as soon as the script is stopped; you do not need to do anything
special to disable these probes. In addition, DTrace will automatically clean up any
memory it allocated; you do not need to write any code for cleanup.
5. You can easily modify this script to create a table with the function and library name by
changing the index to probemod and probefunc.
#!/usr/sbin/dtrace -s
pid$1:::entry
{
@count_table[probemod,probefunc]=count();
}
Note that ts[] is an array that D has automatically declared and initialized for you. It is a
best practice to save space by setting the entry back to 0 once it is no longer needed.
7. This script works for most cases; however, there are a few corner case exceptions:
– Exception 1: Monitor function entry and return
Because you are creating the probes on a live running application, it is very likely that
you were executing a function when you ran the D-script. Therefore, it is possible to
see the return for a function for which you did not see an entry. This case is easily
handled by adding a predicate /ts[probefunc] != 0/ to the return probe section. This
predicate will allow you to ignore the above corner case.
– Exception 2: Multi-threaded applications
The next corner case is a little more tricky and involves multi-threaded applications.
There is a likely race condition where two threads could execute the same function at
the same time. In this case, you need one copy of the ts[] for each thread. DTrace
addresses this through the self variable. Anything that you add to the self variable will
be made thread local.
8. The following script has been modified to handle these two corner cases:
#!/usr/sbin/dtrace -s
pid$1:libc::entry
{
self->ts[probefunc] = timestamp;
}
pid$1:libc::return /self->ts[probefunc]/
{
@func_time[probefunc] = sum(timestamp - self->ts[probefunc]);
self->ts[probefunc] = 0;
}
For very large applications, the above script enables a large number of probes (possibly
hundreds of thousands). Even though each probe is fairly light weight, adding hundreds of
thousands of probes to a live running system will impact the performance of your
application.
You can limit the number of probes you enable by modifying the probe description. See
Table 9-5 for some examples.
9. Next, you see how to monitor a process from the time it starts until the end. DTrace allows
you to do this using the $target variable and the -c option. This script will count the number
of times libc functions are called from a given application:
#!/usr/sbin/dtrace -s
pid$target:libc::entry
{
@[probefunc]=count();
}
10. Save this script to a file as libc_func.d and run it:
# ./libc_func.d -c "cat /etc/hosts"
You can easily replace cat /etc/hosts with the command of interest.
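Alternatively, the same script can be attached to an already running process; in that case $target is set with the -p option (the PID placeholder is illustrative):
# dtrace -s libc_func.d -p <PID_OF_WAS_PROCESS>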
Example 9-12 GC example
#!/usr/sbin/dtrace -s
dvm$1:::gc-start
{
self->ts = timestamp;
}
dvm$1:::gc-finish
{
printf("GC ran for %dnsec\n", timestamp - self->ts);
}
Note: This script takes the PID of the Java process as its first and only argument.
The script in Example 9-13 will print all the objects allocated and freed. It will also show
you who allocated the object and the size of the object that was allocated.
dvm$1:::object-free
{
printf("%s freed %d bytes\n",copyinstr(arg0), arg1);
}
Note: This script takes the PID of the Java process as its first and only argument.
The script in Example 9-14 prints the class name and method name of the method that is
called. As we generally see many of these method calls in a typical Java process, we
choose to look at this as an aggregate. Once you press ^c (Ctrl-C), the script will produce
the sorted list and count of all the methods that were called.
dvm$1:::method-entry
{
@[copyinstr(arg0),copyinstr(arg1)] = count();
}
Note: Again, this script takes the PID of the Java process as its first and only argument.
Note: In general, there is no need to add any flags or restart an application to use DTrace
to observe its runtime behavior. You can look at the WebSphere Application Server using
the PID provider without any changes on a live production system.
In many cases, you would like to observe not just the application server itself but the Java
programs that run in the application server. In Java 5.0, you need an extra library to be able
to observe the application and hence you need to modify the WebSphere Application
Server’s startup script startServer.sh. For more details, see “JDK 1.4.2 and 5.0” on
page 337.
Example 9-16 Modified startServer.sh script for DTrace
#!/bin/sh
WAS_USER_SCRIPT=/opt/IBM/WebSphere6.1/AppServer/profiles/AppSrv01/bin/setupCmdLine
.sh
export WAS_USER_SCRIPT
LD_LIBRARY_PATH=/export/home/DTrace/dvm/build/sparc/lib
export LD_LIBRARY_PATH
/opt/IBM/WebSphere6.1/AppServer/bin/startServer.sh "$@"
In some cases, the DVM provider can have a significant probe effect. There are ways to
reduce the probe effect significantly using a pipe. The DVM provider looks into a named
pipe to see if it should enable the probes. You can set the name of the named pipe by
setting the dynamic variable in the -Xrundvmti option. For example:
JAVA_TOOL_OPTIONS="-Xrundvmti:all,dynamic=/tmp/dpipe"
You can turn on the probes by echoing a character into the file /tmp/dpipe. The probes can
be turned off by echoing another character to the pipe.
Note: To run the following DTrace scripts, you need to set the execution permission to each
script and provide arguments as needed by the script. For example, do the following
command to set the execute permission on the script:
#chmod +x class_load.d
#./class_load.d
dvm*:::gc-start
{
self->ts = timestamp;
}
dvm*:::gc-finish
{
printf("GC ran for %d nsec \n", timestamp - self->ts);
}
dvm*:::gc-stats
{
printf("gc-stats: used objects: %ld, used object space: %ld \n",
arg0, arg1);
}
dvm*:::object-alloc
{
@[copyinstr(arg0)] = sum(arg1);
}
BEGIN
{
printf("%50s %10s\n","CLASSNAME","ALLOC SIZE");
}
END
{
printa("%50s %10@d\n",@);
}
Important: Remember to press Ctrl-C (^c) to get the output from the script.
Example 9-20 Observing Monitors script: observ_mon.d
#!/usr/sbin/dtrace -qs
dvm*:::monitor*
{
@[probename,copyinstr(arg0)]=count();
}
Note: This script will give you a count of the different monitor events and the thread name
that caused these events. The event descriptions are as follows:
monitor-contended-enter: Probe that fires as a thread attempts to enter a contended
monitor.
monitor-contended-entered: Probe that fires when a thread successfully enters the
contended monitor.
monitor-contended-exit: Probe that fires when a thread leaves a monitor and other
threads are waiting to enter.
monitor-wait: Probe that fires as a thread begins a wait on a monitor through
Object.wait().
monitor-waited: Probe that fires when a thread completes an Object.wait().
monitor-notify: Probe that fires when a thread calls Object.notify() to notify waiters on
a monitor.
monitor-notifyAll: Probe that fires when a thread calls Object.notifyAll() to notify waiters
on a monitor.
dvm*:::method-entry
{
@[copyinstr(arg0),copyinstr(arg1)]=count();
}
END
{
printa("%30s::%30s %10@d\n",@);
}
Note: You need to press ^c (Ctrl-C) to see the results of the script. It will print the name of
the class and method name followed by the number of times that method was called.
dvm*:::exception-throw
{
printf("%s thrown\n",copyinstr(arg0));
jstack();
}
This script will print out all the exceptions that are thrown and the stack trace from where each
exception is thrown. Be aware that this includes all the exceptions that are thrown, even the
exceptions that are subsequently handled.
In general, the types of probes in Java SE 6 are similar to the dvm probes. The name of the
provider has been changed to hotspot.
In a later version of Java, the ability to add your own custom probes will be introduced. See
http://blogs.sun.com/kamg/ for more details on future enhancements.
7. Solaris Internals DTrace topic:
http://www.solarisinternals.com/wiki/index.php/DTrace_Topics
This is a compilation of the various DTrace resources focused on Solaris internals.
8. DTrace Bigadmin page: http://www.sun.com/bigadmin/content/dtrace/
This is a compilation of the various DTrace resources.
9. Interesting blogs for DTrace:
http://blogs.sun.com/roller/page/bmc
http://blogs.sun.com/mws
http://blogs.sun.com/roller/page/ahl
http://blogs.sun.com/kamg/
http://blogs.sun.com/brendan/
On the Solaris platform, Sun provides the Sun Studio Analyzer (which we hereafter refer to as
Analyzer), which helps you collect as well as analyze performance data from a variety of
applications. The applications can be written in C, C++, or Fortran, and Analyzer can also
profile applications written in Java. The real advantage of this is that you do not need to
have the source code of the application or compile and build it with any special option to use
this tool.
WebSphere Application Server applications are just one type of Java application with regard to
Analyzer. The data can be collected from any application written to run in the WebSphere
Application Server environment. Analyzer has two parts:
Collector
The Collector part of the Sun Studio Analyzer tool collects performance data by profiling
and tracing Java function calls. The data can include call stacks, microstate accounting
information, thread-synchronization delay data, hardware-counter overflow data, MPI
function call data, memory allocation data, and summary information for the operating
system and the process.
Analyzer
The Analyzer part of the Sun Studio Analyzer tool displays the data recorded by the
Collector in a GUI. It reads and processes the data collected by the Collector and displays
various metrics of performance at the level of the program, the functions, the source lines,
and the instructions. These metrics are classified into five groups:
– Timing metrics
– Hardware counter metrics
– Synchronization delay metrics
– Memory allocation metrics
– MPI tracing metrics
The Analyzer also displays the raw data in a graphical format as a function of time.
In this section, we provide tools and techniques to help you improve the performance of the
Java Virtual Machine, which will in turn result in better performance of WebSphere Application
Server.
The Sun Java performance team maintains the performance page at the Java Web site, and it
contains up-to-date information. Readers are encouraged to read the latest performance tips
and techniques at:
http://java.sun.com/performance/
WebSphere Application Server is the core for other WebSphere products, such as Portal
Server, Process Server, Customer Center, and Commerce Server.
Be aware of the specific versions bundled with WebSphere Application Server based
products.
JIT and JVM (Memory Management and GC) operations can have a significant impact on
WebSphere Application Server performance.
With proper Fix Packs or WebSphere SR, JDK can be updated to later versions, as shown in
Table 9-7.
WebSphere Application Server version    JDK version
6.1.0.x      1.5.0_06
6.0.2.x a    1.4.2_08*
6.0.1.x      1.4.2_07
6.0          1.4.2_05
5.1.1.x      1.4.2_05
5.1.1.x      1.4.2_11
5.1          1.4.1_05
5.0.2.x      1.3.1_08
5.0.1        1.3.1_07
5.0          1.3.1_05
a. WebSphere Application Server support for Solaris Containers begins with V6.0.2.
For JDK versions on all supported platforms, consult the Web site at:
http://www.ibm.com/support/docview.wss?rs=180&context=SSEQTP&uid=swg27005002JavaSE
5.0
These default settings attempt to optimize the JVM for some generic applications without
having to tune the JVM using numerous command-line options.
Maximum pause time goal
Use the following command-line option to specify the maximum pause time goal:
-XX:MaxGCPauseMillis=n
This option is effectively a suggestion to the Throughput collector that pause times should not
exceed n milliseconds. The Throughput collector will attempt to achieve this goal by
dynamically adjusting the heap size and other related garbage collection parameters. These
adjustments may affect the throughput of the application, and it may be that the desired goal
cannot be achieved.
Maximum pause time goals are applied separately to each generation. Generations may
need to be decreased in size to meet the goal. There is no default value for the maximum
pause time.
Throughput goal
The throughput goal is established as the percentage of total time spent performing garbage
collection. Use the following command-line option to specify the throughput goal:
-XX:GCTimeRatio=n
The ratio of GC time to application processing time is given by the following formula:
1 / (1 + n)
For example, -XX:GCTimeRatio=9 sets a goal of 10% of the total to be spent performing
garbage collection. The default goal is 1% (n=99). This is the total time spent garbage
collecting all generations. If the goal cannot be achieved, the generation sizes are increased.
Since the generations would then take longer to fill up, the applications can run longer
between collections in an attempt to achieve the throughput goal.
Footprint goal
Once both throughput and maximum pause time goals have been met, the garbage collector
begins to decrease the heap size. As the heap size decreases, one of the goals, typically
the throughput goal, can eventually no longer be met. At that point, the collector begins tuning
again to achieve the goal that is currently not met.
Goal priorities
The maximum pause time goal has the highest priority, so the collector tries to achieve that
goal first. The throughput goal will not be attempted until the maximum pause time goal is
met. The footprint goal has the lowest priority. Only after the maximum pause time and the
throughput goal have been achieved will the collector try to meet the footprint goal.
For more details on JVM tuning, refer to “Recommendations for tuning the JVM” on page 356.
Here we explain the memory footprint related changes between the J2SE 1.4.2 and Java SE
5.0 applications.
Measuring the real memory impact of a Java application is often quite difficult. Perhaps the
first hurdle in understanding the footprint is that conventional system utilities, such as Task
Manager on Windows, only tell part of the footprint story. Memory reported depends on
whether application data and programs are read as conventional files or memory mapped
files. In other words, often the true memory footprint of an application includes all the files that
have been brought into the operating system's file system memory cache. Often memory
pages that are shared by other processes or in the file system cache are not reported by
conventional tools. Getting consistent footprint measurements is further complicated by
accurately measuring the same moment in an application's lifetime. Clearly the longer an
application operates, the more likely it is to perform classloading, compilation, or other
activities that affect footprint. The great news for users of Java SE 5.0 is that despite adding
massive new functionality Java engineering has actually pared down core JVM memory
usage and leveraged Class Data Sharing to make the actual memory impact on your system
lower than with J2SE 1.4.2.
Note: As of the writing of this book, Class Data Sharing is only applicable to the -client
runtime; it is not available with the -server runtime.
When the JVM cannot allocate an object from the heap because of a lack of contiguous
space, a memory allocation fault occurs, and the garbage collector is invoked. The first task of
the garbage collector is to collect all the garbage that is in the heap. This process starts when
any thread calls the garbage collector either indirectly as a result of allocation failure, or
directly by a specific call to System.gc(). Usually the full heap or part of it is garbage collected
when it becomes full or exceeds a threshold limit of occupancy (usage).
Garbage collection proceeds through three phases:
Mark
In this phase, all live or reachable objects are found. Any object that is not reachable is
considered garbage and will be removed during the next phase.
Sweep
During this phase, the garbage found in the Mark phase is removed from the heap.
Compaction
If after the Sweep phase there is still not enough contiguous free memory, the heap can be
compacted. During compaction all live objects are moved to one end of the heap.
The job of fulfilling a request for space on the heap requires that a large enough block of
contiguous memory be available. If the space is available but not contiguous due to heap
fragmentation, then two things usually happen: The heap can be compacted, meaning that all
live objects are moved to one end of the heap leaving the other end free to be allocated. If the
process of compaction still does not create enough space, then the heap can be expanded up
to its configured maximum heap size.
Various algorithms for performing garbage collection can be implemented to take advantage
of the different characteristics of the objects in each generation. Generational collectors can
take advantage of the following observations about Java applications:
The majority of objects on the heap are not long lived. They are only referenced for a short
period of time and then become garbage.
Very few references from older to younger objects are used.
The frequency of young generation garbage collections is high, but the young generation
footprint can be kept small, resulting in faster collections. The collections are more efficient
because most of the objects are garbage.
When an object in the young generation survives a few garbage collections, it can be
promoted (tenured) to the older generation, as displayed in Figure 9-3 on page 353. The older
generation usually has a larger chunk of the heap than the younger generation so it will fill up
over a longer period of time. The frequency of old generation collections is low, but they will
require a longer time to complete.
Figure 9-3 Objects are allocated in the young generation and promoted to the old generation after
surviving a few collections
The algorithm implemented for the young generation collector is optimized for speed because
the frequency of collections is high, while the algorithm implemented for the old generation
collector is optimized to use heap space more efficiently since the old generation will use
most of the heap space.
Note: As of Java SE 5.0 Update 6, the combination of a parallel young generation and
parallel old generation can be enabled with -XX:+UseParallelOldGC.
This section will briefly describe each collector and provide recommendations for when each
collector should be used.
For a more detailed discussion of the HotSpot collectors, refer to the following white paper
Memory Management in the Java HotSpot Virtual Machine, found at:
http://java.sun.com/j2se/reference/whitepapers/memorymanagement_whitepaper.pdf
HotSpot generations
In the HotSpot JVM, memory is partitioned into three generations.
Young generation
Most objects are initially allocated in the young generation unless they are too large.
Old generation
The old generation contains objects that have survived a few young generation garbage
collections and have been promoted to the old generation. Some large objects may have
been directly allocated to the old generation.
Permanent generation
The permanent generation holds the reflective data of the virtual machine itself, such as
class and method objects (see “Getting information about the permanent generation”
later in this chapter).
The young generation is further divided into three parts: an area called Eden and two smaller
areas referred to as survivor spaces, as shown in Figure 9-4.
Figure 9-4 The young generation is divided into Eden and the two survivor spaces (From and To); one
survivor space holds surviving objects while the other is empty
As discussed earlier, most objects begin life in Eden with some being directly allocated to the
old generation. The survivor spaces contain objects from Eden that have survived at least
one garbage collection. At any point in time, one of the survivor spaces (called From in the
figure) contains such surviving objects and the other survivor space is empty and remains
empty until another garbage collection occurs.
Serial collector
A serial collector collects all generations in sequence using only one CPU. This is a
stop-the-world collection, meaning that all application execution is paused until the collection
is complete.
As shown in Figure 9-5 on page 355, live objects in the Eden space are copied to the empty
survivor space (called To in the figure). If an Eden object is too large for the survivor space, it
is copied directly to the old generation. Live objects in the other survivor space (called From
in the figure) that are comparatively young are also copied to the empty survivor space, while
the comparatively older objects are copied to the old generation. The objects marked with an
X are garbage and do not get copied.
Figure 9-5 During a young generation collection, live objects are copied from Eden and the From
survivor space to the To survivor space or to the old generation
When the garbage collection is complete, the Eden space and the previously occupied
survivor space are now empty, as shown in Figure 9-6, and the two survivor spaces “flip” or
switch roles.
Figure 9-6 After the collection, Eden and the previously occupied survivor space are empty, and the
two survivor spaces switch roles
In Java SE 5.0, the serial collector is the default on machines that are not server-class. You
can explicitly select the serial collector by using the -XX:+UseSerialGC command-line option.
Parallel collector
The parallel collector is also referred to as the throughput collector and it is designed to take
advantage of a machine with multiple CPUs and a large amount of physical memory. Multiple
CPUs are used to perform garbage collection tasks in parallel.
In addition, you should use the parallel collector if the serial collector is not giving you short
enough pause times on minor collections or if the GC impact (which is a function of frequency
and pause time) is too high. Furthermore, we recommend a large young generation for the
parallel collector; otherwise, you will end up with the GC threads contending for work, which
could reduce efficiency.
In Java SE 5.0, the parallel collector is the default on server-class machines. You can
explicitly select it with the -XX:+UseParallelGC command-line option.
Use the parallel compacting collector if the following conditions are true:
The machine has multiple CPUs and plenty of physical memory.
You want to further reduce full GC times for throughput-oriented applications.
The parallel compacting collector is not yet a default on server-class machines. If you want to
use it, you must explicitly select the parallel compacting collector by using the
-XX:+UseParallelOldGC command-line option.
The CMS collector is not the default on server-class machines. If you want to use it, you must
explicitly select the CMS collector by using the -XX:+UseConcMarkSweepGC command-line
option. For pause time sensitive applications, running the concurrent collector in incremental
mode may be useful. To enable incremental mode, use the -XX:+CMSIncrementalMode
option.
If the application already performs well in terms of required throughput and acceptable pause
times, you may not need to further optimize the tuning.
In many cases, things may not be this simple. If load testing indicates performance problems
associated with garbage collection, you should consider the following actions:
Set the minimum and maximum values of the Java heap size.
Evaluate whether the default garbage collector is appropriate for the application.
If necessary, explicitly select another garbage collector.
Load test the application and evaluate whether the performance is acceptable with the
new garbage collector.
Section 9.4.6, “Tools for performance tuning” on page 364 will describe tools for measuring
and analyzing performance. The results produced by these tools will enable you to better
select alternative tuning options.
Several tables of key garbage collection options, showing commonly used settings, are
provided in 9.4.5, “Key options related to garbage collection” on page 361.
If the throughput goal can be achieved, but pause times are too long, you can follow these
guidelines:
Select a maximum pause time goal.
Expect that achieving a maximum pause time goal may mean that the throughput goal will
not be achieved.
Consequently, choose values that are a reasonable compromise for your application.
Trying to satisfy both goals will cause oscillations in the heap size, even after the application
appears to reach a steady state. There are opposing forces at work. To achieve a throughput
goal may require a larger heap, while the goal to achieve a maximum pause time and a
minimum footprint may require a smaller heap.
Note: There are times when manually tuning the size of the young generation and the size
of the survivor spaces and the number of GC threads will have significant impact. There
are also times when making the initial and maximum heap sizes the same (for example,
-Xmx == -Xms) is appropriate to use, instead of letting the heap grow and shrink.
-XX:+UseSerialGC
Note: The serial collector is not the default on server class machines unless -client is
specifically added to the command line.
Use the Parallel or Throughput Collector for better performance on multi-CPU servers (For
example, T2000 or T5220):
-XX:+UseParallelGC.
Set the proper number of threads for parallel garbage collection (for example,
-XX:ParallelGCThreads=<threads>, where <threads> is equal to total number of CPUs on
the system or hardware threads on a multi-core system).
Use the Concurrent Low Pause Collector for constant response time on multi-CPU servers:
-XX:+UseConcMarkSweepGC
Use the Incremental Low Pause Collector for constant response time on servers that have
few processors:
-Xincgc
The AggressiveOpts option turns on point performance compiler optimizations that are
available in v1.5.0_06:
-XX:+AggressiveOpts
Use a fixed size young generation to better handle stress. For example:
-XX:NewSize=512m -XX:MaxNewSize=512m (for 1.3)
-Xmn512m (for 1.4 and later)
Note: If you are using a fixed size young generation, you should also be using a fixed sized
heap (that is, -Xms == -Xmx). It is not strictly necessary, but you should also set NewSize
== MaxNewSize unless -Xms == -Xmx.
Permanent Generation:
-XX:PermSize=<initial size>
-XX:MaxPermSize=<max size>
Before choosing the command-line flags, you should monitor and observe the behavior of the
JVM under commonly expected application workloads or in stressed conditions. Then,
analyze the performance data and change the default JVM configuration(s) as needed.
The following list shows the typical JVM tuning parameters that can be used for WebSphere
Application Server V6.1:
initialHeapSize This is a WebSphere specific JVM option and is similar to the -Xms
option. The values are numeric in MB. For example, a value of 2048
MB would mean 2 GB of memory for the initial JVM heap.
maximumHeapSize This is a WebSphere specific JVM option and is similar to the -Xmx
option. The values are numeric in MB. For example, a value of 2048
MB would mean 2 GB of memory for the maximum JVM heap.
-server Sets the Java runtime environment to use the server optimized virtual
machine. Generally, “server” is recommended for high throughput
Application Servers. Due to the ergonomics feature, this may be the
default option.
Note: The number of GC threads setting will depend on the underlying processor
architecture of the deployment system. On the Chip Multithreading processor based (for
example, UltraSPARC-T1 and T2) systems, you should not set the threads higher than one
quarter of the available hardware threads. For example, a system with the UltraSPARC-T2
(8-core) processor has 64 hardware threads; thus, the maximum recommended value is
-XX:ParallelGCThreads=16.
On other processors, the maximum recommended value is the total number of “logical”
processors on the system (as reported by the psrinfo command). For example, the threads
value is 8 on a system with four UltraSPARC IV+ (dual-core) processors (that is, 4 x 2 = 8
logical processors). Setting the number of GC threads too high would have an adverse effect
on overall system performance.
In addition, you need to consider the number of WebSphere instances that you are running
on the system. Since this thread setting is for each JVM, you must distribute these threads
accordingly (that is, the sum of the GC threads for all WebSphere instances is no more
than the number of processors).
-XX:+AggressiveHeap
Available in JDK 1.4.2, this parameter attempts to set various
parameters based on the system’s physical memory and number of
processors, but going forward, you should use AggressiveOpts
instead.
-XX:MaxPermSize Size of the permanent generation. (This is for Sun JVM 5.0 and newer;
64-bit VMs are scaled 30% larger, 1.4 amd64: 96m, and 1.3.1 -client:
32m.) It is important to note here that this value is further scaled up
with WebSphere Application Server and the default setting sets it to
256m on a 32-bit platform.
-XX:PermSize The initial size of the permanent generation; it is used along with
the -XX:MaxPermSize flag.
-XX:+UseParallelOldGC
Use parallel garbage collection for the full collection. Enabling this
option automatically sets -XX:+UseParallelGC. (Introduced in 5.0
update 6.)
Using a combination of the above flags can lead you to your desired system throughput. In
some cases, JVM tuning can be simplified using just a few flags to achieve similar throughput:
-XX:+AggressiveOpts
Turn on point performance compiler optimizations that are expected to
be the defaults in upcoming releases. (Introduced in 5.0 update 6.)
-XX:+UnlockDiagnosticVMOptions -XX:-EliminateZeroing
These two options must be used with +AggressiveOpts for optimizing
WebSphere Application Server. Because the EliminateZeroing option
is considered a diagnostic VM option, you must set
+UnlockDiagnosticVMOptions first in order to use EliminateZeroing.
This -EliminateZeroing option disables the initialization of newly
created character array objects.
Besides JVM tuning, you also need to invest the time in setting the WebSphere thread pool
sizes appropriately.
Option               Default                                  Description
-XX:+UseSerialGC                                              Serial collector.
-XX:+UseParallelGC                                            Parallel (throughput) collector.
-Xmsn                See 9.4.2, “JVM ergonomics               Initial size, in bytes, of the heap.
                     (self tuning)” on page 347.
-XX:NewRatio=n       2 on client JVM and 8 on server JVM.     Ratio between the young and old generations.
                                                              For example, if n is 3, then the ratio is 1:3 and
                                                              the combined size of Eden and the survivor
                                                              spaces is one fourth of the total size of the
                                                              young and old generations.
By default on Solaris, the GC data will be written to the server’s native_stdout.log file in the
log directory.
34456K: The combined size of live objects after GC.
100736K: The total available space, excluding the permanent generation, which is the
total heap minus one of the survivor spaces.
0.0248518 secs: Time it took for garbage collection.
You can get more GC details by adding any of the following arguments to the server’s Generic
JVM arguments configuration:
-XX:+PrintGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
Refer to Table 9-9 on page 362 for a description of each of these options.
Example 9-24 shows output from when the -XX:+PrintGCDetails argument is used.
Avoiding full GC results in better performance, because full GC severely affects response time.
Full GC is usually an indication that the Java heap is too small or that many short-lived objects
have been created. Increase the heap size by using the MaximumHeap (-Xmx) and InitialHeap
(-Xms) options until full GCs are no longer triggered, or until their impact is minimized.
For a long running high volume Web site, it may be impossible to eliminate the full GC
completely; however, the time it takes to do the Full GC and the frequency of it can be
controlled by looking at the GC log and applying the appropriate tuning. For example, if an
application generates many short-lived objects that get collected by a minor GC, you can
increase the young generation (-Xmn). The reason is that these objects do not get promoted
and they get collected during minor GC.
If your application tends to generate long-lived objects that persist for some time, you can
increase the old generation. It is ideal to pre-allocate the heap by setting -Xmx and -Xms to the
same value (that is, a fixed heap size). For example, to set the Java heap to 3.5 GB, you can
use the following Java options:
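The following values are illustrative (3.5 GB expressed as 3584 MB, with both options set to the same value to fix the heap size):
-Xms3584m -Xmx3584m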
The J2SE 1.5.0_06 release also introduced parallelism in garbage collection for old
generation. Add the -XX:+UseParallelOldGC option to the standard JVM flags to enable this
feature.
For young generation, the number of parallel GC threads is the number of logical processors
presented by the Solaris OS. For example, on a fully populated UltraSPARC T1
processor-based system (that is, eight cores and four threads/core), the processors equate to
the number of threads which, in this case, is 32. It may be necessary to scale back the
number of threads involved in young generation GC to achieve response time constraints. To
reduce the number of threads, you can set -XX:ParallelGCThreads=number_of_threads.
Similarly, the UltraSPARC T2 processor has 64 hardware threads, and for these systems the
number of parallel GC threads would be 64, which may be too many and may affect
performance adversely. Therefore, proper caution needs to be taken when working with these
types of hardware. At the same time, an UltraSPARC IV+ system, for example, may have only a
few parallel GC threads, and increasing this number could improve the performance of the
system.
Heap configuration and usage
The -heap option is used to obtain Java heap information, including:
1. GC algorithm specific information: This includes the name of the GC algorithm (Parallel
GC for example) and algorithm specific details (such as number of threads for parallel
GC).
2. Heap configuration: The heap configuration may have been specified as command-line
options or selected by the VM based on the machine configuration.
3. Heap usage summary: For each generation, it prints the total capacity, in-use, and
available free memory. If a generation is organized as a collection of spaces (the new
generation, for example), then a space-wise memory size summary is included.
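A typical invocation looks like the following (the PID placeholder is illustrative) and produces output of the form shown below:
# jmap -heap <PID_OF_WAS_PROCESS>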
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 268435456 (256.0MB)
NewSize = 2228224 (2.125MB)
MaxNewSize = 4294901760 (4095.9375MB)
OldSize = 1441792 (1.375MB)
NewRatio = 2
SurvivorRatio = 32
PermSize = 16777216 (16.0MB)
MaxPermSize = 268435456 (256.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 44761088 (42.6875MB)
used = 15615640 (14.892234802246094MB)
free = 29145448 (27.795265197753906MB)
34.88664082517386% used
From Space:
capacity = 6094848 (5.8125MB)
used = 5237680 (4.9950408935546875MB)
free = 857168 (0.8174591064453125MB)
85.93618741599462% used
To Space:
capacity = 6094848 (5.8125MB)
used = 0 (0.0MB)
free = 6094848 (5.8125MB)
0.0% used
PS Old Generation
capacity = 180355072 (172.0MB)
Heap histogram
The -histo option can be used to obtain a class-wise histogram of the heap. For each class, it
prints the number of objects, memory size in bytes, and fully qualified class name. Note that
internal classes in the HotSpot VM are prefixed with an “*”. The histogram is useful when
trying to understand how the heap is used. To get the size of an object, you need to divide the
total size by the count of that object type.
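For example, a class-wise histogram of the application server heap can be produced with a command along these lines (PID placeholder shown for illustration):
# jmap -histo <PID_OF_WAS_PROCESS>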
Getting information about the permanent generation
The permanent generation is the area of the heap that holds all the reflective data of the
virtual machine itself, such as class and method objects (also called the “method area” in the
Java Virtual Machine Specification).
Configuring the size of the permanent generation can be important for applications that
dynamically generate and load a very large number of classes (JavaServer Pages or Web
containers, for example). If an application loads too many classes, then it is possible it will
abort with an OutOfMemoryError. The specific error is Exception in thread XXXX
java.lang.OutOfMemoryError: PermGen space.
To get further information about permanent generation, use the -permstat option. It prints
statistics for the objects in the permanent generation.
Example 9-28 shows the output from the jmap -permstat command.
For each class loader object, the following details are printed:
1. The address of the class loader object in the snapshot when the utility was run.
2. The number of classes loaded (defined by this loader with the method
java.lang.ClassLoader.defineClass).
3. The approximate number of bytes consumed by meta-data for all classes loaded by this
classloader.
4. The address of the parent class loader (if any).
5. A “live” or “dead” indication, which indicates whether the loader object will be garbage
collected in the future.
6. The class name of this class loader.
A stack trace of all threads can be useful when trying to diagnose a number of issues, such as
deadlocks or hangs. In many cases, a thread dump can be obtained by pressing Ctrl-\ at the
application console (standard input) or by sending the process a QUIT signal. Thread dumps
can also be obtained programmatically using the Thread.getAllStackTraces method, or in the
debugger using the debugger option to print all thread stacks (the where command in the case
of the jdb sample debugger). In these examples, the VM process must be in a state where it
can execute code. In rare cases (for example, if you encounter a bug in the thread library or
HotSpot VM), this may not be possible, but it may be possible with the jstack utility, as it
attaches to the process using an operating system interface.
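For example, a stack trace of all threads in the application server can be requested with a command along these lines (PID placeholder shown for illustration), producing output similar to the following:
# jstack <PID_OF_WAS_PROCESS>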
Thread t@1341: (state = BLOCKED)
- java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be
imprecise)
- com.ibm.ws.util.BoundedBuffer.waitGet_(long) @bci=22, line=192 (Compiled frame)
- com.ibm.ws.util.BoundedBuffer.take() @bci=86, line=543 (Compiled frame)
- com.ibm.ws.util.ThreadPool.getTask() @bci=137, line=819 (Compiled frame)
- com.ibm.ws.util.ThreadPool$Worker.run() @bci=264, line=1544 (Interpreted frame)
[...lines removed here to reduce output...]
For more information, the manual page for jstack is available at:
http://java.sun.com/j2se/1.5.0/docs/tooldocs/share/jstack.html
One of the interesting things to note here is that startup, which is defined by the time it takes to
start the application, and footprint, which is defined by the memory needed by WebSphere
Application Server, also change with the tuning changes described above. The default
parameters are usually defined to address more generalized deployment cases, ranging from a
developer environment to a default production deployment, so some of the settings may not be
ideal for a developer workstation or for some special deployment cases. When you observe such
a change, you should consult the relevant documentation and take the appropriate action to
minimize the impact of the technology change and take advantage of the new technology.
For example, when using the WebSphere Application Server V6.1, you may notice the
increase in memory usage by WebSphere Application Server, but this is one result of the JVM
option that has been changed in Java SE 5.0. To take control of this memory increase, some
of these options can be reverted to get memory usage back to the level it was with the
previous version. Some recommendations on managing memory usage can be found at the
following link:
http://www.ibm.com/support/docview.wss?rs=180&context=SSEQTP&q1=footprint&uid=swg2
1240768&loc=en_US&cs=utf-8&lang=en
In summary, the information at that link suggests overriding a default setting that is causing
the JVM to use more memory by using the following options:
As you may notice, the JVM has been forced to run in -client mode, which used to be the
default for WebSphere Application Server V6.0.2 or earlier. Due to ergonomics and the
hardware configuration, the JVM normally chooses to run in -server mode.
To enable your WebSphere Application Server installation to make use of this feature, you
need to have this JVM option added to you generic JVM argument:
-Xshare:on
Another important thing to note here is that in JDK 1.5 this feature is available with the client
VM only, so you need to add the -client option to the generic JVM arguments as well, because
ergonomics will otherwise start the server VM if the underlying hardware is classified as a
server-class machine (for example, 2 or more CPUs and 2 GB or more of RAM).
WebSphere Application Server also has additional parameters that can be adjusted to help
speed up the startup process:
Parallel Start
Select this field to start the server on multiple threads. This might shorten the startup time.
This can be accessed by selecting Applications → Enterprise Applications →
application_name → Startup behavior.
Run in development mode
Enabling this option may reduce the startup time of an application server. This may include
JVM settings, such as disabling bytecode verification and reducing JIT compilation costs.
Do not enable this setting on production servers.
Specify that you want to use the JVM properties -Xverify and -Xquickstart on startup.
Before selecting this option, add the -Xverify and -Xquickstart properties as generic
arguments to the JVM configuration.
If you select this option, you must save the configuration and restart the server before this
configuration change takes effect.
The default setting for this option is false, which indicates that the server will not be started
in development mode. Setting this option to true specifies that the server will be started in
development mode (with settings that will speed server startup time).
Another factor that affects WebSphere Application Server startup is that major portions of
server startup, such as classloading, cannot be spread over multiple threads. The system
may have a great deal of processing capability that the WebSphere Application Server process
cannot take advantage of due to the currently single-threaded nature of the class loader. This
can be verified by running mpstat 5 during server startup; you will notice that only one core,
thread, or CPU is busy while the rest of them are idle.
9.4.8 Additional performance tuning tools
The following tools are also useful for JVM tuning and troubleshooting. These will be
discussed and used in Chapter 10, “Problem determination” on page 395.
Here we describe some of the important features of these tools and discuss in detail the
jvmstat tooling that is bundled with the JVM and is already available with WebSphere
Application Server installations. These tools are located in WAS_JAVA_DIR/bin. They are:
jps
jstat
jstatd
The jps tool can be used to list all the instrumented JVM processes running on the system.
Running jps on a system with WebSphere Application Server would generate output similar
to that shown in Example 9-30.
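A typical invocation, shown here for illustration, lists the main class and the JVM arguments of each instrumented JVM:
# <WAS_JAVA_DIR>/bin/jps -lv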
The jstat tool, also known as the JVM Statistics Monitoring Tool, attaches to a running
HotSpot Java virtual machine and collects and logs performance statistics specified by the
command-line options. It was formerly known as jvmstat.
This is very helpful and more readable compared to the PrintGCDetails option of verbose gc.
Example 9-31 shows an output from the jstat command.
In Example 9-31, the -gcutil option tells the jstat tool to collect and print a summary of
Garbage Collection Statistics at an interval of 10000 ms and to do it five times.
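The corresponding command line would look something like the following (PID placeholder shown for illustration):
# jstat -gcutil <PID_OF_WAS_PROCESS> 10000 5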
There are various other options available as well, such as class, compiler, gccapacity, gcold,
and so on.
The monitoring interfaces added to the HotSpot JVM are proprietary and may or may not be
supported in future versions of the HotSpot JVM, but for the current version, they can be used
to look at different stats to make the right tuning decisions.
To profile WebSphere applications, use IBM Rational Application Developer V7; information
about it can be found at:
http://www.ibm.com/developerworks/rational/products/rad
Performance tuning can yield significant gains in performance even if an application is not
optimized for performance. However, correcting shortcomings of an application typically
results in larger performance gains than are possible with just altering tuning parameters.
Many factors contribute to a high performing application.
9.5.1 Adjusting the WebSphere Application Server system queues
WebSphere Application Server establishes a queuing network, which is a group of
interconnected queues that represent various components. There are queues established for
the network, Web server, Web container, EJB container, Object Request Broker (ORB), data
source, and possibly a connection manager to a custom back-end system. Each of these
resources represents a queue of requests waiting to use that resource. Queues are
load-dependent resources. As such, the average response time of a request depends on the
number of concurrent clients.
A client request enters the Web server and travels through the WebSphere components in
order to provide a response to the client. Figure 9-8 illustrates the processing path this
application takes through the WebSphere components as interconnected pipes that form a
large tube.
Figure 9-8 Queuing network shown as interconnected pipes: requests flow through the Web server, Web container, EJB container, and data source, with processing time in milliseconds along the pipe
The width of the pipes (illustrated by height) represents the number of requests that can be
processed at any given time. The length represents the processing time that it takes to
provide a response to the request.
In order to find processing bottlenecks, it is useful to calculate a transactions per second (tps)
ratio for each component. Here are some ratio calculations for a fictional application:
The Web server can process 50 requests in 100 ms: 50 req / 0.1 s = 500 tps
The Web container parts can process 18 requests in 300 ms: 18 req / 0.3 s = 60 tps
The EJB container parts can process nine requests in 150 ms: 9 req / 0.15 s = 60 tps
The data source can process 40 requests in 50 ms: 40 req / 0.05 s = 800 tps
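To make the arithmetic explicit, the following small Java sketch (our own illustration, not part of WebSphere; it simply reuses the fictional numbers above) computes each component's sustained rate and reports the slowest one:

// ThroughputRatios.java - our illustration of the ratio calculations above.
public class ThroughputRatios {

    // throughput (tps) = concurrent requests / processing time in seconds
    static double tps(int requests, double processingTimeMs) {
        return requests / (processingTimeMs / 1000.0);
    }

    public static void main(String[] args) {
        String[] components = { "Web server", "Web container", "EJB container", "Data source" };
        double[] rates = { tps(50, 100), tps(18, 300), tps(9, 150), tps(40, 50) };

        double min = Double.MAX_VALUE;
        for (int i = 0; i < rates.length; i++) {
            System.out.printf("%-14s %6.0f tps%n", components[i], rates[i]);
            min = Math.min(min, rates[i]);
        }
        System.out.printf("Slowest sustained rate: %.0f tps, reached by:%n", min);
        for (int i = 0; i < rates.length; i++) {
            if (rates[i] == min) {
                System.out.println("  " + components[i]);
            }
        }
    }
}

With the numbers above, the Web container and the EJB container are tied at 60 tps, which is why the discussion that follows concentrates on how requests queue between those two components.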
This example illustrates the importance of queues in the system. Looking at the operations
ratio of the Web and EJB containers, each is able to process the same number of requests
over time. However, the Web container could produce twice the number of requests that the
EJB container could process at any given time. In order to keep the EJB container fully
utilized, the other half of the requests must be queued.
It should be noted that it is common for applications to have more requests processed in the
Web server and Web container than by EJBs and back-end systems. As a result, the queue
sizes would be progressively smaller moving deeper into the WebSphere components. This is
one of the reasons queue sizes should not solely depend on operation ratios.
EJB container
– Bean Cache: 30530
– Timeout: 3000
– set maxWriteTimeout 6000
JDBC
– Type 4 driver (pure Java). Keep it low.
– Connections Min/Max: 100/100
– Set statement cache size properly.
– Statement Cache: 60
WebSphere Application Server provides several tunable parameters and options to match the
application server environment to the requirements of your application:
Review the hardware and software prerequisites for WebSphere Application Server.
Install the most current refresh pack, Fix Pack, and recommended interim fixes.
Check the hardware configuration and settings.
Review your application design.
Tune the Solaris Operating System.
Tune the Java Virtual Machine settings.
Use a Type-4 (or pure Java) JDBC driver.
Install the most current refresh pack, Fix Pack, and recommended interim fixes
The list of recommended updates is maintained on the Support site at:
http://www-1.ibm.com/support/docview.wss?uid=swg27004980
Ensure that the transaction log is on a fast disk
Some applications generate a high rate of writes to the WebSphere Application Server
transaction log. Locating the transaction log on a fast disk or disk array can improve response
time.
In many cases, some other component, for example, a database, needs adjustment to
achieve higher throughput for your entire configuration.
From the TPV shown in Figure 9-9, you can view current activity or log Performance
Monitoring Infrastructure (PMI) performance data for the following:
System resources, such as CPU utilization
WebSphere pools and queues, such as a database connection pool
Customer application data, such as average servlet response time
By viewing PMI data, administrators can determine which part of the application and
configuration settings to alter in order to improve performance. For example, in order to
determine what part of the application to focus on, you can view the servlet summary report,
as shown in Figure 9-10 on page 382, enterprise beans, and Enterprise JavaBeans (EJB)
methods, and determine which of these resources has the highest response time. You can
then focus on improving the configuration for those application resources with the longest
response times.
For details on using the Tivoli Performance Viewer, read the WebSphere Application Server
V6.1 Information Center article, Monitoring performance with the Tivoli Performance Viewer
(TPV), found at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.web
sphere.nd.doc/info/ae/ae/tprf_tpvmonitor.html
View recommendations and data from the Performance Advisor by clicking Advisor in the
Tivoli Performance Viewer.
Because of the sheer number of monitors and tuning parameters, it is difficult to know where
to start, what to monitor, and which component to tune first. Follow these top ten monitoring
steps to check the most important counters and metrics of WebSphere Application Server.
See Figure 9-11 on page 383 for a graphical overview of the most important resources to
check. Consider the following:
If you recognize something out of the ordinary, for example, an overutilized thread pool or
a JVM that spends 50% in garbage collection at peak load time, then concentrate your
tuning actions there first.
Perform your examination when the system is under typical production level load and
make note of the observed values.
Alternatively, save Tivoli Performance Viewer sessions to the file system, where the
monitoring data will be stored for recurring analysis.
Keep in mind that one set of tuning parameters for one application will not work the same
way for another.
Figure 9-11 Graphical overview of the most important resources to check: the request path from the Web server (HTTP) through the Web container, EJB container (ORB/IIOP), and data source (JDBC) to the database, with Java virtual machine memory (7), CPU (8), I/O (9), and paging (10) checked on each machine
Thread pools
Here are some items to consider for thread pool monitoring:
1. Web server threads
For information about how to monitor the IBM HTTP Server, refer to Chapter 14,
“Server-side performance and analysis tools”, in WebSphere Application Server V6
Scalability and Performance Handbook, SG24-6392.
Tip: When the application is experiencing normal to heavy usage, the pools used by
that application should be nearly fully utilized. Low utilization means that resources are
being wasted by maintaining connections or threads that are never used. Consider the
order in which work progresses through the various pools. If the resources near the end
of the pipeline are underutilized, it might mean that resources near the front are
constrained or that more resources than necessary are allocated near the end of the
pipeline.
Data sources
All datasource connection pools can be viewed in the Connection Pools summary report in
the Tivoli Performance Viewer. The connection pool summary lists all data source
connections that are defined in the application server and shows their usage over time.
Information provided about the JVM runtime depends on the debug settings of the JVM and
can be viewed in the Tivoli Performance Viewer by selecting Performance Modules → JVM
Runtime. Use the Java Virtual Machine Tool Interface (JVMTI) for profiling the JVM. JVMTI is
new in JVM V1.5. For WebSphere Application Server V6.0.2 and earlier, use the Java Virtual
Machine Profiler Interface (JVMPI).
For details on how to enable JVMTI data reporting, read the Information Center article,
Enabling the Java virtual machine profiler data, found at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.web
sphere.nd.multiplatform.doc/info/ae/ae/tprf_jvmpidata.html
Tip: It is important to configure the correct number of processors in the Runtime
Performance Advisor configuration window, otherwise the recommendations can be
inaccurate.
However, keep in mind that both Tivoli Performance Advisor and Runtime Performance
Advisor advice can only be accurate if your application runs with a production level load, and
without exceptions and errors. This is best done by running a performance test, where a
production level load can be simulated in a reproducible manner. Both advisors need the CPU
utilization to be high to provide good (and complete) advice, which is best accomplished
during a load and stress test. If this is not the case, the resulting output may be misleading
and quite possibly contradictory.
Performance analysis
Web performance analysis is the process of finding out how well your system (a Web
application or, more generally, a Web site) performs, and of pinpointing performance problems
caused by inadequate resource utilization, such as memory leaks or over- or underutilized
object pools. Once you know the trouble spots of your system, you can reduce their impact by
taking appropriate tuning actions.
Terminology
Performance analysis is a comprehensive topic that fills entire books and is well beyond the
scope of this one. We provide only a short introduction here. What follows is a concise
definition of the three most important concepts used in performance analysis literature:
Load
Throughput
Response Time
Load
A Web site, and especially the application that is running behind it, typically behaves and
performs differently depending on the current load, that is, the number of users that are
concurrently using the Web site at a given point in time. This includes clients who are actively
issuing requests at that moment, as well as clients who are currently reading a previously
requested Web page. Peak load often refers to the maximum number of concurrent users using the site
at some point in time.
Response time
Response time refers to the time interval from the time the client initiates a request until it
receives the response. Typically, the time taken to display the response (usually the HTML
data inside the browser window) is also accounted for in the response time.
Throughput
A Web site can only handle a specific number of concurrent requests. Throughput depends
on that number of requests and on the average time a request takes to process; it is
measured in requests per second. If the site can handle 100 concurrent requests and the
average response time is one second, the Web site’s throughput is 100 requests per second.
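This relationship can be written as throughput = concurrent requests / average response time, and it can be rearranged to estimate how many requests must be in flight to sustain a target rate. The following one-class Java sketch (our own illustration) applies it to the example above:

// Our illustration of the steady-state relationship between throughput,
// concurrency, and response time.
public class ThroughputEstimate {

    // throughput (requests/s) = concurrent requests / average response time (s)
    static double throughput(int concurrentRequests, double avgResponseSeconds) {
        return concurrentRequests / avgResponseSeconds;
    }

    // rearranged: requests that must be in flight to sustain a target rate
    static double requiredConcurrency(double targetThroughput, double avgResponseSeconds) {
        return targetThroughput * avgResponseSeconds;
    }

    public static void main(String[] args) {
        System.out.println(throughput(100, 1.0));          // 100.0 requests/s
        System.out.println(requiredConcurrency(100, 1.0)); // 100.0 concurrent requests
    }
}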
Important: The goal here is to drive CPU utilization to nearly 100 percent. If you cannot
reach that state with opened-up queues, there is most likely a bottleneck in your application,
in your network connection, a hard limit on some resource, or possibly in some other external
component that you did not modify.
Throughput and response time versus client load: throughput (transactions per second) rises steeply under light load, levels off in a plateau, and degrades in the buckle zone, while response time shows a near linear increase as the client load grows.
Figure 9-12 How to find the optimum load level for a system: the saturation point
Here is a short overview of the steps necessary to perform a load test for tuning purposes
with the Runtime Performance Advisor:
1. Enable the PMI service in all application servers and the Node Agent (if using a
WebSphere Network Deployment environment), and restart both. In WebSphere
Application Server V6.1, PMI is enabled by default with the Basic monitoring set, so unless
you disabled it previously, there should be no need to restart the servers. However, you
might want to configure the statistics to be monitored for your test.
2. Enable Runtime Performance Advisor for all application servers.
3. Select Monitoring and Tuning → Performance Viewer → Current Activity and select
the server you want to monitor in the WebSphere Administrative Console.
4. Simulate your representative production level load using a stress test tool.
– Make sure that there are no errors or exceptions during the load test.
– Record throughput and average response time statistics to plot a curve at the end of all
testing iterations.
Figure 9-13 Measurement interval: concurrently active users versus elapsed time
For a Network Deployment cluster, you do not need to repeat all the monitoring steps for
each cluster member if they are all set up identically; monitoring one or two representative
cluster members should be sufficient. What is essential, however, is to check the CPU
statistics for each node to make sure that all cluster member processes are using similar
amounts of CPU and there are no extra processes consuming CPU on any nodes that can
interfere with the application server CPU efficiency.
Test on idle systems.
Make sure you do not perform test runs during database backups, maintenance cycles, or
while other people perform tests on these systems.
Use isolated networks.
Load driving machines should be attached to the same network switch as your first-layer
machines, such as Web servers or network dispatchers, to be able to apply the highest
load possible on the front network interfaces of your infrastructure.
Performance tuning is an iterative process.
Ten to fifteen test runs are quite usual during the tuning phase. Perform long-lasting runs to
detect resource leaks, for example, memory leaks, where the load-tested application runs out
of heap space only after some period of time.
Following that release, as of this writing, WebSphere Application Server in 64-bit format is
available on all supported Solaris 10 platforms. The platform support for the WebSphere
Application Server can be found at the following IBM Web site:
http://www-1.ibm.com/support/docview.wss?rs=180&uid=swg27006921
IBM has tested WebSphere Application Server 64-bit performance and has published a
report with the results. Although the report does not cover Solaris 10 itself, the results should
apply in principle to all platforms. You can read the document, IBM WebSphere Application
Server and 64-bit platforms, found at:
ftp://ftp.software.ibm.com/software/webserver/appserv/was/64bitPerf.pdf
Applications that require a large amount of memory can now leverage 64-bit WebSphere
Application Server and achieve virtually unlimited heap space. However, proper JVM tuning is
still required to address the challenges that are discussed in the previously referenced
document. The proper sizing of the heap’s old and new generations can be a greater
challenge on 64-bit WebSphere Application Server. Since WebSphere Application Server on
Solaris 10 uses Sun's JVM, the information at the following Web site may be useful for
comparing 32-bit and 64-bit performance in the context of standard benchmarks:
http://blogs.sun.com/dagastine/entry/no_tuning_required_java_se
As you will observe from the information provided in the previously referenced link, a minimal
tuning of Sun’s Java SE can get you started and may provide some performance advantages
without getting into more complex tuning procedures.
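As a purely illustrative sketch (the sizes below are hypothetical and must be derived from your own verbose GC measurements, and the way you pass them depends on how your server JVM is configured, for example through the application server's generic JVM arguments), generational heap settings typically take a form similar to:

-Xms3072m -Xmx3072m -Xmn1024m -XX:SurvivorRatio=8

Here the initial and maximum heap sizes are pinned to the same value to avoid resize pauses, and the young generation is set to roughly one third of the total heap; your own workload may call for very different proportions.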
Benchmark results are meant to provide the customer with guidelines for system performance
characteristics and comparisons. Some of the benchmarks available for WebSphere
Application Server are as follows:
SPECjAppServer2004
Trade (v2, 3 or 6)
Apache DayTrader
However, you have to remember that your own application's performance may vary depending
on the quality of the code and the tuning applied. These benchmark applications, even when
written to simulate a real-world scenario, have purposes that differ from yours, so the data
from these benchmarks should be used with caution.
Performance characteristics
The benchmark scales linearly with multiple application server instances (locally or
horizontally).
It heavily exercises all parts of the underlying infrastructure that make up the application
environment: hardware, JVM software, database software, JDBC drivers, and the system
network.
A single instance on a single node matters the most. It shows the infrastructure’s
capability to scale under a highly stressed/contended environment.
9.7.2 IBM Trade Performance Benchmark Sample for WebSphere Application
Server
The IBM Trade Performance Benchmark Sample for WebSphere Application Server (known
as Trade 6) provides a suite of IBM developed workloads for characterizing the performance
of the WebSphere Application Server. The workloads consist of an end-to-end Web
application and a full set of primitives. The applications are a collection of Java classes, Java
servlets, Java Server Pages, Web services, and Enterprise Java Beans built for open J2EE
APIs. Together these provide versatile and portable test cases designed to measure aspects
of scalability and performance. The IBM Trade Performance Benchmark Sample for
WebSphere Application Server (Trade 6) can be downloaded from the following Web site:
http://www-306.ibm.com/software/webservers/appserv/was/performance.html
Once the application design and scalability techniques have been decided, you need to
determine the number of machines and their capacity requirements for the project. The
application design will inevitably evolve over time, and capacity planning is usually done in
the early stages of design. However, when planning for capacity, it is important that you have
a static version of the application design to work with. The better your view of the
application design, the better your sizing estimate will be. This includes all the
components that will support the application, such as:
Operating system
Infrastructure component sizing
Choice of hardware systems
WebSphere Application Server sizing or capacity planning
You should also consider which hardware platforms you want to use. This decision is primarily
dependent on your platform preference, which platforms have sizing information available,
and which platforms WebSphere Application Server supports. Hardware decisions might also
be driven by the availability of hardware, which can limit the operating systems that can be
deployed.
Next, determine whether you want to scale up or out. Scaling up means to do vertical scaling
on a small number of machines with multiple processors. This can present significant single
points of failure. Scaling out, however, means using a larger number of smaller machines.
What you need to understand, though, is that the sizing estimates are solely based on your
input, which means the better the input, the more accurate the estimation can be. Sizing work
assumes an average standard of application performance behavior and an average response
time is assumed for each transaction. Calculations based on these criteria are performed to
determine the estimated number of machines and processors your application will require. If
your enterprise has a user experience team, they might have documented standards for a
typical response time that your new project will be required to meet.
If you need a more accurate estimation of your hardware requirements and you already have
your application, consider benchmarking your own application on the actual hardware that
you are considering. This will provide you with the most accurate estimate.
Based on your estimation, you might have to update your production implementation design
and the designs for the integration and development environments accordingly. Changes to
the production environment should be incorporated into the development and testing
environments if at all possible.
Benchmarking is the process used to take an application environment and determine the
capacity of that environment through load testing. This determination enables you to make
reasonable judgements as your environment begins to change. Using benchmarks, you can
determine the current work environment capacity and set expectations as new applications
and components are introduced.
Many sophisticated enterprises maintain a benchmark of their application stack and change it
after each launch or upgrade of a component. These customers usually have well-developed
application testing environments and teams dedicated to the cause. For those that do not, an
alternative is to do some mathematical analysis of the different available benchmarks and to
try to map the complexity of the application slated for deployment to the one for which the
actual benchmark data is available.
There are also third-party benchmark companies that provide such services. When choosing
a service, make sure that the team that will perform your benchmark tests has adequate
knowledge of the environment and a clearly defined set of goals. This helps reduce the costs
of the benchmark tests and creates results that are much easier to quantify. If these
techniques are applied without sufficient knowledge, you may end up with an under
performing production system, thereby causing user dissatisfaction. For example, if Servlet or
Java Server Page (JSP) processing in a customer application may be three times more
processor intensive than the benchmark application, you will have to pay attention to these
facts. Similarly, if your processing involves complex third-party transactions, your mileage will
vary based on that as well. Response time, defined as time taken to serve a client request, is
also a very relative term and has to be used in the right context. An acceptable response time
of one second in one application may be totally unacceptable in other application contexts.
However, this will provide a starting point for sizing the system.
Capacity planning depends on the choice of technology, the details of application complexity,
and the methodology used for capacity planning. A wrong estimate for any one of them could
lead to a system that does not meet your customers' expectations. All of these considerations
have to be accurate and applied carefully to get the best estimate.
Customers also need to forecast appropriately any growth in usage the application may
experience in the near future. If this growth is not addressed, the application may meet the
initial criteria, but when the load on application increases, it will fail to meet expectations. The
capacity planning team needs to have an understanding of peak and average loads on the
system. If the peak usage scenarios are not addressed, the system may have unexpected
behaviors at those moments. The capacity to support peak loads can be achieved by using
the different clustering and load balancing techniques available with WebSphere software.
Choice of hardware and its scalability also impacts capacity planning. For example, a system
can be deployed with a minimal configuration but with additional capacity for expansion (that
is, adding more memory or CPUs) in the future to accommodate additional workload
requirements. Other systems may address the current needs but cannot be expanded if the load
profile increases. In these situations, the architecture has to be planned in such a way that
systems can be added in parallel and removed without affecting the current deployment. This
can be done by using one of the various technologies available with WebSphere:
technologies available with WebSphere:
WebSphere Clustering
Load Balancing through the Edge Component
Use of Solaris virtualization, as discussed in Chapter 5, “Configuration of WebSphere
Application Server in an advanced Solaris 10 Environment” on page 115
Hardware clustering using Sun Clusters
Extended Deployment technology within WebSphere Software
Systems have different characteristics that also need to be considered when choosing the
right one for your deployment. A system that is suitable for one scenario might not be
appropriate for another application deployment because of its nature, its technology, and its
capability to expand. Some technologies, such as LDoms, are not available on all types of Sun
systems, while others, such as Solaris Containers, are available on all systems running
Solaris 10. Even when all of these characteristics are considered together, the resulting
system may not be perfect, but you should at least have a better performing and more stable
application environment.
Each problem that you encounter has a different level of complexity and a different level of
impact on your business. These factors determine how closely you follow the procedures in
this book. For less complex problems, it might only be necessary to follow the most basic
procedures. More complex problems can involve multiple components and maybe even
multiple software products and systems. These problems require more time and effort with
more thorough problem determination techniques. Obviously, the impact that the problem has
on your business influences the urgency to resolve the problem and that also determines
which problem determination techniques you follow.
10.1.3 General steps for problem determination
Figure 10-1 shows a flowchart of basic problem determination steps. A good problem
determination methodology should begin with ensuring that you have system documentation
and a diagnostic data collection plan to be used in the event that any problems occur.
A large percentage of problems are common and well documented, so that identifying the
problem and its resolution is relatively straightforward. Some problems are more complex and
require a more in-depth analysis and, possibly, the application of several fixes before the
problem is completely resolved. In the following sections, each step in this methodology is
described in detail.
Fix packs for Version 6.1 are delivered on a regular basis, about once every three months.
They only contain fixes for APARs, and they do not introduce any new features or functionality
to the product. You can think of a Fix Pack as preventive maintenance. Each Fix Pack
includes a list of defects, which lists every APAR that is fixed in that Fix Pack. It contains all of
the APAR fixes that were included in the previous Fix Pack as well as fixes for APARs that
have been opened since the last Fix Pack. The fix packs add a fourth number to your
WebSphere Application Server version. For example, if you were to install Fix Pack 13 for
WebSphere Application Server 6.1, you would be upgrading to WebSphere Application
Server 6.1.0.13.
Fix packs do not contain upgrades to the Java Software Development Kit (SDK). They are
tested with the latest Java SDK service release, but the upgrades to the Java SDK are
delivered as separate fixes. You can also download the SDK fixes from the WebSphere
Application Server Support site.
Because fix packs do not introduce new functionality to the product and because they have
been fully regression tested, we recommend installing new fix packs as soon as they become
available. However, we also recommend some level of testing with your applications.
Proactively installing fix packs as soon as they become available is an effective way to prevent
problems from occurring. When you install a Fix Pack, you can be assured that you will not
encounter any of the WebSphere Application Server code defects that are fixed in the Fix
Pack. This saves the time and frustration of seeing one of these problems occur on your
system.
For more information about the WebSphere Application Server V6.1 Update Strategy, you can
review the Update Strategy document, which is available on the Support site:
http://www-1.ibm.com/support/docview.wss?rs=180&uid=swg27009276
You can learn when fix packs are scheduled to be released by reviewing the Recommended
Updates for WebSphere Application Server Base and Network at:
http://www-1.ibm.com/support/docview.wss?rs=180&uid=swg27004980#ver61
To determine which fix packs have been installed on your system, you can usually just check
the version of the product. This is available at the Welcome page of the administrative console
and at the top of all SystemOut.log files for each WebSphere Application Server process. You
can also use the historyInfo and versionInfo commands in the <was_home>/bin directory.
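For example, with the installation root used elsewhere in this book (/opt/IBM/WebSphere/AppServer; adjust the path to match your installation), the commands are invoked as follows:

# cd /opt/IBM/WebSphere/AppServer/bin
# ./versionInfo.sh
# ./historyInfo.sh

versionInfo reports the installed product version and maintenance levels, while historyInfo reports the history of maintenance packages that have been installed or uninstalled.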
If some of the software or hardware in your environment is not listed or if you are using older
versions than the versions listed, you are at a greater risk for problems to occur. If you need to
open a PMR, the support team will recommend moving to a supported configuration before
proceeding with any other problem determination steps. Therefore, proactively ensuring that
all of the software and hardware in your environment meets WebSphere Application Server’s
prerequisites is an important problem prevention and preparation technique.
System documentation
In the event that a problem occurs in your environment, it is possible that you will need to
enlist the help of other people, either internal or external to your organization, to determine
the root cause of the problem. When that happens, you will want everyone involved to
understand thoroughly the details of the systems that are involved in your environment.
To this end, it is important to document the details of your configuration. Document all of the
changes that have been made to your environment. In addition to that, you should maintain a
high-level description of your basic topology. This is known as system documentation. System
documentation is useful in the following circumstances:
A problem occurs and you need to get assistance from others who might not be as
familiar with your application and topology as you are. The system documentation allows
you to bring them up to speed as quickly as possible.
A problem occurs and you want to identify from which parts of your environment you
should collect diagnostic data or monitor. Your system documentation shows the software
components that are involved and the flow of your application, that is, how different
software components are used when your application processes a request.
Your system documentation should consist of written documents and diagrams. Which
information is included in the written documents and which is included in diagrams is a matter
of preference. Overall, the information should be detailed and should show the specific
versions and maintenance levels of the operating system and all software products involved,
the hardware and network configuration, and specific host names and IP addresses of the
systems that are involved.
Sample system documentation diagram: a load balancer and firewalls sit in front of two Web server machines (Web1 and Web2), which forward requests to application server machines (Mars and Venus) and a database machine; each machine is annotated with its WebSphere Application Server version, operating system version, and IP address.
With a quick look at the diagram, you can see that HTTP requests are being load balanced
between two Web servers and that applications are receiving HTTP requests from each of
those Web servers. The applications are sending data to a DB2 database. An LDAP server is
being used for the WebSphere security user repository.
In this example, specific software and hardware levels are indicated on each machine, but this
information could also be included in accompanying written documentation in order to keep
the diagram more readable if many more machines are involved.
Detailed system documentation is an integral part of your problem planning strategy that
should not be overlooked.
There are some recommendations that apply to your diagnostic data collection plan
regardless of the types of problems for which you are preparing. For example:
Ensure that the clocks on all systems in your environment are synchronized. This helps in
the analysis of diagnostic logs and traces. Often, a request is sent from one system to
another, and it helps to match up the time stamps from both systems when analyzing the
diagnostic data. If some systems are located in different time zones, make sure that this is
documented in your system documentation (as discussed in “System documentation” on
page 399).
Configure WebSphere Application Server logging and tracing so that it captures a
sufficient amount of data when a problem occurs. The WebSphere Application Server
Support team has found that, many times, the diagnostic data needed to determine the root
cause of a problem has been overwritten before a client realizes that the problem has
occurred. This type of situation can be prevented by increasing the amount of log and
trace data that is saved before the files are rolled over. For both logging and tracing, you
can configure file rotation properties for this purpose.
You can find the instructions for configuring logging and tracing, including the file rotation
properties, in WebSphere Application Server V6: Diagnostic Data, found at:
http://www.redbooks.ibm.com/redpapers/pdfs/redp4085.pdf
Plan to have extra disk space available on your system to store diagnostic data when
problems occur. Many of the most severe problems, such as application server crashes,
hangs, and out of memory conditions, require the largest amount of diagnostic data. If
enough disk space is not available when a problem such as this occurs, you might need to
reproduce the problem several times to collect the necessary data. To avoid this situation,
we recommend that you have between 2 GB and 5 GB of extra disk space available on
each system.
After you resolve a problem, either delete or archive the diagnostic data that you collected
for the problem. This will prevent the old diagnostic data from being confused with the new
diagnostic data the next time that a problem occurs.
Configure the thread monitor for hang detection. WebSphere Application Server V6.1
includes a thread monitor feature. This is also included in Version 6.0 and 5.1.1. The
thread monitor is notified when the Web container, ORB, or asynchronous bean thread
pools give work to a thread. By default, the thread monitor checks the status of all active
threads every three minutes. If it finds a thread that has been active for more than ten
minutes, it outputs a warning to the SystemOut log, similar to the one shown in
Example 10-1.
The thread monitor makes it easier to determine that a problem has occurred. If you see a
WSVR0605W warning in your SystemOut log, you know that a thread has stopped
responding. You can then perform further diagnostic steps to determine the cause of the
hung thread. The thread monitor does not take any action to fix the problem beyond
notifying you of the problem.
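To illustrate the general idea behind this kind of hang detection (the following is a simplified sketch of the pattern only, not WebSphere's actual thread monitor, and the class name and thresholds are ours), a monitor can record when work is handed to a thread and periodically warn about work that has been active longer than a threshold:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Simplified hang-detection sketch: record when work is handed to a thread and
// periodically warn about work that has been active longer than a threshold.
public class SimpleThreadMonitor {
    private final Map<Thread, Long> activeSince = new ConcurrentHashMap<Thread, Long>();

    public SimpleThreadMonitor(long checkIntervalMs, final long warnAfterMs) {
        ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();
        checker.scheduleAtFixedRate(new Runnable() {
            public void run() {
                long now = System.currentTimeMillis();
                for (Map.Entry<Thread, Long> entry : activeSince.entrySet()) {
                    if (now - entry.getValue() > warnAfterMs) {
                        System.err.println("WARNING: thread " + entry.getKey().getName()
                                + " has been active for more than " + warnAfterMs + " ms");
                    }
                }
            }
        }, checkIntervalMs, checkIntervalMs, TimeUnit.MILLISECONDS);
    }

    // Called when a thread pool hands work to the current thread.
    public void workStarted() {
        activeSince.put(Thread.currentThread(), System.currentTimeMillis());
    }

    // Called when the work completes.
    public void workFinished() {
        activeSince.remove(Thread.currentThread());
    }
}

WebSphere's thread monitor performs conceptually equivalent bookkeeping for the Web container, ORB, and asynchronous bean thread pools, using the three-minute check interval and ten-minute threshold described above.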
You should collect specific details about the symptoms in order to form a detailed and
thorough problem description. You can ask the following questions in order to get as specific a
problem description as possible:
What were the specific problem symptoms that were observed? Did an error occur? Did
the application produce an unexpected result? Did the application fail to respond to an
incoming request?
What was the context under which the problem occurred? Did the user execute a specific
function? Did the problem happen only after there was an unusually high workload on the
system? Did it occur immediately after the application was restarted?
How do you know when this particular problem occurs? Is there something specific to watch for
in order to recognize whether the problem occurs again?
How would you know that the problem was resolved? Would it be that an error message
no longer occurred? Would the application behave differently? What specifically would
confirm a problem resolution?
Where did the problem occur? Did the problem occur only in your test environment, only in
your production environment, or on both? Did it only occur on one system in your
environment? Did it occur on multiple systems? Did it occur on every cluster member or
only one?
When did the problem occur? What was the time stamp of the error or unexpected
behavior? Did the problem occur only once or many times? How often did the problem
occur? Did it occur at certain intervals, or did it seem to occur at random times? Was there
some event that occurred that might have triggered the problem? For example, did the
user attempt a specific application function?
Why might the problem have occurred? You might not be able to answer this right away.
Was this the first time something specific was tried? Was there a recent change to the
application code or the configuration of your environment? Does it happen in all of your
environments or only one? If it only happens in one environment, how is this environment
different from the others?
Has the diagnostic data that is identified in your diagnostic data collection plan been
collected? Does the data provide any other details about the problem or offer immediate
clues as to why the problem occurred?
Compile your answers in a document that gives as much specific detail about the problem
occurrence as possible. We refer to this document as the problem log. Ensure that
everyone who is involved in the problem determination process has access to the problem
log and can update it with new information as it is uncovered. Also, ensure that the
problem log lists the location of the diagnostic data.
It is possible that the answers themselves will reveal the cause of the problem. For
example, the original problem symptom could have been that the user received an error
message when accessing a specific servlet in the application. You might have found that a
new version of this servlet was just installed in the environment where the problem occurs
but was not installed in other environments that are still functioning as expected. This
could lead you to determine that an application code change caused the problem.
On the other hand, the answer might require more investigation. If this is the case, you
have developed a solid and complete problem description to use as the basis for more
problem determination efforts.
However, if the problem occurred in production, you will probably want to consider how to
quickly alleviate the problem symptoms so that customers and users will experience the least
possible negative effects. In this section, we refer to this process as reverting to safe
conditions. You will want to do this in parallel with your problem determination efforts.
Establishing safe conditions to which you can revert when a problem occurs and then
reverting to those conditions when a production problem occurs is a good way to reduce the
business impact of your problem. After you have evaluated whether reverting to safe
conditions is necessary and taken the appropriate steps, you can begin the problem
determination process.
10.1.7 Initial investigation: Phase 1 problem determination techniques
A large percentage of problems can be classified as Phase 1 problems. These are well-known
problems for which there is a fix readily available. All that is required is to characterize
the problem correctly, collect the essential diagnostic data, find the remedy, and apply it,
as shown in Figure 10-3.
Figure 10-3 Phase 1 and Phase 2 problem determination flow: Phase 1 starts with an initial characterization of the problem, the collection of basic diagnostics, and a quick scan through those diagnostics for common problems, consulting the knowledge base or expert system to decide on the next problem determination step (specific to each problem and its current state); if the problem is not resolved, Phase 2 iterates over the collection of new, specialized diagnostics (which may or may not involve recreating the problem) until the problem is resolved.
As shown in Figure 10-3, it might be the case that the problem is more complex and the
remedy that you apply does not fix the problem, or no remedy is found. In that case, you need
to move on to a Phase 2 investigation. The Phase 2 investigation will be discussed in 10.1.8,
“Advanced investigation: Phase 2 problem determination techniques” on page 407.
These documents explain what data to collect and how to collect it for dozens of well-known
problems.
If you are able to access the administrative console, it is helpful to look at the runtime error
messages. Select Troubleshooting → Runtime Messages → Runtime Error. You will see a
formatted message, as shown in Example 10-3.
Message Originator: com.ibm.ws.webcontainer.servlet.ServletWrapper
Source object type: RasLoggingService
Timestamp: Nov 30, 2007 6:29:23 AM GMT+10:00
Thread Id: 33
Node name: edgeNode01
Server name: server1
In general, Phase 1 problems, once identified, have well-documented solutions. If a quick scan
of the log files does not provide explicit information, or the log file exceptions and stack
traces are too difficult to analyze, you might try searching the IBM APAR database for
keywords that relate to the problem you are having. You can use the Search feature of the IBM
Support Assistant to search all IBM documentation, APARs, Technotes, Information Centers, and
so forth. For more information about the IBM Support Assistant, go to:
http://www.ibm.com/software/support/isa/
A good place to begin your search of the WebSphere documentation is, again, the IBM
Support Assistant. It might also be helpful to read the following problem determination IBM
Redbooks publications:
WebSphere Application Server V6 Problem Determination for Distributed Platforms,
SG24-6798
WebSphere Application Server V6.1 Problem Determination: IBM Redpaper Collection,
SG24-7461
Also, there are numerous tools that can be run from the command line in the <was_home>/bin
directory, such as:
dumpNameSpace.sh
installver.sh
ivt.sh
EARExpander.sh
All of these tools and many others are documented in the WebSphere Application Server 6.1
Information Center at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp
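For instance, a name space dump for a server listening on the default bootstrap port can be produced as follows (the host name and port are examples only; substitute the values from your own configuration):

# cd /opt/IBM/WebSphere/AppServer/bin
# ./dumpNameSpace.sh -host localhost -port 2809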
3. Select your WebSphere product from the drop-down menu. Enter a local Output directory.
Enter a descriptive name for the file. The file name must include the .jar extension, as
shown in Figure 10-5.
4. Click the Export button.
Figure 10-6 ISA Export Status
Once you create the portable collector, you can FTP the JAR file to your Solaris machine.
Note: Although the portable collector file has a .jar extension, it is not meant to be
executed with the java -jar command. Rather, the file is compressed like a ZIP file and must
be extracted on your Solaris machine, as shown in "Run the portable collector on the Solaris
machine".
The collector runs in interactive mode, prompting you to enter information at the console.
Example 10-4 shows what an interactive session with the collector might look like.
1234.137.us.collectWas.zip
yes
Enter the number or title of the Automated Problem Determination tool collection
option you want or enter RECOVER to recover from a previous Automated Problem
Determination Tool collection or enter QUIT to end the tool
Enter the number or title of the Automated Problem Determination tool collection
option you want or enter RECOVER to recover from a previous Automated Problem
Determination Tool collection or enter QUIT to end the tool
1: General
2: Security and Administration
3: Runtime
4: Connector
5: HTTP
6: Service Oriented Architecture
7: WebSphere Application Server for z/OS
8: Return to Previous Menu
Enter the number or title of the Automated Problem Determination tool collection
option you want or enter RECOVER to recover from a previous Automated Problem
Determination Tool collection or enter QUIT to end the tool
1: General Problem
2: RAS Collector Tool
3: Analysis Report
4: Collect Product Information
5: Return to Previous Menu
**************************************************
* Input Required
* WebSphere Application Server root directory</opt/IBM/WebSphere/AppServer>:
/opt/IBM/WebSphere/AppServer
**************************************************
OPTIONS FOR COMPLETING THE INPUT DIALOG
1: OK<Continue the collection using the values you set during the INPUT DIALOG>
2: Cancel<Stop the collection>
1
**************************************************
* Input profile name of your WebSphere Application Server
* WebSphere Application Server profile name:
* 1: Dmgr01
* 2: AppSrv01
2
**************************************************
OPTIONS FOR COMPLETING THE INPUT DIALOG
1: OK<Continue the collection using the values you set during the INPUT DIALOG>
2: Cancel<Stop the collection>
1
**************************************************
* Input the server name of your WebSphere Application Server.
* WebSphere Application Server server name:
* 1: server1
* 2: nodeagent
1
**************************************************
OPTIONS FOR COMPLETING THE INPUT DIALOG
1: OK<Continue the collection using the values you set during the INPUT DIALOG>
2: Cancel<Stop the collection>
1
**************************************************
* Input WebSphere Application Server Admin Information
* Admin User Name
wasadmin
* Administrator Password:
*
secret
**************************************************
OPTIONS FOR COMPLETING THE INPUT DIALOG
1: OK<Continue the collection using the values you set during the INPUT DIALOG>
2: Cancel<Stop the collection>
1
**************************************************
* Question Regarding How the Script Should Proceed
* You will be asked in a moment to choose how you want the
* script to enable tracing for the application server. You
* must choose whether to enable tracing dynamically or by
* restarting the server.
* If you choose to enable tracing by restarting the server,
* the application server will be stopped, trace settings
* will be modified, and the application server will be
* restarted.
* If you choose to enable tracing without restarting the
* server, trace settings will be modified dynamically.
* When the collection has been completed, the startup trace
* specification (and the runtime trace if the server was
* running when the collection began) will be restored to
* their original values.
* Select one::
* 1: Enable Tracing by Restarting the Server
* 2: Enable Tracing without Restarting the Server
1
**************************************************
OPTIONS FOR COMPLETING THE INPUT DIALOG
1: OK<Continue the collection using the values you set during the INPUT DIALOG>
2: Cancel<Stop the collection>
1
**************************************************
* Proceeding with enabling tracing by restarting the server
**************************************************
OPTIONS FOR COMPLETING THE INPUT DIALOG
1: OK<Continue the collection using the values you set during the INPUT DIALOG>
2: Cancel<Stop the collection>
2
In addition, it can also report complete heap dumps and states of all the monitors and threads
in the JVM. In terms of diagnosing problems, HPROF is useful when analyzing:
Performance
Lock contention
Memory leaks
A complete list of options, as shown in Example 10-5, is printed if the HPROF agent is
provided with the help option:
$ java -agentlib:hprof=help
verbose=y|n print messages about dumps y
Obsolete Options
----------------
gc_okay=y|n
Examples
--------
- Get sample cpu information every 20 millisec, with a stack depth of 3:
java -agentlib:hprof=cpu=samples,interval=20,depth=3 classname
- Get heap usage information based on the allocation sites:
java -agentlib:hprof=heap=sites classname
Notes
-----
- The option format=b cannot be used with monitor=y.
- The option format=b cannot be used with cpu=old|times.
- Use of the -Xrunhprof interface can still be used, e.g.
java -Xrunhprof:[help]|[<option>=<value>, ...]
will behave exactly the same as:
java -agentlib:hprof=[help]|[<option>=<value>, ...]
Warnings
--------
- This is demonstration code for the JVMTI interface and use of BCI,
it is not an official product or formal part of the J2SE.
- The -Xrunhprof interface will be removed in a future release.
- The option format=b is considered experimental, this format may change
in a future release.
For additional information about HPROF and examples, read the Java 2 Platform, Standard
Edition 5.0 Troubleshooting and Diagnostic Guide, found at:
http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf
HAT requires a heap dump in binary format as input. There are several ways to generate a
heap dump:
The application can be run with the HPROF profiler, as shown in 10.2.2, “Heap Profiler
(HPROF)” on page 412.
HAT queries
The following is a brief description of the different queries available with HAT.
All Classes Query
Shows all classes in a heap, excluding platform classes.
Class Query
Shows information about a specific class. Includes its superclass, subclasses, instance
data members, and static data members.
Object Query
Provides information about a specific object. Within this query, you can navigate to the
object’s class and to the value of object members of the object. You can also see objects
that refer to the current object.
Instances Query
Shows all instances of a given class.
Roots Query
Provides reference chains from the rootset to a given object. It shows a chain for each
member of the rootset from which a specific object is reachable. The chains are calculated
using a depth-first search, thus providing chains with minimal length. This query usually is
the most useful query for troubleshooting memory leaks (unintentional object retention)
since once you find a retained object, the query can tell you why it is being retained.
Reachable Objects Query
This query can be accessed from an Object query and shows the transitive closure of
objects that are reachable from a specific object.
Instance Counts for All Classes Query
Shows counts of class instances for every class, excluding platform classes.
All Roots Query
Shows all members of the rootset.
New Instances Query
This query is only available if you invoke HAT with two heap dumps. Similar to the Instance
query, except it can find new instances by comparing the recent heap dump with the older
heap dump.
Using the HAT tool
Suppose an application throws an OutOfMemoryError exception and a heap dump called
java_pid18014.hprof is generated. You can invoke HAT on the heap dump by issuing the
following command:
#hat java_pid18014.hprof
The contents of the HAT analysis will become available on an HTTP server that uses port
7000 by default, but you can specify another port on the command line.
As you can see, an HTTP server is launched, which allows you to browse the different HAT
queries described in “HAT queries” on page 414.
You can access the HTTP server using port 7000, as shown in Figure 10-7, and browse
through all the queries.
HAT can be a very useful tool for resolving JVM heap-related problems. For more details on
how to use it, see the HAT documentation found at:
http://hat.dev.java.net/doc/README.html
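To see the kind of unintentional object retention that the Roots Query is designed to expose, consider a deliberately leaking program such as the following sketch (our own example, not part of any product):

import java.util.ArrayList;
import java.util.List;

// Deliberately leaking program: objects are added to a static list and never
// removed, so they stay reachable from a GC root and are never collected.
public class LeakExample {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void main(String[] args) throws Exception {
        while (true) {
            CACHE.add(new byte[1024 * 1024]);   // retain 1 MB per iteration
            Thread.sleep(50);
        }
    }
}

Run under HPROF with a binary heap dump (for example, java -agentlib:hprof=heap=dump,format=b LeakExample), the resulting dump can be loaded into HAT, and a Roots Query on one of the retained byte arrays leads back to the static CACHE field as the reason the objects are kept alive.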
The JHAT utility is part of Java SE 6, but can read and analyze heap dumps created on Java
SE 5.0 systems. If you have access to a machine with Java SE 6 installed, you can create a
heap dump on the Java SE 5.0 system and transport it to the Java SE 6 system in order to
parse and browse it with the JHAT utility.
For detailed information about JHAT, see the Troubleshooting Guide for Java SE 6 with
HotSpot VM, found at:
http://java.sun.com/javase/6/webnotes/trouble/TSG-VM/TSG-VM.pdf
Table 10-1 Solaris Operating System tools
OS tool Description
Installation files and diagnostic data
The following logs and traces are located in the directory <was_home>/logs/install/:
The log.txt installation log
This log contains information about all events that occur during installation. At the end of
the log, you will see one of the following flags:
– INSTCONFSUCCESS
– INSTCONFPARTIALSUCCESS
– INSTCONFFAILED
If you see a partial success or failed message, try to determine what WebSphere
component failed to install and look for additional messages about why the installation
failed.
trace.txt.gz and trace.xml.gz
These are two compressed forms of the same installation information. As each component
is installed, one of the flags listed above is written to the log. If an installation fails, these
traces will show you exactly what component failed and a brief message saying why.
installconfig.log.gz
A log of the configuration events that occur during installation.
If problems arise during profile creation, examine the creation log for failure messages.
In addition to the creation log, several log files are written to when the profile is created.
These additional logs are located in the directory
<was_home>/logs/install/manageprofiles/<profile_name>. The number and type of logs depend on
the kind of profile (deployment manager, application server, or custom). Some of the logs for
a deployment manager profile include the following:
collect_metadata.log
createDefaultServer.log
defaultapp_config.log
filetransfer_config.log
keyGeneration.log
SetSecurity.log
SIBDeployRA.log
Check these logs after installation to make sure that all components of the profile were
configured successfully.
Introduction
The JCA specification provides a standard mechanism that allows modern J2EE applications to
connect to and use heterogeneous resources from various existing enterprise information
systems (EIS) as well as modern relational database systems. Based on the JCA, WebSphere
provides client applications with access to all system services regarding connection,
transaction, and security management on behalf of the resource managers.
It also specifies a requirement for packaging and deployment facilities for a resource adapter
to plug into an application server. Figure 10-9 illustrates the contracts or relationships
between these four components.
Figure 10-9 JCA components and contracts: the application component uses a client API to communicate with the resource adapter, the application server and the resource adapter are bound by system contracts, and the resource adapter communicates with the EIS through an EIS-specific interface
Based on the JCA specification, an EIS vendor can develop a standard resource adapter (RA)
for its EIS to plug into any application server that supports JCA. A resource adapter runs
within the address space of an application server, while the EIS itself runs in a separate
address space. For example, a DB2 database (the EIS) and WebSphere Application Server each
run on a separate machine. An application component is able to access the EIS through the
resource adapter.
The relationships between the four major components can be described as follows:
The contracts between the application component and the resource adapter are provided
through some form of client API. The client API can either be specific to a particular type
of resource adapter, for example, JDBC for relational databases, or a standard common
client interface (CCI). The JCA recommends, but does not require, that a resource adapter
implement the CCI. WebSphere Application Server V6 provides a relational resource
adapter (RRA) that has an implementation for both the CCI and the traditional JDBC
interfaces.
The resource adapter and application server implement system contracts to provide the
common mechanisms for connection, transaction, and security management.
The contracts between the resource adapter and the EIS are specific to each underlying
EIS. Thus, JCA does not impose any requirement on this proprietary relationship.
This section discusses problems experienced when connecting to enterprise information systems or databases through the JCA.
Users with the following initial symptoms might be experiencing a JCA-related problem:
Symptom: A JDBC call returns incorrect data.
Symptom: Failure to connect to a new data source.
Symptom: Failure to connect to an existing data source.
Symptom: Failure to access a resource through JDBC.
Symptom: Failure to access a non-relational resource.
Such symptoms can be observed in the WebSphere Application Server JVM logs in error
messages with any of the following prefixes: WTRN, J2CA, WSCL, or DSRA.
If the data in the database is correct, try to write a stand-alone JDBC program that runs the same query to see if the same symptom occurs. If the data returned by the stand-alone JDBC program is the same as the data that your application returned, then the problem is with the JDBC driver. In this case, you need to contact the JDBC vendor for further help.
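For example, a stand-alone test program along the following lines can be used to run the query outside of WebSphere Application Server. This is only a minimal sketch: the driver class, URL, user ID, password, and SQL statement shown here are placeholders that you must replace with the values your data source actually uses.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcQueryTest {
    public static void main(String[] args) throws Exception {
        // Placeholder driver, URL, and credentials; substitute the values
        // that your data source is configured with.
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLE", "dbuser", "dbpassword");
        try {
            Statement stmt = con.createStatement();
            // Run the same query that the application runs and print the result
            // so that it can be compared with what the application returned.
            ResultSet rs = stmt.executeQuery("SELECT * FROM MYSCHEMA.MYTABLE");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            rs.close();
            stmt.close();
        } finally {
            con.close();
        }
    }
}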
As shown in Figure 10-11, you might be in one of the following two situations:
Sporadically incorrect data, which means that the data that is returned to you is correct
sometimes but not always.
If your application uses a prepared statement or callable statement, you can experience a
problem with WebSphere’s statement cache.
A prepared statement is a precompiled SQL statement that is stored in a prepared
statement object for parameterized queries. This object is used to run the given SQL
statement efficiently multiple times. A callable statement is used to invoke stored
procedures.
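As a brief illustration, the following sketch shows both kinds of statements. The schema, table, and stored procedure names are hypothetical and are used only to show the pattern.

import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StatementExamples {
    static void runStatements(Connection con) throws SQLException {
        // Prepared statement: the SQL is precompiled once and can be executed
        // repeatedly with different parameter values.
        PreparedStatement ps = con.prepareStatement(
                "SELECT NAME FROM MYSCHEMA.CUSTOMER WHERE ID = ?");
        ps.setInt(1, 1001);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        ps.close();

        // Callable statement: invokes a (hypothetical) stored procedure with
        // one input parameter and one output parameter.
        CallableStatement cs = con.prepareCall("{call MYSCHEMA.GET_BALANCE(?, ?)}");
        cs.setInt(1, 1001);
        cs.registerOutParameter(2, java.sql.Types.DECIMAL);
        cs.execute();
        BigDecimal balance = cs.getBigDecimal(2);
        System.out.println("Balance: " + balance);
        cs.close();
    }
}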
In general, the more prepared statements and callable statements that your application
has, the larger the cache should be. Be aware, however, that specifying a larger statement
cache size than needed wastes application memory and does not improve performance.
Determine the value for your cache size by adding the number of uniquely prepared
statements and callable statements (as determined by the SQL string, concurrency, and
the scroll type) for each application that uses this data source on a particular server. This
value is the maximum number of possible prepared statements and callable statements
that are cached on a given connection over the life of the server.
You can try to disable the statement cache by setting Statement cache size to 0. This
setting can be found in the administrative console by selecting Resources → JDBC
Providers → <JDBC_provider> → Data sources → <data_source> → WebSphere
Application Server connection properties.
You should call IBM Technical Support to help you with this scenario.
Consistently incorrect data means that the data returned to you is always incorrect. In this case, it is likely that your application is connected to the wrong database or resource manager, which points to a problem with your environment or configuration.
Data to collect
All relevant error messages and exceptions needed for our analysis are available in
SystemOut.log and SystemErr.log.
For more information, see Test connection service in the WebSphere Information Center at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/cdat_testcon.html
[Flowchart: data source connection problem determination, covering DSRA8025I, failed data source lookups, and configuration, naming, and authentication problems]
Messages DSRA8030I, DSRA8040I, or DSRA8041I
These messages indicate that your connection is either failing or having problems. In this
case, the TestConnection service produces additional DSRA messages in the range of 8000
to 8499 in the SystemOut.log.
Message DSRA8025I
This message indicates that the TestConnection service can connect to the data source successfully, so there is nothing wrong with the data source configuration. The next step is to verify the data source lookup code in your application. One way to perform this verification is to run the application through a debugger with a breakpoint set after the lookup() method call on your javax.naming.Context object. Another way is to create and run a small program with the data source lookup code cut and pasted from your application.
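A minimal lookup test might look like the following sketch. The JNDI name jdbc/MyDataSource is a placeholder for whatever name your application uses, and the code assumes it runs where the WebSphere naming context is available (for example, inside the application itself).

import java.sql.Connection;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookupTest {
    public static void main(String[] args) throws Exception {
        // The JNDI name below is a placeholder; use the name that your
        // application and data source configuration actually use.
        Context ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource");
        System.out.println("Lookup succeeded: " + ds);

        // Optionally verify that a connection can be obtained as well.
        Connection con = ds.getConnection();
        System.out.println("getConnection succeeded: " + con);
        con.close();
    }
}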
If the data source lookup is successful, the new data source is OK, and you should go to
“The next step” on page 428.
If the data source lookup is not successful, you might have a naming or authentication
problem.
If there is no problem with naming, determine if you have a problem authenticating with the
database. Refer to Chapter 6, “JCA connection problem determination”, in WebSphere
Application Server V6 Problem Determination for Distributed Platforms, SG24-6798 for
information about this problem.
If you reach this point without identifying the problem, go to “The next step” on page 428.
Data to collect
All relevant error messages and exceptions that are needed for our analysis are available in
SystemOut.log and SystemErr.log.
[Flowchart: GetConnection problem determination]
Examining the situation where there is a failure in getting a new connection can lead you to
one of the following two cases:
Connection consistently fails, which means that you might have a problem with your
application configuration.
Connection fails sporadically, which means that it is very likely that you have a connection
leak.
Data to collect
All relevant error messages and exceptions that are needed for our analysis are available in
SystemOut.log and SystemErr.log.
[Flowchart: relational resource problem determination, including StaleConnectionException]
Depending on what you find, you may need to investigate the following items:
ConnectionWaitTimeoutException
StaleConnectionException
Other SQL/database exceptions
Data to collect
All relevant error messages and exceptions that are needed for our analysis are available in
SystemOut.log and SystemErr.log.
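If the investigation points to a StaleConnectionException, a common application-level remedy is to catch the WebSphere-provided exception class and retry the operation a limited number of times with a fresh connection. The following is only a sketch of that pattern: the doWork() method is a hypothetical placeholder for your JDBC logic, and in a container-managed transaction the retry has to happen at a point where the transaction can be restarted.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import com.ibm.websphere.ce.cm.StaleConnectionException;

public class RetryExample {
    // Retries the database work when the connection turns out to be stale.
    static void executeWithRetry(DataSource ds) throws SQLException {
        final int maxRetries = 2;
        for (int attempt = 0; ; attempt++) {
            Connection con = ds.getConnection();
            try {
                doWork(con);          // hypothetical application logic
                return;               // success, no retry needed
            } catch (StaleConnectionException sce) {
                if (attempt >= maxRetries) {
                    throw sce;        // give up after a few attempts
                }
                // fall through and retry with a fresh connection
            } finally {
                con.close();
            }
        }
    }

    static void doWork(Connection con) throws SQLException {
        // placeholder for the real JDBC work
    }
}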
If, after going through this process, you still have an undiagnosed problem, we recommend
that you go back to 10.1.5, “How to organize the investigation after a problem occurs” on
page 402 and review the problem classifications to see if there are any other components that
might be causing the problem.
10.3.9 Class loader problem determination
For class loader problem determination, refer to Chapter 4, “WebSphere Application Server
V6.1: Class loader problem determination”, in WebSphere Application Server V6.1 Problem
Determination: IBM Redpaper Collection, SG24-7461.
Occasionally, the hang may be due to a bug in the HotSpot Virtual Machine itself.
At times, what may appear to be a hang at first turns out to be the JVM process consuming all available CPU cycles. CPU usage of 100% can be caused by a code defect that puts one or more threads into an infinite loop.
An initial step when diagnosing a hang is to determine the status of the JVM process. Is the process idle, or is it consuming all available CPU cycles? Various operating system utilities can be used to answer this question; on Solaris, for example, prstat -L -p <pid> shows the CPU consumption of each LWP (thread) in the process.
If the process appears to be working and is consuming all available CPU cycles, then the problem is most likely a looping thread rather than deadlocked threads.
For a WebSphere Application Server JVM, the thread dump will be written to the
native_stdout.log file for the server.
.java:989)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:501)
at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:464)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3276)
at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:267)
<..stack trace truncated...>
For each Java thread, there is a header line with information about the thread, and this is
followed by the thread stack. The header line prints the thread name, indicates if the thread is
a daemon thread, and also prints the Java thread priority, as shown in Example 10-8 on
page 430.
In the header, the tid is the thread ID that is the address of a thread structure in memory. The
nid is the ID of the native thread and this is followed by the thread state. The thread state
indicates what the thread is doing at the time of the thread dump. Table 10-2 shows the
possible states that can be printed.
allocated
initialized
runnable
waiting on condition
sleeping
When trying to diagnose a looping thread, a good place to start is the thread stacks of the
threads that are in the runnable state. It is often necessary to get a sequence of thread
dumps in order to determine which threads appear to be continuously busy.
The jstack utility can also be used to obtain a thread stack if you cannot trigger a thread dump for any reason. Also try using jstack if the thread dump does not show signs that a Java thread is looping. When using jstack, look for threads that are in the following states:
IN_JAVA
IN_NATIVE
IN_VM
These are the most likely states for threads that are looping. Again, it is a good idea to run
jstack several times to better identify the threads that are looping. If you find a thread looping
indefinitely while in the IN_VM state, this might indicate a HotSpot VM bug.
Attention: All of the examples shown in this section were generated by running a J2EE
application called BadApp.ear. In particular, this application was used to create a hung
threads scenario when it was accessed from two different browsers simultaneously. The
BadApp.ear file, including source code, is available for download (see Appendix C,
“Additional material” on page 457 for more information).
The messages identify the threads that are potentially hung based on a configurable
threshold, which is 600000 milliseconds by default.
To trigger the thread dump, use the methods described in “Troubleshooting a looping process”
on page 430.
Finding a deadlock
If the hung process is responsive enough to generate a thread dump, then the output will be
printed to the application server’s native_stdout.log file. HotSpot VM also executes a deadlock
detection algorithm. If a deadlock is detected, it will be printed along with the stack trace of
the threads involved in the deadlock. Example 10-10 on page 433 shows how the deadlock
output will appear in the thread dump.
Example 10-10 Deadlock detected in thread dump data
Found one Java-level deadlock:
=============================
"WebContainer : 1":
waiting to lock monitor 0x00275188 (object 0xf4f1bf28, a java.lang.Object),
which is held by "WebContainer : 0"
"WebContainer : 0":
waiting to lock monitor 0x002755c0 (object 0xf4f1bf30, a java.lang.Object),
which is held by "WebContainer : 1"
In J2SE 5.0, deadlock detection works only with locks that are obtained using the synchronized keyword. Therefore, deadlocks that arise through the use of the java.util.concurrent locking classes will not be detected.
If the deadlock detection algorithm detects a deadlock, then you must examine the output in
more detail so that the deadlock can be understood. In Example 10-10 on page 433, you can
see that thread "WebContainer : 1" is waiting to lock object 0xf4f1bf28, which is held by
"WebContainer : 0", and "WebContainer : 0" is waiting to lock monitor object
0xf4f1bf30, which is held by "WebContainer : 1".
Additional details in the stack traces should provide helpful information about the deadlock.
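For reference, the kind of code that produces such a report is two threads acquiring the same pair of monitors in opposite order. The following minimal sketch (not the actual BadApp source) reproduces the pattern; the thread names are chosen only to mirror the example output.

public class DeadlockSketch {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 locks A then B; thread 2 locks B then A. If each thread
        // acquires its first monitor before the other releases it, both block
        // forever and the thread dump reports a Java-level deadlock.
        Thread t1 = new Thread("WebContainer : 0") {
            public void run() {
                synchronized (lockA) {
                    pause();
                    synchronized (lockB) {
                        // never reached once the deadlock occurs
                    }
                }
            }
        };
        Thread t2 = new Thread("WebContainer : 1") {
            public void run() {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) {
                        // never reached once the deadlock occurs
                    }
                }
            }
        };
        t1.start();
        t2.start();
    }

    private static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            // ignore for this sketch
        }
    }
}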
If you suspect the second situation, it could be a timing issue or a general logic bug.
The line number information provides direction as to what code you should examine. In most cases, you will need to understand the application or library logic to pursue this issue any further. In general, you will need to know how synchronization works in the application and, in particular, the details and conditions under which monitors are notified.
With the jstack output, examine each of the threads in the BLOCKED state. The top frame
can sometimes indicate why the thread is blocked (Object.wait or Thread.sleep, for example),
and the rest of the stack should give an indication as to what the thread is doing. This is
particularly true when the source has been compiled with line number information (the
default) and you can cross reference the source code, as shown in Example 10-11.
- com.ibm.issf.atjolin.badapp.BadAppServlet.doPost(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) @bci=108, line=259 (Interpreted frame)
- javax.servlet.http.HttpServlet.service(javax.servlet.http.HttpServletRequest, javax.servlet.ht
<...output truncated...>
If a thread is BLOCKED and the reason is not obvious, use the -m option to get a mixed stack.
From the mixed stack output, it is usually possible to identify why the thread is blocked. For
example, if a thread is blocked trying to enter a synchronized method or block, then you will
see frames like ObjectMonitor::enter near the top of the stack, as shown in Example 10-12.
Other tools for troubleshooting hung processes
Thread Analyzer
The Thread Analyzer tool is described in 9.4.6, “Tools for performance tuning” on
page 364. It is also useful for analyzing thread dump data. With the Thread Analyzer you
can import multiple thread dumps taken at subsequent time intervals and perform a
comparative analysis of the threads over time.
Solaris pstack utility
Another tool to mention in the context of hung processes is the pstack utility on Solaris.
On Solaris 10 with JDK 5.0, the output of pstack is similar to the output from jstack -m. As
with jstack, the Solaris 10 implementation of pstack prints the fully qualified class name,
method name, and bci. It will also print line numbers for the cases where the source was
compiled with line number information (the default).
10.4.2 Crashes
When a crash, or fatal error, occurs, the JVM process will terminate. If an application server
crashes on a WebSphere node, the node agent will attempt to restart the server. If the server
can be restarted successfully, it might not be immediately apparent that the server crashed.
However, if you notice performance degradation on a server and you examine the server’s
SystemOut.log, you might notice that the server process is terminating and is being restarted
periodically.
There are many possible reasons for a crash. A crash can arise due to a bug in any of the following:
The HotSpot VM
A system library
A J2SE library/API
Application native code
The operating system
External factors may also be involved. For example, a crash can arise due to resource
exhaustion in the operating system.
Crashes caused by bugs in the HotSpot VM or J2SE library code should be rare. In the event that you do encounter a crash, this section provides suggestions on how to examine it. In some cases, it may be possible to work around a crash until the cause of the bug is diagnosed and fixed.
The first step with any crash is to locate the fatal error log. The fatal error log is a file named
hs_err_pid<pid>.log (where <pid> is the process ID of the process). Normally the file is
created in the working directory of the process. For WebSphere Application Server, the
location is <was_home>/profiles/<profile_name>, for example,
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01.
However due to constraints on disk space, directory permissions, or other reasons, the file
may instead be created in the temporary directory of the operating system. On Solaris, the
temporary directory is /tmp.
In this section, we describe the format of the fatal error log, describe how the error log can be
analyzed, and in a few cases, provide suggestions on how the issue may be worked around.
Important: In some cases, only a subset of this information is written to the error log. This can happen when a fatal error is of such severity that the error handler is unable to recover and report all details.
The fatal error log in JDK 5.0 consists of the following sections:
Header
Thread
Process
System
Note: The fatal error log used in the following examples is from the crash of an application
server Java process.
Header
The header gives a brief description about the problem that caused the crash. Example 10-13
shows a header from a fatal error log.
The thread ID (TID)
The type of JVM (Client or Server). It is Server in this example.
Function frame that caused the crash: C [libc.so.1+0xc65c8].
The “C” in the function frame line signifies a native C frame. Table 10-3 shows other possible
frame types.
C: Native C frame
V: VM frames
Thread
Information about the thread that crashed is shown in this section. Example 10-14 shows the
thread section from a fatal error log.
Registers:
O0=0x0000005b O1=0x00000000 O2=0xffbfbf58 O3=0x00000000
O4=0x009a41bf O5=0x00000000 O6=0xffbfbe98 O7=0xff2b965c
G1=0x000000b7 G2=0xffbfc250 G3=0x00000002 G4=0xff02d02c
G5=0x00008374 G6=0x00000000 G7=0xff3a2000 Y=0x083126e9
PC=0xff2c65c8 nPC=0xff2c65cc
Instructions: (pc=0xff2c65c8)
0xff2c65b8: 81 c3 e0 08 01 00 00 00 82 10 20 b7 91 d0 20 08
0xff2c65c8: 0a bd 73 be 01 00 00 00 81 c3 e0 08 01 00 00 00
The Thread section in Example 10-14 on page 439 tells you the following information:
Thread type: Java Thread
State: _thread_blocked
_thread_new: The thread has been created, but it has not yet been started.
..._trans: If any of the above states is followed by "_trans", it means the thread is changing to a different state.
For more details on threads, refer to Java 2 Platform, Standard Edition 5.0 Troubleshooting
and Diagnostic Guide, found at:
http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf
Process
The process section contains information about the whole process, including the thread list
and memory usage of the process. The thread list includes threads known to the VM. Included
are all Java threads and some VM internal threads, but any native threads created by the user
application that have not attached to the VM are not included. Example 10-15 shows an
example of the process section.
0x0057b078 JavaThread "Deferrable Alarm : 3" daemon [_thread_blocked, id=131]
0x010cd6e0 JavaThread "HAManager.thread.pool : 1" daemon [_thread_blocked,
id=130]
0x010cd518 JavaThread "HAManager.thread.pool : 0" daemon [_thread_blocked,
id=129]
0x02945af8 JavaThread "TCPChannel.DCS : 2" daemon [_thread_in_native, id=128]
<...this section truncated...>
Other Threads:
0x00272728 VMThread [id=34]
0x0053dac0 WatcherThread [id=44]
The Process section in Example 10-15 on page 440 tells you the following:
Current thread (=>)
Thread type: JavaThread, for example
Names: "WebContainer : 2", "HAManager.thread.pool : 1", and so on
State: _thread_blocked, _thread_in_native, and so on
For more details on process, refer to Java 2 Platform, Standard Edition 5.0 Troubleshooting
and Diagnostic Guide, found at:
http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf
System
The final section in the fatal error log contains system information. The type of output you see
depends on the operating system. In general, you will see the operating system version, CPU
information, and a summary of the memory configuration. Example 10-16 is from Solaris 10.
10.4.3 Troubleshooting memory leaks
Application developers must often deal with applications that terminate with a java.lang.OutOfMemoryError exception. This exception is thrown when there is not enough
space to allocate an object on the heap. When an allocation failure occurs, a garbage
collection will be performed in order to make room for the object. The OutOfMemoryError
exception is not thrown unless the following are true:
The garbage collection cannot make any additional space available to allocate the new
object.
The heap cannot be further expanded.
An OutOfMemoryError does not necessarily mean the application is leaking memory. The
problem might be that the specified heap size is not sufficient for the application.
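To illustrate the difference, a genuine Java heap leak typically looks like the following sketch: objects keep being added to a collection that remains reachable (here through a static field), so the garbage collector can never reclaim them and the heap eventually fills up. This is an illustrative example only, not code from any of the samples in this book.

import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // The static field keeps the list (and everything in it) reachable for the
    // life of the class loader, so none of the added objects can be collected.
    private static final List<byte[]> cache = new ArrayList<byte[]>();

    public static void main(String[] args) {
        while (true) {
            // Each iteration retains another 1 MB; eventually the heap is
            // exhausted and java.lang.OutOfMemoryError: Java heap space is thrown.
            cache.add(new byte[1024 * 1024]);
        }
    }
}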
Another memory related problem can occur when applications terminate because there is no
available virtual memory on the operating system. In this section, we will discuss
OutOfMemoryError conditions and how to troubleshoot memory leaks in applications.
Diagnosing an OutOfMemoryError
The first step in diagnosing an OutOfMemoryError is to determine what the error means. Is the Java heap full, or is the native heap full? Here are some error messages that you might see when the OutOfMemoryError is thrown. Example 10-17 shows the kinds of OutOfMemoryError messages that may be found in the native_stderr.log file.
If you have a heap dump that was generated at the time that the OutOfMemoryError
happened, you can use HAT to process the dump and create a snapshot of the heap, which
you can then examine in any browser.
To make effective use of the information that HAT provides, you need to have some
knowledge of the application and, additionally, some knowledge about the libraries/APIs that
the application uses. Generally the HAT information can help answer two important questions:
What is keeping an object alive?
Where was this object allocated?
Analyze the HAT information to determine why the object is being kept alive
For any suspected object instances, check the objects listed in the section named “References
to this object” to see which objects directly reference these objects.
You can also use a “roots query” to provide you with the reference chains from the root set to
the given object. Reference chains show a path from a root object to this object. These chains
can quickly show how an object is reachable from the root set.
Two roots queries are available: One excludes weak references (roots), and the other includes
them (allRoots). A weak reference is a reference object that does not prevent its referent from
being made finalizable, finalized, and then reclaimed. If an object is only referred to by a weak
reference, it usually is not considered to be retained, because the garbage collector can
collect it as soon as it needs the space.
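The following small sketch, which is only illustrative, shows this behavior: once the last strong reference is cleared and a garbage collection runs, the weak reference no longer keeps its referent alive.

import java.lang.ref.WeakReference;

public class WeakReferenceSketch {
    public static void main(String[] args) {
        Object referent = new Object();
        WeakReference<Object> weak = new WeakReference<Object>(referent);

        System.out.println("Before GC: " + weak.get()); // still strongly reachable

        // Drop the only strong reference and suggest a collection.
        referent = null;
        System.gc();

        // The weak reference alone does not keep the object alive, so
        // get() typically returns null at this point (System.gc() is only a hint).
        System.out.println("After GC: " + weak.get());
    }
}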
HAT sorts the rootset reference chains by the type of the root, in this order:
Static data members of Java classes.
Java local variables. For these roots, their thread is also shown. Since a thread is a Java
object, the link is active (clickable). Therefore, you can navigate to the name of the thread.
Native static values.
Native local variables. These roots are also identified with their thread.
Analyze the HAT information to determine where the object was allocated
For object instances, the section entitled “Objects allocated from” shows the allocation site as
a stack trace so you can determine where the object was created.
If the leak cannot be identified using a single object dump, then another approach is to collect
a series of dumps and to focus on the objects created in the interval between each dump.
HAT provides this capability using the -baseline option, where you can provide two dumps for
HAT to process.
If you start HAT with two heap dumps, the “Show instance counts for all classes” query
includes another column that shows the count of new objects for that type. An instance is
considered new if it is in the second heap dump, and there is no object of the same type with
the same ID in the baseline. If you click a new count, then HAT lists the new objects of that
type. For each instance, you can view where it was allocated, which objects these new
objects reference, and which other objects reference the new object.
The baseline option can be very useful if the objects that need to be analyzed are created
during the time between the successive dumps.
Appendix A
More details about what is new in this version can be found at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/topic/com.ibm.websphere.base.doc/info/aes/ae/welc_newinrelease.html
All of these can be combined with Solaris 10 to take advantage of Solaris 10 capabilities. This version supports the J2EE 1.4 specification and works with JDK 1.4.2, which poses a different set of challenges for WebSphere Application Server administrators.
One important change to note is that the messaging support no longer has to be dealt with as a separate WebSphere MQ component; it is integrated into WebSphere Application Server itself, so the /etc/system parameters that the previous version required no longer need to be set. This simplifies administration considerably.
Also, note that this is the first release in which all platforms have been made available at the
same time with the same feature-functionality. Common code and delivery enhance the
mobility of applications. You can now deploy your applications on the platforms best suited to
your needs without delay or trade-off. Starting with this release, WebSphere Application Server became available for Solaris 10 on x86-based systems. However, on those systems it was only available as 64-bit, which gives you the additional capability to exploit a larger heap space and a choice of Solaris 10 hardware platforms.
Note: Some of the features we describe in 9.4, “Java Virtual Machine (JVM) Performance
Management” on page 346 are specific to JDK 1.5; therefore, the following
recommendations apply to WebSphere Application Server V6.1 only:
JVM Ergonomics (Self tuning)
JVM Arguments, such as:
– -XX:+ParallelOldGC
– -XX:+AggressiveOpts
You can use the following JVM tuning guidelines for WebSphere Application Server V6.0.2. If you want tuning equivalent to the JDK 1.5 ergonomics feature, you need to apply the appropriate JVM arguments explicitly.
Some of the JVM options, which a WebSphere Application Server admin can tweak to get
better performance, are:
-server (needs to be specified explicitly due to an absence of JVM Ergonomics)
-Xmn<size-of-heap-for-young-generation>m
-XX:-ScavengeBeforeFullGC
-XX:MaxTenuringThreshold
-XX:+UseParallelGC
-XX:ParallelGCThreads (number of CPU or processing cores on multi core systems)
-XX:+AggressiveHeap
-XX:PermSize
Now you do not have to configure two different response files for installation and profile
creation. Refer to Table A-1 for more information about the options that replace V6.0.x profile
creation response files and options in those response files.
V6.0 option: -W ndsummarypanelInstallWizardBean.launchPCT (responsefile.nd.txt)
V6.1 option: -OPT createProfile

V6.0 option: -W pctresponsefilelocationqueryactionInstallWizardBean.fileLocation (responsefile.nd.txt)
V6.1 option: N/A

V6.0 option: -W profilenamepanelInstallWizardBean.profileName (responsefile.pct.NDdmgrProfile.txt)
V6.1 option: -OPT PROF_dmgrProfileName

V6.0 option: -W nodehostandcellnamepanelInstallWizardBean.nodeName (responsefile.pct.NDdmgrProfile.txt)
V6.1 option: -OPT PROF_nodeName

V6.0 option: -W nodehostandcellnamepanelInstallWizardBean.hostName (responsefile.pct.NDdmgrProfile.txt)
V6.1 option: -OPT PROF_hostName

V6.0 option: -W nodehostandcellnamepanelInstallWizardBean.cellName (responsefile.pct.NDdmgrProfile.txt)
V6.1 option: -OPT PROF_cellName

V6.0 options: the port settings in responsefile.pct.NDdmgrProfile.txt:
-W pctdmgrprofileportspanelInstallWizardBean.WC_adminhost
-W pctdmgrprofileportspanelInstallWizardBean.WC_adminhost_secure
-W pctdmgrprofileportspanelInstallWizardBean.BOOTSTRAP_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.SOAP_CONNECTOR_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.SAS_SSL_SERVERAUTH_LISTENER_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.ORB_LISTENER_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.CELL_DISCOVERY_ADDRESS
-W pctdmgrprofileportspanelInstallWizardBean.DCS_UNICAST_ADDRESS
V6.1 options: -OPT PROF_defaultPorts, -OPT PROF_startingPort, and -OPT PROF_nodeStartingPort (the last option is used if you are creating a deployment manager profile and one federated cell on the same machine)

V6.0 option: -W setnondmgrcellnameinglobalconstantsInstallWizardBean.value (responsefile.pct.NDstandAloneProfile.txt)
V6.1 option: -OPT PROF_cellName

V6.0 option: -W pctfederationpanelInstallWizardBean.federateLater (responsefile.pct.NDmanagedProfile.txt)
V6.1 option: -OPT PROF_federateLater

V6.0 option: -W pctfederationpanelInstallWizardBean.hostname (responsefile.pct.NDmanagedProfile.txt)
V6.1 option: -OPT PROF_dmgrHost

V6.0 option: -W profiletypepanelInstallWizardBean.selection (responsefile.pct.NDmanagedProfile.txt)
V6.1 option: N/A
Refer to the InfoCenter for the correct and appropriate values for these options at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp
Security considerations
This section outlines some of the major changes from WebSphere Application Server V6.0 to
WebSphere Application Server V6.1.
The following security-related items are new for WebSphere Application Server V6.1:
Administrative security can be enabled out of the box. Access to the administrative system
and its data is now protected by default.
Simplified security configuration and administration. The administrative console security
panels are simplified, and new wizards and a configuration reporting tool are provided.
Automatically generated server IDs. You no longer need to specify a server user ID and
password during security configuration, unless using a mixed cell environment.
Federate various repositories, so you can manage them as one. The inclusion of virtual member manager in this release provides a single model for managing organizational entities. You can configure a realm that consists of identities in the file-based repository that is built into the system, in one or more external repositories, or in both the built-in file-based repository and one or more external repositories.
WebSphere key and certificate management has been simplified.
Interoperability with other vendors of WS-Security. The product now supports the WS-I
Basic Security Profile 1.0, which promotes interoperability by addressing the most
common problems encountered from implementation experience to date.
Separate Web authentication and authorization. Now, Web authentication can be
performed with or without Web authorization. A Web client’s authenticated identity is
available whether or not Web authorization is required.
Enhanced control over Web authentication behavior.
Appendix B
In addition, it also gives the customer the capability of running different containers with
different Solaris 8 updates or different sets of patches. This may be essential, as you may
have one vendor’s application that is only supported on one specific Solaris 8 and its patch
level while another vendor’s application requires Solaris 8 with a different kernel patch level.
The Sun Solaris 8 Migration Assistant provides an easier transition to Solaris 10.
This is part of the virtualization technology offering from Sun to aid customers who have applications that have not been migrated to Solaris 10 and customers who are willing to migrate to Solaris 10 in the near future. Previously, the Solaris 10 containers only supported applications
that were already running on Solaris 10. As a result, the containers could not be used for
applications that still needed a Solaris 8 environment. To assist the customer with these types
of situations, Sun developed a technology called BrandZ. BrandZ architecture was released
by Sun in October 2007. Complete details on BrandZ can be found at:
http://www.opensolaris.org/os/community/brandz/
BrandZ is a framework that extends the Solaris Containers infrastructure to create Branded
Containers, which are containers that contain non-native operating environments like Solaris
8 in a Solaris 10 environment. Solaris 8 is the non-native operating environment and Solaris
10 is the native operating environment. The Solaris 8 Migration assistant utilizes the BrandZ
technology in Solaris to allow Solaris 8 applications to run in a Solaris 10 container. While
many Solaris 8 applications have been already migrated to Solaris 10, there are still many
applications running old Solaris 8 applications and in the process of migrating to Solaris 10.
Some of these application may be nearing their end of life (EOL) very soon due to vendor
support or perhaps a customer wants to move to a newer version of software or hardware that
provides new features, cost effectiveness, or better scalability and reliability.
Note: Solaris 8 end-of-support milestones are still in effect: Vintage I support ends March
31, 2009; end of service life is March 31, 2012. See http://sun.com/solaris/8 for details.
The Migration Assistant software can capture the entire environment of the Solaris 8 source
system that will be migrated and then transfer this environment to a Solaris Container running
on the target Solaris 10 system. However, note that this is provided for migration purposes and should not be relied on as a long-term solution. Sun also offers additional
services to customers going through the migration cycle:
http://sun.com/service/serviceplans/software/overview.xml
The services part of the Migration Assistant helps customers from the initial feasibility study to
the actual implementation, and ultimately, support for the technology.
How it works
The Solaris 8 Migration Assistant consists of two software components:
Physical to virtual (P2V) conversion tool
Solaris 8 Container.
The P2V tool helps to move the application and its environment from the Solaris 8 system to
the Solaris 10 system and converts it so it can run on this new platform.
The second component, the Solaris 8 Container, is the runtime environment for the migrated
Solaris 8 environment.
When the existing Solaris 8 environment is transferred to the destination system and placed
in a Solaris 8 Container, the installer adds a few patches to the Solaris 8 image to make it
work correctly on the new system.
This installation does not install a new version of Solaris 8. It creates a Solaris Container that
will hold the Solaris 8 bits that were copied from the source system and puts those bits into
the Container ready to run on the new system.
The Solaris 8 Container is a new type of Solaris Container that is based on the previously
mentioned BrandZ architecture.
The operating system requirements are the same as for a Solaris 8 operating environment. For example, System V IPC parameters can be specified the same way as they used to be for Solaris 8. In a nutshell, once the Solaris 8 container is created and deployed, WebSphere Application Server V5.1 can be deployed as though it is on a stand-alone Solaris 8 system.
You can keep this Solaris 8 Container to host the WebSphere Application Server V5.1
application environment until the application environment is transitioned to a newer
environment, such as WebSphere Application Server V6.1. When the system transition is
completed, you can take the Solaris 8 container out of your production environment.
Similarly, the Solaris 8 Migration Assistant can help deploy other WebSphere products. We
strongly recommend that you perform tests and analysis of your application environments for
each product to ensure that there are no technical issues.
Select the Additional materials and open the directory that corresponds with the IBM
Redbooks form number, SG247584.
Use the following procedure to compile and execute the sample Java code:
1. Implement all the methods in the interface except for the createCredential method, which
is implemented by WebSphere Application Server. The FileRegistrySample.java file is
provided for reference.
Attention: The sample provided is intended to familiarize you with this feature. Do not
use this sample in an actual production environment.
3. Copy the class files that are generated in the previous step to the product class path.
The preferred location is the %was_install_root%/lib/ext directory. Copy these class files to
all of the product process class paths.
4. Follow the steps in 8.2.1, “Custom registry interface” on page 274 to configure your
implementation using the administrative console. This step is required to implement
custom user registries.
What to do next
If you enable security, make sure that you complete the remaining steps:
1. Save and synchronize the configuration and restart all of the servers.
2. Try accessing some J2EE resources to verify that the custom registry implementation is
correct.
The source is included in the ear file so that you can examine it to see how the hang condition
and out of memory problems are built into the code.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on page 462.
Note that some of the documents referenced here may be available in softcopy only.
Optimizing Operations with WebSphere Extended Deployment V6.1, SG24-7422
WebSphere Application Server Network Deployment V6: High Availability Solutions,
SG24-6688
WebSphere Application Server V6 Problem Determination for Distributed Platforms,
SG24-6798
WebSphere Application Server V6 Scalability and Performance Handbook, SG24-6392
WebSphere Application Server V6 System Management & Configuration Handbook,
SG24-6451
WebSphere Application Server V6.1: System Management and Configuration, SG24-7304
Sun Blueprints
These Sun publications are also relevant as further information sources:
Solaris Containers--What They Are and How to Use Them, found at:
http://sun.com/blueprints/0505/819-2679.pdf
Solaris Containers Technology Architecture Guide, found at:
http://sun.com/blueprints/0506/819-6186.pdf
Service Management Facility (SMF) in the Solaris 10 OS, found at:
http://sun.com/blueprints/0206/819-5150.pdf
Beginners Guide to LDoms: Understanding and Deploying Logical Domains for Logical
Domains 1.0 Release, found at:
http://sun.com/blueprints/0207/820-0832.html
Limiting Service Privileges in the Solaris 10 Operating System, found at:
http://sun.com/blueprints/0505/819-2680.pdf
Developing and Tuning Applications on UltraSPARC T1 Chip Multithreading Systems,
found at:
http://sun.com/blueprints/0107/819-5144.html
Index
Building a system using profiles 59
Numerics Built-in certificate management 285
64-bit considerations 389
C
A Cache 423
A sample topology with XD ODR 262 Caching proxy 5, 8
Action 334 Callable statement 423
Additional dtrace resources 344 Capacity on Demand (COD) 116
Additional performance tuning tools 375 Capacity planning 391
addNode 85 Causes of problems 396
Adjusting the WebSphere Application Server system Cell 57
queues 377 Cell name 62, 79, 89
Administrative console Cell profile 57
Deploying during profile creation 62, 69 Central administration 57
Administrative console port 66, 73 Central management 55
Administrative console secure port 66, 73 Certificate expiration monitoring 284
Administrative security 32, 62, 65, 70, 72, 77, 79, 266, Certificate list (CRL) 291
451 Certificate management using iKeyman 296
AdminTask configuration management 286 Certificate request 285
Advanced investigation gets information 297
Phase 2 problem determination techniques 407 Challenges 314
Apache DayTrader benchmark 391 Characterize the problem and identify problem symptoms
Application binary interface (ABI) 5 402
Application client 8 Check hardware configuration and settings 380
Application deployment problem determination 428 Check the status of sockets (ports) 25
Application programming interface (API) 5, 283 Check your results 68, 76
Application server 1, 32, 137, 266, 283 Checking the prerequisites 398
Endpoint level 283 Checking your results 84
Multiple instances 153 Choosing a garbage collector for young and old (tenured)
Requested information 8 generations 358
Security policies 39 Choosing the right virtualization technology for your solu-
SSL protocol 302 tion 120
Application server node 251 Class data sharing 349
Application server profile 56–57, 59–60, 69, 88, 90, 92 Class loader problem determination 429
Application server toolkit 7–8 Cloudscape 10
Applying resource management 144 Cluster 243
Applying WebSphere maintenance 397 cluster
Authentication mechanism 267 horizontal scaling 251
Automatic selection of tuning parameters 348 vertical scaling 250
Automatic tuning of the throughput collector based on de- Cluster member 244
sired behavior 348 Security 247
Availability 236, 238 Collecting diagnostic data 405
Failover 238 Common installation problems 418
Hardware-based high availability 238 Common WebSphere Application Server problems 418
Maintainability 239 Concurrent mark-sweep (cms) collector 356
Configuration considerations 129
B Configuration of the KSSL module on Solaris 303
Background 236 Configuration scenarios for WebSphere Application Serv-
Backing up a profile 229 er in zones 135
Backup and recovery 229 Configuring network interfaces 25
backupConfig command 229 Configuring single sign-on using the trust association in-
Base DN 300 terceptor 308
Before you begin uninstalling the product 108 Configuring single sign-on using trust association inter-
Benchmarking 392 ceptor ++ 309
464 IBM WebSphere Application Server V6.1 on the Solaris 10 Operating System
StaleConnectionException 427 Header 438
Exec_method name 152 Heap 389
Exporting and importing profiles 232 Heap and generation sizes 362
Heap configuration and usage 369
Heap histogram 370
F Heap sizing recommendations 357
Failover 57, 238, 242, 245 High availability software 255
Fair-Share Scheduler (FSS) 142 horizontal scaling 246, 251–252
Fault isolation 240 Hotspot generations 353
Features of the installation factory 94 How to be prepared before problems occur 397
Federate 56–57, 60, 79, 82, 85 How to organize the investigation after a problem occurs
Application server 88 402
Node 85 HPROF
Federated repository 4 Heap profiler 412
Federating a custom node to a cell 85 HTTP session
Federating an application server profile to a cell 88 Timeout 241
File system 129, 162 Hypertext transfer protocol (HTTP) 282
File-based registry 4
Final metric – jAppServer Operations/sec (JOPS) 390
Finding a deadlock 432 I
Firewall 240 IBM DB2 10
First steps 62, 68–69, 76–78, 84 IBM HTTP server 5, 8
Footprint goal 349 IBM Rational Application Developer V7 376
Format of the fatal error log 438 IBM Software licensing 133
IBM software licensing (pvu) 133
IBM Support Assistant portable collector tool 408
G IBM Support for Sun virtualization technologies 121
Garbage collector designs 351 IBM Trade Performance Benchmark Sample for Web-
Garbage collector selection 361 Sphere Application Server 391
Garbage collector statistics 362 IBM WebSphere
General steps for problem determination 397 Application environment 1
Generate JVM GC output 364 Application server 265
Generate thread dumps 367 Application server environment 116
Generational garbage collection 352 deployment 167
getConnection() 426 Iillegal transaction usage 428
Getting information on the permanent generation 371 Implementation considerations 264
Global Zone 12, 121, 130, 135, 141, 306 Implementing system level redundancy with Solaris clus-
Base installation 131 ter 256
Centralized WebSphere Application Server binaries Implementing workarounds for crashes 442
130 IMS 10
Configure WebSphere Application Server 135 Incorrect data 423
WebSphere Application Server installation 138 Independent WebSphere Application Server binary instal-
Global Zone preparations and actions 137 lation 225
Goal 315 Independent WebSphere Application Server binary instal-
Goal priorities 349 lation in zones 130
Graphical user interface installation 32, 52 Informix 10
Initial characterization of the problem 405
H Initial investigation
HAManager 239 Phase 1 problem determination techniques 405
Hanging and looping processes 429 Install the most current refresh pack, Fix Pack, and rec-
Hardware cryptographic keystores 291 ommended interim fixes 380
Hardware-based high availability 238 Installation and runtime user of WebSphere Application
HAT Server 25
Heap Analysis Tool 413 Installation factory 93
HAT - analyze information to determine where the object Installation factory overview 94
was allocated 444 Installation files and diagnostic data 419
HAT - analyze information to determine why the object is Installation logs 106
being kept alive 444 Installation problem determination 45, 418
HAT queries 414 Installation with non-root privileges 26
HAT tool - using 415 Installation with the root privileges 26
Installation wizard 33
Index 465
Check box 42 JMS problem determination 429
Procedure results 43 JSR 116 7
Welcome panel 34 JSR 116 specification 4
Installing customized installation package 213 JSR 168 7
Installing Fix Pack to the Global Zone 182 jstack command 372
Installing stand-alone WebSphere Application Server 32 JVM ergonomics (self tuning) 347
Installing update installer to the Global Zone 176 JVM performance metrics 352
Installing WebSphere Application Server V6.1 Network JVM performance tuning 350
Deployment 52 JVM problem determination
Integrating third-party HTTP reverse proxy servers 306 Hangs, crashes, out of memory exceptions 429
Interceptor class name 307 Jvmpi 387
Interpreting the thread dump output 431 jvmstat tools 375
Introduction 314, 420
Introduction to D-scripts 332
Introduction to garbage collector concepts 350 K
IP address 116 Key options related to garbage collection 361
network interface e1000g2 138 Key store 290
IP sprayer implementation - an example 253 Keystore configurations 291
IP sprayer see load balancer KSSL setup 305
J L
J2EE Connector Architecture (JCA) 7 LDAP registry 4
JACC authorization provider 4 LDAP server 11
Java 2 Platform, Enterprise Edition (J2EE) 3 Lightweight Directory Access Protocol (LDAP) 266
Java 2 security for deployers and administrators 272 Limit parameter 277
Java 2 Standard Edition (J2SE) 3 Load 385
Java 5 7 Load balancer 5, 8, 252
Java Authentication and Authorization Service (JAAS) Load balancer node 252
272 Load balancing 238, 242, 245, 252
Java Authorization Contract for Containers (JACC) 4–5 Logical domains 117
Java Development Kit (JDK) 12 Logs
Java heap analysis tool (HAT) 376 Profile creation 67, 75, 83
Java Management Extensions (JMX) 271 Startserver.log 68, 76
Java Messaging Service (JMS) 268 Systemout.log 68, 76
Java process 142, 273 Lotus Domino Enterprise Server 11
Java Runtime Environment (JRE) 13
Java runtime environment (jre) 33 M
Java Runtime Extension (JRE) 282 Maintainability 236, 239
Java SE 5.0 Sun HotSpot JVM garbage collectors 353 Configuration
Java Secure Sockets Extension (JSSE) 282 Dynamic 239
Java technology for WebSphere on Solaris 12 Mixed 239
Java Virtual Machine 384 Fault isolation 240
Java run 5 Maintenance of Solaris 10 patches and zones 174
SvrSslCfg run 312 Maintenance of WebSphere Application Server V6.1 in
Java Virtual Machine (JVM) 5, 272, 448 zones 175
Java Virtual Machine (JVM) Performance Management Maintenance planning 172
346 Manageprofiles 90, 92–93
Java.naming.context 425 Managing profiles 90
Javax.resource.cci.connectionfactory 426 Maximum pause time goal 349
Javax.sql.datasource 426 Memory 262, 388
Jca connection Leak 385
Initial symptoms 422 Memory-to-memory replication 249
JDBC 379 Message
JDBC program DSRA8025I 425
Standalone 422 DSRA8030I 425
JDK 1.4.2 and 5.0 337 DSRA8040I 425
JDK 6.0 and the future 344 DSRA8041I 425
JDK versions for WebSphere 347 DSRA8042I 424
JHAT Message prefix
Java Heap Analysis Tool 416
466 IBM WebSphere Application Server V6.1 on the Solaris 10 Operating System
Data source resource adapter 426 Part 1
JCA connector 427 Configure the Edge system 254
Transaction 426–427 Part 2
Method 1 59 Configure the back-end servers 254
Method 2 60 Peak
Method return 276 Load 385
Method_credential user 155 Performance 237
Microsoft SQL server 10 Monitoring - tuning - testing 387
Migrating a zone from one system to another 132 Testing 386
Mixed configuration 239 Tuning
Monitoring - tuning - testing cycle 387 Top ten monitoring list 382
Moving between two different types of hosts 227 Performance advisors 382, 387
Moving between two similar types of hosts 225 Performance analysis 385
Load test
Measure steady-state 388
N Ramp-down time 388
Netscape Security Services (NSS) 303 Ramp-up time 388
Network configuration for WebSphere Application Server Production level workload 386
communication 24 Repeatable tests 386
Network deployment Saturation point 386
Package 4 Stress test tool 386
V6.1 8 Terminology 385
Network utilization 388 Load 385
Niagara Cryptographic Provider (NCP) 302 Peak load 385
No available thread dump 435 Requests/second 385
Node agent 69, 84–85, 89 Response time 385
Starting 85 Throughput 385
Stopping 84 Performance benchmarks 390
Node group 69 Performance characteristics 390
Node name 62, 79 Performance impact of WebSphere Application Server
Non-Global Zone preparations 138 security 240
Non-relational resource Performance management 314
Fail to connect 427 Performance monitoring guidelines 382
Not finding a deadlock 435 Performance related security settings 241
Novell eDirectory 11 Performance testing 386
Performance tuning
O Top-ten monitoring list
Object pools 385 Average response Time 383
Object Request Broker (ORB) 13, 448 EJB container thread pool 384
Observing Java programs 337 Garbage collection statistics 384
Observing running processes 334 JVM memory 384
ODC 263 Live number of HTTP Sessions 383
Offload SSL processing to SJSWS 260 Number of requests per second 383
On demand computing 249 Web container tread pool 384
On demand router 262 Web server threads 383
Operating system (OS) 1, 32, 163, 303 Performance tuning check list 379
Options for the CMS collector 363 Persistence
Options for the parallel and parallel compacting collectors Database 249
363 Persistence manager (CMP) 379
Oracle 10 Personal certificate 284
Other tools for troubleshooting hung processes 437 Gets information 297
Other tools for troubleshooting OutOfMemoryError issues Signer part 297
445 Physical to virtual (P2V) 455
Plug-in
See ORB
P Plug-in
Packaging summary 8 See Web server plug-in
Paging activity 388 Workload management 243
Parallel collector 355 PMI 382
Parallel compacting collector 356 Pmi 382
Index 467
Service 387 Finer granularity 142
policytool 273 Scope 144
Portlet applications 7 Resource control for Solaris Zones 148
Ports 62, 66, 70, 72, 79, 87 Resource control mechanisms 144
Precompiled SQL statement 423 Resource leaks 389
Predicate 333 Resource management 121, 142
Prepared statement 423 Resource pool 118, 122
Preparing Solaris systems for WebSphere Application Processor sets 147
Server installation 16 Resource pools 147
Probe description 333 Response time 237, 385
Problem determination 105 restoreConfig command 229
Problem determination methodology 396 Restoring a profile 231
Problem determination tools 407 Result object 277
Procedure for preparing the operating system 17 Boolean attribute 277
Procedures to configure the dispatcher 253 Boolean set 277
Process 440 Return On Investment (ROI) 133
Process Rights Management 157 Reverse proxy configuration 257
Processor set 142 Revert to safe conditions 404
Possible uses 145 Review hardware and software prerequisites for Web-
Product overview 3 Sphere Application Server 380
Profile Review your application design 380
Deleting 92 Rity 274
Profile archives 233 RMI/IIOP security 271
Profile creation logs 419 Rmi/iiop security 271
Profile creation wizard 60, 62–63, 69–70, 77, 80, 83, 90, Role-based access control (RBAC) 6
92 Rule out a database or JDBC driver problem 422
Profile management tool 61 Run the portable collector on the Solaris machine 409
Profile name 78–79 Running WebSphere Application Server as a non-root
Profile registry 90, 92–93 user 155
Profile_home 58 Runtime Performance Advisor 384
Profilecreator 61 Runtime performance advisor 384
Profileregistry.xml 90 Runtime user of WebSphere Application Server process-
Profiles 55 es 27
Programmatic selection 282
S
Q Sample applications 78
Quality of service 261 Sample file 274
Sample scenarios
135
R Sample scenarios using SJSWS 257
Rational Application Developer 7–8 Saturation point 386–387
Rational Web developer 8 Scalability 236, 246, 249, 264
Reasons for administrative security 266 Horizontal and vertical combined 247
Recommendations for tuning the JVM 356 horizontal scaling 246
Recommended update path 173 vertical scaling 246
Recovering from a failed installation 420 Scaling techniques 264
Redbooks Web site 462 Scaling-out 391
Contact us xiv Scan diagnostic data 406
Rehosting of an existing WebSphere Application Server Scenario 1
environment 195 WebSphere Application Server in Global Zone 135
Reliability, availability, serviceability (RAS) 1 Scenario 2
Reliability, availability, serviceablility (RAS) 120 WebSphere Application Server in a Whole Root Zone
Relief options and considerations 404 (independent installation) 135
Relocating WebSphere Application Server environment Scenario 3
using CIP 196 WebSphere Application Server in a Sparse Root Zone
Relocating WebSphere Application Server environment (independent installation) 136
with containers 223 Scenario 4
removeNode 92 Share the WebSphere Application Server installation
Removing a service from SMF 157 in zones from the Global Zone 137
Resource control 122–123 Scenario 5
468 IBM WebSphere Application Server V6.1 on the Solaris 10 Operating System
   IHS in a Non-Global Zone to front-end WebSphere Application Server 140
Scope 78
Scope of resource control 144
Scope selection 283
Scripts 95
Secure administration 269
   User account repository section 273
Secure application cluster members 247
Secure communications using Secure Sockets Layer 282
Secure Hash Algorithm (SHA) 284
Secure Sockets Layer (SSL) 266
Secure Sockets Layer configurations 286
Secure sockets layer node, application server, and cluster isolation 291
Securing your environment after installation 267
Securing your environment before installation 267
Security 236, 240
   Cluster member 247
   Ssl communication 240
Security authentication service (sas) 270, 272
Security cache timeout 241
Security problem determination 428
Selecting a different garbage collector 357
Selecting ssl configurations 282
Self-signed certificate 283
Serial collector 354
Server archives 232
Server cluster
   Workload management 245
Server template 69
Server weights 243
Server1 56
Server1 application server
   Runtime environment 43
Serverindex.xml 90
Servers
   Database 10
   Directory 11
   Web 10
Service integration bus 88, 379
Service level agreement (SLA) 145
Service Management Facility (SMF) 115, 150
Service manifest 151
Service policy 262
Service release (sr) 12
Service states 150
Service_fmri value 152
Service-oriented architecture (SOA) 448
Servlet clustering 243
Servlets and Enterprise JavaBeans 383
Session Initiation Protocol (SIP) 4
Session management 236
   Database persistence 249
   Memory-to-memory replication 249
   Persistent sessions 249
Session persistence 249
Session persistence considerations 249
Session state 240
Setting up dtrace probes in WebSphere Application Server V6.1 340
Shared WebSphere Application Server binary installation 227
Shared WebSphere Application Server binary installation for zones 130
Signer certificate 284, 292
   gets information 297
Silent installation 43, 52
Simple WebSphere Authentication Mechanism (SWAM) 308
Single point of failure see SPOF
Single sign-on (SSO) 308
SIP applications 7
Sizing 391
SMF core concepts 150
SMF daemons 151
SMF repository 151
SOAP connector port 66, 73, 79, 82, 88
soap.client.props 268
Socket states 388
Solaris 5, 11, 115
Solaris 10 1, 31, 118, 303, 447, 453
   New local zones 131
   WebSphere Application Server 447
Solaris 8 453
   Application 454
   Bit 455
   Container 455
   End-of-support milestone 454
   Environment 453
   Hardware support 453
   Image 455
   Migration assistant 118, 453
   New version 455
   Operating environment 455
   OS image 118
   Source system 454
   System 455
   Update 453
Solaris container 6, 115, 454
   Integral part 142
Solaris containers 118
Solaris kernel parameters 22
Solaris maintenance 173
Solaris Operating System 12
Solaris Operating System tools 416
Solaris OS 1, 120
   Light-weight layer 122
   WebSphere Application Server 13
Solaris OS installation requirements 16
Solaris overview 5
Solaris packaging 9
Solaris patches for Java 317
Solaris processor sets 145
Solaris supported platforms and WebSphere 11
Solaris system 43, 121, 144
Solaris system monitoring 319
Solaris ZFS 6, 162
Solaris Zone 6, 118, 148, 306
   Different types 129
   Many other possibilities 126
   WebSphere Application Server 129
SPARC system 1, 119
   Linux OS instances 119
Sparse Root Zone 129, 141
Special considerations 316
SPECjAppServer 2004 benchmark 390
SPOF 238, 255
SSL 240
   Accelerator hardware 241
   Handshaking 240
SSL configuration 270–271
   Alias 271
   Assignment 283
   Attribute 282
   Management panel 286
   Management scope 287
   Reference 292
   Repertoire alias 271
   Repertoire entry 287
   Selection 283
   Type 288
SSL configuration in the security.xml file 287
SSL configurations 282
SSL handshake
   Failure 285
   Protocol 289
StaleConnectionException 427
Stand-alone server 57
Stand-alone server environment 59
startManager 69
startNode 85
startServer 77
Startup and footprint 373
Statement cache size 423
Static versus dynamic 261
stopNode 84
stopServer 78
Stored procedures 423
Strategies for scalability and availability 249
Strategy for tuning the parallel collector 357
String userSecurityName 275
Sub-capacity licensing 134
Sun One Directory Server 11
Sun Solaris
   10 448
   8 453
   8 Migration Assistant 454
   environment 37
Sun Studio Performance Analyzer 345
Sun xVM 119
svc.startd 151
Sybase 10
Symptom
   A JDBC call returns incorrect data 422
   Failure to access a non-relational resource 427
   Failure to access a resource through JDBC 426
   Failure to connect to a new data source 423
   Failure to connect to an existing data source 425
Symptoms of common problems 396
Syntax 90
System 441
System documentation 399
System management problem determination 428
System performance management 317
System prerequisites for WebSphere Application Server V6.1 on Solaris 16
System Secure Sockets Layer (SSSL) 288

T
TCP tuning parameters 25
Template 86, 92
Terminology 385
TestConnection 424–425
Testing the environment 254
Thread pool sizes 378
Thread pools 383
Throughput 237, 385
Throughput goal 349
Tivoli Access Manager 5
Tivoli Access Manager servers for WebSphere Application Server 9
Tivoli Directory Server 4, 11
Tivoli Directory Server for WebSphere Application Server 9
Tivoli Performance Advisor 382
Tivoli Performance Viewer 382, 387
   Summary reports
      Connection pool summary 384
      Thread pool summary 384
Tools for performance tuning 364
Top ten monitoring hotlist 382
topologies
   horizontal scaling 251
   vertical scaling 250
Topology for dynamic scalability with WebSphere XD 261
Topology for horizontal scaling 251
Topology for vertical scaling 250
Topology selection criteria 248
Topology with IP sprayer front end 252
Topology with other Web and reverse proxy servers 256
Topology with redundancy of several components 255
TPV advisor 382
Transport layer
   Authentication 276
   Security 282, 289
Troubleshooting a hung process 432
Troubleshooting a looping process 430
Troubleshooting leaks in native code 446
Troubleshooting memory leaks 443
Troubleshooting memory leaks in Java code 444
Trust association interceptor collection 307
Trust association interceptor settings 307
Trust association settings 307
Trust store 292
Tunable parameters in Solaris 10 20
Tune related components, such as data bases, messaging providers, and so on 381
Tune the Java Virtual Machine settings 380
Tune the Solaris Operating System 380
Tune WebSphere Application Server JDBC resources and associated connection pools 380
Types of crashes identifiable in the fatal error log 442
Types of garbage collections 354
Types of profiles 56

U
uninstall command 108
Uninstallation of WebSphere Application Server 45
Uninstallation problems 50
Uninstallation procedure - Network Deployment 107
Uninstalling 107
Uninstalling Fix Pack from the Global Zone 190
Uninstalling Network Deployment 109
Uninstalling update installer from the Global Zone 180
Unique ID 278
Update strategy for WebSphere Application Server V6.1 170
Use a Type-4 (or pure Java) JDBC driver 380
Use ISA to create a portable collector for WebSphere Application Server 408
User ID 268
User name 276
User registry 4, 265, 267, 273, 281
   Different types 273
   Password combination 276
   User information 276
   Valid user 278
Using custom installation package for update installations 95
Using dtrace with WebSphere Application Server V6.1 340
Using Java 2 security 271
Using Sun Java System Directory Server as the LDAP server 298
Using the jmap command 368
Using the manageprofiles command 90
Using WebSphere Application Server with KSSL in Solaris Zone 306
Using WebSphere Application Server with Solaris kernel SSL proxy 302

V
Verification of KSSL setup for WebSphere Application Server 305
Verify and configure a trust association interceptor 306
Verifying profile installation 221
Version 6.1
   Cell 270
   Silent installation 449
vertical scaling 246, 250
V6.1

W
Was_home 58
Wasprofile 91–92
wasprofile 58, 90, 92
Web container HTTP transport 378
Web container problem determination 428
Web performance analysis 385
Web server
   Failover 251
   Single point of failure 251
   Workload management 242
Web server definition 79
Web server plug-in 5, 8
   Workload management 242–243
Web servers 10
Web Services problem determination 428
WebSphere
   Cluster 243
   Deployment manager 238
   EJS WLM 248
   Plug-in
      WLM 243
WebSphere Application Server xi, 458
   base package 7
   binary 130
   Cell 292, 299
   certificate management 296
   Class 272
   class path 274
   client 284
   Configuration 121, 129, 271, 284, 296
   Deployment 126, 129
   Environment 13, 291
   Family 7
   Information center 36, 41
   installation 121, 130
   installation medium 141
   Instance 136, 306
   Java process 146
   managed resource 5
   network deployment 8
   node 130
   package 7
   Packaging option 8
   performance 449
   Primary administrative task 268
   process 145, 270
   Product 7, 137
   profile 138
   Runtime 284
   security 272, 274
   Security component 281
   service 124
   SSL configuration 293
   system requirement 9
   template zone 131
   Topology 282
   V6.1 administrative security 266
   V6.1 installation 269
   was61.xml 152
   Workload 142, 144
   Zone 124
   zone migration 133
WebSphere Application Server
   load 291
WebSphere Application Server - Express 7
WebSphere Application Server - Express V6.0 7
WebSphere Application Server and Java versions 346
WebSphere Application Server clients
   Signer-exchange requirements 284
WebSphere Application Server deployment
   Environment 145
WebSphere Application Server maintenance 170
WebSphere Application Server Network Deployment 7
   Installation 32
   Link 34
WebSphere Application Server performance management 376
WebSphere Application Server performance tuning 378
WebSphere Application Server security components 266
WebSphere Application Server supported platforms and software 9
WebSphere Application Server V6
   Installation 36
   Migration guide 36
WebSphere Application Server V6.1
   was61.xml 155
WebSphere Application Server V6.1 JVM tuning parameters on Solaris 10 359
WebSphere component monitoring 381
WebSphere configuration
   Dynamic 239
   Mixed 239
WebSphere in Solaris containers 121
WebSphere Information Integrator 10
WebSphere performance issues 379
WebSphere product overview 3
WebSphereMQ 4
Whole Root Zone 130
Windows Active Directory 11
WLM see workload management
Workload 262
Workload management 57, 236–237, 241, 245
   Server cluster 245
   Web server 242
Workload management problem determination 429
Workload management using WebSphere clustering 243
Workload management with Web server plug-in 243
Workload management with Web servers and load balancers 242

Y
yourzone 135

Z
Z/OS security server 11
Z/OS.e security server 11
Zone cloning 131
Back cover

Learn detailed deployment and configuration strategies

Leverage Solaris 10 virtualization and other new features

Optimize WebSphere Application Server performance on Solaris 10

This IBM Redbooks publication tells the reader how to run WebSphere Application Server V6.1 on the Sun Solaris 10 Operating System. While WebSphere Application Server is platform-independent, each platform it runs on requires some additional platform-specific knowledge and configuration in order to ensure it operates efficiently and at maximum capability.

Our primary focus is to explain the installation, configuration, and best practices for deployment and performance tuning of WebSphere Application Server in the Solaris environment. We take you through all the steps required to run WebSphere Application Server on Solaris 10, including installing WebSphere Application Server and Solaris, tuning the operating system and application server, security, administrative tasks, problem determination, and advanced topologies.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.