Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more U.S. patents or pending patent applications in the U.S. and in other countries.
U.S. Government Rights – Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions
of the FAR and its supplements.
This distribution may include materials developed by third parties.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other
countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, the Solaris logo, the Java Coffee Cup logo, docs.sun.com, JavaServer Pages, JSP, JVM, JDBC, Java HotSpot, Java, and Solaris are
trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or
registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed
by Sun Microsystems, Inc. Netscape is a trademark or registered trademark of Netscape Communications Corporation in the United States and other countries.
The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts
of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to
the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license
agreements.
Products covered by and information contained in this publication are controlled by U.S. Export Control laws and may be subject to the export or import laws in
other countries. Nuclear, missile, chemical or biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export
or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially
designated nationals lists is strictly prohibited.
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY
IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO
THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Server Farms ......................................................................................................................................... 21
Using find-pathinfo-forward ............................................................................................................. 82
Using nostat ......................................................................................................................................... 82
Using Busy Functions ......................................................................................................................... 83
Web Server Start Options ...........................................................................................................99
5 Sizing and Scaling Your Server ........................................................................................................ 101
64-Bit Server ....................................................................................................................................... 101
Drive Space ......................................................................................................................................... 102
Study Goals ......................................................................................................................................... 103
Study Conclusion .............................................................................................................................. 104
E-Commerce Web Application Test ....................................................................................... 122
Index ................................................................................................................................................... 127
Tables
TABLE 1–1 Methods of Monitoring Performance ..................................................................... 22
TABLE 6–21 E-Commerce Web Application Scalability .......................................................... 125
Figures
FIGURE 2–1 Web Server Connection Handling .......................................................................... 40
Preface
This guide discusses adjustments you can make that may improve the performance of Sun Java
System Web Server (henceforth known as Web Server). The guide provides tuning, scaling, and
sizing tips and suggestions; possible solutions to common performance problems; and data
from scalability studies. It also addresses miscellaneous configuration and platform-specific
issues.
Who Should Use This Book
This guide is intended for advanced administrators only. Be sure to read this guide and other
relevant server documentation before making any changes. Be very careful when tuning your
server, and always back up your configuration files before making any changes.
Before You Read This Book
Web Server can be installed as a stand-alone product or as a component of Sun Java
Enterprise System (Java ES), a software infrastructure that supports enterprise applications
distributed across a network or Internet environment. If you are installing Web Server as a
component of Java ES, you should be familiar with the system documentation at
http://docs.sun.com/coll/1286.3.
Web Server Documentation Set
The Web Server documentation set describes how to install and administer the Web Server.
You can access the Web Server documentation at http://docs.sun.com/coll/1653.1. For an
introduction to Web Server™, refer to the books in the order in which they are listed in the
following table.
TABLE P–1 Books in the Web Server Documentation Set

Sun Java System Web Server 7.0 Update 1 Documentation Center
    Web Server documentation topics organized by tasks and subject

Sun Java System Web Server 7.0 Update 1 Release Notes
    ■ Late-breaking information about the software and documentation
    ■ Supported platforms and patch requirements for installing Web Server

Sun Java System Web Server 7.0 Update 1 Installation and Migration Guide
    Performing installation and migration tasks:
    ■ Installing Web Server and its various components
    ■ Migrating data from Sun ONE Web Server 6.0 or 6.1 to Sun Java System Web Server 7.0

Sun Java System Web Server 7.0 Update 1 Administrator’s Guide
    Performing the following administration tasks:
    ■ Using the Administration GUI and command-line interface
    ■ Configuring server preferences
    ■ Using server instances
    ■ Monitoring and logging server activity
    ■ Using certificates and public key cryptography to secure the server
    ■ Configuring access control to secure the server
    ■ Using Java Platform Enterprise Edition (Java EE) security features
    ■ Deploying applications
    ■ Managing virtual servers
    ■ Defining server workload and sizing the system to meet performance needs
    ■ Searching the contents and attributes of server documents, and creating a text search interface
    ■ Configuring the server for content compression
    ■ Configuring the server for web publishing and content authoring using WebDAV

Sun Java System Web Server 7.0 Update 1 Developer’s Guide
    Using programming technologies and APIs to do the following:
    ■ Extend and modify Sun Java System Web Server
    ■ Dynamically generate content in response to client requests and modify the content of the server

Sun Java System Web Server 7.0 Update 1 NSAPI Developer’s Guide
    Creating custom Netscape Server Application Programmer’s Interface (NSAPI) plug-ins

Sun Java System Web Server 7.0 Update 1 Developer’s Guide to Java Web Applications
    Implementing Java Servlets and JavaServer Pages™ (JSP™) technology in Sun Java System Web Server

Sun Java System Web Server 7.0 Update 1 Administrator’s Configuration File Reference
    Editing configuration files

Sun Java System Web Server 7.0 Update 1 Performance Tuning, Sizing, and Scaling Guide
    Tuning Sun Java System Web Server to optimize performance

Sun Java System Web Server 7.0 Update 1 Troubleshooting Guide
    Troubleshooting Web Server
Related Books
The URL for all documentation about Sun Java Enterprise System (Java ES) and its components
is
http://docs.sun.com/coll/1286.3.
Default Paths and File Names
The following table describes the default paths and file names that are used in this book.
TABLE P–2 Default Paths and File Names

install-dir
    Represents the base installation directory for Web Server.
    ■ Sun Java Enterprise System (Java ES) installations on the Solaris™ platform: /opt/SUNWwbsvr7
    ■ Java ES installations on the Linux and HP-UX platform:

instance-dir
    Directory that contains the instance-specific subdirectories.
    ■ For stand-alone installations, the default location for the instance on Solaris, Linux, and HP-UX: install-dir
    ■ For stand-alone installations, the default location for the instance on Windows: system-drive:\Program Files\sun\WebServer7
Typographic Conventions
The following table describes the typographic changes that are used in this book.
TABLE P–3 Typographic Conventions

AaBbCc123
    Meaning: The names of commands, files, and directories, and onscreen computer output
    Examples: Edit your .login file. Use ls -a to list all files. machine_name% you have mail.

AaBbCc123
    Meaning: What you type, contrasted with onscreen computer output
    Example: machine_name% su
             Password:

AaBbCc123
    Meaning: A placeholder to be replaced with a real name or value
    Example: The command to remove a file is rm filename.

AaBbCc123
    Meaning: Book titles, new terms, and terms to be emphasized (note that some emphasized items appear bold online)
    Examples: Read Chapter 6 in the User's Guide. A cache is a copy that is stored locally. Do not save the file.
Symbol Conventions
The following table explains symbols that might be used in this book.
TABLE P–4 Symbol Conventions

[ ]
    Description: Contains optional arguments and command options.
    Example: ls [-l]
    Meaning: The -l option is not required.

{ | }
    Description: Contains a set of choices for a required command option.
    Example: -d {y|n}
    Meaning: The -d option requires that you use either the y argument or the n argument.

${ }
    Description: Indicates a variable reference.
    Example: ${com.sun.javaRoot}
    Meaning: References the value of the com.sun.javaRoot variable.

-
    Description: Joins simultaneous multiple keystrokes.
    Example: Control-A
    Meaning: Press the Control key while you press the A key.

+
    Description: Joins consecutive multiple keystrokes.
    Example: Ctrl+A+N
    Meaning: Press the Control key, release it, and then press the subsequent keys.

→
    Description: Indicates menu item selection in a graphical user interface.
    Example: File → New → Templates
    Meaning: From the File menu, choose New. From the New submenu, choose Templates.
Documentation, Support, and Training
The Sun web site provides information about the following additional resources:
■ Documentation (http://www.sun.com/documentation/)
■ Support (http://www.sun.com/support/)
■ Training (http://www.sun.com/training/)
Searching Sun Product Documentation
Besides searching Sun product documentation from the docs.sun.com web site, you can use a
search engine by typing the following syntax in the search field:
search-term site:docs.sun.com
For example, to search for “Web Server,” type the following:
Web Server site:docs.sun.com
To include other Sun web sites in your search (for example, java.sun.com, www.sun.com, and
developers.sun.com), use “sun.com” in place of “docs.sun.com” in the search field.
Third-Party Web Site References
Third-party URLs are referenced in this document and provide additional, related information.
Note – Sun is not responsible for the availability of third-party web sites mentioned in this
document. Sun does not endorse and is not responsible or liable for any content, advertising,
products, or other materials that are available on or through such sites or resources. Sun will not
be responsible or liable for any actual or alleged damage or loss caused or alleged to be caused by
or in connection with use of or reliance on any such content, goods, or services that are available
on or through such sites or resources.
Sun Welcomes Your Comments
Sun is interested in improving its documentation and welcomes your comments and
suggestions. To share your comments, go to http://docs.sun.com and click Send Comments.
In the online form, provide the full document title and part number. The part number is a
7-digit or 9-digit number that can be found on the book's title page or in the document's URL.
For example, the part number of this book is 819-2635.
CHAPTER 1
Performance and Monitoring Overview
Sun Java System Web Server (henceforth known as Web Server) is designed to meet the needs
of the most demanding, high-traffic sites in the world. It can serve both static and dynamically
generated content. Web Server can also run in SSL mode, enabling the secure transfer of
information.
This guide helps you to define your server workload and size a system to meet your
performance needs. Your environment is unique, however, so the impacts of the suggestions
provided here also depend on your specific environment. Ultimately you must rely on your own
judgement and observations to select the adjustments that are best for you.
This chapter provides a general discussion of server performance considerations, and more
specic information about monitoring server performance.
This chapter includes the following topics:
■ “Performance Issues” on page 19
■ “Configuration” on page 20
■ “Virtual Servers” on page 20
■ “Server Farms” on page 21
■ “64–Bit Servers” on page 21
■ “SSL Performance” on page 21
■ “Monitoring Server Performance” on page 22
Performance Issues
The first step toward sizing your server is to determine your requirements. Performance means
different things to users than to webmasters. Users want fast response times (typically less than
100 milliseconds), high availability (no “connection refused” messages), and as much interface
control as possible. Webmasters and system administrators, on the other hand, want to see high
connection rates, high data throughput, and uptime approaching 100%. In addition, for virtual
servers the goal might be to provide a targeted level of performance at different price points.
You need to define what performance means for your particular situation.
Here are some areas to consider:
■ The number of peak concurrent users
■ Security requirements
  Encrypting your Web Server’s data streams with SSL makes an enormous difference to your
  site’s credibility for electronic commerce and other security-conscious applications, but it
  can also seriously impact your CPU load. For more information, see “SSL Performance” on
  page 21.
■ The size of the document tree
■ Dynamic or static content
  The content you serve affects your server’s performance. A Web Server delivering mostly
  static HTML can run much faster than a server that must execute CGIs for every query.
Configuration
Certain tuning parameters are set at the configuration level, so that every server instance that is
based on the configuration has the same tuning information. In addition, some monitoring
information is available at the configuration level, so you can monitor the performance of all
instances based on the configuration. However, the bulk of the monitoring information is
available at the individual server instance, or virtual server level. If you are using a single Web
Server instance per configuration (your server is not part of a server farm), the
configuration-level statistics show the information for the single server instance based on that
configuration.
Virtual Servers
Virtual servers add another layer to the performance improvement process. Certain settings are
tunable for the conguration, while others are based on an individual virtual server.
You can also use the quality of service (QoS) features to set resource utilization constraints for
an individual virtual server. For example, you can use QoS features to limit the amount of
bandwidth and the number of connections allowed for a virtual server. You can set these
performance limits, track them, and optionally enforce them.
For more information about using the quality of service features, see Sun Java System Web Server 7.0 Update 1 Administrator’s Guide.
Server Farms
The clustering features of Web Server allow you to easily deploy to a server farm. Because all
servers in a server farm share identical configurations, tuning is not done on a server-by-server
basis.
64–Bit Servers
The performance for the 64–bit Web Server is not necessarily better than the performance for
the 32–bit Web Server, but the 64–bit server scales better. Because the 32–bit Web Server
process is confined to 4 GB of address space, it can run out of address space attempting to
support simultaneous sessions beyond a certain limit. Even if the host machine has available
memory and CPU resources, the 32–bit Web Server might not be able to take advantage of it
because of the address space limit. The 64–bit Web Server can run more applications and
servlets than the 32-bit server. Also, the 64–bit Web Server can cache several GBs of static
content, while the 32-bit Web Server is confined to 4 GB of address space.
In general, the tuning for the 64–bit Web Server is similar to the tuning for the 32–bit Web
Server. The differences are mostly tuned at the operating system level. Tuning specifics are
discussed in “Tuning UltraSPARC T1–Based Systems for Performance Benchmarking” on
page 97.
SSL Performance
SSL always has a significant impact on throughput, so for best performance minimize your use
of SSL, or consider using a multi-CPU server to handle it.
For SSL, the Web Server uses the NSS library. However, there are other options available for
SSL:
■ If you are using the Solaris 10 operating system, kernel SSL (KSSL) is available. It does not
  contain all the algorithms available, as does NSS, but it often provides better performance.
■ A cryptographic card hardware accelerator for SSL can also improve performance.
■ If you are using the 64–bit Web Server on Solaris, you can use the cryptographic accelerator
  of the UltraSPARC T1 processor.
Monitoring Server Performance
Making the adjustments described in this guide without measuring their effects doesn’t make
sense. If you don’t measure the system’s behavior before and after making a change, you won’t
know whether the change was a good idea, a bad idea, or merely irrelevant. You can monitor the
performance of Web Server in several different ways.
TABLE 1–1 Methods of Monitoring Performance

Statistics through the Admin Console
    How to Enable: Enabled by default
    How to Access: In the Admin Console, for a configuration, click the Monitor tab
    Advantages and Requirements: Accessible when session threads are hanging. Administration Server must be running.

Statistics through individual wadm commands
    How to Enable: Enabled by default
    How to Access: Through wadm commands: get-config-stats, get-virtual-server-stats, get-webapp-stats, get-servlet-stats
    Advantages and Requirements: Accessible when session threads are hanging. Administration Server must be running.

XML-formatted statistics (stats-xml) through a browser
    How to Enable: Enable through Admin Console or through editing a configuration file
    How to Access: Through a URI
    Advantages and Requirements: Administration Server need not be running.

XML-formatted statistics (stats-xml) through the command-line interface
    How to Enable: Enabled by default
    How to Access: Through the wadm command get-stats-xml
    Advantages and Requirements: Accessible when session threads are hanging. Administration Server must be running.

perfdump through a browser
    How to Enable: Enable through Admin Console or through editing a configuration file
    How to Access: Through a URI
    Advantages and Requirements: Administration Server need not be running.

perfdump through the command-line interface
    How to Enable: Enabled by default
    How to Access: Through the wadm command get-perfdump
    Advantages and Requirements: Accessible when session threads are hanging. Administration Server must be running.

Java ES Monitoring
    How to Enable: Enabled by default
    How to Access: Through the Java ES Monitoring Console
    Advantages and Requirements: Only for Java ES installations. Administration Server must be running.
Monitoring the server does have some impact on computing resources. In general, using
perfdump through the URI is the least costly, followed by using stats-xml through a URI.
Because using the Administration Server takes computing resources, the command-line
interface and the Admin Console are the most costly monitoring methods.
For more information on these monitoring methods, see the following sections:
■ “About Statistics” on page 23
■ “Monitoring Current Activity Using the Admin Console” on page 25
■ “Monitoring Current Activity Using the CLI” on page 26
■ “Monitoring Current Activity Using stats-xml” on page 29
■ “Monitoring Current Activity Using perfdump” on page 31
■ “Monitoring Current Activity Using the Java ES Monitoring Console” on page 37
About Statistics
You can monitor many performance statistics through the Admin Console user interface,
through the command-line interface, through the stats-xml URI, and through perfdump. For
all these monitoring methods, the server uses statistics it collects. None of these monitoring
methods will work if statistics are not collected.
The statistics give you information at the configuration level, the server instance level, or the
virtual server level. The statistics are broken up into functional areas.
For the configuration, statistics are available in the following areas:
■ Requests
■ Errors
■ Response Time

For the server instance, statistics are available in the following areas:
■ Requests
■ Errors
■ Response Time
■ General
■ Java Virtual Machine (JVM™)
■ Connection Queue
■ Keep Alive
■ DNS
■ File Cache
■ Thread Pools
■ Session Replication
■ Session Threads, including Profiling data (available if profiling is enabled)
■ Java DataBase Connectivity (JDBC™) (available if a JDBC resource is created and the connection pool is accessed)

For the virtual server, statistics are available in the following areas:
■ General
■ Response
■ Web Applications
■ Profiling Data (available if profiling is enabled)
■ Servlet and Servlet Response Cache (available if the Servlet cache is enabled in sun-web.xml)

Some statistics default to zero if Quality of Service (QoS) is not enabled, for example, the count
of open connections, the maximum open connections, the rate of bytes transmitted, and the
maximum byte transmission rate.

Enabling Statistics
Statistics are activated by default on Web Server. However, if you have disabled them, you need
to enable them again to monitor your server for performance. To enable statistics, use the
Admin Console or the wadm command-line utility (CLI).

Note – Collecting statistics causes a slight hit to performance.
▼ To Enable Statistics from the Admin Console
1. From the Admin Console Common Tasks page, select the configuration.
2. Click Edit Configuration.
3. Click the General tab.
4. Click the Monitoring Settings sub tab.
5. On the Monitoring Settings page, under General Settings, select the Statistics Enabled
   checkbox.
6. Configure the interval and profiling.
   ■ The Interval is the period in seconds between statistics updates. A higher setting (less
     frequent) improves performance. The minimum value is .001 seconds; the default value is 5
     seconds.
   ■ Profiling is activated by default. Deactivating it results in slightly less monitoring overhead.
7. Restart the server.
▼ To Enable Statistics from the CLI
1. Enter the following CLI command to enable statistics collection:
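   A sketch of the command is shown here; the administrator user, password file, and configuration name are placeholders, and the enabled property name is assumed to be supported by set-stats-prop:
   wadm set-stats-prop --user=admin --password-file=admin.pwd --config=config1 enabled=true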
2. To set the interval and enable profiling, use the set-stats-prop interval and profiling
   properties. For more information, see the help for set-stats-prop.
3. Restart the server.
Monitoring Current Activity Using the Admin Console
Frequently-used statistics are available through the Admin Console, viewed as general statistics,
instance statistics, and virtual server statistics.

▼ To Monitor Statistics from the Admin Console
1. In the Admin Console, from the Common Tasks page, select the Monitoring tab.
2. Select the configuration.
   The configuration statistics are displayed.
3. From the drop-down list, select a View interval.
   The statistics displayed in your browser are automatically updated at this interval.
4. Select the type of statistics to display.
   The initial list of statistics types includes General Statistics, Instance Statistics, and Virtual
   Server Statistics.
   If you choose Instance Statistics, click the name of the instance to monitor. Detailed statistics
   are then displayed, including information on processes and session replications.
   If you choose Virtual Server Statistics, click the name of the virtual server to monitor. Statistics
   for the virtual server are displayed, including response statistics and web application statistics.
   This information is not provided through perfdump.
Monitoring Current Activity Using the CLI
You can also view statistics information using the wadm commands get-config-stats,
get-virtual-server-stats, get-webapp-stats and get-servlet-stats. Note that the
examples below do not contain all possible command options. For the complete syntax, see the
help for the command.
▼ To Monitor Statistics from the CLI
To get statistics for a configuration deployed on a single node, enter:
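The following sketch shows the general form of the command; the configuration name, node host name, and administrator credentials are placeholders:
wadm get-config-stats --user=admin --password-file=admin.pwd --config=config1 --node=webhost1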
Using the node option in this syntax restricts the output to a single node. To get the statistics at
the configuration level, use the command without the node option.
The following shows an example of the output for a single node:
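To get statistics for a virtual server, use the get-virtual-server-stats command; the following is an illustrative sketch, in which the --vs option name and all values are assumptions:
wadm get-virtual-server-stats --user=admin --password-file=admin.pwd --config=config1 --vs=vs1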
Because the node option is not used, this syntax gives the aggregate statistics for the virtual
server across all the nodes where the configuration has been deployed. Using the node option
restricts the output to a single node.
To get statistics for a deployed web application, enter:
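A sketch of the command follows; the option names for the virtual server and web application URI, as well as the values shown, are assumptions:
wadm get-webapp-stats --user=admin --password-file=admin.pwd --config=config1 --vs=vs1 --node=webhost1 --uri=/hello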
The syntax gets the statistics for a given web application deployed on the given virtual server of
the given instance. To get the aggregated web application statistics for a given configuration
across all the nodes where the configuration has been deployed, use the command without the
node option.
The following example shows the output for the URI hello:
Monitoring Current Activity Using stats-xml
You can also display statistics using stats-xml, which displays statistics in XML format. The
output of stats-xml is in XML so that various tools can easily parse the statistics. You can view
the stats-xml output through a URI, which you have to enable, or you can view the stats-xml
output through the CLI, which is enabled by default.
▼ To Enable the stats-xml URI from the Admin Console
If you enable the stats-xml URI, you can access statistics for your server in XML format
through a browser. Note that when you use the stats-xml URI, you can access statistics even
when the Administration Server is not running. Also, with the stats-xml URI activated, users
can see the statistics information for your server, unless you take precautions to deny access.
1. On the Common Tasks page, select the configuration from the pull-down menu on the left.
2. Select the virtual server from the pull-down menu on the right, then click Edit Virtual Server.
3. On the Server Settings tab, click the Monitoring Settings sub tab.
4. Use the uri-prefix option to set the stats-xml URI.
5. Deploy the configuration using the wadm deploy-config command.
6. Access the stats-xml URI, for example:
   http://yourhost:port/stats-xml
   The statistics are displayed in XML format.
▼ To Limit the stats-xml Statistics Displayed in the URI
You can modify the stats-xml URI to limit the data it provides.
● Modify the stats-xml URI to limit the information by setting elements to 0 or 1. An element set
  to 0 is not displayed on the stats-xml output. For example:
  http://yourhost:port/stats-xml?thread=0&process=0
This syntax limits the stats-xml output so that thread and process statistics are not included.
By default all statistics are enabled (set to 1).
Most of the statistics are available at the server level, but some are available at the process level.
Use the following syntax elements to limit stats-xml:
■ cache-bucket
■ connection-queue
■ connection-queue-bucket (process-level)
■ cpu-info
■ dns-bucket
■ jdbc-resource-bucket
■ keepalive-bucket
■ process
■ profile
■ profile-bucket (process-level)
■ request-bucket
■ servlet-bucket
■ session-replication
■ thread
■ thread-pool
■ thread-pool-bucket (process-level)
■ virtual-server
■ web-app-bucket
▼ To View stats-xml Output from the CLI
In addition to a URI, you can also access stats-xml output through the command-line
interface. It is enabled by default. Unlike viewing stats-xml output through the URI, the
Administration Server must be running to view stats-xml output at the command-line.
However, if request processing threads are hanging in your server (for example, because they
are busy), and you cannot use the URI, you can still access stats-xml output through the CLI.
To view the stats-xml output through the command-line interface, enter:
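A sketch of the command; the configuration name, node, and credentials are placeholders:
wadm get-stats-xml --user=admin --password-file=admin.pwd --config=config1 --node=webhost1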
Monitoring Current Activity Using perfdump
perfdump is a Server Application Function (SAF) built into Web Server that collects various
pieces of performance data from the Web Server internal statistics and displays them in ASCII
text. The perfdump output does not display all the statistics available through the
command-line statistics or the Admin Console, but it can still be a useful tool. For example, you
can still use perfdump even if the Administration Server is not running. You can view the
perfdump output through the CLI, which is enabled by default, or you can view the perfdump
output through a URI, which you have to enable. If you enable the URI, you must control access
to the perfdump URI, otherwise it can be visible to users.
With perfdump, the statistics are unified. Rather than monitoring a single process, statistics are
multiplied by the number of processes, which gives you an accurate view of the server as a
whole.
For information on tuning the information displayed by perfdump, see “Using Monitoring
Data to Tune Your Server” on page 48.
▼ To Enable the perfdump URI from the Admin Console
You can enable the perfdump URI for a virtual server through the Admin Console.
Note – The statistics displayed by perfdump are for the server as a whole. If you enable perfdump
on one virtual server, it displays statistics for the whole server, not an individual virtual server.
1. From Common Tasks, select a configuration.
2. Select the virtual server and click Edit Virtual Server.
3. Click the Monitoring Settings tab.
4. Select the Plain Text Report Enabled checkbox.
5. Provide a URI for accessing the report, for example /.perf.
6. Click Save.
7. Deploy the configuration.
8. To access perfdump, access the URI on the virtual server.
   For example: http://localhost:80/.perf
   You can request the perfdump statistics and specify how frequently (in seconds) the browser
   should automatically refresh. The following example sets the refresh to every 5 seconds:
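   http://yourhost/.perf?refresh=5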
▼ To View the perfdump Data from the CLI
In addition to a URI, you can also access perfdump output through the command-line interface.
It is enabled by default. Unlike viewing perfdump output through the URI, the Administration
Server must be running to view perfdump output at the command-line. However, if request
processing threads are hanging in your server (for example, because they are busy), and you
cannot use the URI, you can still access perfdump output through the CLI.
To view the perfdump output through the command-line interface, enter:
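A sketch of the command; the configuration name, node, and credentials are placeholders:
wadm get-perfdump --user=admin --password-file=admin.pwd --config=config1 --node=webhost1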
Total number of requests: 62647125
Request processing time: 0.0343 2147687.2500

default-bucket (Default bucket)
Number of Requests: 62647125 (100.00%)
Number of Invocations: 3374170785 (100.00%)
Latency: 0.0008 47998.2500 ( 2.23%)
Function Processing Time: 0.0335 2099689.0000 ( 97.77%)
Total Response Time: 0.0343 2147687.2500 (100.00%)
Using Performance Buckets
Performance buckets allow you to define buckets and link them to various server functions.
Every time one of these functions is invoked, the server collects statistical data and adds it to the
bucket. For example, send-cgi and service-j2ee are functions used to serve the CGI and Java
servlet requests respectively. You can either define two buckets to maintain separate counters
for CGI and servlet requests, or create one bucket that counts requests for both types of
dynamic content. The cost of collecting this information is minimal, and the impact on the
server performance is usually negligible. This information can later be accessed using perfdump.
The following information is stored in a bucket:
■ Name of the bucket. This name associates the bucket with a function.
■ Description. A description of the functions with which the bucket is associated.
■ Number of requests for this function. The total number of requests that caused this
  function to be called.
■ Number of times the function was invoked. This number might not coincide with the
  number of requests for the function, because some functions might be executed more than
  once for a single request.
■ Function latency or the dispatch time. The time taken by the server to invoke the function.
■ Function time. The time spent in the function itself.
The default-bucket is predefined by the server. It records statistics for the functions not
associated with any user-defined bucket.
Configuration
You must specify all configuration information for performance buckets in the magnus.conf
and obj.conf files. Only the default-bucket is automatically enabled.
First, you must enable performance statistics collection and perfdump.
The following examples show how to define new buckets in magnus.conf:
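A sketch of such definitions, assuming the define-perf-bucket Init SAF; the descriptions are illustrative:
Init fn="define-perf-bucket" name="acl-bucket" description="ACL bucket"
Init fn="define-perf-bucket" name="file-bucket" description="Static file bucket"
Init fn="define-perf-bucket" name="cgi-bucket" description="CGI bucket"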
The examples above create three buckets: acl-bucket, file-bucket, and cgi-bucket. To
associate these buckets with functions, add bucket=bucket-name to the obj.conf function for
which to measure performance.
...
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*"
fn="send-file" bucket="file-bucket"
...
<Object name="cgi">
ObjectType fn="force-type" type="magnus-internal/cgi"
Service fn="send-cgi" bucket="cgi-bucket"
</Object>
For more information, see “The bucket Parameter” in Sun Java System Web Server 7.0 Update 1
Administrator’s Configuration File Reference.
Performance Report
The server statistics in buckets can be accessed using perfdump. The performance buckets
information is located in the last section of the report returned by perfdump.
The report contains the following information:
■ Average, Total, and Percent columns give data for each requested statistic.
■ Request Processing Time is the total time required by the server to process all requests it
  has received so far.
■ Number of Requests is the total number of requests for the function.
■ Number of Invocations is the total number of times that the function was invoked. This
  differs from the number of requests in that a function could be called multiple times while
  processing one request. The percentage column for this row is calculated in reference to the
  total number of invocations for all of the buckets.
■ Latency is the time in seconds that Web Server takes to prepare for calling the function.
■ Function Processing Time is the time in seconds that Web Server spent inside the function.
  The percentage of Function Processing Time and Total Response Time is calculated with
  reference to the total Request Processing Time.
■ Total Response Time is the sum in seconds of Function Processing Time and Latency.
The following is an example of the performance bucket information available through perfdump:
Total number of requests: 62647125
Request processing time: 0.0343 2147687.2500

default-bucket (Default bucket)
Number of Requests: 62647125 (100.00%)
Number of Invocations: 3374170785 (100.00%)
Latency: 0.0008 47998.2500 ( 2.23%)
Function Processing Time: 0.0335 2099689.0000 ( 97.77%)
Total Response Time: 0.0343 2147687.2500 (100.00%)
Monitoring Current Activity Using the Java ES Monitoring Console
The statistics available through the Web Server Admin Console and the command-line
interface are also available through the Java ES Monitoring Console. Though the information is
the same, it is presented in a different format, using the Common Monitoring Data Model
(CMM). Though this guide covers monitoring using tools available in the Web Server, you
could also monitor your server using the Java ES monitoring tools. For more information on
using the Java ES monitoring tools, see Sun Java Enterprise System 5 Monitoring Guide. Use the
same settings to tune the server, regardless of what monitoring method you are using.
CHAPTER 2
Tuning Sun Java System Web Server
This chapter describes specific adjustments you can make that might improve Sun Java System
Web Server performance. It provides an overview of Web Server's connection-handling process
so that you can better understand the tuning settings. The chapter includes the following topics:
■ “General Tuning Tips” on page 39
■ “Understanding Threads, Processes, and Connections” on page 40
■ “Mapping Web Server 6.1 Tuning Parameters to Web Server 7.0” on page 46
■ “Using Monitoring Data to Tune Your Server” on page 48
■ “Tuning the ACL User Cache” on page 77
■ “Tuning Java Web Application Performance” on page 78
■ “Tuning CGI Stub Processes (UNIX/Linux)” on page 81
■ “Using find-pathinfo-forward” on page 82
■ “Using nostat” on page 82
■ “Using Busy Functions” on page 83

Note – Be very careful when tuning your server. Always back up your configuration files before
making any changes.
General Tuning Tips
As you tune your server, it is important to remember that your specific environment is unique.
The impacts of the suggestions provided in this guide will vary, depending on your specific
environment. Ultimately you must rely on your own judgement and observations to select the
adjustments that are best for you.
As you work to optimize performance, keep the following guidelines in mind:
■ Work methodically
  As much as possible, make one adjustment at a time. Measure your performance before and
  after each change, and rescind any change that doesn’t produce a measurable improvement.
■ Adjust gradually
  When adjusting a quantitative parameter, make several stepwise changes in succession,
  rather than trying to make a drastic change all at once. Different systems face different
  circumstances, and you might leap right past your system’s best setting if you change the
  value too rapidly.
■ Start fresh
  At each major system change, be it a hardware or software upgrade or deployment of a
  major new application, review all previous adjustments to see whether they still apply. After
  a Solaris upgrade, you should start over with an unmodified /etc/system file.
■ Stay informed
  Read the Sun Java System Web Server 7.0 Update 1 Release Notes and the release notes for
  your operating system whenever you upgrade your system. The release notes often provide
  updated information about specific adjustments.
Understanding Threads, Processes, and Connections
Before tuning your server, you should understand the connection-handling process in Web
Server. This section includes the following topics:
■ “Connection-Handling Overview” on page 40
■ “Custom Thread Pools” on page 42
■ “The Native Thread Pool” on page 43
■ “Process Modes” on page 44
Connection-Handling Overview
In Web Server, acceptor threads on a listen socket accept connections and put them into a
connection queue. Request processing threads in a thread pool then pick up connections from
the queue and service the requests.
FIGURE 2–1 Web Server Connection Handling (figure: requests → acceptor threads → connection queue → request processing threads in the thread pool)
A request processing thread might also be instructed to send the request to a different thread
pool for processing. For example, if the request processing thread must perform some work that
is not thread-safe, it might be instructed to send part of the processing to the NativePool. Once
the NativePool completes its work, it communicates the result to the request processing thread
and the request processing thread continues processing the request.
At startup, the server only creates the number of threads dened in the thread pool minimum
threads, by default 16. As the load increases, the server creates more threads. The policy for
adding new threads is based on the connection queue state.
Each time a new connection is returned, the number of connections waiting in the queue (the
backlog of connections) is compared to the number of request processing threads already
created. If the number of connections waiting is greater than the number of threads, more
threads are scheduled to be added the next time a request completes.
The process of adding new session threads is strictly limited by the maximum threads value. For
more information on maximum threads, see “Maximum Threads (Maximum Simultaneous
Requests)” on page 58.
You can change the settings that affect the number and timeout of threads, processes, and
connections in the Admin Console, on the configuration's Performance tab (HTTP settings),
and on the HTTP listener. You can also use the wadm commands set-thread-pool-prop,
set-http-listener-prop, and set-keep-alive-prop.
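For example, a sketch of adjusting the default thread pool with wadm; the property names and values shown are illustrative:
wadm set-thread-pool-prop --user=admin --password-file=admin.pwd --config=config1 min-threads=16 max-threads=256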
Low Latency and High Concurrency Modes
The server can run in one of two modes, depending upon the load. It changes modes to
accommodate the load most efficiently.
■ In low latency mode, for keep-alive connections, session threads themselves poll for new
  requests.
■ In high concurrency mode, after finishing the request, session threads give the connection to
  the keep-alive subsystem. In high concurrency mode, the keep-alive subsystem polls for new
  requests for all keep-alive connections.
When the server is started, it starts in low latency mode. When the load increases, the server
moves to high concurrency mode. The decision to move from low latency mode to high
concurrency mode and back again is made by the server, based on connection queue length,
average total sessions, average idle sessions, and currently active and idle sessions.
Disabled Thread Pools
If a thread pool is disabled, no threads are created in the pool, no connection queue is created,
and no keep-alive threads are created. When the thread pool is disabled, the acceptor threads
themselves process the request.
Connection-Handling magnus.conf Directives for NSAPI
In addition to the settings discussed above, you can edit the following directives in the
magnus.conf file to configure additional request-processing settings for NSAPI plug-ins:
■ KernelThreads – Determines whether NSAPI plug-ins always run on kernel-scheduled
  threads (Windows only)
■ TerminateTimeout – Determines the maximum amount of time to wait for NSAPI plug-ins
  to finish processing requests when the server is shut down
For detailed information about these directives, see the Sun Java System Web Server 7.0
Update 1 Administrator’s Configuration File Reference.
Note – For the safest way to edit conguration les such as magnus.conf, use the wadm
commands get-config-file and set-config-file to pull a local copy for editing and push it
back to the Web Server. For more information on these commands, see the help for these
commands.
Custom Thread Pools
By default, the connection queue sends requests to the default thread pool. However, you can
also create your own thread pools in magnus.conf using a thread pool Init function. These
custom thread pools are used for executing NSAPI Service Application Functions (SAFs), not
entire requests.
If the SAF requires the use of a custom thread pool, the current request processing thread
queues the request, waits until the other thread from the custom thread pool completes the SAF,
then the request processing thread completes the rest of the request.
For example, the obj.conf file contains the following:
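A minimal sketch of such a configuration is shown here; the pool names, SAF names, and parameter values are illustrative. The pools are defined with thread-pool-init in magnus.conf, and a directive's pool parameter routes that SAF to the named pool:
In magnus.conf:
Init fn="thread-pool-init" name="my-custom-pool" minthreads="1" maxthreads="5" queuesize="200"
Init fn="thread-pool-init" name="my-custom-pool2" minthreads="1" maxthreads="1" queuesize="200"
In obj.conf:
NameTrans fn="assign-name" from="/testmod/*" name="testmod" pool="my-custom-pool"
...
<Object name="testmod">
Service fn="testmod_service" pool="my-custom-pool2"
</Object>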
In this example, the request is processed as follows:
1. The request processing thread (in this example, called A1) picks up the request and executes
the steps before the NameTrans directive.
2. If the URI starts with /testmod, the A1 thread queues the request to the my-custom-pool
queue. The A1 thread waits.
3. A different thread in my-custom-pool, called the B1 thread in this example, picks up the
request queued by A1. B1 completes the request and returns to the wait stage.
4. The A1 thread wakes up and continues processing the request. It executes the ObjectType
SAF and moves on to the Service function.
5. Because the Service function must be processed by a thread in my-custom-pool2, the A1
thread queues the request to my-custom-pool2.
6. A different thread in my-custom-pool2, called C1 in this example, picks up the queued
request. C1 completes the request and returns to the wait stage.
7. The A1 thread wakes up and continues processing the request.
In this example, three threads, A1, B1, and C1 work to complete the request.
Additional thread pools are a way to run thread-unsafe plug-ins. By defining a pool with a
maximum number of threads set to 1, only one request is allowed into the specified service
function. In the previous example, if testmod_service is not thread-safe, it must be executed
by a single thread. If you create a single thread in the my-custom-pool2, the SAF works in a
multi-threaded Web Server.
For more information on defining thread pools, see “thread-pool-init” in Sun Java System Web Server 7.0 Update 1 Administrator’s Configuration File Reference.
The Native Thread Pool
On Windows, the native thread pool (NativePool) is used internally by the server to execute
NSAPI functions that require a native thread for execution.
Web Server uses Netscape Portable Runtime (NSPR), which is an underlying portability layer
providing access to the host OS services. This layer provides abstractions for threads that are
not always the same as those for the OS-provided threads. These non-native threads have lower
scheduling overhead, so their use improves performance. However, these threads are sensitive
to blocking calls to the OS, such as I/O calls. To make it easier to write NSAPI extensions that
can make use of blocking calls, the server keeps a pool of threads that safely support blocking
calls. These threads are usually native OS threads. During request processing, any NSAPI
function that is not marked as being safe for execution on a non-native thread is scheduled for
execution on one of the threads in the native thread pool.
If you have written your own NSAPI plug-ins such as NameTrans, Service, or PathCheck
functions, these execute by default on a thread from the native thread pool. If your plug-in
makes use of the NSAPI functions for I/O exclusively or does not use the NSAPI I/O functions
at all, then it can execute on a non-native thread. For this to happen, the function must be
loaded with a NativeThread="no" option, indicating that it does not require a native thread.
For example, add the following to the load-modules Init line in the magnus.conf file:
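A sketch of such an Init line; the shared library path and function names are placeholders:
Init fn="load-modules" shlib="/opt/myplugins/myplugin.so" funcs="my_nametrans,my_service" NativeThread="no"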
The NativeThread flag affects all functions in the funcs list, so if you have more than one
function in a library, but only some of them use native threads, use separate Init lines. If you set
NativeThread to yes, the thread maps directly to an OS thread.
For information on the load-modules function, see “load-modules” in Sun Java System Web Server 7.0 Update 1 Administrator’s Configuration File Reference.
Process Modes
You can run Sun Java System Web Server in one of the following modes:
■ “Single-Process Mode” on page 44
■ “Multi-Process Mode” on page 44

Note – Multi-process mode is deprecated for Java technology-enabled servers. Most applications
are now multi-threaded, and multi-process mode is usually not needed. However,
multi-process mode can significantly improve overall server throughput for NSAPI
applications that do not implement fine-grained locking.
Single-Process Mode
In the single-process mode, the server receives requests from web clients to a single process.
Inside the single server process, acceptor threads are running that are waiting for new requests
to arrive. When a request arrives, an acceptor thread accepts the connection and puts the
request into the connection queue. A request processing thread picks up the request from the
connection queue and handles the request.
Because the server is multi-threaded, all NSAPI extensions written to the server must be
thread-safe. This means that if the NSAPI extension uses a global resource, like a shared
reference to a le or global variable, then the use of that resource must be synchronized so that
only one thread accesses it at a time. All plug-ins provided with the Web Server are thread-safe
and thread-aware, providing good scalability and concurrency. However, your legacy
applications might be single-threaded. When the server runs the application, it can only execute
one at a time. This leads to server performance problems when put under load. Unfortunately,
in the single-process design, there is no real workaround.
Multi-Process Mode
You can configure the server to handle requests using multiple processes with multiple threads
in each process. This flexibility provides optimal performance for sites using threads, and also
provides backward compatibility to sites running legacy applications that are not ready to run
in a threaded environment. Because applications on Windows generally already take advantage
of multi-thread considerations, this feature applies to UNIX and Linux platforms.
The advantage of multiple processes is that legacy applications that are not thread-aware or
thread-safe can be run more effectively in Sun Java System Web Server. However, because all of
the Sun Java System extensions are built to support a single-process threaded environment, they
might not run in the multi-process mode. The Search plug-ins fail on startup if the server is in
multi-process mode, and if session replication is enabled, the server will fail to start in
multi-process mode.
In the multi-process mode, the server spawns multiple server processes at startup. Each process
contains one or more threads (depending on the conguration) that receive incoming requests.
Since each process is completely independent, each one has its own copies of global variables,
caches, and other resources. Using multiple processes requires more resources from your
system. Also, if you try to install an application that requires shared state, it has to synchronize
that state across multiple processes. NSAPI provides no helper functions for implementing
cross-process synchronization.
When you specify a MaxProcs value greater than 1, the server relies on the operating system to
distribute connections among multiple server processes (see “MaxProcs (UNIX/Linux)” on
page 45 for information about the MaxProcs directive). However, many modern operating
systems do not distribute connections evenly, particularly when there are a small number of
concurrent connections.
Because Sun Java System Web Server cannot guarantee that load is distributed evenly among
server processes, you might encounter performance problems if you set Maximum Threads to 1
and MaxProcs greater than 1 to accommodate a legacy application that is not thread-safe. The
problem is especially pronounced if the legacy application takes a long time to respond to
requests (for example, if the legacy application contacts a back-end database). In this scenario, it
might be preferable to use the default value for Maximum Threads and serialize access to the
legacy application using thread pools. For more information about creating a thread pool, see
"thread-pool-init" in Sun Java System Web Server 7.0 Update 1 Administrator's Configuration File Reference.
If you are not running any NSAPI in your server, you should use the default settings: one
process and many threads. If you are running an application that is not scalable in a threaded
environment, you should use a few processes and many threads, for example, 4 or 8 processes
and 128 or 512 threads per process.
MaxProcs (UNIX/Linux)
To run a UNIX or Linux server in multi-process mode, set the MaxProcs directive to a value
that is greater than 1. Multi-process mode might provide higher scalability on multi-processor
machines and improve the overall server throughput on large systems such as the Sun Fire
T2000 server. If you set the value to less than 1, it is ignored and the default value of 1 is used.
Use the MaxProcs directive to improve overall server throughput for the following types of
applications:
■ NSAPI applications that do not implement fine-grained locking
■ Java applications that do not require session management
Do not use the MaxProcs directive when the Sun Java System Web Server performs session
management for Java applications.
You can set the value for MaxProcs by editing the MaxProcs parameter in magnus.conf.
Note – You will receive duplicate startup messages when running your server in MaxProcs mode.
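For example, to run four server processes, a minimal magnus.conf entry (the value 4 is illustrative) is:

MaxProcs 4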
Mapping Web Server 6.1 Tuning Parameters to Web Server 7.0
Many of the tuning parameters that were tunable by editing the magnus.conf and nsfc.conf
files in Web Server 6.1 have moved to the server.xml file. These tuning parameters are now
tunable through the Admin Console and command-line interface. The following table shows
selected tuning parameters, including the Web Server 6.1 parameter, the new server.xml
element used for tuning, and the way to change the parameters through the user interface.
Editing the server.xml file directly can be error-prone, so using the user interface to set values
is preferable. For a complete list of all elements in server.xml, see Chapter 3, "Elements in
server.xml," in Sun Java System Web Server 7.0 Update 1 Administrator's Configuration File Reference.
TABLE 2–1 Parameter Mapping to server.xml

Web Server 6.1 parameter | Web Server 7.0 server.xml element or attribute | Admin Console Location | wadm command
AcceptTimeout in magnus.conf
ACLGroupCacheSize in magnus.conf
ACLUserCacheSize in magnus.conf
ConnQueueSize in magnus.conf | queue-size element of the thread-pool element | Configuration's Performance tab ⇒ HTTP tab | set-thread-pool-prop command's queue-size property
dns-cache-init Init SAF | enabled element of the dns-cache element | Configuration's Performance tab ⇒ DNS tab | set-dns-cache-prop command's enabled property
dns-cache-init Init SAF cache size | max-entries element of the dns-cache element | Configuration's Performance tab ⇒ DNS tab | set-dns-cache-prop command's max-entries property
FileCacheEnabled in nsfc.conf | enabled element of the file-cache element | Configuration's Performance tab ⇒ Cache tab | set-file-cache-prop command's enabled property
KeepAliveThreads in magnus.conf | threads element of the keep-alive element | Configuration's Performance tab ⇒ HTTP tab | set-keep-alive-prop command's threads property
KeepAliveTimeout in magnus.conf | timeout element of the keep-alive element | Configuration's Performance tab ⇒ HTTP tab | set-keep-alive-prop command's timeout property
KernelThreads in magnus.conf (Windows only) | Unchanged
ListenQ in magnus.conf | listen-queue-size element of the http-listener element | Configuration's HTTP Listeners tab | set-http-listener-prop command's listen-queue-size property
LogVerbose in magnus.conf | log-level element of the log element | Configuration's General Tab ⇒ Log Settings | set-error-log-prop command's log-level property
MaxAge in nsfc.conf file | max-age element of the file-cache element | Configuration's Performance tab ⇒ Cache tab | set-file-cache-prop command's max-age property
MaxFiles in nsfc.conf file | max-entries element of the file-cache element | Configuration's Performance tab ⇒ Cache tab | set-file-cache-prop command's max-entries property
MaxKeepAliveConnections in magnus.conf | max-connections element of the keep-alive element | Configuration's Performance tab ⇒ HTTP tab | set-keep-alive-prop command's max-connections property
MaxProcs in magnus.conf | Deprecated for Java technology-enabled servers
NativePoolMaxThreads in magnus.conf | Unchanged
NativePoolMinThreads in magnus.conf | Unchanged
NativePoolQueueSize in magnus.conf | Unchanged
NativePoolStackSize in magnus.conf | Unchanged
RqThrottle in magnus.conf | max-threads element of the thread-pool element | Configuration's Performance tab ⇒ HTTP tab | set-thread-pool-prop command's max-threads property
RqThrottleMin in magnus.conf | min-threads element of the thread-pool element | Configuration's Performance tab ⇒ HTTP tab | set-thread-pool-prop command's min-threads property
TerminateTimeout in magnus.conf | Unchanged
Using Monitoring Data to Tune Your Server
This section describes the performance information available through the Admin Console,
perfdump, the command-line interface, and stats-xml. It discusses how to analyze that
information and tune some parameters to improve your server’s performance.
The default tuning parameters are appropriate for all sites except those with very high volume.
The only settings that large sites might regularly need to change are the thread pool and keep
alive settings. Tune these settings at the configuration level in the Admin Console or using wadm
commands. It is also possible to tune the server by editing the elements directly in the
server.xml file, but editing the server.xml file directly can lead to complications.
perfdump monitors statistics in the following categories, which are described in the following
sections. In most cases these statistics are also displayed in the Admin Console, command-line
interface, and stats-xml output. The following sections contain tuning information for all
these categories, regardless of what method you are using to monitor the data:
■ "Connection Queue Information" on page 49
■ "HTTP Listener (Listen Socket) Information" on page 51
■ "Keep-Alive Information" on page 53
■ "Session Creation (Thread) Information" on page 57
■ "File Cache Information (Static Content)" on page 59
■ "Thread Pool Information" on page 65
■ "DNS Cache Information" on page 68
In addition, the statistics information displayed through the Admin Console, the
command-line interface, and stats-xml contains other categories not contained in the
perfdump output. Tuning these statistics is discussed in the following sections:
■ "Java Virtual Machine (JVM) Information" on page 70
■ "Web Application Information" on page 71
■ "JDBC Resource Information" on page 72
Once you have viewed the statistics you need, you can tune various aspects of your server's
performance at the configuration level using the Admin Console's Performance tab. The Admin
Console Performance tab includes settings for many performance categories, including:
■ HTTP Settings (includes Thread Pool and Keep Alive)
■ DNS Settings
■ SSL and TLS Settings
■ Cache Settings
■ CGI Settings
■ Access Log Buffer Settings
You can also view and set tuning parameters using the appropriate wadm commands. In general,
when you set tuning properties using wadm commands, the names of the properties are the same
as displayed in stats.xml.
Connection Queue Information
In Web Server, a connection is first accepted by acceptor threads associated with the HTTP
listener. The acceptor threads accept the connection and put it into the connection queue.
Then, request processing threads take the connection from the connection queue and process the
request. For more information, see "Connection-Handling Overview" on page 40.
Connection queue information shows the number of sessions in the connection queue, and the
average delay before the connection is accepted by the request processing thread.
The following is an example of how these statistics are displayed in perfdump:
ConnectionQueue:
-----------------------------------------
Current/Peak/Limit Queue Length            0/1853/160032
Total Connections Queued                   11222922
Average Queue Length (1, 5, 15 minutes)    90.35, 89.64, 54.02
Average Queueing Delay                     4.80 milliseconds
The same information is displayed in a different format through the Admin Console or
command-line interface, with some slight differences. The following table shows the
information as displayed in the Admin Console when accessing monitoring information for the
server instance:
TABLE 2–2 Connection Queue Statistics
Present Number of Connections Queued       0
Total Number of Connections Queued         11222922
Average Connections Over Last 1 Minute     90.35
Average Connections Over Last 5 Minutes    89.64
Average Connections Over Last 15 Minutes   54.02
Maximum Queue Size                         160032
Peak Queue Size                            1853
Number of Connections Overflowed           0
Ticks Spent                                5389284274
Total Number of Connections Added          425723
Current / Peak / Limit Queue Length
Current/Peak/Limit queue length shows, in order:
■ The number of connections currently in the queue.
■ The largest number of connections that have been in the queue simultaneously.
■ The maximum size of the connection queue. This number is:
Maximum Queue Size = Thread Pool Queue Size + Maximum Threads + Keep-Alive Queue Size
Once the connection queue is full, new connections are dropped.
Tuning
If the peak queue length is close to the limit (the maximum queue size), you can increase the
maximum connection queue size to avoid dropping connections under heavy load.
You can increase the maximum connection queue size in the Admin Console by changing the
value of the thread pool Queue Size field on the configuration's Performance tab ⇒ HTTP sub
tab. The default is 1024.
To change the queue size using the command-line interface, use the wadm set-thread-pool-prop command's queue-size property.
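For example, a minimal sketch of the command (the configuration name config1 and the connection
options are placeholders for your own environment; deploy the configuration afterward to apply the change):

wadm set-thread-pool-prop --user=admin --config=config1 queue-size=2048
wadm deploy-config config1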
Total Connections Queued
Total Connections Queued is the total number of times a connection has been queued. This
number includes newly-accepted connections and connections from the keep-alive system.
This setting is not tunable.
Average Queue Length
The Average Queue Length shows the average number of connections in the queue over the last
one-minute, five-minute, and 15-minute intervals.
This setting is not tunable.
Average Queueing Delay
The Average Queueing Delay is the average amount of time a connection spends in the
connection queue. This represents the delay between when a request connection is accepted by
the server and when a request processing thread begins servicing the request. It is the Ticks
Spent divided by the Total Connections Queued, and converted to milliseconds.
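Expressed as a formula, using the tickPerSecond value described under Ticks Spent below:

Average Queueing Delay (milliseconds) = Ticks Spent / (Total Connections Queued x tickPerSecond) x 1000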
This setting is not tunable.
Ticks Spent
A tick is a system-dependent unit of time, provided by the tickPerSecond attribute of the server
element in stats.xml. The ticks spent value is the total amount of time that connections spent
in the connection queue and is used to calculate the average queueing delay.
This setting is not tunable.
Total Number of Connections Added
The new connections added to the connection queue. This setting is not tunable.
HTTP Listener (Listen Socket) Information
The following HTTP listener information includes the IP address, port number, number of
acceptor threads, and the default virtual server. For tuning purposes, the most important field in
the HTTP listener information is the number of acceptor threads.
You can have many HTTP listeners enabled for virtual servers, but at least one is enabled for
your default server instance (usually http://0.0.0.0:80). The monitoring information
available through the Admin Console does not show the HTTP listener information, because
that information is available in the Admin Console on the configuration's HTTP Listeners tab.
perfdump also displays the HTTP listener information. If you have created multiple HTTP
listeners, perfdump displays all of them.
To edit an HTTP listener using the Admin Console, for the configuration, select the HTTP
Listeners tab. Click the listener name to edit the listener.
To configure an HTTP listener using the command-line interface, use the command wadm set-http-listener-prop.
For more information about adding and editing listen sockets, see the Sun Java System Web Server 7.0 Update 1 Administrator's Guide.
Address
The Address field contains the base address on which this listen socket is listening. A host can
have multiple network interfaces and multiple IP addresses. The address contains the IP
address and the port number.
If your listen socket listens on all network interfaces for the host machine, the IP part of the
address is 0.0.0.0.
Tuning
This setting is tunable when you edit an HTTP listener. If you specify an IP address other than
0.0.0.0, the server makes one less system call per connection. Specify an IP address other than
0.0.0.0 for best possible performance.
Acceptor Threads
Acceptor threads are threads that wait for connections. The threads accept connections and put
them in a queue where they are then picked up by worker threads. For more information, see
“Connection-Handling Overview” on page 40.
Ideally, you want to have enough acceptor threads so that there is always one available when a
user needs one, but few enough that they do not place too much of a burden on the system.
A good rule is to have one acceptor thread per CPU on your system. You can increase this value
to about double the number of CPUs if you find indications of TCP/IP listen queue overruns.
Tuning
This setting is tunable when you edit an HTTP listener. The default value is 1.
Other HTTP listener settings that affect performance are the size of the send buffer and receive
buffer. For more information regarding these buffers, see your operating system
documentation.
Default Virtual Server
Virtual servers work using the HTTP 1.1 Host header. If the end user's browser does not send
the Host header, or if the server cannot find the virtual server specified by the Host header, Web
Server handles the request using a default virtual server. You can configure the default virtual
server to send an error message or serve pages from a special document root.
Tuning
This setting is tunable when you edit an HTTP listener.
Keep-Alive Information
This section provides information about the server’s HTTP-level keep-alive system.
Note – The name keep-alive should not be confused with TCP keep-alives. Also, note that the
name keep-alive was changed to persistent connections in HTTP 1.1, but Web Server
continues to refer to them as keep-alive connections.
perfdump also displays keep-alive statistics. The following table shows the keep-alive statistics
displayed in the Admin Console:
TABLE 2–3 Keep-Alive Statistics
Number of Connections Processed     0
Total Number of Connections Added   198
Maximum Connection Size             200
Number of Connections Flushed       0
Number of Connections Refused       56844280
Number of Idle Connections Closed   365589
Connection Timeout                  10
Both HTTP 1.0 and HTTP 1.1 support the ability to send multiple requests across a single
HTTP session. A web server can receive hundreds of new HTTP requests per second. If every
request was allowed to keep the connection open indefinitely, the server could become
overloaded with connections. On UNIX and Linux systems, this could lead to a file table
overflow very easily.
To deal with this problem, the server maintains a counter for the maximum number of waiting
keep-alive connections. A waiting keep-alive connection has fully completed processing the
previous request, and is now waiting for a new request to arrive on the same connection. If the
server has more than the maximum waiting connections open when a new connection waits for
a keep-alive request, the server closes the oldest connection. This algorithm keeps an upper
bound on the number of open waiting keep-alive connections that the server can maintain.
Sun Java System Web Server does not always honor a keep-alive request from a client. The
following conditions cause the server to close a connection, even if the client has requested a
keep-alive connection:
■ The keep alive timeout is set to 0.
■ The keep alive maximum connections count is exceeded.
■ Dynamic content, such as a CGI, does not have an HTTP content-length header set. This
applies only to HTTP 1.0 requests. If the request is HTTP 1.1, the server honors keep-alive
requests even if the content-length is not set. The server can use chunked encoding for
these requests if the client can handle them (indicated by the request header
transfer-encoding: chunked).
■ The request is not HTTP GET or HEAD.
■ The request was determined to be bad. For example, if the client sends only headers with no
content.
The keep-alive subsystem in Web Server is designed to be massively scalable. The
out-of-the-box configuration can be less than optimal if the workload is non-persistent (that is,
HTTP 1.0 without the KeepAlive header), or for a lightly loaded system that’s primarily
servicing keep-alive connections.
Keep-Alive Count
This section in perfdump has two numbers:
■ Number of connections in keep-alive mode (total number of connections added)
■ Maximum number of connections allowed in keep-alive mode simultaneously (maximum
connection size)
Tuning
You can tune the maximum number of connections that the server allows to wait at one time
before closing the oldest connection in the Admin Console by editing the Maximum
Connections field on the configuration's Performance tab ⇒ HTTP tab, under Keep Alive
Settings. The default is 200. In the command-line interface, use the max-connections property
in the wadm set-keep-alive-prop command.
Note – The number of connections specied by the maximum connections setting is divided
equally among the keep-alive threads. If the maximum connections setting is not equally
divisible by the keep-alive threads setting, the server might allow slightly more than the
maximum number of simultaneous keep-alive connections.
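For example, with the default maximum of 200 connections and 3 keep-alive threads, each thread
could handle up to 67 connections (200 divided by 3, rounded up), so the server might allow as
many as 201 simultaneous keep-alive connections.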
Keep-Alive Hits
The keep-alive hits (number of connections processed) is the number of times a request was
successfully received from a connection that had been kept alive.
This setting is not tunable.
Keep-Alive Flushes
The number of times the server had to close a connection because the total number of
connections added exceeded the keep-alive maximum connections setting. The server does not
close existing connections when the keep-alive count exceeds the maximum connection size.
Instead, new keep-alive connections are refused and the number of connections refused count
is incremented.
Keep-Alive Refusals
The number of times the server could not hand off the connection to a keep-alive thread,
possibly due to too many persistent connections (or when total number of connections added
exceeds the keep-alive maximum connections setting). The suggested tuning is to increase the
keep-alive maximum connections.
Keep-Alive Timeouts
The number of times the server closed idle keep-alive connections as the client connections
timed out without any activity. This statistic is useful to monitor; no specic tuning is advised.
Keep-Alive Timeout
The time (in seconds) before idle keep-alive connections are closed. Set this value in the Admin
Console in the Timeout field on the configuration's Performance tab ⇒ HTTP tab, under Keep
Alive Settings. The default is 30 seconds, meaning the connection times out if idle for more than
30 seconds. The maximum is 3600 seconds (60 minutes). In the command-line interface, use
the timeout property in the wadm set-keep-alive-prop command.
Keep-Alive Poll Interval
The keep-alive poll interval specifies the interval (in seconds) at which the system polls
keep-alive connections for further requests. The default is 0.001 second, the lowest value
allowed. It is set to a low value to enhance performance at the cost of CPU usage.
To tune the poll interval, edit the Poll Interval field on the configuration's Performance tab ⇒
HTTP tab, under Keep Alive Settings. In the command-line interface, use the poll-interval
property in the wadm set-keep-alive-prop command.
Keep-Alive Threads
You can configure the number of threads used in the keep-alive system in the Admin Console
by editing the Threads field on the configuration's Performance tab ⇒ HTTP tab, under Keep
Alive Settings. The default is 1. In the command-line interface, use the threads property in the
wadm set-keep-alive-prop command.
Tuning for HTTP 1.0-Style Workload
Since HTTP 1.0 results in a large number of new incoming connections, the default acceptor
threads of 1 per listen socket would be suboptimal. Increasing this to a higher number should
improve performance for HTTP 1.0-style workloads. For instance, for a system with 2 CPUs,
you might want to set it to 2. You might also want to reduce the keep-alive connections, for
example, to 0.
HTTP 1.0-style workloads would have many connections established and terminated.
If users are experiencing connection timeouts from a browser to Web Server when the server is
heavily loaded, you can increase the size of the HTTP listener backlog queue by setting the
HTTP listener listen queue size to a larger value, such as 8192.
The HTTP listener listen queue specifies the maximum number of pending connections on a
listen socket. Connections that time out on a listen socket whose backlog queue is full fail.
Tuning for HTTP 1.1-Style Workload
In general, tuning the server's persistent-connection handling is a trade-off between throughput
and latency. The keep-alive poll interval and timeout control latency. Lowering the
value of these settings is intended to lower latency on lightly loaded systems (for example,
reduce page load times). Increasing the values of these settings is intended to raise aggregate
throughput on heavily loaded systems (for example, increase the number of requests per second
the server can handle). However, if there is too much latency and too few clients, aggregate
throughput suffers as the server sits idle unnecessarily. As a result, the general keep-alive
subsystem tuning rules at a particular load are as follows:
■ If there is idle CPU time, decrease the poll interval.
■ If there is no idle CPU time, increase the poll interval.
Also, chunked encoding could affect the performance for HTTP 1.1 workloads. Tuning the
response buffer size could positively affect the performance. A higher response buffer size in the
configuration's Performance tab ⇒ HTTP tab would result in sending a Content-length:
header, instead of chunking the response. To set the buffer size using the CLI, use the wadm set-http-prop command's output-buffer-size property.
You can also set the buffer size for a Service-class function in the obj.conf file, using the
UseOutputStreamSize parameter. UseOutputStreamSize overrides the value set using the
output-buffer-size property. If UseOutputStreamSize is not set, Web Server uses the
output-buffer-size setting. If the output-buffer-size is not set, Web Server uses the
output-buffer-size default value of 8192.
The following example shows using the CLI to increase the output buffer size, then deploying
the configuration (used if UseOutputStreamSize is not specified in obj.conf):
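(In this sketch, the configuration name config1, the buffer value, and the --user option are
placeholders for your own environment.)

wadm set-http-prop --user=admin --config=config1 output-buffer-size=16384
wadm deploy-config config1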
The following example shows setting the buffer size for the nsapi_test Service function:
<Object name="nsapitest">
ObjectType fn="force-type" type="magnus-internal/nsapitest"
Service method=(GET) type="magnus-internal/nsapitest" fn="nsapi_test"
UseOutputStreamSize=12288
</Object>
Session Creation (Thread) Information
Session (thread) creation statistics are displayed in perfdump as follows:
SessionCreationInfo:
------------------------
Active Sessions           128
Keep-Alive Sessions       0
Total Sessions Created    128/128
Active Sessions shows the number of sessions (request processing threads) currently
servicing requests.
Keep-Alive Sessions shows the number of HTTP request processing threads serving
keep-alive sessions.
Total Sessions Created in perfdump shows both the number of sessions that have been
created and the maximum threads.
The equivalent information is available through the Admin Console as the Total Number of
Threads, on the Monitoring tab ⇒ Instances sub tab, under General Statistics. To see the
maximum threads allowed, see the Maximum Threads field on the configuration's Performance
tab ⇒ HTTP sub tab, under Thread Pool Settings.
To get the equivalent of the perfdump Active Sessions, you can subtract the Number of Idle
Threads from the Total Number of Threads.
Maximum Threads (Maximum Simultaneous Requests)
The maximum threads setting specifies the maximum number of simultaneous transactions
that the Web Server can handle. The default value is 128. Changes to this value can be used to
throttle the server, minimizing latencies for the transactions that are performed. The Maximum
Threads value acts across multiple virtual servers, but does not attempt to load balance. It is set
for each configuration.
Reaching the maximum number of configured threads is not necessarily undesirable, and you
do not need to automatically increase the number of threads in the server. Reaching this limit
means that the server needed this many threads at peak load, but as long as it was able to serve
requests in a timely manner, the server is adequately tuned. However, at this point connections
queue up in the connection queue, potentially overflowing it. If you monitor your server's
performance regularly and notice that the total sessions created number is often near the maximum
number of threads, you should consider increasing your thread limits.
To compute the number of simultaneous requests, the server counts the number of active
requests, adding one to the number when a new request arrives, subtracting one when it finishes
the request. When a new request arrives, the server checks to see if it is already processing the
maximum number of requests. If it has reached the limit, it defers processing new requests until
the number of active requests drops below the maximum amount.
In theory, you could set the maximum threads to 1 and still have a functional server. Setting this
value to 1 would mean that the server could only handle one request at a time, but since HTTP
requests for static files generally have a very short duration (response time can be as low as 5
milliseconds), processing one request at a time would still allow you to process up to 200
requests per second.
However, in actuality, Internet clients frequently connect to the server and then do not
complete their requests. In these cases, the server waits 30 seconds or more for the data before
timing out. You can define this timeout period using the IO Timeout setting on the
configuration's Performance tab ⇒ HTTP Settings page. You can also use the command wadm set-http-prop and set the io-timeout property. The default value is 30 seconds. By setting it
to less than the default you can free up threads sooner, but you might also disconnect users with
slower connections. Also, some sites perform heavyweight transactions that take minutes to
complete. Both of these factors add to the maximum simultaneous requests that are required. If
your site is processing many requests that take many seconds, you might need to increase the
number of maximum simultaneous requests.
Suitable maximum threads values range from 100-500, depending on the load. Maximum
Threads represents a hard limit for the maximum number of active threads that can run
simultaneously, which can become a bottleneck for performance. The default value is 128.
The thread pool minimum threads is the minimum number of threads the server initiates upon
startup. The default value is 16.
Note – When configuring Web Server to be used with the Solaris Network Cache and
Accelerator (SNCA), setting the maximum threads and the queue size to 0 provides better
performance. Because SNCA manages the client connections, it is not necessary to set these
parameters. These parameters can also be set to 0 with non-SNCA configurations, especially for
cases in which short latency responses with no keep-alives must be delivered. It is important to
note that the maximum threads and queue size must both be set to 0.
For information about using SNCA, see "Using the Solaris Network Cache and Accelerator
(SNCA)" on page 91.
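A minimal sketch of these SNCA-oriented settings using wadm (the configuration name config1
and the connection options are placeholders):

wadm set-thread-pool-prop --user=admin --config=config1 max-threads=0 queue-size=0
wadm deploy-config config1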
Tuning
You can increase your thread limits in the Admin Console by editing the Maximum Threads
field on the configuration's Performance tab ⇒ HTTP tab, under Thread Pool Settings. In the
command-line interface, use the wadm set-thread-pool-prop command's max-threads
property. The default is 128.
File Cache Information (Static Content)
The cache information section provides statistics on how your file cache is being used. The file
cache caches static content so that the server handles requests for static content quickly. The file
cache contains information about files and static file content. The file cache also caches
information that is used to speed up processing of server-parsed HTML. For servlets and JSPs,
other kinds of caching are used.
For sites with scheduled updates to content, consider shutting down the cache while the content
is being updated, and starting it again after the update is complete. Although performance slows
down, the server operates normally when the cache is off.
For performance reasons, Web Server caches as follows:
■ For small files, it caches the content in memory (heap).
■ For large files, it caches the open file descriptors (to avoid opening and closing files).
The following is an example of how the cache statistics are displayed in perfdump:
CacheInfo:
------------------
File Cache Enabled         yes
File Cache Entries         141/1024
File Cache Hit Ratio       652/664 ( 98.19%)
Maximum Age                30
Accelerator Entries        120/1024
Acceleratable Requests     281/328 ( 85.67%)
Acceleratable Responses    131/144 ( 90.97%)
Accelerator Hit Ratio      247/281 ( 87.90%)
The following table shows the file cache statistics as displayed in the Admin Console:
TABLE 2–4 File Cache Statistics
Total Cache Hits                              46
Total Cache Misses                            52
Total Cache Content Hits                      0
Number of File Lookup Failures                9
Number of File Information Lookups            37
Number of File Information Lookup Failures    50
Number of Entries                             12
Maximum Cache Size                            1024
Number of Open File Entries                   0
Number of Maximum Open Files Allowed          1024
Heap Size                                     36064
Maximum Heap Cache Size                       10735636
Size of Memory Mapped File Content            0
Maximum Memory Mapped File Size               0
Maximum Age of Entries                        30
Accelerator Entries
The number of files that have been cached in the accelerator cache.
Tuning
You can increase the maximum number of accelerator cache entries by increasing the number
of file cache entries as described in "File Cache Entries" on page 62. Note that this number will
typically be smaller than the File Cache Entries number because the accelerator cache only
caches information about files and not directories. If the number is significantly lower than the
File Cache Entries number, you can improve the accelerator cache utilization by following the
tuning information described in "Acceleratable Requests" on page 61 and "Acceleratable
Responses" on page 61.
Acceleratable Requests
The number of client requests that were eligible for processing by the accelerator cache. Only
simple GET requests are processed by the accelerator cache. The accelerator cache does not
process requests that explicitly disable caching, for example, requests sent when a user clicks
Reload in the browser and requests that include a query string, that is, requests for URLs that
include a ? character.
Tuning
To maximize the number of acceleratable requests, structure your web sites to use static files
when possible and avoid using query strings in requests for static files.
Acceleratable Responses
The number of times the response to an acceleratable request was eligible for addition to the
accelerator cache.
Tuning
When the server serves a static file from the file cache, the accelerator cache may be able to cache
the response for faster processing on subsequent requests. To maximize performance, you
should maximize the number of responses that are acceleratable. In the default configuration,
all responses to requests for static files can be cached in the accelerator cache. The following
configuration changes may prevent a response from being acceleratable:
■ ACLs that deny read access
■ Additional directives in the default object of the obj.conf file, including third-party plug-ins
■ Using <Client> or <If> in the default object of the obj.conf file
■ Custom access log formats
■ Java Servlet filters
To maximize the number of responses that are acceleratable, avoid such configurations.
Accelerator Hit Ratio
The number of times the response for an acceleratable request was found in the accelerator
cache.
Tuning
Higher hit ratios result in better performance. To maximize the hit ratio, see the tuning
information for "Acceleratable Responses" on page 61.
File Cache Enabled
If the cache is disabled, the rest of this section is not displayed in perfdump. In the Admin
Console, the File Cache Statistics section shows zeros for the values.
Tuning
The cache is enabled by default. You can disable it in the Admin Console by deselecting the File
Cache Enabled box on the configuration's Performance tab ⇒ Cache sub tab, under File Cache.
To disable it using the command-line interface, use wadm set-file-cache-prop and set the
enabled property to false.
File Cache Entries
The number of current cache entries and the maximum number of cache entries are both
displayed in perfdump. In the Admin Console, they are called the Number of Entries and the
Maximum Cache Size. A single cache entry represents a single URI.
Tuning
You can set the maximum number of cached entries in the Admin Console in the Maximum
Entries field on the configuration's Performance tab ⇒ Cache tab, under File Cache. In the
command-line interface, use wadm set-file-cache-prop and set the max-entries property.
The default is 1024. The range of values is 1-1048576.
File Cache Hit Ratio (Cache Hits / Cache Lookups)
The hit ratio available through perfdump gives you the number of file cache hits compared to
cache lookups. Numbers approaching 100% indicate that the file cache is operating effectively,
while numbers approaching 0% could indicate that the file cache is not serving many requests.
To figure this number yourself using the statistics provided through the Admin Console, divide
the Total Cache Hits by the sum of the Total Cache Hits and the Total Cache Misses.
This setting is not tunable.
Maximum Age
This field displays the maximum age of a valid cache entry. The parameter controls how long
cached information is used after a file has been cached. An entry older than the maximum age is
replaced by a new entry for the same file.
Tuning
Set the maximum age based on whether the content is updated (existing files are modified) on a
regular schedule. For example, if content is updated four times a day at regular intervals, you
could set the maximum age to 21600 seconds (6 hours). Otherwise, consider setting the
maximum age to the longest time you are willing to serve the previous version of a content file
after the file has been modified. If your web site's content changes infrequently, you might want
to increase this value for improved performance.
Set the maximum age in the Admin Console in the Maximum Age field on the configuration's
Performance tab ⇒ Cache tab, under File Cache. In the command-line interface, use wadm set-file-cache-prop and change the max-age property. The default value is 30 seconds. The
range of values is 0.001-3600.
Maximum Heap Cache Size
The optimal cache heap size depends upon how much system memory is free. A larger heap size
means that the Web Server can cache more content and therefore get a better hit ratio.
However, the heap size should not be so large that the operating system starts paging cached
files.
Tuning
Set the maximum heap size in the Admin Console in the Maximum Heap Space Size field on the
configuration's Performance tab ⇒ Cache tab, under File Cache. In the command-line
interface, use wadm set-file-cache-prop and change the max-heap-space property. The
default value is 10485760 bytes. The range of values is 0-9223372036854775807. In a 32-bit
Web Server, because the process has only 4 GB of address space in total, the value should be
well under 4 GB.
Using the nocache Parameter
You can use the parameter nocache for the Service function send-file to specify that files in a
certain directory should not be cached. Make this change by editing obj.conf. For example, if
you have a set of files that changes too rapidly for caching to be useful, you can put them into a
directory and instruct the server not to cache files in that directory by editing obj.conf.
<Object name="default">
# Map requests with the URL prefix /myurl to /export/mydir and route them to the myname object.
NameTrans fn="pfx2dir" from="/myurl" dir="/export/mydir" name="myname"
...
</Object>
<Object name="myname">
Service method=(GET|HEAD) type=*~magnus-internal/* fn=send-file nocache=""
</Object>
In the above example, the server does not cache static files from /export/mydir/ when
requested by the URL prefix /myurl. For more information on editing obj.conf, see Sun Java System Web Server 7.0 Update 1 Administrator's Configuration File Reference.
File Cache Dynamic Control and Monitoring
You can add an object to obj.conf to dynamically monitor and control the file cache while the
server is running.
<Object name="default">
# Map the URI /nsfc to the nsfc object defined below.
NameTrans fn="assign-name" from="/nsfc" name="nsfc"
</Object>
<Object name="nsfc">
Service fn="service-nsfc-dump"
</Object>
This enables the file cache control and monitoring function (nsfc-dump) to be accessed through
the URI /nsfc. To use a different URI, change the from parameter in the NameTrans directive.
The following is an example of the information you receive when you access the URI:
Sun Java System File Cache Status (pid 3602)
The file cache is enabled.
Cache resource utilization
Number of cached file entries = 174968 (152 bytes each, 26595136 total bytes)
Heap space used for cache = 1882632616/1882632760 bytes
Mapped memory used for medium file contents = 0/1 bytes
Number of cache lookup hits = 47615653/48089040 ( 99.02 %)
Number of hits/misses on cached file info = 23720344/324195
Number of hits/misses on cached file content = 16247503/174985
Number of outdated cache entries deleted = 0
Number of cache entry replacements = 0
Total number of cache entries deleted = 0
Parameter settings
ReplaceFiles: false
ReplaceInterval: 1 milliseconds
HitOrder: false
CacheFileContent: true
TransmitFile: false
MaxAge: 3600 seconds
MaxFiles: 600000 files
SmallFileSizeLimit: 500000 bytes
MediumFileSizeLimit: 1000001 bytes
BufferSize: 8192 bytes
CopyFiles: false
Directory for temporary files: /tmp
Hash table size: 1200007 buckets
You can include a query string when you access the URI. The following values are recognized:
■ ?list: Lists the files in the cache.
■ ?refresh=n: Causes the client to reload the page every n seconds.
■ ?restart: Causes the cache to be shut down and then restarted.
■ ?start: Starts the cache.
■ ?stop: Shuts down the cache.
If you choose the ?list option, the file listing includes the file name, a set of flags, the current
number of references to the cache entry, the size of the file, and an internal file ID value. The
flags are as follows:
■ C: File contents are cached.
■ D: Cache entry is marked for delete.
■ E: PR_GetFileInfo() returned an error for this file.
■ I: File information (size, modify date, and so on) is cached.
■ M: File contents are mapped into virtual memory.
■ O: File descriptor is cached (when TransmitFile is set to true).
■ P: File has associated private data (should appear on shtml files).
■ T: Cache entry has a temporary file.
■ W: Cache entry is locked for write access.
Thread Pool Information
If you are using the default settings, threads from the default thread pool process the request.
However, you can also create custom thread pools and use them to run custom NSAPI
functions. By default, Web Server creates one additional pool, named NativePool. In most
cases, the native thread pool is only needed on the Windows platform. For more information on
thread pools, see "Understanding Threads, Processes, and Connections" on page 40.
Native Thread Pool
The following example shows native thread pool information as it appears in perfdump:
Native pools:
----------------------------
NativePool:
Idle/Peak/Limit               1/1/128
Work Queue Length/Peak/Limit  0/0/0
my-custom-pool:
Idle/Peak/Limit               1/1/128
Work Queue Length/Peak/Limit  0/0/0
If you have defined additional custom thread pools, they are shown under the Native Pools
heading in perfdump.
The following table shows the thread pool statistics as they appear in the Admin Console. If you
have not defined additional thread pools, only the NativePool is shown:
TABLE 2–5 Thread Pools Statistics
Name                    NativePool
Idle Threads            1
Threads                 1
Requests Queued         0
Peak Requests Queued    0
Idle / Peak / Limit
Idle, listed as Idle Threads in the Admin Console, indicates the number of threads that are
currently idle. Peak indicates the peak number of threads in the pool. Limit, listed as Threads in
the Admin Console, indicates the maximum number of native threads allowed in the thread
pool, and for NativePool is determined by the setting of NativePoolMaxThreads in the
magnus.conf file.
Tuning
You can modify the maximum threads for NativePool by editing the NativePoolMaxThreads
parameter in magnus.conf. For more information, see "NativePoolMaxThreads Directive" on
page 68.
Work Queue Length / Peak / Limit
These numbers refer to a queue of server requests that are waiting for the use of a native thread
from the pool. The Work Queue Length is the current number of requests waiting for a native
thread, which is represented as Requests Queued in the Admin Console.
Peak (Peak Requests Queued in the Admin Console) is the highest number of requests that were
ever queued up simultaneously for the use of a native thread since the server was started. This
value can be viewed as the maximum concurrency for requests requiring a native thread.
Limit is the maximum number of requests that can be queued at one time to wait for a native
thread, and is determined by the setting of NativePoolQueueSize.
Tuning
You can modify the queue size for NativePool by editing the NativePoolQueueSize directive in
magnus.conf. For more information, see
“NativePoolQueueSize Directive” on page 67.
NativePoolStackSize Directive
The NativePoolStackSize determines the stack size in bytes of each thread in the native
(kernel) thread pool.
Tuning
You can modify the NativePoolStackSize by editing the NativePoolStackSize directive in
magnus.conf.
NativePoolQueueSize Directive
The NativePoolQueueSize determines the number of threads that can wait in the queue for the
thread pool. If all threads in the pool are busy, then the next request-handling thread that needs
to use a thread in the native pool must wait in the queue. If the queue is full, the next
request-handling thread that tries to get in the queue is rejected, with the result that it returns a
busy response to the client. It is then free to handle another incoming request instead of being
tied up waiting in the queue.
Setting the NativePoolQueueSize lower than the maximum threads value causes the server to
execute a busy function instead of the intended NSAPI function whenever the number of
requests waiting for service by pool threads exceeds this value. By default, this busy function
returns a "503 Service Unavailable" response and logs a message, depending on your log level setting. Setting
the NativePoolQueueSize higher than the maximum threads causes the server to reject
connections before a busy function can execute.
This value represents the maximum number of concurrent requests for service that require a
native thread. If your system is unable to fulfill requests due to load, letting more requests queue
up increases the latency for requests, and could result in all available request threads waiting for
a native thread. In general, set this value to be high enough to avoid rejecting requests by
anticipating the maximum number of concurrent users who would execute requests requiring a
native thread.
The difference between this value and the maximum threads is the number of requests reserved
for non-native thread requests, such as static HTML and image files. Keeping a reserve and
rejecting requests ensures that your server continues to fill requests for static files, which
prevents it from becoming unresponsive during periods of very heavy dynamic content load. If
your server consistently rejects connections, this value is either set too low, or your server
hardware is overloaded.
Tuning
You can modify the NativePoolQueueSize by editing the NativePoolQueueSize directive in
magnus.conf.
NativePoolMaxThreads Directive
NativePoolMaxThreads determines the maximum number of threads in the native (kernel)
thread pool.
A higher value allows more requests to execute concurrently, but has more overhead due to
context switching, so bigger is not always better. Typically, you do not need to increase this
number, but if you are not saturating your CPU and you are seeing requests queue up, then you
should increase this number.
Tuning
You can modify the NativePoolMaxThreads by editing the NativePoolMaxThreads parameter
in magnus.conf.
NativePoolMinThreads Directive
Determines the minimum number of threads in the native (kernel) thread pool.
Tuning
You can modify the NativePoolMinThreads by editing the NativePoolMinThreads parameter
in magnus.conf.
DNS Cache Information
The DNS cache caches IP addresses and DNS names. Web Server uses DNS caching for logging
and for access control by IP address. DNS cache is enabled by default. The following example
shows DNS cache information as displayed in perfdump:
DNSCacheInfo:
------------------
enabled             yes
CacheEntries        4/1024
HitRatio            62854802/62862912 ( 99.99%)

AsyncDNS Data:
------------------
enabled             yes
NameLookups         0
AddrLookups         0
LookupsInProgress   0
The following example shows the DNS Cache information as displayed in the Admin Console:
TABLE 2–6 DNS Cache Statistics
Total Cache Hits                                   62854802
Total Cache Misses                                 6110
Number of Asynchronous Lookups                     0
Lookups in Progress                                4
Asynchronous Lookups Enabled                       1
Number of Asynchronous Address Lookups Performed   0
Enabled
If the DNS cache is disabled, the rest of this section is not displayed in perfdump. In the Admin
Console, the page displays zeros.
Tuning
By default, the DNS cache is on. You can enable or disable DNS caching in the Admin Console
on the configuration's Performance tab ⇒ DNS sub tab, under DNS Cache Settings, by
selecting or deselecting the DNS Cache Enabled box. To enable or disable it using the
command-line interface, use wadm set-dns-cache-prop and set the enabled property.
Cache Entries (Current Cache Entries / Maximum Cache Entries)
This section in perfdump shows the number of current cache entries and the maximum number
of cache entries. In the Admin Console the current cache entries are shown as Total Cache Hits.
A single cache entry represents a single IP address or DNS name lookup. The cache should be as
large as the maximum number of clients that access your web site concurrently. Note that
setting the cache size too high wastes memory and degrades performance.
Tuning
You can set the maximum size of the DNS cache in the Admin Console in the Maximum Cache
Size field on the configuration's Performance tab ⇒ DNS sub tab, under DNS Cache Settings.
To set it using the command-line interface, use wadm set-dns-cache-prop and set the
max-entries property. The default cache size is 1024. The value range is 2-32768.
Hit Ratio (Cache Hits / Cache Lookups)
The hit ratio in perfdump displays the number of cache hits compared to the number of cache
lookups. You can compute this number using the statistics in the Admin Console by dividing
the Total Cache Hits by the sum of the Total Cache Hits and the Total Cache Misses.
This setting is not tunable.
Async DNS Enabled/Disabled
Async DNS enabled/disabled displays whether the server uses its own asynchronous DNS
resolver instead of the operating system's synchronous resolver. By default, async DNS is
disabled. If it is disabled, this section does not appear in perfdump. To enable it using the Admin
Console, on the configuration's Performance tab ⇒ DNS tab, under DNS Lookup Settings,
select Asynchronous DNS. To enable it using the command-line interface, use wadm set-dns-prop and set the async property to true.
Java Virtual Machine (JVM) Information
JVM statistics are displayed through the Admin Console, the CLI, and stats-xml only. They
are not shown in perfdump.
The following table shows an example of the JVM statistics displayed in the Admin Console:
TABLE 2–7 Java Virtual Machine (JVM) Statistics
Virtual Machine Name                             Java HotSpot(TM) Server VM
Virtual Machine Vendor                           Sun Microsystems Inc.
Virtual Machine Version                          1.5.0_06-b05
Heap Memory Size                                 5884856
Elapsed Garbage Collection Time (milliseconds)   51
Present Number of Classes Loaded                 1795
Total Number of Classes Loaded                   1795
Most of these statistics are not tunable. They provide information about the JVM's operation.
Another source of tuning information on the JVM is the package java.lang.management,
which provides the management interface for monitoring and management of the JVM. For
more information, see the API documentation for the java.lang.management package.
As with all Java programs, the performance of the web applications in the Web Server depends
on the heap management performed by the JVM. There is a trade-off between pause
times and throughput. A good place to start is the performance documentation for
the Java HotSpot virtual machine, in particular "Tuning Garbage Collection with the 5.0 Java
Virtual Machine" and "Ergonomics in the 5.0 Java Virtual Machine"
(http://java.sun.com/docs/hotspot/gc5.0/ergo5.html).
JVM options can be specified in the Admin Console on the configuration's Java tab ⇒ JVM
Settings sub tab. In the CLI, use the wadm commands set-jvm-prop and
set-jvm-profiler-prop.
Web Application Information
Web application statistics are displayed through the Admin Console, the wadm
get-config-stats command, and stats-xml only. They are not shown in perfdump.
▼ To Access Web Application Statistics From the Admin Console
1. From the Common Tasks page, choose the Monitoring tab.
2. Click the configuration name to view web application statistics for the configuration. To view
web application statistics for the instance, click the Instance sub tab and the instance name.
3. On the Monitoring Statistics page, click Virtual Server Statistics.
4. Click the virtual server name.
5. On the Virtual Server Monitoring Statistics page, click Web Applications.
6. Select the web application for which to view statistics from the Web Application pull-down
menu.
Web Application Statistics
The following table shows an example of the Web Application statistics displayed in the Admin
Console:
TABLE 2–8 Web Application Statistics
Number of JSPs Loaded                                            1
Number of JSPs Reloaded                                          1
Total Number of Sessions Serviced                                2
Number of Sessions Active                                        2
Peak Number of Active Sessions                                   2
Number of Sessions Rejected                                      0
Number of Sessions Expired                                       0
Average Time (seconds) that expired sessions had been alive      0
Longest Time (seconds) for which an expired session was alive    0
For more information on tuning, see “Tuning Java Web Application Performance” on page 78.
Also see Sun Java System Web Server 7.0 Update 1 Developer’s Guide to Java Web Applications.
JDBC Resource Information
A JDBC resource is a named group of JDBC connections to a database. A JDBC resource defines
the properties used to create a connection pool. Each JDBC resource uses a JDBC driver to
establish a connection to a physical database when the server is started. A pool of connections is
created when the first request for connection is made on the pool after you start Web Server.
A JDBC-based application or resource draws a connection from the pool, uses it, and when no
longer needed, returns it to the connection pool by closing the connection. If two or more JDBC
resources point to the same pool denition, they use the same pool of connections at run time.
The use of connection pooling improves application performance by doing the following:
■ Creating connections in advance. The cost of establishing connections is moved outside of
the code that is critical for performance.
■ Reusing connections. The number of times connections are created is significantly lowered.
■ Controlling the amount of resources a single application can use at any moment.
JDBC resources can be created and edited using the Admin Console's Java tab ⇒ Resources sub
tab for the configuration. You can also use the wadm create-jdbc-resource and
set-jdbc-resource-prop commands. For more information, see the Sun Java System Web
Server 7.0 Update 1 Administrator’s Guide.
Note – Each defined pool is instantiated during Web Server startup. However, the connections
are only created the first time the pool is accessed. You should jump-start a pool before putting
it under heavy load.
JDBC resource statistics are available through the Admin Console, CLI, and stats.xml only.
They are not shown in perfdump. Some of the monitoring data is unavailable through the
Admin Console and can only be viewed through the CLI using wadm get-config-stats and
through the stats.xml output.
A pool is created on demand, that is, it is created the first time it is used. The monitoring
statistics are not displayed until the first time the pool is used.
JDBC Resource Statistics Available Through the Admin Console
The following table shows an example of the JDBC resource statistics displayed through the
Admin Console:
TABLE 2–9 JDBC Resource Statistics
Connections            32
Free Connections       0
Leased Connections     32
Average Queue Time     1480.00
Queued Connections     40
Connection Timeout     100
To change the settings for a JDBC resource through the Admin Console, for the configuration,
choose the Java tab ⇒ Resources sub tab. Select the JDBC resource. The settings are available on
the Edit JDBC Resource page. To change the JDBC resource through the
command-line-interface, use wadm set-jdbc-resource-prop.
Connections
This number shows the current JDBC connections, including both free and busy connections.
Tuning – This setting cannot be tuned, but it is a good indicator of recent pool activity. If the
number of connections is consistently higher than the minimum number of connections,
consider increasing the minimum number of connections to be closer to the number of current
JDBC connections. To change the minimum connections for a JDBC resource through the
Admin Console, on the Edit JDBC Resources page, edit the Minimum Connections setting. To
change the JDBC resource's minimum connections through the command-line-interface, use
wadm set-jdbc-resource-prop and change the min-connections property.
Free Connections
This number shows the current number of free connections in the pool. All free connections
over the minimum pool size are closed if they are idle for more than the maximum idle timeout.
The free connections are not tunable.
Leased Connections
This number shows the current number of connections in use.
Tuning – If the number of leased connections is consistently lower than the minimum
connections, consider reducing the minimum connections for the JDBC resource. If the number
of leased connections is consistently higher than the minimum connections, consider increasing
the minimum connections. If the number of leased connections is consistently at the JDBC
resource's maximum number of connections, consider increasing the maximum number of connections.
The upper limit for the number of leased connections is the number of maximum connections.
To change the minimum or maximum connections for a JDBC resource through the Admin
Console, on the Edit JDBC Resource page, edit the Minimum Connections or Maximum
Connections fields. To change the JDBC resource's minimum or maximum connections
through the command-line-interface, use wadm set-jdbc-resource-prop and change the
min-connections or max-connections properties.
Queued Connections
This number shows the current number of requests for connections that are waiting to receive a
connection from the JDBC pool. Connection requests are queued if the current number of
leased connections has reached the maximum connections.
Tuning – If this number is consistently greater than zero, consider increasing the JDBC
resource's maximum connections. To change the maximum connections for a JDBC resource
through the Admin Console, on the Edit JDBC Resource page, edit the Maximum Connections
field. To change the JDBC resource's maximum connections through the
command-line-interface, use wadm set-jdbc-resource-prop and change the
max-connections property.
JDBC Resource Statistics Not Available in the Admin Console
Some JDBC statistics are available through the wadm get-config-stats command (using the
--node option), through stats-xml, and through SNMP but not through the Admin Console.
maxConnections – The configured maximum size of the pool. Use as a reference for other
statistics. To change the maximum connections for a JDBC resource through the Admin
Console, on the Edit JDBC Resource page, edit the Maximum Connections field. To change the
JDBC resource's maximum connections through the command-line-interface, use wadm
set-jdbc-resource-prop and change the max-connections property.
peakConnections – The highest number of connections that have been leased concurrently
during the history of the pool. This number is a good indication of the upper limit on pool
usage. It is limited by the maximum connections setting.
countTotalLeasedConnections – The total number of times a connection has been handed out
by the pool. Indicates total pool activity. Not tunable.
countTotalFailedValidationConnections – If connection validation is enabled, shows the
number of times a connection has been detected as invalid by the pool. If this number is
relatively high, it could signal database or network problems. Not tunable.
peakQueued – The highest number of connection requests that have been queued
simultaneously at any time during the lifetime of the pool. Not tunable.
millisecondsPeakWait – The maximum time in milliseconds that any connection request has
been in the wait queue. A high number is an indication of high pool activity. The upper limit is
the JDBC resource setting wait timeout.
countConnectionIdleTimeouts – The number of free connections that have been closed by the
pool because they exceeded the configured JDBC idle timeout. To change the idle timeout for a
JDBC resource through the Admin Console, on the Edit JDBC Resource page, edit the Idle
Timeout field. To change the JDBC resource's idle timeout through the
command-line-interface, use wadm set-jdbc-resource-prop and change the idle-timeout
property.
JDBC Resource Connection Settings
Depending on your application’s database activity, you might need to size JDBC resource
connection pool settings. Attributes of a JDBC resource which affect performance are listed
below, along with performance considerations when setting values.
■
Minimum connections
The size the pool tends to keep during the life of the server instance. Also the initial size of
the pool. Defaults to 8. This number should be as close as possible to the expected average
size of the pool. Use a high number for a pool that is expected to be under heavy load, to
minimize creation of connections during the life of the application and minimize pool
resizing. Use a lower number if the pool load is expected to be small, to minimize resource
consumption.
■
Maximum connections
The maximum number of connections that a pool can have at any given time. Defaults to 32.
Use this setting to enforce a limit in the amount of connection resources that a pool or
application can have. This limit is also beneficial to avoid application failures due to
excessive resource consumption.
■
Idle timeout
The maximum amount in seconds that a connection is ensured to remain unused in the
pool. After the idle timeout, connections are automatically closed. If necessary, new
connections are created up to the minimum number of connections to replace the closed
connection. Note that this setting does not control connection timeouts enforced at the
database server side. Defaults to 60 seconds.
Setting this attribute to –1 prevents the connections from being closed. This setting is good
for pools that expect continuous high demand. Otherwise, keep this timeout shorter than
the database server-side timeout (if such timeouts are configured on the specific vendor
database), to prevent accumulation of unusable connections in the pool.
■
Wait timeout
The amount of time in seconds that a request waits for a connection in the queue before
timing out. After this timeout, the user sees an error. Defaults to 60.
Setting this attribute to 0 causes a request for a connection to wait indefinitely. This setting
could also improve performance by keeping the pool from having to account for connection
timers.
■
Validation method
The method used by the pool to determine the health of connections in the pool. Defaults
to off.
If a validation method is used, the pool executes a sanity check on a connection before
leasing it to an application. The effectiveness and performance impact depend on the method
selected:
■
meta-data is less expensive than table in terms of performance, but usually less
effective as most drivers cache the result and do not use the connection, providing false
results.
■
table is almost always effective, as it forces the driver to perform an SQL call to the
database, but it is also the most costly.
■
auto-commit can provide the best balance of effectiveness and performance cost, but a
number of drivers also cache the results of this method.
■
Validation Table Name
The user-defined table to use for validation when the validation method is table. Defaults
to test.
If this method is used, the table used should be dedicated only to validation, and the number
of rows in the table should be kept to a minimum.
■
Fail All Connections
Indicates whether all connections in the pool are re-created when one is found to be invalid,
or only the invalid one. Only applicable if you have selected a connection validation method.
Disabled by default.
If enabled, all of the re-creation is done in one step, and the thread requesting the
connection is heavily affected. If disabled, the load of re-creating connections is distributed
between the threads requesting each connection.
■
Transaction Isolation Level
Specifies the Transaction Isolation Level on the pooled database connections.
By default, the default isolation level of the connection is left intact. Setting it to any value
does incur the small performance penalty caused by the method call.
■
Guarantee Isolation
Only applicable if a transaction isolation level is specified. Defaults to disabled.
Leaving this setting disabled causes the isolation level to be set only when the connection is
created. Enabling sets the level every time the connection is leased to an application. In most
cases, leave this setting disabled.
Tuning the ACL User Cache
The ACL user cache is on by default. Because of the default size of the cache (200 entries), the
ACL user cache can be a bottleneck, or can simply not serve its purpose on a site with heavy
traffic. On a busy site, more than 200 users can hit ACL-protected resources in less time than the
lifetime of the cache entries. When this situation occurs, Web Server must query the LDAP
server more often to validate users, which impacts performance.
This bottleneck can be avoided by increasing the maximum users of the ACL cache on the
configuration's Performance tab ⇒ Cache sub tab. You can also set the number of users by
setting the max-users property using the command wadm set-acl-cache-prop. Note that
increasing the cache size uses more resources; the larger you make the cache, the more RAM
you'll need to hold it.
There can also be a potential (but much harder to hit) bottleneck with the number of groups
stored in a cache entry (four by default). If a user belongs to five groups and hits five ACLs that
check for these different groups within the ACL cache lifetime, an additional cache entry is
created to hold the additional group entry. When there are two cache entries, the entry with the
original group information is ignored.
While it would be extremely unusual to hit this possible performance problem, the number of
groups cached in a single ACL cache entry can be tuned with Maximum Groups setting on the
configuration's Performance tab ⇒ Cache sub tab. Or you can use the max-groups-per-user
property of the wadm set-acl-cache-prop command.
The maximum age setting of the ACL cache determines the number of seconds before the cache
entries expire. Each time an entry in the cache is referenced, its age is calculated and checked
against the maximum age setting. The entry is not used if its age is greater than or equal to the
maximum age. The default value is 120 seconds. If your LDAP is not likely to change often, use a
large number for the maximum age. However, if your LDAP entries change often, use a smaller
value. For example, when the value is 120 seconds, the Web Server might be out of sync with the
LDAP server for as long as two minutes. Depending on your environment, that might or might
not be a problem.
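A rough wadm sketch of this tuning follows. The max-users and max-groups-per-user property names come from the text above; the max-age property name, the option syntax, and the values are assumptions to check against the wadm help output.

# Raise the ACL cache limits and lengthen the entry lifetime (option syntax assumed).
wadm set-acl-cache-prop --user=admin --config=myconfig max-users=400 max-groups-per-user=8 max-age=300
wadm deploy-config --user=admin myconfig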
Tuning Java Web Application Performance
This section contains information to help you improve the performance of your Java Web
Applications. This section includes the following topics:
■
“Using Precompiled JSPs” on page 78
■
“Using Servlet/JSP Caching” on page 79
■
“Configuring the Java Security Manager” on page 79
■
“Configuring Class Reloading” on page 79
■
“Avoiding Directories in the Classpath” on page 80
■
“Configuring the Web Application’s Session Settings” on page 80
In addition, see the following sections for other tuning information related to the Java Web
Applications:
■
“Java Virtual Machine (JVM) Information” on page 70
■
“JDBC Resource Information” on page 72
Using Precompiled JSPs
Compiling JSPs is a resource-intensive and relatively time-consuming process. By default, the
Web Server periodically checks to see if your JSPs have been modied and dynamically reloads
them; this allows you to deploy modications without restarting the server. The
reload-interval property of the jsp-config element in sun-web.xml controls how often the
server checks JSPs for modications. However, there is a small performance penalty for that
checking.
When the server detects a change in a .jsp file, only that JSP is recompiled and reloaded; the
entire web application is not reloaded.
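As a sketch, lengthening the check interval in sun-web.xml might look like the following. The jsp-config element and reload-interval property are the ones named above; the surrounding property notation and the value shown are assumptions to verify against the Developer's Guide.

<sun-web-app>
  <jsp-config>
    <!-- Check JSPs for modification at most every 300 seconds -->
    <property name="reload-interval" value="300"/>
  </jsp-config>
</sun-web-app>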
If your JSPs don't change, you can improve performance by precompiling your JSPs.
When adding a web application, either through the Admin Console or CLI, choose the
precompile JSPs option. Enabling precompiled JSPs allows all the JSPs present in the web
application to be pre-compiled, and their corresponding servlet classes are bundled in the web
application's WEB-INF/lib or WEB-INF/classes directory. When a JSP is accessed, it is not
compiled and instead, its precompiled servlet is used. For more information on JSPs, see Sun
Java System Web Server 7.0 Update 1 Developer’s Guide to Java Web Applications. Also see
“Configuring Class Reloading” on page 79.
Using Servlet/JSP Caching
If you spend a lot of time re-running the same servlet/JSP, you can cache its results and return
results out of the cache the next time it is run. For example, this is useful for common queries
that all visitors to your site run: you want the results of the query to be dynamic because it might
change daily, but you don't need to run the logic for every user.
To enable caching, you configure the caching parameters in the sun-web.xml file of your
application. For more details, see “Caching Servlet Results” in Sun Java System Web Server 7.0
Update 1 Developer’s Guide to Java Web Applications.
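A minimal sketch of what such a configuration can look like, assuming the cache and cache-mapping elements described in that guide; the exact element and attribute names should be verified there, and the servlet name is a placeholder.

<sun-web-app>
  <cache max-entries="4096" timeout-in-seconds="60" enabled="true">
    <cache-mapping>
      <!-- Cache the output of a hypothetical servlet that runs the common query -->
      <servlet-name>DailyQueryServlet</servlet-name>
    </cache-mapping>
  </cache>
</sun-web-app>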
Configuring the Java Security Manager
Web Server supports the Java Security Manager. The main drawback of running with the
Security Manager is that it negatively impacts performance. The Java Security Manager is
disabled by default when you install the product. Running without the Security Manager might
improve performance significantly for some types of applications. Based on your application
and deployment needs, you should evaluate whether to run with or without the Security
Manager. For more information, see Sun Java System Web Server 7.0 Update 1 Developer’s Guide
to Java Web Applications.
Configuring Class Reloading
The dynamic reload interval of the servlet container and the dynamic-reload-interval of the
class-loader element in sun-web.xml control the frequency at which the server checks for
changes in servlet classes. When dynamic reloading is enabled and the server detects that a
.class file has changed, the entire web application is reloaded.
Set the dynamic reload interval on the configuration's Java tab ⇒ Servlet Container sub tab, or
using the wadm set-servlet-container-props command. In a production environment
where changes are made in a scheduled manner, set this value to 0 to prevent the server from
constantly checking for updates. The default value is 0 (that is, class reloading is disabled). For
more information about elements in sun-web.xml, see Sun Java System Web Server 7.0 Update 1
Developer’s Guide to Java Web Applications.
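For example, the class-loader element named above can carry the interval directly; the value shown is illustrative.

<sun-web-app>
  <!-- 0 disables dynamic class reloading; a positive value is the check interval in seconds -->
  <class-loader dynamic-reload-interval="0"/>
</sun-web-app>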
Avoiding Directories in the Classpath
For certain applications (especially if the Java Security Manager is enabled) you can improve
performance by ensuring that there are no unneeded directories in the classpath. To do so,
change the Server Class Path, Class Path Prefix, and Class Path Suffix fields on the
configuration's Java tab ⇒ General sub tab for the configuration or use the command wadm
set-jvm-prop. Also, package the web application's .class files in a .jar archive in
WEB-INF/lib instead of packaging the .class files as is in WEB-INF/classes, and ensure that
the .war archive does not contain a WEB-INF/classes directory.
Configuring the Web Application’s Session Settings
If you have relatively short-lived sessions, try decreasing the session timeout by configuring the
value of the timeOutSeconds property under the session-properties element in
sun-web.xml from the default value of 10 minutes.
If you have relatively long-lived sessions, you can try decreasing the frequency at which the
session reaper runs by increasing the value of the reapIntervalSeconds property from the
default value of once every minute.
For more information about these settings, and about session managers, see Sun Java System
Web Server 7.0 Update 1 Developer’s Guide to Java Web Applications.
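A rough sun-web.xml sketch of both changes follows. The placement of timeOutSeconds under session-properties is stated above; placing reapIntervalSeconds under manager-properties is an assumption to verify in the Developer's Guide, and the values are illustrative.

<sun-web-app>
  <session-config>
    <session-manager>
      <manager-properties>
        <!-- Run the session reaper every 5 minutes instead of every minute -->
        <property name="reapIntervalSeconds" value="300"/>
      </manager-properties>
    </session-manager>
    <session-properties>
      <!-- Expire idle sessions after 5 minutes instead of the default 10 -->
      <property name="timeOutSeconds" value="300"/>
    </session-properties>
  </session-config>
</sun-web-app>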
In multi-process mode, when the persistence-type in sun-web.xml is configured to be either
s1ws60 or mmap, the session manager uses cross-process locks to ensure session data integrity.
These can be configured to improve performance as described below.
Note – For Java technology-enabled servers, multi-process mode is deprecated and included for
backward-compatibility only.
Tuning maxLocks (UNIX/Linux)
The implication of the number specified in the maxLocks property can be gauged by dividing
the value of maxSessions by maxLocks. For example, if maxSessions = 1000 and you set
maxLocks = 10, then approximately 100 sessions (1000/10) contend for the same lock.
Increasing maxLocks reduces the number of sessions that contend for the same lock and might
improve performance and reduce latency. However, increasing the number of locks also
increases the number of open file descriptors, and reduces the number of available descriptors
that would otherwise be assigned to incoming connection requests.
For more information about these settings, see Chapter 6, “Session Managers,” in Sun Java
System Web Server 7.0 Update 1 Developer’s Guide to Java Web Applications.
Tuning MMapSessionManager (UNIX/Linux)
The following example describes the effect on process size when configuring the
persistence-type="mmap" using the manager-properties properties. For more information,
see “MMap Session Manager (UNIX Only)” in Sun Java System Web Server 7.0 Update 1
Developer’s Guide to Java Web Applications.
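A manager-properties configuration consistent with the 1000 x 10 x 4096 figure cited below would look roughly like this; the property names follow the MMap session manager documentation and should be verified there.

<manager-properties>
  <property name="maxSessions" value="1000"/>
  <property name="maxValuesPerSession" value="10"/>
  <property name="maxValueSize" value="4096"/>
</manager-properties>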
This example would create a memory-mapped file of size 1000 x 10 x 4096 bytes, or ~40 MB. As
this is a memory mapped le, the process size will increase by 40 MB upon startup. The larger
the values you set for these parameters, the greater the increase in process size.
Tuning CGI Stub Processes (UNIX/Linux)
In Web Server, the CGI engine creates CGI stub processes as needed. On systems that serve a
large load and rely heavily on CGI-generated content, it is possible for the CGI stub processes to
consume all system resources. If this is happening on your server, the CGI stub processes can be
tuned to restrict how many new CGI stub processes can be spawned, their timeout value, and
the minimum number of CGI stub processes that run at any given moment.
Note – If you have an init-cgi function in the magnus.conf file and you are running in
multi-process mode, you must add LateInit = yes to the init-cgi line.
Tune the following settings to control CGI stubs. These settings are on the configuration's
Performance Tab ⇒ CGI sub tab.
■
Minimum Stubs Size: Controls the number of processes that are started by default. The first
CGI stub process is not started until a CGI program has been accessed. The default value is
0. If you have an init-cgi directive in the magnus.conf file, the minimum number of CGI
stub processes are spawned at startup.
■
Maximum Stub Size: Controls the maximum number of CGI stub processes the server can
spawn. This is the maximum concurrent CGI stub processes in execution, not the
maximum number of pending requests. The default value is 16 and should be adequate for
most systems. Setting this too high might actually reduce throughput.
■
CGI Stub Timeout: Causes the server to kill any CGI stub processes that have been idle for
the number of seconds set by this directive. Once the number of processes is at the
minimum stubs size, it does not kill any more processes. The default is 30.
■
CGI Timeout: Limits the maximum time in seconds that CGI processes can run. The default
is –1, which means there is no timeout.
Using find-pathinfo-forward
The find-pathinfo-forward parameter used in obj.conf can help improve your
performance. It is used with the PathCheck function find-pathinfo and the NameTrans
functions pfx2dir and assign-name. The find-pathinfo-forward parameter instructs the
server to search forward for PATH_INFO in the path after ntrans-base, instead of backward
from the end of the path in the server function find-pathinfo.
Note – The server ignores the find-pathinfo-forward parameter if the ntrans-base parameter
is not set in rq->vars when the server function find-pathinfo is called. By default,
ntrans-base is set.
This feature can improve performance for certain URLs by doing fewer stats in the server
function find-pathinfo. On Windows, you can also use this feature to prevent the server from
changing "\\" to "/" when using the PathCheck server function find-pathinfo.
For more information about obj.conf, see the Sun Java System Web Server 7.0 Update 1
Administrator’s Configuration File Reference.
Using nostat
You can specify the parameter nostat in the obj.conf NameTrans function assign-name to
prevent the server from doing a stat on a specified URL whenever possible. Use the following
syntax:
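A configuration consistent with the description that follows looks roughly like this; the nsfc object name and /nsfc path match the text below, and the Service function shown is illustrative.

<Object name="default">
NameTrans fn="assign-name" from="/nsfc" nostat="/nsfc" name="nsfc"
</Object>
<Object name="nsfc">
Service fn="service-nsfc-dump"
</Object>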
In the previous example, the server does not stat for path /ntrans-base/nsfc and
/ntrans-base/nsfc/* if ntrans-base is set. If ntrans-base is not set, the server does not stat for
URLs /nsfc and /nsfc/*. By default, ntrans-base is set. The example assumes the default
PathCheck server functions are used.
When you use nostat= virtual-path in the assign-name NameTrans, the server assumes that
stat on the specied virtual-path will fail. Therefore, use nostat only when the path of the
virtual-path does not exist on the system, for example, in NSAPI plug-in URLs. Using nostat
on those URLs improves performance by avoiding unnecessary stats on those URLs.
For more information about obj.conf, see the Sun Java System Web Server 7.0 Update 1
Administrator’s Configuration File Reference.
Using Busy Functions
The default busy function returns a "503 Service Unavailable" response and logs a message
depending upon the log level setting. You might want to modify this behavior for your
application. You can specify your own busy functions for any NSAPI function in the obj.conf
file by including a service function in the configuration file in this format:
busy="my-busy-function"
For example, you could use this sample service function:
Service fn="send-cgi" busy="service-toobusy"
This allows different responses if the server becomes too busy in the course of processing a
request that includes a number of types (such as Service, AddLog, and PathCheck). Note that
your busy function applies to all functions that require a native thread to execute when the
default thread type is non-native.
To use your own busy function instead of the default busy function for the entire server, you can
write an NSAPI init function that includes a func_insert call as shown below:
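A minimal sketch of such an init function follows. Only func_insert itself is taken from the text above; the function names, the registered name, and the registration details are illustrative and should be adapted to your plug-in.

#include "nsapi.h"

/* Custom busy function with the standard SAF signature (name is illustrative). */
NSAPI_PUBLIC int my_custom_busy_function(pblock *pb, Session *sn, Request *rq);

/* Init function (loaded through an Init directive in magnus.conf) that installs
   the custom busy function for the whole server. */
NSAPI_PUBLIC int my_init(pblock *pb, Session *sn, Request *rq)
{
    func_insert("service-toobusy", &my_custom_busy_function);
    return REQ_PROCEED;
}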
Busy functions are never executed on a pool thread, so you must be careful to avoid using
function calls that could cause the thread to block.
CHAPTER 3
Common Performance Problems
This chapter discusses common web site performance problems, and includes the following
topics:
■
“check-acl Server Application Functions” on page 85
■
“Low-Memory Situations” on page 86
■
“Too FewThreads” on page 86
■
“Cache Not Utilized” on page 87
■
“Keep-Alive Connections Flushed” on page 87
■
“Log File Modes” on page 88
Note – For platform-specific issues, see Chapter 4, “Platform-Specific Issues and Tips.”
check-acl Server Application Functions
For optimal performance of your server, use ACLs only when required.
The server is configured with an ACL file containing the default ACL allowing write access to
the server only to all, and an es-internal ACL for restricting write access for anybody. The
latter protects the manuals, icons, and search UI files in the server.
The default obj.conf file has NameTrans lines mapping the directories that need to be read-only
to the es-internal object, which in turn has a check-acl SAF for the es-internal ACL.
The default object also contains a check-acl SAF for the default ACL.
You can improve performance by removing the check-acl SAF from the default object for
URIs that are not protected by ACLs.
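For illustration, the line to look for in the default object has this general form; the surrounding directives in your obj.conf will differ.

<Object name="default">
# Removing the following line for URIs that no ACL protects
# avoids the per-request ACL evaluation:
PathCheck fn="check-acl" acl="default"
</Object>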
Low-Memory Situations
If Web Server must run in low-memory situations, reduce the thread limit to a bare minimum
by lowering the value of the Maximum Threads setting on the configuration's Performance Tab
⇒ HTTP sub tab. You can also set it with wadm set-thread-pool-prop command's
max-threads property.
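For example (option syntax assumed, value illustrative):

wadm set-thread-pool-prop --user=admin --config=myconfig max-threads=64
wadm deploy-config --user=admin myconfig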
Your web applications running under stress might sometimes result in the server running out
of Java VM runtime heap space, as can be seen by java.lang.OutOfMemoryError messages in
the server log file. There could be several reasons for this (such as excessive allocation of
objects), but such behavior could affect performance. To address this problem, profile the
application. Refer to the following HotSpot VM performance FAQ for tips on profiling
allocations (objects and their sizes) of your application:
http://java.sun.com/docs/hotspot/index.html
At times your application could be running out of maximum sessions (as evidenced by a “too
many active sessions” message in the server log file), which would result in the container
throwing exceptions, which in turn impacts application performance. Consideration of session
manager properties, session creation activity (note that JSPs have sessions enabled by default),
and session idle time is needed to address this situation.
Too Few Threads
The server does not allow the number of active threads to exceed the thread limit value. If the
number of simultaneous requests reaches that limit, the server stops servicing new connections
until the old connections are freed up. This can lead to increased response time.
In Web Server, the server’s default maximum threads setting is 128. If you want your server to
process more requests concurrently, you need to increase the maximum number of threads.
The symptom of a server with too few threads is a long response time. Making a request from a
browser establishes a connection fairly quickly to the server, but if there are too few threads on
the server it might take a long time before the response comes back to the client.
The best way to tell if your server is being throttled by too few threads is to see if the number of
active sessions is close to, or equal to, the maximum number of threads. To do this, see
“Session Creation (Thread) Information” on page 57.
Cache Not Utilized
If the file cache is not utilized, your server is not performing optimally. Since most sites have lots
of GIF or JPEG files that should always be cacheable, you need to use your cache effectively.
Some sites, however, do almost everything through CGIs, SHTML, or other dynamic sources.
Dynamic content is generally not cacheable, and inherently yields a low cache hit rate. Don't be
too alarmed if your site has a low cache hit rate. The most important thing is that your response
time is low. You can have a very low cache hit rate and still have very good response time. As
long as your response time is good, you might not care that the cache hit rate is low.
Check your hit ratio using statistics from perfdump, the Admin Console Monitoring tab, or
wadm stats commands. The hit ratio is the percentage of times the cache was used out of all hits
to your server. A good cache hit rate is anything above 50%. Some sites might even achieve 98%
or higher. For more information, see “File Cache Information (Static Content)” on page 59.
In addition, if you are doing a lot of CGI or NSAPI calls, you might have a low cache hit rate. If
you have custom NSAPI functions, you might also have a low cache hit rate.
Keep-Alive Connections Flushed
A web site that might be able to service 75 requests per second without keep-alive connections
might be able to do 200-300 requests per second when keep-alive is enabled. Therefore, as a
client requests various items from a single page, it is important that keep-alive connections are
being used effectively. If the KeepAliveCount shown in perfdump (Total Number of
Connections Added, as displayed in the Admin Console) exceeds the keep-alive maximum
connections, subsequent keep-alive connections are closed, or “flushed,” instead of being
honored and kept alive.
Check the KeepAliveFlushes and KeepAliveHits values using statistics from perfdump or the
Number of Connections Flushed and Number of Connections Processed under Keep Alive
Statistics on the Monitoring Statistics page. For more information, see “Keep-Alive
Information” on page 53.
On a site where keep-alive connections are running well, the ratio of KeepAliveFlushes to
KeepAliveHits is very low. If the ratio is high (greater than 1:1), your site is probably not
utilizing keep-alive connections as well as it could.
To reduce keep-alive flushes, increase the keep-alive maximum connections (as configured on
the configuration's Performance Tab ⇒ HTTP sub tab or with the wadm set-keep-alive-prop
command). The default value is 200. By raising the value, you keep more waiting keep-alive
connections open.
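For example (the max-connections property name and the option syntax are assumptions to verify against the wadm help):

wadm set-keep-alive-prop --user=admin --config=myconfig max-connections=400
wadm deploy-config --user=admin myconfig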
Caution – On UNIX/Linux systems, if the keep-alive maximum connections value is too high,
the server can run out of open file descriptors. Typically 1024 is the limit for open files on
UNIX/Linux, so increasing this value above 500 is not recommended.
Log File Modes
Keeping the log files at a high level of verbosity can have a significant impact on performance.
On the configuration's General Tab ⇒ Log Settings page choose the appropriate log level and
use levels such as Fine, Finer, and Finest with care. To set the log level using the CLI, use the
command wadm set-log-prop and set the log-level.
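For example (option syntax assumed):

wadm set-log-prop --user=admin --config=myconfig log-level=info
wadm deploy-config --user=admin myconfig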
CHAPTER 4
Platform-Specific Issues and Tips
This chapter provides platform-specific tuning tips, and includes the following topics:
■
“Solaris Platform-Specific Issues” on page 89
■
“Solaris File System Tuning” on page 93
■
“Solaris Platform-Specific Performance Monitoring” on page 94
■
“Tuning Solaris for Performance Benchmarking” on page 96
■
“Tuning UltraSPARC T1–Based Systems for Performance Benchmarking” on page 97
Solaris Platform-Specific Issues
This section discusses miscellaneous Solaris-specific issues and tuning tips, and includes the
following topics:
■
“Files Open in a Single Process (File Descriptor Limits)” on page 89
■
“Failure to Connect to HTTP Server” on page 90
■
“Connection Refused Errors” on page 91
■
“Tuning TCP Buffering” on page 91
■
“Using the Solaris Network Cache and Accelerator (SNCA)” on page 91
Files Open in a Single Process (File Descriptor Limits)
Different platforms each have limits on the number of files that can be open in a single process
at one time. For busy sites, you might need to increase that number. On Solaris systems, control
this limit by setting rlim_fd_max in the /etc/system file. For Solaris 8, the default is 1024,
which you can increase to 65536. For Solaris 9 and 10, the default is 65536, which doesn't need
to be increased.
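For example, the /etc/system entry has this form (value illustrative):

* Raise the per-process open file descriptor limit
set rlim_fd_max=65536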
After making this or any change in the /etc/system file, reboot Solaris to put the new settings
into effect. In addition, if you upgrade to a new version of Solaris, any line added to
/etc/system should be removed and added again only after verifying that it is still valid.
An alternative way to make this change is using the ulimit –n "value" command. Using this
command does not require a system restart. However, this command only changes the login
shell, while editing the /etc/system file affects all shells.
Failure to Connect to HTTP Server
If users are experiencing connection timeouts from a browser to Web Server when the server is
heavily loaded, you can increase the size of the HTTP listener backlog queue. To increase this
setting, edit the HTTP listener's listen queue value.
In addition to this setting, you must also increase the limits within the Solaris TCP/IP
networking code. There are two parameters that are changed by executing the following
commands:
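Based on the parameter names and the 2048 starting value suggested below, the commands take this general form:

/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 2048
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 2048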
These two settings increase the maximum number of two Solaris listen queues that can fill up
with waiting connections. tcp_conn_req_max_q increases the number of completed
connections waiting to return from an accept() call. tcp_conn_req_max_q0 increases the
maximum number of connections with the handshake incomplete. The default values are 128
and 1024 respectively. To automatically have these ndd commands executed after each system
reboot, place them in a file called /etc/init.d/network-tuning and create a link to that file
named /etc/rc2.d/S99network-tuning.
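A sketch of that arrangement follows; the ndd values repeat the example above and are illustrative.

cat > /etc/init.d/network-tuning <<'EOF'
#!/sbin/sh
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 2048
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 2048
EOF
chmod 744 /etc/init.d/network-tuning
ln -s /etc/init.d/network-tuning /etc/rc2.d/S99network-tuning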
You can monitor the effect of these changes by using the netstat -s command and looking at
the tcpListenDrop, tcpListenDropQ0, and tcpHalfOpenDrop values. Review them before
adjusting these values. If they are not zero, adjust the value to 2048 initially, and continue to
monitor the netstat output.
The Web Server HTTP listener's listen queue setting and the related Solaris
tcp_conn_req_max_q and tcp_conn_req_max_q0 settings should match the throughput of the
Web Server. These queues act as a "buffer" to manage the irregular rate of connections coming
from web users. These queues allow Solaris to accept the connections and hold them until they
are processed by the Web Server.
You don't want to accept more connections than the Web Server is able to process. It is better to
limit the size of these queues and reject further connections than to accept excess connections
and fail to service them. The value of 2048 for these three parameters typically reduces
connection request failures, and improvement has been seen with values as high as 4096.
This adjustment is not expected to have any adverse impact in any web hosting environment, so
you can consider this suggestion even if your system is not showing the symptoms mentioned.
Connection Refused Errors
If users are experiencing connection refused errors on a heavily loaded server, you can tune the
use of network resources on the server.
When a TCP/IP connection is closed, the port is not reused for the duration of
tcp_time_wait_interval (default value of 240000 milliseconds). This is to ensure that there
are no leftover segments. The shorter the tcp_time_wait_interval, the faster precious
network resources are again available. This parameter is changed by executing the following
command (do not reduce it below 60000):
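The command has this general form, using the 60000 floor mentioned above:

/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000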
To automatically have this ndd command executed after each system reboot, place it in a file
called /etc/init.d/network-tuning and create a link to that le named
/etc/rc2.d/S99network-tuning.
If your system is not exhibiting the symptoms mentioned, and if you are not well-versed in
tuning the TCP protocol, it is suggested that you do not change the above parameter.
Tuning TCP Buffering
If you are seeing unpredictable intermittent slowdowns in network response from a
consistently loaded server, you might investigate setting the sq_max_size parameter by adding
the following line to the /etc/system file:
set sq_max_size=512
This setting adjusts the size of the sync queue, which transfers packets from the hardware driver
to the TCP/IP protocol driver. Using the value of 512 allows the queue to accommodate high
volumes of network traffic without overflowing.
Using the Solaris Network Cache and Accelerator
(SNCA)
The Solaris Network Cache and Accelerator (SNCA) is a caching server that provides improved
web performance to the Solaris operating system.
It is assumed that SNCA has been configured for the system on which the Web Server is
running. For more information about SNCA and its configuration and tuning, refer to the
following man pages on your system:
■
ncab2clf(1)
■
ncakmod(1)
Chapter 4 • Platform-SpecicIssues andTips91
Solaris Platform-Specic Issues
▼ To Enable SNCA to Work With Web Server
This procedure assumes that SNCA has been configured, as discussed above.
1. From the Common Tasks page, choose a configuration and click Edit Configuration.
2. Click the HTTP Listeners tab and select the HTTP listener to edit.
3. On the Edit HTTP Listener page, set the Protocol Family to nca.
The HTTP listener must be listening on port 80 for this to work.
4. On the Cache Settings page, make sure the file cache is enabled and enable Use Sendfile.
5. Save your changes.
6. Redeploy the configuration for your changes to take effect.
Maximum Threads and Queue Size
When configuring Web Server to be used with SNCA, disabling the thread pool provides better
performance. These settings are on the configuration's Performance tab ⇒ HTTP sub tab,
under Thread Pool Settings. To disable the thread pool, deselect the Thread Pool Enabled
checkbox. You can also disable the thread pool using the wadm set-thread-pool-prop
command's enabled property.
The thread pool can also be disabled with non-SNCA configurations, especially for cases in
which short latency responses with no keep-alives must be delivered.
Solaris File System Tuning
This section discusses changes that can be made for file system tuning, and includes topics that
address the following issues:
■
“High File System Page-In Rate” on page 93
■
“Reduce File System Housekeeping” on page 93
■
“Long Service Times on Busy Disks or Volumes” on page 93
Please read the descriptions of the following parameters carefully. If the description matches
your situation, consider making the adjustment.
High File System Page-In Rate
If you are seeing high file system page-in rates on Solaris 8 or 9, you might benefit from
increasing the value of segmap_percent. This parameter is set by adding the following line to
the /etc/system file:
set segmap_percent=25
segmap_percent adjusts the percentage of memory that the kernel maps into its address space
for the file system cache. The default value is 12; that is, the kernel reserves enough space to map
at most 12% of memory for the file system cache. On a heavily loaded machine with 4 GB of
physical memory, improvements have been seen with values as high as 60. You should
experiment with this value, starting with values around 25. On systems with large amounts of
physical memory, you should raise this value in small increments, as it can significantly increase
kernel memory requirements.
Reduce File System Housekeeping
UNIX file system (UFS) volumes maintain the time that each file was accessed. Note that the
following change does not turn off the access time updates when the file is modified, but only
when the file is accessed. If the file access time updates are not important in your environment,
you could turn them off by adding the noatime parameter to the data volume's mount point in
/etc/vfstab.
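A sketch of such an entry follows; the device names and mount point are illustrative.

#device             device to fsck      mount point  FS type  fsck pass  mount at boot  mount options
/dev/dsk/c1t0d0s6   /dev/rdsk/c1t0d0s6  /data        ufs      2          yes            noatime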
Long Service Times on Busy Disks or Volumes
Web Server's responsiveness depends greatly on the performance of the disk subsystem. Use the
iostat utility to monitor how busy the disks are and how rapidly they complete I/O requests
(the %b and svc_t columns, respectively). Service times are unimportant for disks that are less
than about 30% busy, but for busier disks, service times should not exceed about 20
milliseconds. If your busy disks have slower service times, improving disk performance might
help Web Server performance substantially.
Your first step should be to balance the load: if some disks are busy while others are lightly
loaded, move some files off of the busy disks and onto the idle disks. If there is an imbalance,
correcting it usually gives a far greater payoff than trying to tune the overloaded disks.
Solaris Platform-Specific Performance Monitoring
This section describes some of the Solaris-specific tools and utilities you can use to monitor
your system's behavior, and includes the following topics:
■
“Short-Term System Monitoring” on page 94
■
“Long-Term System Monitoring” on page 95
■
““Intelligent” Monitoring” on page 95
The tools described in this section monitor performance from the standpoint of how the system
responds to the load that Web Server generates. For information about using Web Server's own
capabilities to track the demands that users place on the Web Server itself, see “Monitoring
Server Performance” on page 22.
Short-Term System Monitoring
Solaris offers several tools for taking “snapshots” of system behavior. Although you can capture
their output in les for later analysis, the tools listed below are primarily intended for
monitoring system behavior in real time:
■
The iostat -x 60 command reports disk performance statistics at 60-second intervals.
Watch the %b column to see how much of the time each disk is busy. For any disk busy more
than about 20% of the time, pay attention to the service time as reported in the svc_t column.
Other columns report the I/O operation rates, the amount of data transferred, and so on.
■
The vmstat 60 command summarizes virtual memory activity and some CPU statistics at
60-second intervals.
Monitor the sr column to keep track of the page scan rate and take action if it's too high
(note that "too high" is very dierent for Solaris 8 and 9 than for earlier releases). Watch the
us, sy, and id columns to see how heavily the CPUs are being used; remember that you need
to keep plenty of CPU power in reserve to handle sudden bursts of activity. Also keep track
of the r column to see how many threads are contending for CPU time; if this remains
higher than about four times the number of CPUs, you might need to reduce the server's
concurrency.
■
The mpstat 60 command gives a detailed look at CPU statistics, while the netstat -i 60
command summarizes network activity.
Long-Term System Monitoring
It is important not only to "spot-check" system performance with the tools mentioned above,
but to collect longer-term performance histories so you can detect trends. If nothing else, a
baseline record of a system performing well might help you figure out what has changed if the
system starts behaving poorly. Enable the system activity reporting package by doing the
following:
■
Edit the file /etc/init.d/perf and remove the # comment characters from the lines near
the end of the file. For Solaris 10, run the following command:
svcadm enable system/sar
■
Run the command crontab -e sys and remove the # comment characters from the lines
with the sa1 and sa2 commands. You might also wish to adjust how often the commands
run and at what times of day depending on your site's activity profile (see the crontab man
page for an explanation of the format of this le).
This causes the system to store performance data in files in the /var/adm/sa directory,
where by default they are retained for one month. You can then use the sar command to
examine the statistics for time periods of interest.
“Intelligent” Monitoring
The SE toolkit is a freely downloadable software package developed by Sun performance
experts. In addition to collecting and monitoring raw performance statistics, the toolkit can
apply heuristics to characterize the overall health of the system and highlight areas that might
need adjustment. You can download the toolkit and its documentation from the following
location:
http://www.sunfreeware.com/setoolkit.html
Solaris 10 Platform-Specific Tuning Information
DTrace is a comprehensive dynamic tracing framework for the Solaris Operating Environment.
You can use the DTrace Toolkit to monitor the system. It is available from the following URL:
Tuning Solaris for Performance Benchmarking
The following table shows the operating system tuning for Solaris used when benchmarking for
performance and scalability. These values are an example of how you might tune your system to
achieve the desired result.
TABLE 4–1 Tuning Solaris for Performance Benchmarking

rlim_fd_max (/etc/system): default 65536, tuned 65536. Process open file descriptors limit;
should account for the expected load (for the associated sockets, files, and pipes, if any).
sq_max_size (/etc/system): tuned 0. Setting to 0 makes it infinite so the performance runs
won’t be hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0
might not be optimal for production systems with high network traffic.
tcp_time_wait_interval (ndd /dev/tcp): default 240000, tuned 60000. Set on clients too.
tcp_conn_req_max_q (ndd /dev/tcp): default 128, tuned 1024.
tcp_conn_req_max_q0 (ndd /dev/tcp): default 1024, tuned 4096.
tcp_ip_abort_interval (ndd /dev/tcp): default 480000, tuned 60000.
tcp_keepalive_interval (ndd /dev/tcp): default 7200000, tuned 900000. For high-traffic web
sites, lower this value.
tcp_rexmit_interval_initial (ndd /dev/tcp): default 3000, tuned 3000. If retransmission is
greater than 30-40%, you should increase this value.
tcp_rexmit_interval_max (ndd /dev/tcp): default 240000, tuned 10000.
tcp_rexmit_interval_min (ndd /dev/tcp): default 200, tuned 3000.
tcp_smallest_anon_port (ndd /dev/tcp): default 32768, tuned 1024. Set on clients too.
tcp_slow_start_initial (ndd /dev/tcp): default 1, tuned 2. Slightly faster transmission of small
amounts of data.
tcp_xmit_hiwat (ndd /dev/tcp): default 8129, tuned 32768. To increase the transmit buffer.
tcp_recv_hiwat (ndd /dev/tcp): default 8129, tuned 32768. To increase the receive buffer.
Tuning UltraSPARC® T1–Based Systems for Performance Benchmarking
Use a combination of tunable parameters and other parameters to tune your system for
performance benchmarking. These values are an example of how you might tune your system
to achieve the desired result.
Tuning Operating System and TCP Settings
The following table shows the operating system tuning for Solaris 10 used when benchmarking
for performance and scalability on UltraSPARC T1–based systems (64-bit systems).
TABLE 4–2 Tuning 64-bit Systems for Performance Benchmarking

rlim_fd_max (/etc/system): default 65536, tuned 260000. Process open file descriptors limit;
should account for the expected load (for the associated sockets, files, and pipes, if any).
sq_max_size (/etc/system): tuned 0. Setting to 0 makes it infinite so the performance runs
won’t be hit by lack of buffer space. Set on clients too. Note that setting sq_max_size to 0
might not be optimal for production systems with high network traffic.
ip:ip_squeue_bind: set to 0.
ip:ip_squeue_fanout: set to 1.
ipge:ipge_taskq_disable (/etc/system): set to 0.
ipge:ipge_tx_ring_size (/etc/system): set to 2048.
ipge:ipge_srv_fifo_depth (/etc/system): set to 2048.
ipge:ipge_bcopy_thresh (/etc/system): set to 384.
ipge:ipge_dvma_thresh (/etc/system): set to 384.
ipge:ipge_tx_syncq (/etc/system): set to 1.
tcp_conn_req_max_q (ndd /dev/tcp): default 128, tuned 3000.
Disk Configuration
If HTTP access is logged, follow these guidelines for the disk:
■
Write access logs on faster disks or attached storage.
■
If running multiple instances, move the logs for each instance onto separate disks as much
as possible.
■
Enable the disk read/write cache. Note that if you enable write cache on the disk, some
writes might be lost if the disk fails.
■
Consider mounting the disks with the following options, which might yield better disk
performance: nologging, directio, noatime.
Network Configuration
If more than one network interface card is used, make sure the network interrupts are not all
going to the same core. Run the following script to disable interrupts:
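A minimal sketch of the idea follows, using the standard psrinfo and psradm utilities; the processor IDs are illustrative and must be chosen from your own core layout.

#!/bin/sh
# Show the core-to-virtual-processor layout, then mark the virtual processors
# of one core as no-intr so device interrupts are kept off that core.
psrinfo -pv
psradm -i 4 5 6 7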