Saturday, February 14, 2009

How to install Maven plug-in in Rational Application Developer 7

Step 1: Click Help ---> Software Updates ---> Find and Install
Step 2: Choose the radio button "Search for new features to install" and click Next
Step 3: Add a new remote site with the URL http://m2eclipse.codehaus.org/update/ and click Finish
Step 4: Accept the license agreement
Step 5: Click Finish

Restart the workbench for the changes to take effect.


 

Friday, February 13, 2009

How to integrate Struts with Commons-file upload

Step 1: Upgrade commons-fileupload to version 1.2.1 and commons-io to version 1.3.x. This is required because the solution is built around the FileCleaningTracker class, which is available in commons-io 1.3. commons-fileupload also has to be upgraded because a FileCleaningTracker can be associated only with DiskFileItemFactory, which is part of commons-fileupload 1.2.1; it cannot be associated with any of the classes referenced in the CommonsMultipartRequestHandler that is used as part of the Struts application.
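
If the project happens to be built with Maven (see the post above on installing the Maven plug-in), the upgrade amounts to bumping two dependency versions. A minimal sketch, with 1.3.2 chosen here as a representative commons-io 1.3.x release:

<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>1.3.2</version>
</dependency>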

Step 2: Extend CommonsMultipartRequestHandler as EYCommonsMultipartRequestHandler and override the handleRequest() method. Part of the implementation is given below.

 

// Create a factory for disk-based file items
DiskFileItemFactory factory = new DiskFileItemFactory();

// Set factory constraints
factory.setSizeThreshold(maxMemorySize);
factory.setRepository(tempDirectory);

// Get a handle to the file cleaning tracker
FileCleaningTracker fileCleaningTracker = FileCleanerCleanup.getFileCleaningTracker(context);

// Set the file cleaning tracker, so that whenever the factory is
// garbage collected, the associated resources, including the temp
// files, are deleted as well.
factory.setFileCleaningTracker(fileCleaningTracker);

// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload(factory);

// Set overall request size constraint
upload.setSizeMax(yourMaxRequestSize);

// Parse the request
List /* FileItem */ items = upload.parseRequest(request);
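
To put the fragment in context, here is a minimal sketch of what the subclass might look like. The package name is taken from the struts-config entry in Step 3, and the method body elides the parsing and population work that the parent class normally performs:

package com.ey.nexgen.struts.fileupload;

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import org.apache.struts.upload.CommonsMultipartRequestHandler;

public class EYCommonsMultipartRequestHandler extends CommonsMultipartRequestHandler {

    public void handleRequest(HttpServletRequest request) throws ServletException {
        // The servlet context is needed to look up the FileCleaningTracker
        // registered by the FileCleanerCleanup listener (see Step 4).
        ServletContext context = getServlet().getServletContext();

        // ... create the DiskFileItemFactory, attach the tracker and parse
        // the request as shown in the fragment above, then process the
        // resulting FileItem list the way the parent implementation does.
    }
}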

 

 

Step 3: Add the newly created EYCommonsMultipartRequestHandler to the Struts configuration, under the <controller> element, as given below.

 

<controller>
    <set-property property="multipartClass" value="com.ey.nexgen.struts.fileupload.EYCommonsMultipartRequestHandler"/>
</controller>

Step 4: As mentioned earlier, temporary files are deleted automatically if they are no longer used (more precisely, if the corresponding instance of java.io.File is garbage collected). This is done silently by the org.apache.commons.io.FileCleaningTracker class, which starts a reaper thread.

This reaper thread should be stopped if it is no longer needed. In a servlet environment, this is done by using a special servlet context listener, called FileCleanerCleanup. To do so, add a section like the following to your web.xml:

  ...
  <listener>
    <listener-class>
      org.apache.commons.fileupload.servlet.FileCleanerCleanup
    </listener-class>
  </listener>
  ...

Friday, February 6, 2009

High Availability Manager - IBM WebSphere Application Server

Understanding High Availability Manager

High availability manager

WebSphere Application Server includes a high availability manager component. The services that the high availability manager provides are only available to WebSphere Application Server components.

A high availability manager provides several features that allow other WebSphere Application Server components to make themselves highly available. A high availability manager provides:

  • A framework that allows singleton services to make themselves highly available. Examples of singleton services that use this framework include the transaction managers for cluster members, and the default IBM messaging provider, commonly referred to as a messaging engine.
  • A mechanism that allows servers to easily exchange state data. This mechanism is commonly referred to as the bulletin board.
  • A specialized framework for high speed and reliable messaging between processes. This framework is used by the data replication service when WebSphere Application Server is configured for memory-to-memory replication.

A high availability manager instance runs on every application server, proxy server, node agent and deployment manager in a cell. A cell can be divided into multiple high availability domains known as core groups. Each high availability manager instance establishes network connectivity with all other high availability manager instances in the same core group, using a specialized, dedicated, and configurable transport channel. The transport channel provides mechanisms which allow the high availability manager instance to detect when other members of the core group start, stop, or fail.

Within a core group, high availability manager instances are elected to coordinate high availability activities. An instance that is elected is known as a core group coordinator. The coordinator is highly available, such that if a process that is serving as a coordinator stops or fails, another instance is elected to assume the coordinator role, without loss of continuity.

Highly available components

A highly available component is a component for which a high availability group is defined on the processes where that component can run. The coordinator tracks high availability group membership, and knows on which processes each highly available component can run.

The coordinator also associates a high availability policy with each high availability group. A high availability policy is a set of directives that aid the coordinator in managing highly available components. For example, a directive might specify that a component runs on a specific process, if that process is available. Directives are configurable, which makes it possible for you to tailor policies to your installation.

The coordinator is notified as core group processes start, stop or fail and knows which processes are available at any given time. The coordinator uses this information, in conjunction with the high availability group and policy information, to ensure that the component keeps functioning. The coordinator uses the policy directives to determine on which process it starts and runs each component. If the chosen process fails, the coordinator restarts the component on another eligible process. This reduces the recovery time, automates failover, and eliminates the need to start a replacement process.

State data exchange

The high availability manager provides a specialized messaging mechanism that enables processes to exchange information about their current state. Each process sends or posts information related to its current state, and can register to be notified when the state of the other processes changes. The Work Load Management (WLM) component uses this mechanism to build and maintain routing table information. Routing tables built and maintained using this mechanism are highly available.

Replication

WebSphere Application Server provides a data replication service (DRS) that is used to replicate HTTP session data, stateful EJB sessions, and dynamic cache information among cluster members. When DRS is configured for memory-to-memory replication, the transport channels defined for the high availability managers are used to pass this data among the cluster members.

When to use a high availability manager

A high availability manager consumes valuable system resources, such as CPU cycles, heap memory, and sockets. These resources are consumed both by the high availability manager and by product components that use the services that the high availability manager provides. The amount of resources that both the high availability manager and these WebSphere Application Server components consume increases exponentially as the size of a core group increases.

For large core groups, the amount of resources that the high availability manager consumes can become significant. Disabling the high availability manager frees these resources. However, before you disable the high availability manager, thoroughly investigate the current and future needs of your system to ensure that doing so does not also disable other functions that you use that require the high availability manager. For example, both memory-to-memory session replication and remote request dispatcher (RRD) require the high availability manager to be enabled.

The capability to disable the high availability manager is most useful for large topologies where none of the high availability manager provided services are used. In certain topologies, only some of the processes use the services that the high availability manager provides. In these topologies, you can disable the high availability manager on a per-process basis, which optimizes the amount of resources that the high availability manager uses.
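
As a sketch of how this per-process switch might be scripted rather than set through the administrative console, the following wsadmin (Jython) fragment modifies the enable attribute of a server's HAManagerService configuration object. The cell, node, and server names are illustrative, and the object and attribute names are assumptions based on the WAS 6.x configuration model, so verify them against your release before relying on this:

# wsadmin -lang jython; names below are illustrative
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
hamService = AdminConfig.list('HAManagerService', server)
# 'HAManagerService' and its 'enable' attribute are assumed here;
# inspect with: print AdminConfig.show(hamService)
AdminConfig.modify(hamService, [['enable', 'false']])
AdminConfig.save()

A restart of the server is needed for the change to take effect.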

Do not disable the high availability manager on administrative processes, such as node agents and the deployment manager, unless the high availability manager is disabled on all application server processes in that core group.

Some of the services that the high availability manager provides are cluster based. Therefore, because cluster members must be homogeneous, if you disable the high availability manager on one member of a cluster, you must disable it on all of the other members of that cluster.

When determining if you must leave the high availability manager enabled on a given application server process, consider if the process requires any of the following high availability manager services:

  • Memory-to-memory replication
  • Singleton failover
  • Workload management routing
  • On-demand configuration routing

Memory-to-memory replication

Memory-to-memory replication is a cluster-based service that you configure or enable at the application server level. If memory-to-memory replication is enabled on any cluster member, then the high availability manager must be enabled on all of the members of that cluster. Memory-to-memory replication is automatically enabled if any of its consumers is configured to use it: for example, memory-to-memory HTTP session replication, stateful session bean failover, or dynamic cache replication.

Singleton failover

Singleton failover is a cluster-based service. The high availability manager must be enabled on all members of a cluster if one or more instances of the default Java™ Message Service (JMS) provider are configured to run in the cluster. The default JMS provider is the messaging engine that is provided with WebSphere Application Server.

Workload management routing

Workload management (WLM) propagates the following classes or types of routing information:

  • Routing information for enterprise bean Internet Inter-ORB Protocol (IIOP) traffic.
  • Routing information for the default IBM Java Messaging Service (JMS) provider, which is also referred to as the messaging engine.

WLM uses the high availability manager to both propagate the routing information and make it highly available. Although WLM routing information normally applies to clustered resources, it can also apply to non-clustered resources, such as standalone messaging engines. Under normal circumstances, you must leave the high availability manager enabled on any application server that produces or consumes either IIOP or messaging engine routing information. For example, suppose that:

  • The routing information producer is an enterprise bean application that resides in cluster 1.
  • The routing information consumer is a servlet that resides in cluster 2.

When the servlet in cluster 2 calls the enterprise bean application in cluster 1, the high availability manager must be enabled on all servers in both clusters.

Workload management provides an option to statically build and export route tables to the file system. Use this option to eliminate the dependency on the high availability manager. See Enabling static routing for a cluster for more information about the Export route table option.

On-demand configuration routing

In a Network Deployment system, the on-demand configuration is used for proxy server routing. If you want to use on-demand configuration routing in conjunction with your Web services, you must make sure that the high availability manager is enabled on the proxy server and on all of the servers to which the proxy server will route work.

Tuning IBM Plug-in - a few tips

Tuning Plugin

Modifying the WebSphere plug-in to improve performance

You can improve the performance of IBM HTTP Server (with the WebSphere Web server plug-in) by modifying the plug-in's RetryInterval setting. The RetryInterval is the length of time to wait before trying to connect to a server that has been marked temporarily unavailable. Making this change can help IBM HTTP Server 1.3 scale beyond 400 users.

The plug-in marks a server temporarily unavailable if the connection to the server fails. Although the default value is 60 seconds, it is recommended that you lower this value to increase throughput under heavy load conditions. Lowering the RetryInterval is especially important for IBM HTTP Server 1.3 on UNIX operating systems, which have a single thread per process, and for IBM HTTP Server 2.0 if it is configured with fewer than 10 threads per process.
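
For reference, RetryInterval lives on the ServerCluster element of the generated plugin-cfg.xml. A minimal sketch, with the cluster, server, and host names being illustrative:

<ServerCluster Name="cluster1" LoadBalance="Round Robin" RetryInterval="15">
   <Server Name="node1_server1" ConnectTimeout="5">
      <Transport Hostname="app1.example.com" Port="9080" Protocol="http"/>
   </Server>
</ServerCluster>

Keep in mind that plugin-cfg.xml is regenerated by the administrative tools, so a hand edit can be overwritten and may need to be re-applied after each regeneration.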

How can lowering the RetryInterval affect throughput? If the plug-in attempts to connect to a particular application server while the application server's threads are busy handling other connections, which happens under heavy load conditions, the connection times out and the plug-in marks the server temporarily unavailable. If the same plug-in process has other connections open to the same server and a response is received on one of those connections, the server is marked available again. However, when you use IBM HTTP Server 1.3 on a UNIX operating system, there is no other connection, since there is only one thread and one concurrent request per plug-in process. Therefore, the plug-in waits for the RetryInterval before attempting to connect to the server again.

Since the application server is not really down, but merely busy, requests are typically completed in a small amount of time, and the application server threads become available to accept more connections. A lower RetryInterval lets the plug-in retry servers that were marked temporarily unavailable sooner, resulting in more consistent application server CPU utilization and a higher sustained throughput.

Note: Although lowering the RetryInterval can improve performance when all of the application servers are running, a low value can have an adverse effect when one of the application servers is down. In that case, each IBM HTTP Server 1.3 process attempts to connect, and fails, more frequently, resulting in increased latency and decreased overall throughput.

Tuning HTTP Server


Determining maximum simultaneous connections

The first tuning decision you'll need to make is determining how many simultaneous connections your IBM HTTP Server installation will need to support. Many other tuning decisions are dependent on this value.

For some IBM HTTP Server deployments, the amount of load on the web server is directly related to the typical business day, and may show a load pattern such as the following:

 

 

    Simultaneous
    connections
 
            |
       2000 |
            |
            |                            **********
            |                        ****          ***
       1500 |                   *****                 **
            |               ****                        ***
            |            ***                               ***
            |           *                                     **
       1000 |          *                                        **
            |         *                                           *
            |         *                                           *
            |        *                                             *
        500 |        *                                             *
            |        *                                              *
            |      **                                                *
            |   ***                                                  ***
          1 |***                                                        **
 Time of    +-------------------------------------------------------------
   day         7am  8am  9am  10am  11am  12pm  1pm  2pm  3pm  4pm  5pm
 

For other IBM HTTP Server deployments, which provide applications used across many time zones, load on the server varies much less during the day.

The maximum number of simultaneous connections must be based on the busiest part of the day. This maximum number of simultaneous connections is only loosely related to the number of users accessing the site. At any given moment, a single user can require anywhere from zero to four independent TCP connections.

The typical way to determine the maximum number of simultaneous connections is to monitor mod_status reports during the day until typical behavior is understood, or to use mod_mpmstats (2.0.42.2 and later).

Monitoring with mod_status

  1. Add these directives to httpd.conf, or uncomment the ones already there:

          # This example is for IBM HTTP Server 2.0 and above.
          # Similar directives are in older default configuration files.
          LoadModule status_module modules/mod_status.so
          <Location /server-status>
              SetHandler server-status
              Order deny,allow
              Deny from all
              Allow from .example.com    <--- replace with "." + your domain name
          </Location>

  2. Request the /server-status page (http://www.example.com/server-status/) from the web server at busy times of the day and look for a line like the following:

          192 requests currently being processed, 287 idle workers

The number of requests currently being processed is the number of simultaneous connections at that moment. Taking this reading at different times of the day lets you determine the maximum number of connections that must be handled.

Monitoring with mod_mpmstats (IBM HTTP Server 2.0.42.2 and later)

  1. Copy the version of mod_mpmstats.so for your operating system from the ihsdiag package to the IBM HTTP Server modules directory. (Example filename: ihsdiag-1.4.1/2.0/aix/mod_mpmstats.so)
  2. Add these directives to the bottom of httpd.conf:
3.           LoadModule mpmstats_module modules/mod_mpmstats.so
4.           ReportInterval 90
  1. Check entries like this in the error log to determine how many simultaneous connections were in use at different times of the day:
6.           [Thu Aug 19 14:01:00 2004] [notice] mpmstats: rdy 712 bsy 312 rd 121 wr 173 ka 0 log 0 dns 0 cls 18
7.           [Thu Aug 19 14:02:30 2004] [notice] mpmstats: rdy 809 bsy 215 rd 131 wr 44 ka 0 log 0 dns 0 cls 40
8.           [Thu Aug 19 14:04:01 2004] [notice] mpmstats: rdy 707 bsy 317 rd 193 wr 97 ka 0 log 0 dns 0 cls 27
9.           [Thu Aug 19 14:05:32 2004] [notice] mpmstats: rdy 731 bsy 293 rd 196 wr 39 ka 0 log 0 dns 0 cls 58

Note that if the web server has not been configured to support enough simultaneous connections, one of the following messages will be logged to the web server error log and clients will experience delays accessing the server.

Windows
[warn] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting
 
Linux and Unix
[error] server reached MaxClients setting, consider raising the MaxClients setting
 

Check the error log for a message like this to determine if the IBM HTTP Server configuration needs to be changed.

Once the maximum number of simultaneous connections has been determined, add 25% as a safety factor. The next section discusses how to use this number in the web server configuration file.
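
As a quick worked example, suppose mod_status shows a peak of 1,600 simultaneous connections; adding the 25% safety factor gives 1600 x 1.25 = 2000. A sketch of matching worker MPM directives for IBM HTTP Server 2.0 follows; the values are illustrative, not a recommendation:

          # Sized for a measured peak of 1600 simultaneous connections,
          # plus a 25% safety factor: 1600 x 1.25 = 2000.
          # MaxClients must not exceed ServerLimit x ThreadsPerChild (80 x 25 = 2000).
          ThreadLimit          25
          ServerLimit          80
          StartServers          2
          MaxClients         2000
          MinSpareThreads      25
          MaxSpareThreads    2000
          ThreadsPerChild      25
          MaxRequestsPerChild   0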

Note: The KeepAliveTimeout setting can affect the apparent number of simultaneous requests being processed by the server. Increasing KeepAliveTimeout effectively reduces the number of threads available to service new inbound requests, and results in a higher maximum number of simultaneous connections that must be supported by the web server. Decreasing KeepAliveTimeout can put extra load on the server by incurring unnecessary TCP connection setup overhead. A setting of 5 to 10 seconds is reasonable for serving requests over high-speed, low-latency networks.
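
A minimal httpd.conf sketch using the upper end of that range; tune the timeout to your own traffic pattern:

          KeepAlive On
          MaxKeepAliveRequests 100
          KeepAliveTimeout 10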