WAP, Scalability and Availability in a J2EE environment Master’s thesis Information and Communication Systems Technical University Hamburg-Harburg

Muhammad Farhat Kaleem Arbeitsbereich Telematik Technical University Hamburg-Harburg December 2000

I hereby declare that I have written this thesis myself, using only the references listed in the thesis.

Muhammad Farhat Kaleem 4th December, 2000.


Contents

Abstract .... 7
Chapter 1 .... 8
1. Introduction .... 8
Chapter 2 .... 10
2. Integrating Wireless Application Protocol (WAP) with J2EE .... 10
2.1. WAP .... 10
2.2. WAP Protocol .... 11
2.3. WAP Gateway .... 11
2.4. Serving WAP Clients .... 12
2.5. WAP and J2EE .... 13
2.5.1. Business case .... 15
2.5.2. Components .... 17
2.5.3. Using XML as data format .... 19
2.5.4. XML/XSL processing .... 20
2.5.5. XSLT for Transforming XML .... 20
2.5.6. Using the servlet for XSLT processing .... 21
2.5.7. Cost of XSLT transformation .... 24
2.5.8. Session Management .... 24
2.5.9. Using stateful session bean to maintain state .... 25
2.5.10. Scalability .... 26
2.6. Use of a Publishing Framework .... 26
2.7. Conclusion .... 27
Chapter 3 .... 28
3. Enhancing WAP-J2EE scenario with Java Messaging Service (JMS) .... 28
3.1. JMS .... 28
3.2. Proposed Scenario .... 33
3.3. Usage Scenario .... 38
3.4. Some important aspects .... 38
3.4.1. Quality of JMS implementation .... 39
3.4.2. Concurrency & Performance issues .... 39
3.4.3. Scalability and Fail-over .... 40
3.5. Advantage of a similar scenario .... 40
3.6. Further extensions .... 41
Chapter 4 .... 42
4. Scalability and Availability in a J2EE environment .... 42
4.1. General Overview .... 42
4.2. The role of the application .... 42
4.3. How to achieve scalability and availability .... 43
4.4. Clustering .... 43
4.5. Clustering in a J2EE context .... 45
4.5.1. Application Server Clustering .... 45
4.5.2. Application deployment .... 45
4.5.3. Some Clustering Configurations .... 46
4.5.3.1. Case 1 .... 46
4.5.3.2. Case 2 .... 48
4.5.3.3. Case 3 .... 49
4.5.4. Vertical and Horizontal Scaling [25] .... 50
4.5.4.1. Vertical Scaling .... 51
4.5.4.2. Horizontal scaling .... 52
4.5.4.3. Combination .... 52
Chapter 5 .... 54
5. WLS and IAS clustering .... 54
5.1. Setup used .... 54
5.1.1. WLS clustering configuration .... 54
5.1.2. IAS clustering configuration .... 55
5.2. WLS Clustering .... 57
5.2.1. Setup and configuration .... 57
5.2.1.1. Proxy Server .... 58
5.2.1.2. Configuration of proxy server .... 60
5.2.2. Heap size .... 60
5.2.3. Enterprise javabeans .... 61
5.2.4. Transactional behaviour in a cluster .... 62
5.2.5. WAP and WLS cluster .... 62
5.2.6. Comments .... 63
5.3. IAS clustering .... 64
5.3.1. Setup and Configuration .... 64
5.3.2. Web-tier .... 65
5.3.3. Enterprise javabeans .... 66
5.3.4. Naming service .... 68
5.3.5. Transactions .... 68
5.3.6. Vertical and horizontal scaling with IAS .... 69
5.4. Tuning and testing .... 69
5.4.1. Testing and Profiling .... 70
5.4.1.1. Testing .... 71
5.5. Conclusion .... 75
Appendix A: Tools for testing and profiling enterprise applications .... 76
Appendix B: Wireless Application Protocol .... 84
Appendix C: Oracle XML Utility .... 89
Appendix D: References .... 91
Glossary .... 94


Abstract

This thesis describes a generic architecture based on J2EE (Java™ 2 Platform, Enterprise Edition) that can be used to develop an application with J2EE technologies having an inherent capability of serving different types of clients, with particular reference to WAP (Wireless Application Protocol) clients. An extension to this architecture is also proposed, with the intent of adding value to the architecture for WAP clients accessing enterprise applications. The issues of scalability and availability with respect to J2EE are also discussed as part of this thesis. In this context, different solutions are described with reference to two commercial application servers, and different strategies are proposed as to how a J2EE application could be optimally deployed for scalability and high availability.


Chapter 1

1. Introduction

J2EE defines the standard for developing multitier enterprise applications. The promise of the J2EE platform is to take the complexity out of enterprise applications by providing a component model that has essential services (transaction management, life-cycle management, resource pooling, persistence, naming, database access, messaging etc.) provided to it and taken care of by the application server. In addition, scalability and portability of enterprise applications is inherent in the J2EE vision. However, going beyond the marketing hype, much depends on a solid design of the application, based on judiciously selected J2EE technologies, and most of all, on the quality of the application server that provides the infrastructure for J2EE technology based applications.

It will be shown how using J2EE technologies such as JSPs and servlets at the web tier, enterprise javabeans at the (loosely called) middle tier, and XML as the format for data transfer between application layers can provide for an application that serves different types of clients, using XSL transformation as the medium to control presentation to different client types, in particular WAP clients. Then the use of another strategic J2EE technology, namely JMS, is described, showing how it can be used to add functionality to a J2EE based application, with particular reference to WAP clients. Support for JMS on the part of J2EE compliant application servers has now been raised from optional to required [1].

J2EE based enterprise applications are also expected to scale seamlessly, from a small test prototype to a full-blown application serving thousands and more concurrent users. Here the application servers on which the applications are deployed have an integral part to play, by providing the platform that helps the application scale and remain highly available. Application servers provide for this in different ways, each solution having its own pros and cons. Different solutions are considered in this thesis with respect to scalability and availability in a J2EE environment, in the context of two application servers, namely WebLogic Server (WLS) and

Inprise Application Server (IAS). The former is perhaps the most deployed application server in the market, and has also achieved J2EE certification by passing the compatibility test suite, whereas the latter, though not offering all the J2EE technologies required to be fully J2EE compliant in the present release, has a solid infrastructure built on CORBA, and is distinguished by a capable EJB container and associated services. Both application servers deal with the scalability and availability requirements differently, and the thesis describes various configurations that can be used to fulfil these requirements optimally. The pros and cons of each approach are discussed, as well as the issues that need to be considered to reach an optimal configuration for a particular deployment.

It is also argued that, with the J2EE technologies still evolving at a very fast rate, the available J2EE application servers still maturing with respect to fulfilling the requirements of the J2EE platform, and enterprise applications increasingly subject to internet time-to-market pressures, there exists no one solution to the scalability and availability needs of a particular application. Each domain, and each application within a domain, needs thorough investigation and testing to ascertain the best deployment scenario. In this context the thesis discusses how enterprise applications meant for scalable and highly available deployments can be tested using specialized tools, and how this testing helps with the deployment goals mentioned previously.


Chapter 2

2. Integrating Wireless Application Protocol (WAP) with J2EE

This chapter, despite its optimistic name, describes how a J2EE application can be adapted for WAP access. Starting with a brief introduction to WAP, some typical deployment scenarios are described. This is followed by a description of one generic, portable architecture that extends WAP usage to a complete J2EE application, making it accessible from different types of clients. The rationale behind such an architecture is discussed, as well as the issues related to WAP and J2EE integration.

2.1. WAP

WAP [2] describes an open, standard architecture and set of protocols intended to implement wireless internet access. The key elements of the WAP specification include:

- A definition of the WAP Programming Model, which borrows heavily from the WWW programming model. This is intended to provide the application developer community with a familiar programming model as well as the ability to leverage existing tools (e.g. web servers, XML related utilities etc.). Where possible, existing standards have been adopted or used as a starting point for WAP technology.

- A mark-up language, the Wireless Markup Language (WML), that adheres to XML standards and can be used to build applications within the constraints of handheld devices. From the WAP Gateway, all WML content is accessed over the internet using standard HTTP 1.1 requests, so traditional web servers, tools and techniques can be used to serve WAP clients. This is what also enables the so-called integration of WAP within a J2EE environment.

- A specification for a microbrowser in the wireless terminal that controls the user interface and is analogous to a standard web browser. This specification defines how WML and WMLScript should be interpreted in the handset and presented to the user. The microbrowser specification has been designed for wireless handsets so that the resulting code will be compact and efficient.

- A lightweight protocol stack that minimizes bandwidth requirements and guarantees that a variety of wireless networks can run WAP applications.

2.2. WAP Protocol The WAP protocol suite contains four protocols for handling the communication between clients and the WAP Gateway. These protocols are modelled after protocols used on the internet. This is shown in the diagram below. More details about the protocols and WAP in general are described in Appendix B.

2.3. WAP Gateway

A WAP Gateway is a piece of software through which the WAP requests are routed. It acts as an intermediary between the bearer used by the WAP client (GSM, CDMA, TDMA etc.) and the computing network that the WAP Gateway resides on (TCP/IP). Its general functions are:

- Converting WML from textual format to tokenised (binary/compressed) format which is readable by a WAP device

- Translating the requests from the WAP device to HTTP requests for use by the web server

- Converting between SSL encryption in the web part and WTLS in the WAP part

- Converting between TCP in the web part and WDP in the WAP part

For our scenario, when a mobile client first sends a request to the WAP application running on a J2EE platform (meaning J2EE technologies are being used to generate WML content for display on the WAP device), the request is first routed through the WAP gateway, where it is decoded, translated to HTTP, then forwarded to the appropriate URL. The response is then routed back through the gateway, translated to WAP, encoded and forwarded, in binary form due to bandwidth restrictions, to the WAP client on any bearer service (e.g. SMS, GPRS etc.). It should be evident that the WAP gateway is a critical part of the system.

Note: The term WAP Server is usually reserved for a web server and a WAP gateway built into one. It should not be assumed that WAP content can only be served from a WAP Server, since WAP content can be served from any web server (or a web container, if JSPs/Servlets are being used to generate WAP content dynamically).

2.4. Serving WAP Clients Figure 2.2 describes how WAP clients are usually served. The format of the requests and responses and the route they take are also marked.

Figure 2.2: Serving WAP clients. The WAP device sends a WAP request (URL) over the WAP protocols to the WAP Gateway, which performs encoding/decoding and protocol conversion and forwards an HTTP request (URL) over internet protocols to the Origin Server, which holds the WML and WMLScript content. The HTTP response (WML) travels back through the gateway as a WAP response (binary WML).

The Origin Server is a web server that has WML content, either static or generated dynamically. The J2EE technologies usually used for dynamic WML generation are JSPs/Servlets, which means such a web server would also have a web container.

Figure 2.3 shows how JSPs/Servlets are generally used for serving WAP clients. The WAP gateway is not shown in this diagram, but it is part of the system, as shown in figure 2.2.

Figure 2.3: Serving WAP clients with JSPs/Servlets. The WAP client sends a request to the JSPs/Servlets on the Origin Server, which use JavaBeans to fetch data in XML format from the data storage, perform XML-to-WML conversion, and return a WML response.

The WAP client interacts with the JSP/Servlet layer within the Origin Server, which in turn may access some data store for retrieval of data. This data may already be in XML format, or might be converted into XML format during processing. For display to the WAP client, the data could then be converted to WML using JavaBean components/custom tags [3], or XSL stylesheets.

The following sections describe a generic architecture to "integrate" WAP with a J2EE environment. The emphasis here is for a complete J2EE based application to serve WAP clients, without being dependent on the type of client. So the same application should be able to serve web browser based clients, as well as mobile phone based WAP clients. Also, the emphasis is on an application that uses J2EE technologies at all layers and goes beyond providing simple information services to a WAP client, letting a WAP client have transactional interaction with backend services [4].

2.5. WAP and J2EE

Figure 2.4 gives a rough sketch of the interaction between a WAP client and a J2EE application. There is a separation between the client and application layers, showing that the application layer is accessible to all types of clients. It should be noted here that the typical J2EE components are shown without regard to the functions they are performing, and without tying the function of any of these in particular to the WAP client. These components perform, though, the usual tasks for which they are best suited [5].

Figure 2.4: Interaction between clients and a J2EE application. Clients access the JSPs and Servlets, which use JavaBeans and EJBs, backed by a database.

The clients (whether WAP or web) interact with the JSP/Servlet layer, which might rely on JavaBean components for certain processing, and interact with enterprise javabean components, which are used for business logic and data persistence. It is also shown that the components beyond the JSP/Servlet layer are client-agnostic, in that the presentation of the data content in the correct format for a particular client is the responsibility of the JSP/Servlet layer.

Based on the scenario outlined above, the architecture used to integrate WAP with a J2EE application is sketched in figure 2.5. The figure also

illustrates the design choices that were made, which will be explained in the sections that follow. The most important of these are:

- JSPs are used purely for presentation purposes, and the servlet is used in a controller function

- XML is used to represent the business data, which helps to cleanly partition components in a multi-layered architecture, with each layer having a unique responsibility

- XSLT transformation is used to transform the XML content into the proper format for a client, e.g. WML for a WAP client, or HTML for a browser based client

Figure 2.5: Architecture overview. Client requests pass through the JSPs (presentation) and the controller servlet to the session and entity EJBs backed by the database; the XML data returned from the EJB layer is transformed by XSLT into HTML or WML for the client.
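The third design choice, selecting WML or HTML output per client, is typically driven by the HTTP Accept header, since WAP gateways advertise text/vnd.wap.wml there. The following standalone sketch (the stylesheet file names are hypothetical, not those of the thesis application) shows the kind of selection logic a controller servlet might apply before running the XSLT transformation:

```java
public class StylesheetSelector {

    // Pick a stylesheet by what the client says it accepts.
    // WAP gateways send "text/vnd.wap.wml" in the HTTP Accept header;
    // anything else falls back to the HTML stylesheet.
    static String selectStylesheet(String acceptHeader) {
        if (acceptHeader != null && acceptHeader.contains("text/vnd.wap.wml")) {
            return "results-wml.xsl";
        }
        return "results-html.xsl";
    }

    public static void main(String[] args) {
        // A WAP gateway's Accept header selects the WML stylesheet.
        System.out.println(selectStylesheet("text/vnd.wap.wml, image/vnd.wap.wbmp, */*"));
        // A desktop browser's Accept header selects the HTML stylesheet.
        System.out.println(selectStylesheet("text/html, */*"));
    }
}
```

The same XML data then flows through whichever stylesheet was selected, keeping the EJB layer entirely unaware of the client type.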

2.5.1. Business case

Figure 2.6: Simplified use-case diagram. The WAP client performs the login, perform transactions, and logout use cases.

The simplified use-case diagram in figure 2.6 shows the WAP client performing three simple, self-explanatory functions. The login and logout use cases have significance with respect to session management for the WAP client, which is covered in a later section in this chapter. The perform transactions use case represents any action in general that the WAP client performs with his WAP device, which leads to an interaction with the J2EE system. This may, or may not, lead to transactions in the J2EE sense of the word. For example, the WAP client might just be requesting read-only information, and the J2EE system which caters to the request might be configured to run this request without a transaction, or, if running within a transaction, to avoid unnecessary datastore calls, thereby avoiding the overhead associated with transactions and resulting in better performance. This use case could include, for example, the WAP device user requesting a statement of his bank account, or initiating a payment of bills [6]. An interaction diagram is shown in figure 2.7, which represents the actual components used in the application. Specific details are available in [7]. This interaction diagram has been simplified so as to show only the components and the interactions between them. The components, and the design choices behind them, are explained next.


Figure 2.7: Simplified interaction diagram. The JSP passes parameters to the servlet; the servlet performs lookup() and create() on the SFSB via JNDI and calls method(parameters); the SFSB calls findBy(...) : Enumeration and getPrimaryKey() on the entity beans (CMP or direct JDBC access, using the Oracle XML Utility) and method(primKey) : xmlString; the servlet then parses and displays the result, as WML to the WAP client or as HTML to a web browser.

The design, at the JSP/Servlet layer, conforms to what is normally referred to as the Model 2 architecture [8], which corresponds to the Model-View-Controller paradigm: the processing of requests received through the JSPs is offloaded to a controller servlet, leaving the JSP layer to concern itself mostly with presentation issues. The choice of a servlet in the controller function is also recommended in the latest version of the Java Pet Store demo application [9], which is a sample application from Sun Microsystems illustrating J2EE design patterns and best practices.
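The Model 2 controller idea can be sketched without a servlet container: a single controller dispatches named actions to handlers and returns the name of the JSP that should render the result. The action and view names below are hypothetical, not those of the thesis application:

```java
import java.util.HashMap;
import java.util.Map;

// One handler per client action; in the real application each handler
// would delegate to the stateful session bean.
interface Action {
    String perform(Map<String, String> params);
}

public class FrontController {

    private final Map<String, Action> actions = new HashMap<>();

    void register(String name, Action a) {
        actions.put(name, a);
    }

    // Returns the name of the JSP that should render the result,
    // mirroring how a controller servlet forwards to a view.
    String handle(String actionName, Map<String, String> params) {
        Action a = actions.get(actionName);
        return (a == null) ? "error.jsp" : a.perform(params);
    }

    public static void main(String[] args) {
        FrontController fc = new FrontController();
        fc.register("login", params -> "loginResult.jsp");
        System.out.println(fc.handle("login", new HashMap<>()));
        System.out.println(fc.handle("unknown", new HashMap<>()));
    }
}
```

In the container, the controller servlet's doGet/doPost would play the role of handle(), with RequestDispatcher.forward() performing the actual forward to the selected JSP.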

2.5.2. Components

The interaction diagram in figure 2.7 shows the following components:

- JavaServer Pages (JSPs): The JSPs are the point of entry to the application, and are written for presentation to the WAP client. These are WAP-client specific JSPs, which can be used to dynamically construct a WML deck for the WAP client. They are also used to accept input from the WAP client, e.g. the input parameters required for login to the system, and then call the controller servlet with the parameters to offload processing to the servlet. There are a number of JSPs in the application, which are used directly by the WAP client in response to requests made to the application, and also by the servlet to dispatch responses to the WAP client, e.g. to inform the WAP client about the result of a login attempt. An example JSP page is shown in listing 1.

<%@ page contentType="text/vnd.wap.wml" %>
<wml>
  <card id="query" title="Query">
    <p>Please enter:</p>
    <p>username: <input name="username"/></p>
    <p>quantity upper limit: <input name="upper"/></p>
    <p>quantity lower limit: <input name="lower"/></p>
    <p><anchor>View results
      <go href="<%= response.encodeURL("controller") %>">
        <postfield name="username" value="$(username)"/>
        <postfield name="upper" value="$(upper)"/>
        <postfield name="lower" value="$(lower)"/>
      </go>
    </anchor></p>
  </card>
</wml>

Listing 1

- Servlet: The servlet, being in a controller function, acts on requests received from the JSPs. This includes looking up the SFSB using JNDI, and then delegating the request processing to the SFSB. The SFSB also maintains state on behalf of the WAP client. The servlet gets data in XML format back from the SFSB, and then passes this data to another of its methods for display to the WAP device. This design choice is explained later.

- Stateful Session Bean (SFSB): The SFSB is the component that provides an entry point to the enterprise javabeans part of the application. The SFSB provides access to the "back-end" entity beans, which are data representations of the business data. Using a session bean to encapsulate transactional access to entity beans is normally called the "Wrapper" pattern. The rationale behind this is discussed in various resources, including [10]. The SFSB, in addition to maintaining state on behalf of the client in its variables, calls the appropriate methods on the entity beans. If the method calls on the entity beans return an XML string, then the SFSB returns the string to the servlet for XSLT processing for display to the client.

- Entity Beans: Entity beans represent persistent data, and are mapped to tables in a database. The entity beans are used to return or update the underlying data as a result of the request initiated by the WAP client. For example, an account entity bean, representing a customer account, could be used to return the current status of the account, or to transfer money from one account to another.

2.5.3. Using XML as data format

In this application, all the data being passed between the different layers is in XML format. This leverages the power of the Java-XML combination to enable the architecture to serve any kind of client, and to introduce a generality in the application components with respect to their roles. In addition, it is evident that enabling WAP access to those existing systems that use XML as the data format is quite easy, by using a judicious transformation technique.

2.5.4. XML/XSL processing

The data being passed from the entity bean layer to the JSP/Servlet layer is in XML format. For this application, the entity beans are responsible for generating the XML representation of their data. This is done with the help of the Oracle XML Utility, which is described in Appendix C. This is, of course, not the only method of converting the underlying data into an XML representation, but it was adopted because of its ease of use and adequate performance, as described in the appendix. Using the Oracle XML Utility, it is possible to have the XML representation of the data as a java.lang.String, and the entity beans return this string in response to the request for data. This string is then passed by the SFSB to the servlet, where it is parsed and processed using XSLT stylesheets. The Oracle XML Utility also includes an XML parser and XSLT processor, and these are used by the servlet for parsing and transforming the XML string. This enables the same data to be transformed to the desired format, e.g. to WML for display on WAP-enabled devices.
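As a rough illustration of the kind of XML string the entity bean layer returns, here is a hand-rolled sketch. It stands in for the Oracle XML Utility (which actually generates a ROWSET/ROW document directly from a SQL query); the field names are taken from the listings later in this chapter:

```java
public class AccountXml {

    // Serialize an entity's fields into a ROWSET/ROW shaped XML string,
    // similar in structure to what the Oracle XML Utility produces.
    static String toXml(int price, String product, String shop, String date) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\"?>\n<ROWSET>\n <ROW num=\"1\">\n");
        sb.append("  <price>").append(price).append("</price>\n");
        sb.append("  <product>").append(escape(product)).append("</product>\n");
        sb.append("  <shop>").append(escape(shop)).append("</shop>\n");
        sb.append("  <date>").append(escape(date)).append("</date>\n");
        sb.append(" </ROW>\n</ROWSET>\n");
        return sb.toString();
    }

    // Minimal escaping so field values cannot break the markup.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.print(toXml(177, "cycle", "karstadt", "2000"));
    }
}
```

Whichever mechanism is used, the key point is that the entity beans hand back a plain java.lang.String, so the SFSB and servlet stay independent of how the XML was produced.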

2.5.5. XSLT for Transforming XML

The World Wide Web Consortium has released a specification for a language, XSLT, that can specify transforming one XML document into another [11]. XSLT (XSL Transformations) is an XML language for specifying how XML documents can be transformed into other XML documents, or into non-XML formats. For the purpose of the application being described, XSLT is instrumental in generating WML or HTML from the XML document. This is represented in figure 2.8.


Figure 2.8: An XSLT stylesheet transforms the XML source into HTML or WML.

An XSLT stylesheet works by taking the source markup and recursively searching for patterns in it, applying XSLT formatting templates to the portion of the document that matches the pattern. This process is recursive in that the XSLT templates are again applied to the mark-up extracted from the source document and copied (according to the templates) into the "output" document (often called the target). This process repeats until the entire document has been processed. It should also be kept in mind that an XSLT stylesheet is itself always well-formed, valid XML.
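The transformation step can be exercised outside the application. The sketch below uses the standard JAXP API (javax.xml.transform) rather than the Oracle XML Utility's processor, but the principle, applying a stylesheet's templates to an XML source, is the same:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {

    // Apply an XSLT stylesheet (given as a string) to an XML string
    // and return the transformed output as a string.
    public static String transform(String xml, String xsl) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<items><item><price>177</price>"
                   + "<product>cycle</product></item></items>";
        // A tiny stylesheet: one template matching <item>, text output.
        String xsl =
            "<xsl:stylesheet version='1.0' "
          + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:output method='text'/>"
          + "<xsl:template match='item'>price: <xsl:value-of select='price'/>\n"
          + "product: <xsl:value-of select='product'/></xsl:template>"
          + "</xsl:stylesheet>";
        System.out.println(transform(xml, xsl));
    }
}
```

In the thesis application the servlet performs exactly this step, except that the output method is XML (a WML deck) and the stylesheet is loaded from a file rather than a string.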

2.5.6. Using the servlet for XSLT processing

As mentioned before, the servlet transforms the XML string received from the stateful session bean into WML for the WAP client using XSLT processing. For this purpose the servlet parses the XML string, and then transforms the XML to WML format using the appropriate XSL stylesheet. Listings 2 and 3 show portions of the XML and the XSL documents respectively, and the resulting WML deck is shown in listing 4.

<?xml version="1.0"?>
<ROWSET>
  <ROW num="1">
    <price>177</price>
    <product>cycle</product>
    <shop>karstadt</shop>
    <date>2000</date>
    <username>farhat</username>
    <quantity>2</quantity>
  </ROW>
</ROWSET>

Listing 2

<xsl:template match="ROWSET">
  <wml>
    <card id="results" title="View Results">
      <p>The following resulted from your query</p>
      <xsl:apply-templates select="ROW"/>
    </card>
  </wml>
</xsl:template>

<xsl:template match="ROW">
  <p>price: <xsl:value-of select="price"/><br/>
     product: <xsl:value-of select="product"/><br/>
     shop: <xsl:value-of select="shop"/><br/>
     date: <xsl:value-of select="date"/></p>
</xsl:template>

Listing 3

<wml>
  <card id="results" title="View Results">
    <p>The following resulted from your query</p>
    <p>price: 177<br/>
       product: cycle<br/>
       shop: karstadt<br/>
       date: 2000</p>
  </card>
</wml>

Listing 4

In the design mentioned previously, the servlet is responsible for the XML parsing and XSL processing of the XML string it receives from the stateful session bean. This is a deliberate design choice. It does, however, push part of the presentation into the responsibility area of the servlet, so we have a seemingly hybrid approach that does not fully separate the presentation functions from the controller functions for which the servlet is actually meant. Nevertheless, this design was found to be convenient to use, and kept the complexity out of the JSPs. Other options could include using a JSP in conjunction with a JavaBean component to do the required processing, or using tag libraries for XSLT transformation [12]. The use of JavaBean components can have an advantage when the information being returned to the JSP/Servlet layer is quite large in quantity and, due to the display limitations of WAP devices, cannot be displayed at once, or is meant to be displayed chunk by chunk according to user input. The data could then be cached in the JavaBean and sent to the WAP device as prompted by user input. In this application, though, the desired display data being returned from the stateful session bean was solely controlled through XSLT transformation, which itself offers adequate control over the data being transformed.

2.5.7. Cost of XSLT transformation

Even though XSLT is a rich and flexible language for specifying how XML data should be transformed to other formats, it has an associated cost, incurred in parsing the XML string and, more importantly, in XSLT processing. However, in the application described here, and in any similar application serving both web and WAP clients with the same offering, the processing requirements are not intense, given the limited display capabilities of WAP devices. Testing, as described in chapter 5, also showed that XSLT processing has no negative effect on application performance. Within this application, the DOM representation [13] of the XML document is used. Since a DOM representation is kept entirely in memory, it can hurt performance when the XML document is large; for the reasons just described, this is not a concern here. Moreover, DOM provides an easy-to-use, clean interface to the data in a convenient format, which proves quite advantageous. Both issues are discussed further in [3].
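How such an in-memory DOM tree is obtained can be sketched with the JAXP parser API. This is a minimal sketch; the element name price is illustrative.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomSketch {

    // Parses the XML string into a DOM tree and reads one value out of it.
    // The whole tree is materialised in memory here, which is the cost the
    // text above refers to; the element name "price" is illustrative.
    public static String firstPrice(String xml) {
        try {
            DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("price").item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(firstPrice("<result><price>177</price></result>"));
    }
}
```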

2.5.8. Session Management

Session tracking in WAP-based scenarios has different requirements, owing to the fact that cookies are not supported by WAP phones in general. Even though some WAP gateways and simulators do support cookies, this is a non-standard feature and not portable across WAP gateways and devices; hence it was not used in the prototype application described previously. A more suitable way of tracking sessions is URL rewriting, which was found to work satisfactorily. However, since URL rewriting encodes the session ID in the URL, it leads to an issue when using a cluster of WebLogic Servers; this is described in more detail in chapter 5. Listing 5 shows a snippet in which URL rewriting is used within a JSP that takes parameters from the client and passes them to the servlet.

Listing 5

Nokia documentation [14] also mentions using the WSP session ID for mapping requests to a session. However, for the reasons given in [14], and because minimal reliance on the WAP infrastructure is a goal of this generic architecture, this method was not considered. As depicted in the use case diagram in figure 2.3, the login and logout use cases represent the start and end of a user session, respectively. When the user logs in, a session is started, and all user actions are associated with it until the user logs out. This session tracking is made possible by URL rewriting.
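Since the JSP snippet relies on the servlet container's HttpServletResponse.encodeURL call, the effect of URL rewriting can be illustrated with a plain string helper that mimics the common ;jsessionid= convention. This is a sketch, not the container's actual implementation.

```java
public class UrlRewriting {

    // Mimics the effect of HttpServletResponse.encodeURL when cookies are
    // unavailable: the session ID travels as a path parameter inside the
    // URL itself, which is why it becomes visible to anything that routes
    // on URLs, such as a load balancer in front of a WebLogic cluster.
    public static String encodeUrl(String url, String sessionId) {
        int query = url.indexOf('?');
        if (query < 0) {
            return url + ";jsessionid=" + sessionId;
        }
        return url.substring(0, query) + ";jsessionid=" + sessionId
             + url.substring(query);
    }

    public static void main(String[] args) {
        System.out.println(encodeUrl("/query?product=cycle", "ABC123"));
    }
}
```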

2.5.9. Using a stateful session bean to maintain state

As mentioned previously, a stateful session bean is used to maintain state on behalf of the user. Since a stateful session bean naturally lends itself to maintaining state on behalf of a client, it fits neatly into the design. Maintaining state in the stateful session bean does, however, have implications where failover is desired, especially for an application deployed on WLS, since WLS version 5.1.0 does not support fail-over of stateful session beans. As mentioned earlier in this chapter, though, a cluster of WLS was not used for this application, and therefore the option of fail-over was not considered.

2.5.10. Scalability of the WAP/J2EE application

Even though the application described above is meant to cater to both WAP and web clients, the scalability requirements and issues differ between the two. However, as intended with the generic architecture, most of the processing load falls on the J2EE components, and therefore the onus of scalability can be placed on the J2EE infrastructure provided by the application server (scalability in an application like this is discussed further in chapter 5). It is not possible, though, to bypass the WAP gateway, which is an essential part of the whole setup. Since WAP requests reach the J2EE system through the WAP gateway, the performance characteristics of the gateway contribute to the overall performance of the system. The Nokia WAP Server offers a number of parameters that can be tuned to enhance performance [14], for example increasing the thread pool size, which enables more simultaneous requests to be served. Another tuning option is the HTTP cache, which allows the Nokia WAP Server to cache documents received from the web server in its internal cache. However, since the application generates documents dynamically through the JSP/servlet layer, it is advisable to disable this cache. A further possibility is the use of gateway clusters [15], which would make it possible to balance load at the gateway level, though this is not widely used at present.
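Why a larger thread pool admits more simultaneous requests can be illustrated with a toy model built on the JDK's ExecutorService. The 50 ms request time and the pool sizes are arbitrary assumptions, not measurements of the Nokia WAP Server.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSketch {

    // Runs 'requests' simulated gateway requests, each taking ~50 ms, on a
    // pool of 'threads' workers, and reports whether all of them finished
    // within the given deadline.
    public static boolean serve(int threads, int requests, long deadlineMs) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(50); // simulated request handling time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        try {
            return done.await(deadlineMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // 20 concurrent workers absorb 20 requests well inside the deadline;
        // a single worker must serialise them (20 x 50 ms) and misses it.
        System.out.println(serve(20, 20, 2000));
        System.out.println(serve(1, 20, 400));
    }
}
```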

2.6. Use of a Publishing Framework

A publishing framework [16] can provide an infrastructure for dynamically generated content, properly transformed for correct presentation to a particular client. Cocoon [17], from the Apache Cocoon project, is one such framework. Built entirely in Java and XML, with transformation of XML based on XSLT technology, the Cocoon publishing framework can be used to serve WML content to WAP clients. Its use was briefly investigated within the scope of the application described previously, but the investigation was not pursued further, since the requirements of the application did not match seamlessly with the functionality of the framework. In addition, using XSLT technology to transform XML content generated on demand by the client, as described previously, was found to perform satisfactorily. However, it might be interesting to investigate the use of a publishing framework within the scope of a J2EE application that is required to serve the same content to different clients, each with different presentation requirements.

2.7. Conclusion

It has been shown how WAP can be used in conjunction with an application built on the J2EE platform. Using XML as the data transfer format helps an application stay client-agnostic, and XSL transformation enables different clients, including WAP clients, to access the application. This separates the presentation responsibility of the application from the business functionality, which is built around J2EE technologies, hence taking advantage of the services available to a J2EE application. It has also been shown how the generic architecture described shifts the burden of application logic and processing to the J2EE platform, therefore allowing for better scalability.


Chapter 3

3. Enhancing the WAP-J2EE scenario with the Java Message Service (JMS)

This chapter proposes an enhancement to the WAP-J2EE architecture using JMS. Beginning with a brief introduction to JMS, it suggests scenarios in which JMS could be used in a J2EE environment to serve WAP clients. How the proposed scenario adds value for a WAP client is also discussed, focusing on the advantages that JMS brings as well as the shortcomings of present JMS implementations.

3.1. JMS

According to Sun's definition, JMS is a strategic technology for J2EE, meant to work in concert with other technologies to provide reliable, asynchronous communication between components in a distributed computing environment. The objectives and functionality of JMS, and its relationship with other J2EE technologies, are described in the JMS specification [18]. It should also be mentioned that the proposed final draft of the EJB 2.0 specification [19] provides for a message-driven bean component, which is an asynchronous message consumer. This should allow for smooth integration of JMS functionality with EJBs within the scope of the services provided by J2EE application servers. The scenario presented in this chapter could be further enhanced using message-driven beans once application server implementations conforming to the EJB 2.0 specification become available.

JMS provides a common way for Java programs to create, send, receive and read an enterprise messaging system's messages. It is an API for asynchronous, distributed enterprise messaging that spans processes and machines across a network; the API defines how a JMS client accesses the facilities of an enterprise messaging product [18]. A JMS provider is the entity that implements JMS for a messaging product. For example, WebLogic Server [19] comes with a JMS implementation that can be used to create, send and receive messages. A message is a unit of information or data that is sent from a process running on one computer to other processes running on the same or different computers on the network. The messaging models available within JMS-based messaging systems are:

- Publish-Subscribe messaging
- Point-to-Point messaging

With Publish-Subscribe messaging, multiple publishers can put messages on a Topic, and multiple subscribers can receive all the messages from the Topic; the messages put on a Topic can optionally be made durable. With Point-to-Point messaging, senders can drop messages into a Queue, from which a receiver can take them. These concepts are illustrated in figure 3.1.

Figure 3.1: The two messaging models within a messaging server — publishers put messages on a Topic, from which subscribers receive them; senders put messages in a Queue, from which a receiver takes them.
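The difference between the two models is fan-out: a Topic copies each message to every subscriber, while a Queue hands each message to exactly one receiver. A toy in-memory sketch of figure 3.1's two halves (not the JMS API itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ToyMessaging {

    // Publish-Subscribe: every subscriber gets its own copy of each message.
    static class Topic {
        private final List<List<String>> subscribers = new ArrayList<>();
        List<String> subscribe() {
            List<String> inbox = new ArrayList<>();
            subscribers.add(inbox);
            return inbox;
        }
        void publish(String msg) {
            for (List<String> inbox : subscribers) {
                inbox.add(msg); // fan-out: one copy per subscriber
            }
        }
    }

    // Point-to-Point: each message is consumed by exactly one receiver.
    static class Queue {
        private final BlockingQueue<String> messages = new ArrayBlockingQueue<>(100);
        void send(String msg) { messages.add(msg); }
        String receive() { return messages.poll(); } // null when empty
    }

    public static void main(String[] args) {
        Topic topic = new Topic();
        List<String> sub1 = topic.subscribe();
        List<String> sub2 = topic.subscribe();
        topic.publish("order-placed");
        System.out.println(sub1.size() + " " + sub2.size()); // 1 1

        Queue queue = new Queue();
        queue.send("order-placed");
        System.out.println(queue.receive()); // order-placed
        System.out.println(queue.receive()); // null: already consumed
    }
}
```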

Some primary features of JMS are described below:

- ConnectionFactory: used to create connections to a specific JMS provider. A ConnectionFactory is an “administered” object and is looked up through JNDI [20].

- Destination: JMS defines a Destination interface that defines the location to which messages are sent, or from which messages are received. Topic and Queue are in fact the two types of Destination, corresponding to the messaging models described above. A Destination is also an administered object, and can be defined in and looked up through JNDI.

- Connection: the Connection class represents an active connection to the JMS provider and is obtained from the ConnectionFactory. A Connection can be used to obtain a Session object.

- Session: a Session object represents a single-threaded context for sending and receiving messages. It can be used to create MessageProducers and MessageConsumers; the former are used by clients to send messages to a particular destination, whereas the latter receive messages from particular destinations.

The interaction of the JMS objects is shown in the conceptual interaction diagram in figure 3.2, for the Publish-Subscribe model, with a message of type TextMessage being published.

Figure 3.2: Conceptual interaction for publishing a TextMessage — the client looks up the TopicConnectionFactory and the Topic bound in JNDI, creates a Connection and a Session, calls createPublisher(topic) to obtain a Publisher, calls createTextMessage(xmlString) to create the message, and calls publish(message), which puts the message in the Topic.

Clients can receive messages from a Destination in two ways: synchronously or asynchronously. When retrieving messages synchronously, the client must wait for a message to arrive. With asynchronous retrieval, the client listens for messages, and a thread managed by the Session object notifies the listeners when messages arrive.

- Message: messages are the items that are sent or received, represented by objects that implement the Message interface. A JMS message consists of three parts:
  - Header: carries the JMS-specific information.
  - Properties: carry optional, provider-specific information.
  - Body: carries the content of the message, depending on the message type.

The following message subtypes are defined by the JMS specification:

- TextMessage: carries a java.lang.String in its body. It is useful for exchanging simple text messages as well as more complex character data such as XML documents. This is the message type proposed for the sample scenario described later, since it can carry an XML string usable by other components in the system.

- ObjectMessage: carries a serializable Java object in its body.

- BytesMessage: carries an array of primitive bytes.

- StreamMessage: carries a stream of primitive Java types.

- MapMessage: carries a set of name-value pairs in its body; the values must be Java primitives or their wrappers.

JMS also defines different acknowledgement modes:

- DUPS_OK_ACKNOWLEDGE: the message receiver may receive duplicate messages.
- AUTO_ACKNOWLEDGE: message acknowledgement is handled automatically by the JMS system.
- CLIENT_ACKNOWLEDGE: the message receiver invokes the message's acknowledge() method itself.

JMS also provides for optionally specifying a Session as transacted, which is suitable for introducing transactional behaviour into message sending and receiving. However, JMS does not require a provider to support distributed transactions, which may optionally be supported via the JTA XAResource API [40]. More will be said about transactions in the context of JMS later in this chapter. Further information about JMS can be obtained from [21].


3.2. Proposed Scenario

Figure 3.3: General architecture of the proposed scenario — the WAP client passes parameters to the JSPs/servlet layer; an XML component puts a persistent message (stored in a database) on a Topic; J2EE components are called to fulfil the message, and a report component puts the results on a second Topic; the client checks the status and gets the outcome, with XSLT applied for presentation. A browser can also be used by the user later on to check the status.

Figure 3.3 shows the general architecture of the proposed scenario. A WAP client interacts with a JSP/servlet layer, which is responsible for presentation to the client as well as for taking the required input and passing it, after processing, further down the application chain. The first element in this chain is the JMS destination, in this case a Topic (which implies Publish-Subscribe messaging), to which the XML message is put. The JSP/servlet layer converts the user input into an XML string and passes it to another component, which creates a message from the XML string and puts it on the Topic. These messages are persistent and are stored in a database for guaranteed delivery. Another component in the chain is responsible for retrieving the message and, based on the information parsed from it, calls other components in the J2EE application. Depending on the content of the message, these components can be called in a single transaction, and on completion of the transaction, successful or otherwise, a message is put on another Topic. The WAP client can later, through the JSP/servlet layer, read the message from this Topic to learn the outcome. The user could also check the status of the transaction through a browser client from the comfort of her office, using the same JSP/servlet layer. This scenario is described in more detail in the conceptual interaction diagram in figure 3.4; the choice of components follows.

Figure 3.4: Conceptual interaction — the WAP client initiates a request with parameters; the JSP passes them to the servlet, which creates the XML (createXML(): XMLMessage), looks up and creates the SLSB, and calls putMessage(XMLMessage); the SLSB puts the message on TOPIC1; the SFSB receives the message (receive(): Message) and calls performWork() on EJB1 and EJB2 within a single transaction, then puts the result message on TOPIC2 (putMessage(resultMessage)); the client checks the outcome later (getOutcome/getMessage), through a WAP device or through a web browser.

The interaction diagram shows the following components:

- JavaServer Pages (JSP): the JSPs present WML cards to the WAP client. They are used for presentation purposes only, taking the parameters from the WAP client and passing them on to the servlet. An alternative would be to use JavaBean components or custom tags to process these parameters [3][6], rather than delegating the task to the servlet.

- Servlet: the servlet processes the parameters passed to it by the JSP. It creates an XML string from these parameters according to predefined logic, looks up a stateless session bean (SLSB), and calls a method on the SLSB, passing the XML string as a parameter.

- Stateless Session Bean (SLSB): the SLSB uses the JNDI and JMS interfaces to obtain the required JMS objects, creates a TextMessage with the XML string as the message body, and puts it on the Topic. The delivery mode is specified as persistent, so that the messages are stored in the database. The use of an SLSB also contributes to the performance of the application, since a few instances of stateless session beans can service a large number of clients; in addition, there is no activation and passivation overhead. Listing 1 shows how the SLSB looks up the JMS objects and uses them to put messages.


    InitialContext ic = null;
    TopicConnectionFactory tconFactory = null;
    TopicConnection connection = null;
    TopicSession session = null;
    Topic topic = null;
    TopicPublisher publisher = null;
    TextMessage msg = null;
    try {
        // JNDI lookup of the connection factory
        ic = getInitialContext();
        tconFactory = (TopicConnectionFactory) ic.lookup(JMS_FACTORY);
        log("SLSB: connection factory looked up");
        // make a connection
        connection = tconFactory.createTopicConnection();
        log("SLSB: connection created");
        // make a non-transacted session with automatic acknowledgement
        session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        log("SLSB: jms session created");
    } catch (NamingException ex) {
        log("naming exception while looking up factory " + ex);
    }
    try {
        topic = (Topic) ic.lookup(TOPIC);
        log("SLSB: jms topic looked up");
    } catch (NamingException ne) {
        log("exception while looking up topic " + ne);
    }
    try {
        // create a publisher
        publisher = session.createPublisher(topic);
        // create the message from the XML string held in the StringBuffer
        msg = session.createTextMessage(buffer.toString());
        log("SLSB: message created");
        // publish with persistent delivery, priority 10, no expiry
        publisher.publish(msg, DeliveryMode.PERSISTENT, 10, 0);
        log("SLSB: message published");
    } catch (JMSException je) {
        log("jms exception while publishing " + je);
    }

Listing 1

- Stateful Session Bean (SFSB): the SFSB is the component that subscribes to the Topic and receives messages synchronously, since EJBs cannot be used as asynchronous message listeners. A servlet could have been used in place of the SFSB and could have listened for messages asynchronously by implementing the JMS MessageListener interface and providing an implementation of the onMessage method; however, synchronous message receiving was found to perform adequately in the early prototype [22]. Since the SFSB can maintain state on behalf of the client, it can keep relevant information taken from the message as part of its state. This state is useful when publishing the message containing the result of the transaction, since it allows a message to be created that can be identified for each client. The main advantage of using the SFSB, however, lies in transaction management and the use of the SessionSynchronization interface, described next.

The use of Enterprise JavaBeans components leads to implicit transaction management provided by the application server. This is exploited by the stateful session bean, which, based on the contents of the Message it receives from the Topic, enlists the needed components, for example entity beans, in a transaction. As presented previously, the SFSB puts a message on a different Topic to report the outcome of the transaction. Making the message passing part of the EJB transaction would require XA support from the JMS implementation, which is not available in current JMS implementations. To overcome this, one option is the use of the SessionSynchronization interface [23]. The EJB container invokes the SessionSynchronization methods afterBegin, beforeCompletion, and afterCompletion at each of the main stages of the transaction. The afterCompletion method indicates that the transaction has completed, and has a single boolean parameter whose value is true if the transaction committed and false if it was rolled back. The stateful session bean can therefore put the appropriate message on the Topic depending on the outcome of the transaction, as shown in the pseudo-code snippet in listing 2.


    public void afterCompletion(boolean committed) {
        // check whether the transaction succeeded
        if (committed) {
            // put message in Topic 1
            publisher.publish(commitMessage, ..);
        } else {
            // put message in Topic 2
            publisher.publish(abortMessage, ..);
        }
    }

Listing 2

Using two different Topics reflects the two different kinds of message and eases administration.

3.3. Usage Scenario

The above scenario is useful in situations where the WAP client does not require an immediate response from the transaction. This is illustrated in the simple use case diagram in figure 3.5. A user could use his WAP-enabled phone to place an order for a book, for example, or to pay a bill, while driving to the office. Later, in the office, he could check the status of the order, which could indicate whether it was fulfilled or is still in progress, along with a tracking number in case the user wants to pursue it further or cancel the order. The user could also use a normal browser from within his office to do this, instead of using his WAP-enabled phone again.

Figure 3.5: Use case diagram — the WAP client can place an order and check the status of the transaction.

The same scenario can be extended to a number of other situations with similar requirements.

3.4. Some important aspects

Based on the scenario suggested above, a number of relevant points are discussed below.

3.4.1. Quality of the JMS implementation

The performance of the above scenario depends, among other factors, on the quality of the JMS implementation. An independent JMS implementation may perform better than the one available as part of the J2EE application server, but the cost of integrating an external JMS product into the infrastructure of the application server might offset the performance benefits to be gained from it. This is especially relevant when other J2EE components are used in conjunction with JMS. Moreover, the performance of a JMS implementation can only be ascertained by thorough testing. Important aspects include the thread management of the J2EE server (thread management aspects of JMS are mentioned below), the amount of “load” (e.g. the number of destinations, messages, message producers, etc.) that the JMS implementation can bear, and the stability of the JMS implementation within the J2EE application server context.

3.4.2. Concurrency and performance issues

According to the JMS specification, a Session may not be operated by more than one thread at a time; the reason for this restriction is described in [18]. JMS ConnectionFactories, Connections and Destinations support concurrent use, whereas MessageProducers and MessageConsumers can only be accessed by one thread at a time. This needs to be taken into account when designing and implementing a JMS application, and in the scenario described previously these issues are addressed in the design and implementation. It should also be realized that making messages persistent has an associated overhead, since the messages must be persisted to a database. However, with persistent delivery the JMS provider is required to deliver a message once and only once to the client, meaning that a JMS provider failure should neither cause the message to be lost nor cause it to be delivered twice [18].
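One common way to satisfy the single-threaded-session rule is to confine all session use to one thread, for example with a single-thread executor. A sketch under that assumption — FakeSession is a placeholder for illustration, not a JMS class:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SessionConfinement {

    // Stand-in for a JMS Session: deliberately not thread-safe, mirroring
    // the specification's rule that a session is single-threaded.
    static class FakeSession {
        private int sent = 0;
        int send(String msg) { return ++sent; }
    }

    private final FakeSession session = new FakeSession();
    // All session work is funnelled through this one thread, so the session
    // is never touched by two threads at once, however many callers there are.
    private final ExecutorService sessionThread = Executors.newSingleThreadExecutor();

    public int send(String msg) {
        try {
            return sessionThread.submit(() -> session.send(msg)).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() { sessionThread.shutdown(); }

    public static void main(String[] args) {
        SessionConfinement sc = new SessionConfinement();
        System.out.println(sc.send("m1") + " " + sc.send("m2")); // 1 2
        sc.shutdown();
    }
}
```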

3.4.3. Scalability and fail-over

The scalability and fail-over of an application, insofar as it depends on the JMS implementation, is closely tied to the features that implementation offers. These features determine, for example, whether JMS can be used in a clustered mode. Clustering support can enable load balancing between Destinations across multiple servers in a cluster, which should allow the JMS application to scale in terms of supporting a larger number of Destinations and a correspondingly larger number of messages. Fail-over support, if available, determines whether the failure of one node in the cluster remains transparent to the client, with requests from the client taken over by another node offering the same set of JMS services. At present, load-balancing and fail-over support may be limited in the JMS implementations available as part of application servers.

3.5. Advantages of such a scenario

The scenario presented previously consists of different components cooperating through message passing. Since the delivery mode of the messages is persistent, the messages are guaranteed to be delivered, which increases the resilience of the overall system to failures of individual components. The scenario also decreases the load on the WAP gateway, since the WAP clients are connected only for the time needed to pass the parameters required to build a message, rather than for the whole duration of the transaction. In addition, once the message has been put on the Topic, its processing does not depend on the availability of the stateful session bean; the bean can retrieve the message at a later time and then initiate transactions depending on its contents. The backend transactions occur transparently to the WAP client and are independent of the client's connection. Overall, the availability requirements of the whole system are relaxed.

The benefits of using XML have been described in previous chapters. In this particular scenario, using a TextMessage that contains an XML string in the message body standardizes the whole scenario, in that different types of information (for different clients, different types of task, etc.) can be represented in the same data format. The message therefore does not have to be identified for each client separately using, for example, message properties or headers, which reduces message overhead. In addition, information can easily be retrieved from the message using XML parsing, and the XML can easily be transformed so that the message can be read by different types of clients. Figure 3.3 also indicates where these benefits are relevant.

3.6. Further extensions

This scenario can be further extended using message-driven beans [19] as the component that listens for the messages. This would enable asynchronous receiving of messages, since the message-driven bean is the MessageListener for a given Destination. In addition, the EJB container provides concurrency, transactions and other services for the message-driven bean, taking that burden off the application itself. At the time of this writing, J2EE application servers that support message-driven beans have started appearing on the market, usually as beta versions.


Chapter 4

4. Scalability and Availability in a J2EE environment

Scalability and high availability are essential requirements of enterprise applications. The need to deploy an application that is always up and running, and able to serve an increasingly large number of clients without any loss in performance, has led to many solutions, whose suitability for any particular application depends on a host of issues. This chapter focuses on these issues as they relate to the J2EE platform. It also considers how J2EE-compliant application servers cater to the need for scalability and high availability, and what solutions are available, with a description of the pros and cons of each.

4.1. General Overview

While J2EE makes the development and deployment of enterprise applications easy, the applications depend unquestionably on the services provided by the application server. Notwithstanding that portability is a goal of the J2EE specification [1], and that this goal has been achieved to varying degrees of compliance, the services offered by application servers vary in scope, functionality and quality. In addition, the configuration of those services plays an important part in whether an enterprise application is scalable and highly available. The application server itself is therefore the main pivot on which the application depends.

4.2. The role of the application

The impact of the architecture and design of the application itself on scalability cannot be stressed enough. However, this is not the main topic here; only those areas of application design will be highlighted, implicitly or explicitly, that have a real impact on how the application interacts with the services provided by the application server. Application design issues relevant to applications using J2EE technologies are described in [5], and a good description of Java application performance issues can be found in [41].

4.3. How to achieve scalability and availability

Roughly speaking, an enterprise application needs to be deployed for:

- best performance under all loads, i.e. scalability
- high availability
- cost-effectiveness

Application server vendors usually approach these issues by providing a clustering solution. The idea is to extend the basic configuration to provide more computing power, by exploiting the power of each machine and by using multiple machines; it should be possible to service any given load by adding the appropriate number of machines. Clustering helps by splitting the load among the available machines (ideally in proportion to the power of each machine), and provides fail-over by redistributing the load among the remaining N-1 servers if one of N servers fails. In essence, this means putting multiple application servers together to work as a whole while appearing as a single entity to clients. Before going into further detail about application server clustering, some introduction to clustering in general is given.

4.4. Clustering

A cluster can be defined [24] as a distributed system that:

- consists of a collection of whole computers, and
- is used as a single, unified computing resource.

From the outside, the collection of computers in the cluster appears as one unit. Incoming requests, though, are distributed across all machines, which allows more requests to be processed and, if one of the machines fails, does not stop the system from fulfilling incoming requests. Figure 4.1 shows a cluster of web servers serving a large number of web clients. The load-balancer black box may be a piece of hardware or software that distributes requests to the cluster members, transparently to the web clients. This figure will be expanded later to show a basic arrangement with application servers.

Figure 4.1: Web clients send their requests to a load balancer, which distributes them across a cluster of web servers.
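The distribution performed by the load-balancer black box can be sketched with a minimal round-robin picker. This is a toy model; real load balancers also weigh server capacity and health.

```java
import java.util.Arrays;
import java.util.List;

public class RoundRobin {
    private final List<String> servers;
    private int next = 0;

    public RoundRobin(List<String> servers) { this.servers = servers; }

    // Hand the next request to the next server in rotation; callers never
    // see which member of the cluster actually serves them.
    public synchronized String pick() {
        String chosen = servers.get(next);
        next = (next + 1) % servers.size();
        return chosen;
    }

    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(Arrays.asList("web1", "web2", "web3"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.pick()); // web1 web2 web3 web1
        }
    }
}
```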

Some of the advantages of clustering are:

- Performance: With a cluster of servers performing the same work as an individual computer, better performance can be achieved, whether measured in throughput, response time, or some other relevant performance metric. However, the increase in performance depends on a number of other issues, which are described later.

- Availability: With a cluster of servers each providing the same services, the failure of one of them does not cause all requests to break; the requests can instead be served by the other servers.

- Incremental growth: Given that one can get better performance and availability by clustering a group of servers, it should be possible to enhance both by adding more servers to the cluster. This is also relevant for scalability, as adding more servers enables more requests to be handled. It also improves the economic feasibility of the whole cluster, since adding cheap machines to achieve the required performance can be more economical than having one huge and expensive machine, with all the disadvantages of a single machine.
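The round-robin distribution with fail-over described above can be sketched in plain Java. This is a simplified illustration only; the class and method names are invented for this example, and a real load balancer is a dedicated hardware or software product:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the distribution a load balancer performs: requests
// cycle through the cluster members in round-robin order, and a member
// marked as failed is skipped, so the remaining N-1 servers absorb its
// share of the load.
public class RoundRobinBalancer {
    private final List<String> servers = new ArrayList<>();
    private final List<Boolean> alive = new ArrayList<>();
    private int next = 0;

    public void addServer(String name) {
        servers.add(name);
        alive.add(true);
    }

    public void markFailed(String name) {
        alive.set(servers.indexOf(name), false);
    }

    // Returns the server that should handle the next request,
    // or null if every cluster member is down.
    public String route() {
        for (int tried = 0; tried < servers.size(); tried++) {
            int i = next;
            next = (next + 1) % servers.size();
            if (alive.get(i)) {
                return servers.get(i);
            }
        }
        return null;
    }
}
```

With three members, requests rotate s1, s2, s3; after s2 is marked failed, the rotation simply skips it, which is the fail-over behaviour described above.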


4.5. Clustering in a J2EE context

With respect to J2EE, we will concentrate on the clustering of application servers, and on how this clustering can help achieve the goals of scalability and availability. Hardware clustering solutions and the clustering capabilities of operating systems will not be considered. In addition, advanced solutions exist for load balancing and fail-over at the database level; these are not part of the discussion either. It should be noted, however, that clustering the application servers may not increase the scalability of the application, simply because the database might be the bottleneck.

4.5.1. Application Server Clustering

In simplest terms, application server¹ clustering means having an installation of the application server on all machines in the cluster, properly configured for clustering behaviour. The J2EE components can then be deployed across all nodes in the cluster, so as to provide load balancing and fail-over. This arrangement should also allow for faster processing and more throughput, as explained previously. However, there are a number of considerations that play an important role, and these will be mentioned in the subsequent sections.

4.5.2. Application deployment

For application server clustering, the deployment of the J2EE application is very significant. Improper deployment or configuration can nullify the benefits to be obtained from clustering. Some important points to be taken care of are:

- Reduction of remote calls: If load balancing causes method calls on components that are not local (i.e. not in the same process or machine) to the calling component, the overhead of a remote call is introduced. Depending on the method, this overhead can have a significant impact on performance, more so when the remote call crosses machine boundaries as well.

¹ It should be remembered that an application server offers a set of services, e.g. EJB container, web container, naming service, transaction service etc. Some application servers allow these services to be configured individually, while others can only be configured as a whole.

- Locality of transactions: Transaction boundaries can have a significant effect on overall performance. If a transaction spans EJB containers, then the use of multi-tier JDBC connections becomes necessary, which, added to the overhead of distributed transaction coordination, can significantly affect performance. In addition, transaction coordination might require XA or two-phase commit, introducing non-negligible overhead.

To avoid the above-mentioned overheads, the application should be deployed such that cooperating components are deployed together. This forces the calls between components to be local, thereby avoiding the remote call overhead. For effective load balancing as well, an application could be deployed so as to result in a homogeneous cluster. This essentially means that the application is deployed with the same configuration across all cluster members, so that load balancing occurs at the entry point to the application, and the calls between components remain local. It should also be ensured, and this involves application design as well, that transactions do not span multiple components in multiple EJB containers, since distributed transaction coordination (using JDBC XA) can have a considerable negative effect on performance [42]. If load balancing between components is also needed, then the application server clustering could be configured to allow for this (Case 3 below), though this would incur the overhead mentioned previously.

4.5.3. Some Clustering Configurations

The following diagrams show some simple configurations that can be used to cluster application servers. The ideas presented here will then be built upon to describe how WLS and IAS approach clustering, since the generic schemes described in the figures are not applicable as such to either WLS or IAS.

4.5.3.1. Case 1

Figure 4.2: A monolithic application server with all the services (web container, EJB container, transaction service, naming service, etc.) on one machine, fronted by load-balanced web servers serving a number of concurrent clients, with a single database at the back-end.

Figure 4.2 refers to a simplistic model, which represents a monolithic model for application server deployment, with the application server installed on one machine. There are a number of web servers in front of the application server, and the load on them could be balanced by a software or hardware solution (not shown in the figure). The web servers have been shown separate only for the sake of illustration; a single web server on the same machine as the application server could also have been used, depending on the anticipated load. Multiple web servers have been shown because a single web server would almost certainly become a bottleneck as the number of clients grows. This simplistic scenario provides no load balancing, since a single application server serves all the requests made to it. This configuration, which does not represent clustering, could be sufficient when the expected load is not too great and can be managed by one application server. However, this solution has limited scalability, for the reasons mentioned previously. On the other hand, since all the components are on the same physical machine, there is no overhead of calls between these components, which saves considerable overhead (e.g. network overhead, marshalling/unmarshalling overhead etc.). The only remote calls being made are to the database. This setup can perform quite well for relatively moderate throughput requirements. Given the configuration of the machine itself, it might be possible to apply clustering solutions on a single machine, depending on the application server being used. This is detailed in section 4.5.4 later in this chapter.
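The marshalling cost that a remote call adds can be made concrete with a small plain-Java sketch. This is an illustration of the mechanism only, not the code of any particular ORB or EJB container; the class and method names are invented here. A remote call must serialize its arguments on the caller's side and deserialize them on the callee's side (on top of the network transfer itself), whereas a colocated call simply passes the object reference:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class MarshallingDemo {

    // Simulates the marshalling/unmarshalling half of a remote call:
    // the argument is copied through a byte stream, so the callee
    // receives an equal but distinct object.
    public static Object remoteStyleCall(Serializable argument) {
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
                out.writeObject(argument);          // marshalling, caller side
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buffer.toByteArray()))) {
                return in.readObject();             // unmarshalling, callee side
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // A colocated call: no copying, the very same object reference is used.
    public static Object localStyleCall(Object argument) {
        return argument;
    }
}
```

The byte-stream round-trip is pure overhead relative to the local call, which is why deploying cooperating components together, as described above, pays off.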

4.5.3.2. Case 2

Figure 4.3: Multiple application servers sharing the load. Two application servers (each with web container, EJB container, transaction service, naming service, etc.) sit behind load-balanced web servers, sharing a single back-end database.

The solution above has some enhancements, in that the application server is configured on two machines in a cluster. This therefore represents a cluster of application servers (here the application server is meant to include the JSP/servlet engine, EJB container, and the associated services like transactions, naming etc.). The details of this clustering with respect to WLS and IAS will be discussed in the next chapter. Again, it is worth mentioning that the web servers could be part of the application server configuration itself, could be separate in case a heavy web load is expected, or, in case they form a separate layer of configuration, might be configured as a proxy server (section 5.1.3.2). This cluster configuration allows for load balancing as well as fail-over. If the same application components are deployed on both application servers, the configuration amounts to homogeneous clustering. This is the preferred form of clustering, since it allows for load balancing and fail-over while keeping the network overhead to a minimum. This is because high-end application servers are optimised to avoid remote calls when the application components are colocated: calls from one component go to another component colocated with the calling component, and not to the same component deployed on the other machine across the network. This, however, has implications for load balancing, since load balancing will occur only at the level of entry into the application servers, after which the calls between components remain local. On the other hand, this reduces the overhead of remote calls, which could offset the benefit of finer-grained load balancing. An application would therefore have to determine which configuration works best according to its requirements. Fail-over is provided for in this configuration, since the application servers can be configured such that the failure of one application server or machine is detected, and incoming requests are re-routed to the living application server. Both WLS and IAS provide for fail-over in different ways, as will be described in chapter 5.

4.5.3.3. Case 3

Figure 4.4: A further layer of load balancing. The web containers (JSP/servlet engines) run behind load-balanced web servers, and calls from them are in turn load-balanced across the EJB containers of the application servers, which share a single back-end database.

The figure above shows one further layer being added to the clustering setup described previously. The JSP/servlet engine now forms a separate layer in application server clustering, and the EJB container forms the other layer. Such a configuration could be useful when it is desired to load balance calls to the JSPs/servlets as well as to the EJBs, since in this case the calls from the JSP/servlet layer will be load-balanced across EJBs on both machines. While allowing for a greater degree of load balancing, this configuration does introduce more overhead (remote calls, transaction coordination overhead etc.). In addition, the hardware requirements are greater, though so is the provision for availability, since the failure of one machine would not cause both the JSP/servlet and EJB layers to fail simultaneously. It also needs to be mentioned that the configurations shown above illustrate the concepts in a generic way, and these situations may not be applicable to WLS and IAS as such, as described in chapter 5.

4.5.4. Vertical and Horizontal Scaling [25]

Figure 4.5 shows a monolithic application server, running all its services in a single process on a single machine. There are some other possibilities in which clustering can be configured.

Figure 4.5: A monolithic application server, with all services (web and EJB containers, transaction service, naming service etc.) in one JVM.

One other factor of importance in application server clustering is whether or not an application server can be configured to run multiple services separately, each in its own JVM. A single-JVM application server does not scale very well, and in addition does not fully utilize the CPU power of the machine, mainly due to the limitations of the JVM.

4.5.4.1. Vertical Scaling

Vertical scaling is possible if multiple instances of the application server's services can be started on one physical machine. This can lead to better utilization of the hardware as well as better load balancing, while at the same time (possibly) avoiding remote calls. If, for example, two EJB containers are started per application server, the load will be balanced between them. If each also has its own local transaction service, then the transaction coordination overhead will be minimized as well. However, this does not help with availability, since failure of the machine hosting the application server causes all the services to become unavailable too. Vertical scaling is illustrated in figure 4.6.

Figure 4.6: Vertical scaling. An application server on one machine configured to run 3 EJB containers, each with an in-process transaction service, 1 web container, and 1 global naming service. Load balancing is also possible within one machine.

4.5.4.2. Horizontal scaling

This is the more conventional practice of clustering application servers, whereby the application server is installed on multiple physical machines. This allows for load balancing, though it also introduces remote call overhead, which takes clever application design and deployment to minimize. However, it provides for better availability, since the failure of one machine in the cluster can be compensated for by the other available machines.

4.5.4.3. Combination

Clustering of application servers that allow vertical scaling can be configured as a combination of horizontal and vertical scaling. This is particularly useful where powerful machines are available: vertical scaling utilizes each machine better, and multiple machines are utilized by having application server configurations on all of them, as illustrated in figure 4.7. A practical example of this is described in chapter 5.

Figure 4.7: Combination scaling. Each application server has a different service configuration according to the capacity of its machine, with load balancing within a machine as well as between machines.


Chapter 5

5. WLS and IAS clustering

This chapter describes the clustering solutions available with WLS and IAS. The two solutions are considered separately, though comparisons are made where relevant. The focus is on the solutions provided by both, and the strengths and weaknesses of each are considered. This is complemented with observations made using the clustering solutions of both servers in different configurations, with a similar J2EE application deployed on them.

5.1. Setup used

Before proceeding further, the setup used to cluster both WLS and IAS for testing is shown and described below. More detailed information about the setup is available in [43]. The choices made for this setup are explained in the subsequent sections.

5.1.1. WLS clustering configuration

Figure 5.1: WLS clustering configuration. A number of concurrent web clients connect through an iPlanet 4.0 proxy server, which load-balances requests across two WLS instances (each with web container, EJB container, JNDI, JDBC, transactions etc.) installed on Compaq ProLiant 5500 machines booting from a shared disk, with a single Oracle 8.1.6 database at the back-end.

The configuration in figure 5.1 shows a cluster of two WLS instances, installed on Compaq ProLiant 5500 machines. On the front end, the iPlanet 4.0 web server is configured to proxy requests to the WLS instances using the proxy plug-in. The machine housing the proxy server also has DNS configured to map requests for the cluster name to the IP addresses of the machines hosting the WLS instances. For the back-end database, a single instance of Oracle 8.1.6 is used. The heap size on each of the WLS instances was varied to minimize the effect of garbage collection, and an optimal value was found by testing various configurations relative to a particular application.

5.1.2. IAS clustering configuration

Figure 5.2: IAS clustering configuration. One machine runs the web container, the master naming service, and 2 EJB containers; the second machine runs a slave naming service, 3 EJB containers, and the in-process JDataStore database. An OSAgent mediates between the instances, and both connect via JDBC to a single Oracle 8.1.6 database serving a number of concurrent web clients.

For the IAS cluster the same machines were used, but with some significant differences in configuration, as shown in figure 5.2. Each ProLiant machine had an instance of IAS, but the number of services running on each was different. One machine had all services running, including the JSP/servlet engine and web server, and 2 instances of the EJB container, whereas the other machine had 3 EJB containers running and no JSP/servlet engine. In addition, the naming service was run in master-slave mode, which means that each instance of IAS has a naming service running, one configured as the master and the other as the slave. This supports fail-over for the naming service, so that if the master naming service fails, the slave can take over. IAS also includes a Java database that can be run in-process. This is used for passivation of stateful session beans, and also as a backing store when using the naming service in master-slave mode. It was run on one instance of IAS.

On both configurations above the same application was deployed, so as to form a homogeneous cluster.

5.2. WLS Clustering

This section describes the specifics of WebLogic clustering and the pros and cons associated with it. It is not an exhaustive description of WLS clustering; only those issues are highlighted that can affect scalability and availability.

5.2.1. Setup and configuration

- Shared Disk: Until recently, the recommended configuration for WLS was to use a shared disk as a central repository for the configuration files of the cluster, from which all members of the cluster boot up. Even though this is no longer a strict necessity since WLS version 5.1.0, not using a shared disk has been found to be quite error prone². In any case, using the individual disks of each machine hosting a WLS instance requires identical configuration files for clustering to work properly, which means that the configuration files (cluster-wide and individual server-wide properties files) must be kept in sync on all servers. Using a shared disk is less error prone and has been tested to work properly, but introduces a single point of failure, and carries a performance penalty as well. This could be alleviated by using highly available and better-performing disk solutions.

- DNS Configuration: WLS also requires DNS configuration for requests to be load-balanced across cluster nodes. This too is not necessary if the only clients are web-based clients connecting to the cluster through the proxy server (described later), but for clients connecting directly to the cluster, DNS configuration is necessary for round-robining of requests across cluster members. This could be the case in a scenario like Case 3 described in chapter 4, where the JSP/servlet tier is physically separate. In this case, requests to the JSP/servlet layer would be load-balanced by the proxy server, but requests from the JSP/servlet layer to the EJB layer would require DNS configuration to be load-balanced across cluster members. It should be noted here that this load balancing is limited to accessing the JNDI tree in each cluster member, since the stubs for EJBs returned from the JNDI tree are in any case aware of the replica objects existing across all cluster members, and hence round-robin requests across them. This means that even without DNS, load balancing across EJBs will work; however, only one JNDI tree from one node in the cluster will be accessed on each request, making that node a single point of failure. Note: in a configuration like Case 2 described in chapter 4, there might be no need to configure DNS, since the JSP/servlet layer and the EJB layer are on the same physical machine for both cluster members. The web requests through the proxy server will therefore be load-balanced across both cluster nodes, and the requests for EJBs will not be load-balanced, since the JSP/servlet layer will use the EJBs on the local machine rather than on the cluster member hosted on the second machine. However, stand-alone Java clients would require DNS if they access the EJBs directly.

² This has been reported consistently in the support newsgroup for WebLogic Server.

5.2.1.1. Proxy Server

WLS requires a proxy server configured in front of the cluster nodes for proper load balancing and fail-over. The proxy server can be configured to serve static content, like images and HTML files, and forwards all JSP/servlet requests to the WLS cluster. It should be noted that the use of a proxy can have a negative effect on the performance of an application as compared to a single non-clustered WLS instance under the same load (here the load is meant to be one that can be handled by a single server). However, without a proxy server, in-memory replication is not possible, and for higher throughput the best option is a clustering solution, which necessitates the use of a proxy server.

WLS itself can be used as a proxy server using HttpClusterServlet, or any of the Apache, iPlanet or IIS servers can be used with the provided plug-in. The proxy server, in conjunction with the plug-in, provides for load balancing across the JSPs/servlets in the cluster in round-robin fashion, as well as in-memory replication, i.e. the replication of HttpSession state across the cluster nodes in order to provide for fail-over. Moreover, sessions in WLS are sticky, whereby repeated requests from the same client go to the same servlet instance. For in-memory replication, one node in the cluster is designated as the primary server, and another as the secondary server, on which the session state is replicated. If the primary fails, the session state can be recovered as the secondary server takes over as primary. If there is another server in the cluster, it is then designated as the new secondary.

- Issues with in-memory replication: There are a number of issues with in-memory replication that have to be considered. Since the session is replicated across cluster nodes, the size of the session can affect performance and, more importantly, scalability. In addition, the objects bound into the session must be Java serializable, otherwise in-memory replication breaks. Also, for an object bound into the session, any change in the object is only reflected in the replicated session when the object is re-bound to the session. However, since WLS does not provide fail-over of stateful session beans, session state can instead be maintained using the Java Servlet API, so that the fail-over capabilities provided by in-memory replication can be availed of. This will be compared with IAS, which follows a different strategy at the JSP/servlet layer. One other weakness of in-memory replication is that it requires at least three cluster nodes to work properly. This is because when the primary fails and the secondary takes over, session replication cannot occur unless there is another WLS instance present. Therefore, until the former primary server comes up again, session state is not replicated and would be lost should the remaining server fail as well. This would not be the case if there were more than two WLS instances in the cluster. This, of course, also means a higher hardware cost, were this to be adopted.
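The serializability requirement above can be checked defensively before an attribute is bound into the session. The helper below is a hedged sketch (the class name is invented; it is not part of the Servlet API): it verifies both that the object declares Serializable and that a trial serialization actually succeeds, since a serializable wrapper around a non-serializable field would still break replication at runtime:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SessionAttributeCheck {

    // Returns true if the object would survive the serialization step
    // that in-memory session replication performs.
    public static boolean isReplicable(Object attribute) {
        if (!(attribute instanceof Serializable)) {
            return false;
        }
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            // Fails with NotSerializableException (an IOException) if any
            // referenced field is not serializable.
            out.writeObject(attribute);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

An application would also have to remember the re-bind rule above: after mutating an attribute, call setAttribute again so the replicated copy is refreshed.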

5.2.1.2. Configuration of proxy server

The WLS server itself was tried as the proxy server, but due to a bug the proxying function was unusable for the particular application deployed [7]. In addition, using IIS as a proxy did not give consistent in-memory replication behaviour either³. Using iPlanet 4.0 gave consistent results throughout, and it was hence part of the final configuration. However, having just one proxy server is an instant bottleneck, especially under increasing client load. Load balancing at the web server level is therefore a must for deployments that need to cater to an increasing number of clients.

5.2.2. Heap size

Proper setting of the heap size can have a significant effect on clustering performance. This is because WLS servers use a heartbeat mechanism [26] between cluster nodes to check whether a WLS instance in the cluster is alive. During garbage collection cycles of long duration, a WLS instance may not respond to heartbeats, causing the other cluster instances to time out and presume it dead. Later, when the presumed-dead instance re-joins the cluster, it breaks the clustering, especially in-memory replication, since the cluster setup can no longer distinguish between primary and secondary servers, and exceptions are thrown to the client. This was a cause of in-memory replication breaking in our setup, and was rectified by experimenting with different values of the heap size. Arriving at a heap size that minimizes the effect of garbage collection is therefore a must for the proper functioning of the clustering functionality. This is best done by testing different configurations with a particular application.
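The failure-detection idea behind such a heartbeat mechanism can be sketched in a few lines. This is an illustration of the general principle, not the actual WLS implementation; names and timings are invented. Each member records when it last heard from a peer, and a peer is presumed dead once no heartbeat has arrived within the timeout, which is exactly why a long garbage-collection pause looks like a crash:

```java
import java.util.HashMap;
import java.util.Map;

public class HeartbeatMonitor {
    private final long timeoutMillis;
    // Timestamp of the last heartbeat received from each peer.
    private final Map<String, Long> lastSeen = new HashMap<>();

    public HeartbeatMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public void heartbeatReceived(String member, long nowMillis) {
        lastSeen.put(member, nowMillis);
    }

    // A member is presumed dead if it has never been seen, or if its
    // last heartbeat is older than the timeout. A GC pause longer than
    // the timeout on that member triggers the same verdict as a crash.
    public boolean isPresumedDead(String member, long nowMillis) {
        Long seen = lastSeen.get(member);
        return seen == null || nowMillis - seen > timeoutMillis;
    }
}
```

Tuning the heap so that collection pauses stay well below the heartbeat timeout avoids the false-death scenario described above.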

³ This was pointed out to WLS support, who advised filing a bug report.


5.2.3. Enterprise JavaBeans

Clustering support for enterprise beans varies according to the type of bean [26]. EJBs are clusterable at two levels, the home interface level and the remote interface level (represented by EJBHome and EJBObject respectively [23]).

- EJBHome: All homes are clusterable, so that when a client looks up an EJB home, it gets a replica-aware stub that is aware of all home instances on every cluster node. A client call on the home interface is therefore load-balanced across all cluster nodes on which the EJB is deployed. Fail-over is also provided for all EJBHomes, since the replica-aware stub can re-route requests (lookup, find) to the EJBHome on an available cluster node.

- EJBObject: Similar to EJBHomes, there exists a replica-aware EJBObject stub that is aware of all EJBObject instances on all cluster nodes, so method calls on EJBObjects can be load-balanced. The provision of fail-over depends on the type of EJB. For stateless session beans, if the cluster instance hosting the bean fails, the method call is routed to an available cluster instance. This is trivial to do, since stateless session beans maintain no state for the client between method calls, and any instance can serve a particular client. For stateful session beans, fail-over for EJBObjects is not provided. Since a SFSB maintains state on behalf of the client, requests from the same client are, after the initial load balancing, always sent to the same cluster instance hosting that stateful session bean. In WLS 5.1.0 there is no way for this state to be replicated across the cluster nodes. For entity beans, fail-over depends on the type of entity bean. WLS allows two types of entity beans to be defined, a "read-write" entity bean and a "read-only" entity bean [26]. For the former, after the initial load balancing, method calls from a client go to the same bean instance. If the cluster node hosting that instance fails, the client calls are not failed over to available instances on other cluster nodes. For read-only entity beans, which assume that the bean is not used to modify the data it represents, load balancing as well as fail-over is provided. Read-only beans cache data on the server, and hence offer better performance; the assumption is that the data never changes. The fail-over behaviour for enterprise beans mentioned above will be contrasted with the behaviour of IAS later in the chapter; the inability to provide entity bean fail-over appears to be a weakness of the WLS product.
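The contrast between stateless and stateful routing described above can be illustrated with a plain-Java sketch of a replica-aware stub. The classes and names are invented for illustration and are not WLS APIs: a call to a stateless bean can fail over to any live replica, while a call for a stateful bean is pinned to the node chosen at creation time, so the loss of that node loses the call:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class ReplicaAwareStub {
    private final String[] replicas;                 // cluster nodes hosting the bean
    private final Set<String> failed = new LinkedHashSet<>();
    private final boolean stateful;
    private final String pinned;                     // node holding the SFSB state
    private int next = 0;

    public ReplicaAwareStub(boolean stateful, String... replicas) {
        this.stateful = stateful;
        this.replicas = replicas;
        this.pinned = replicas[0];                   // chosen at creation time
    }

    public void nodeFailed(String node) {
        failed.add(node);
    }

    // Picks the node a method call is sent to, or null if the call is lost.
    public String invoke() {
        if (stateful) {
            // Conversational state lives only on the pinned node:
            // no fail-over is possible for the EJBObject.
            return failed.contains(pinned) ? null : pinned;
        }
        // Stateless: round-robin over the replicas, skipping failed nodes.
        for (int tried = 0; tried < replicas.length; tried++) {
            String candidate = replicas[next];
            next = (next + 1) % replicas.length;
            if (!failed.contains(candidate)) {
                return candidate;
            }
        }
        return null;
    }
}
```

This mirrors the behaviour described in the text: any live node can answer for a stateless bean, whereas a stateful bean's calls die with its node.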

5.2.4. Transactional behaviour in a cluster

Different application servers provide different transactional models. For example, WLS (up to version 5.1.0) uses pessimistic concurrency in transactions. This means that transactional access to entity beans is serialized: while one bean is involved in a transaction, no other client can access that bean. While this has the effect of enforcing data integrity, there is a performance penalty to be paid. More importantly, applications designed to take advantage of the pessimistic concurrency behaviour are bound to run into problems, since in a cluster, where there are multiple instances of the entity bean deployed on different EJB containers, pessimistic concurrency cannot be ensured. Also, applications that rely on the container itself to maintain data integrity in the face of concurrent access from clients need special design consideration to preserve data integrity when multiple clients access the same entity bean deployed on multiple WLS instances in the cluster. This behaviour will be contrasted with IAS, which provides optimistic concurrency control in its transactional behaviour, delegating the task of maintaining data integrity to the database.
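The optimistic model mentioned above can be sketched with a version check. This is a simplified illustration of the general technique (a version column checked at update time), not IAS or database code; the class and method names are invented. The check works even when competing updates come from entity bean instances in different containers, because the database row is the single point of truth:

```java
public class OptimisticRow {
    private int version = 0;
    private String data = "";

    // A transaction reads the current version along with the data.
    public int readVersion() {
        return version;
    }

    // Returns true if the update was applied; false means another
    // transaction committed in between, and the caller must re-read
    // and retry. synchronized stands in for the database's own
    // atomic compare-and-update of the row.
    public synchronized boolean update(int versionRead, String newData) {
        if (versionRead != version) {
            return false;          // stale read: optimistic check failed
        }
        data = newData;
        version++;
        return true;
    }

    public String data() {
        return data;
    }
}
```

Under pessimistic concurrency the second writer would simply have blocked; under the optimistic scheme both proceed, and the loser detects the conflict at commit time and retries.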

5.2.5. WAP and WLS cluster

As was mentioned in chapter 2, a WLS cluster was not used in the WAP-J2EE application. The WAP-J2EE application enabled session tracking with URL rewriting; however, the session ID used by the WLS cluster is more than 128 characters long (shown in figure 5.3), and WAP phones usually limit the URL length to 128 characters. The WLS cluster was therefore found unusable for the application (using the Nokia Toolkit with the Nokia 7110 phone emulator), and a separate instance of WLS was used in the WAP-J2EE application.

WebLogicSession=OhADT6uytp9vIzY7LaBiUWZ9DsGDUykEGukmH3W0Z0p2YPG2U8Gg|5518937552083067635/2044965532/6/7001/7001/7002/7002/7001/1|656911739235614933/2044965531/6/7001/7001/7002/7002/7001/-1|5533740358416116971

Figure 5.3: A session ID generated by the WLS cluster.
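The incompatibility above can be checked mechanically: a WAP deployment could verify that rewritten URLs stay within the device limit before sending them to the phone. The sketch below is illustrative only (the class and helper names are invented, and the rewriting is simulated rather than done by the servlet container); the 128-character limit is the one cited above:

```java
public class WapUrlCheck {
    // URL length limit typical of WAP phones, as cited in the text.
    static final int WAP_URL_LIMIT = 128;

    // Simulates URL rewriting: the session ID is appended to the path.
    public static String rewrite(String path, String sessionId) {
        return path + ";WebLogicSession=" + sessionId;
    }

    public static boolean fitsWapLimit(String url) {
        return url.length() <= WAP_URL_LIMIT;
    }
}
```

With a cluster-style session ID of well over 128 characters, any rewritten URL fails the check, which is why a single non-clustered WLS instance (whose session ID is short enough) had to be used for the WAP application.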

5.2.6. Comments

As described previously, clustering application servers introduces issues that have to be taken care of when deploying an application for scalability and availability. The use of a shared disk to enable WLS clustering is a definite weakness of the WLS clustering solution. Though no longer strictly necessary, the alternative, which is also the more natural solution, is not very stable as yet. WLS clustering is also hardware intensive, in that a proxy server layer is necessary for in-memory replication. Applications that need to provide for fail-over require this, since maintenance of state is provided using the Servlet API, and hence in-memory replication is needed for high availability. Also, as has been described, in-memory replication is not very robust, especially for heavy sessions.

Furthermore, WLS does not allow vertical scaling [25]. Vertical scaling would be possible if multiple instances of an application server's services could be started on one machine; this can lead to more throughput by virtue of load balancing. However, since WLS can only be installed as a whole, vertical scaling is not possible. This can have an impact on the utilization of hardware by WLS, since a powerful machine may be under-utilized by one instance of WLS, due to the limits of the JVM. This was found to be the case when a single instance of WLS was installed on each of the two Compaq ProLiant machines: the processing power of the machines was not fully utilized. This also has implications when the load is distributed using simple round-robin, since there is no way to direct a greater share of the load to a more powerful machine. Though WLS does have the concept of weight-based round-robin, this only works between different clusters, not between the instances of one cluster. An alternative is to have more than one installation of WLS on a powerful machine, but each installation of WLS needs a permanent IP address, which is possible if the machine has multiple network cards.

Another related observation is that the addition of a less powerful machine to a cluster does not add to the scalability, but rather slows down the processing. The plausible reason is what is called the "convoying effect" in the WLS documentation. This occurs because the load balancing mechanism accesses the servers in the same order, and one slow server can delay future requests. This could be alleviated by vertical scaling. Vertical scaling does not add to availability, though; that is achieved by horizontal scaling, which means application server installations across multiple physical machines, allowing for fail-over, and hence availability.

5.3. IAS clustering

Inprise Application Server is a CORBA-based application server, providing a comprehensive (though not complete) set of J2EE features. It is built upon the VisiBroker for Java ORB, which provides a number of features, including inter-object communication, thread and connection management etc., as well as clustering features [27].

5.3.1. Setup and Configuration

The setup of IAS for clustering takes a different approach, since IAS is structured as a set of services: the EJB and Web containers, the naming service, the transaction service, the messaging service, etc. These distinct services make it possible to configure IAS for vertical scaling [27][28]. This also allows for better utilization of the hardware compared to the case where vertical scaling is not possible. In addition, IAS relies on the OSAgent [29], a proprietary mechanism for naming service discovery, which also helps in load balancing and fail-over scenarios. Therefore an OSAgent has to be running in the network where the IAS instances will be running. Having more than one OSAgent provides for fail-over of OSAgents, since they communicate with each other through UDP broadcast. Also, the naming service has to be configured to enable clustering.

To provide scalability and availability with IAS, both vertical and horizontal scaling are possible. It is therefore possible to have multiple installations of IAS on multiple machines, and within each installation to configure the number of services according to the application requirements and the capacity of the machine. For our experimentation, we installed two instances of IAS on two Compaq ProLiant machines, each with a different set of services. The difference was in the number of EJB containers, and in the web service being present only on one server. The naming service was run in a master-slave configuration, with the built-in JDataStore database used as the backing store; this choice is explained below. Also, the heap size for all the EJB containers was set to a value that minimizes the effect of garbage collection, arrived at by testing with various values.
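Heap tuning of this kind is normally applied via JVM flags in each container's startup command. The fragment below is a hypothetical sketch (the actual startup scripts and main classes differ per product and version); pinning the initial and maximum heap to the same value prevents the heap from resizing, which reduces garbage collection disturbance during test runs.

```
# Hypothetical startup fragment: pin the heap so it never resizes,
# reducing garbage collection pauses during test runs.
java -Xms256m -Xmx256m <container main class and arguments>
```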

5.3.2. Web-tier

With IAS, there is no built-in load balancing at the web tier, because IAS does not provide session replication. To achieve load balancing at this level (for the web server and JSP/Servlet engine), a software or hardware solution could be used to distribute requests across a number of IAS web servers and web containers. In contrast to the WLS approach of maintaining state using the Servlet API and providing in-memory replication, IAS recommends not using the Servlet API for saving session state, and instead moving the state to the enterprise javabean layer. A stateful session bean can therefore be used to maintain the state, and fail-over is also possible since IAS

provides stateful session bean fail-over, comparable to WLS in-memory replication. With IAS, the reference to a stateful session bean can be kept in a cookie (by stringifying the stateful session bean reference using the CORBA object_to_string method), and the client can then use the cookie to get back the stateful session bean reference (via string_to_object). Since IAS has optimised implementations of the CORBA stringify and destringify methods, this has little overhead. This also allows the client to be served by any servlet instance, so stickiness of sessions is not required. Not storing the state in the servlets and using stateful session beans for this purpose avoids the overhead and limitations of in-memory replication as experienced with WLS, but the use of stateful session beans for this purpose also incurs overhead, as will be described next.
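The pattern of carrying a stringified bean reference in a cookie can be sketched as follows. The helper class and the IOR string are hypothetical stand-ins: in IAS the string would come from the ORB's object_to_string and be consumed by string_to_object, while here Base64 encoding merely illustrates keeping the opaque value cookie-safe for the round trip.

```java
import java.util.Base64;

// Hypothetical sketch: carry an opaque stringified object reference in a
// cookie value. A real stringified reference from object_to_string() may
// contain characters that are awkward in cookies, so it is Base64-encoded.
class BeanReferenceCookie {
    static String encode(String stringifiedRef) {
        return Base64.getUrlEncoder().encodeToString(stringifiedRef.getBytes());
    }

    static String decode(String cookieValue) {
        return new String(Base64.getUrlDecoder().decode(cookieValue));
    }
}
```

The servlet would place the encoded value in a Set-Cookie header; on the next request, any servlet instance can decode the cookie and destringify the reference, which is why no session stickiness is required.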

5.3.3. Enterprise javabeans

For all types of beans, load balancing and fail-over are provided by the VisiBroker infrastructure, using the naming service and the OSAgent for this purpose. A number of EJB containers can be started within one IAS instance. Each container should have its own transaction service to provide for locality of transactions, thereby reducing transaction management overhead. In the case of homogeneous clustering, it is also possible for each container to have its own in-process naming service. However, as described later in the chapter, a centralized naming service was used in master-slave mode. One drawback of the master-slave configuration is that locality of looked-up components is not assured, which can lead to the overhead of remote calls. For stateless session beans, load balancing and fail-over are provided for both home objects and bean instances. For stateful session beans, too, both load balancing and fail-over are provided. Stateful session beans are passivated during their lifetime, and the frequency of passivation can be configured. Passivation means that the stateful session beans save their state in some secondary storage, and then retrieve the state when they are activated.

IAS provides a Session Storage Server, based on the JDataStore database bundled with IAS, which is used for storing the state of stateful session beans on passivation. This is used to provide for fail-over. However, only one instance of the Session Storage Service may be running in the IAS cluster. The state of all the stateful session beans in the cluster is saved here on passivation, so with respect to the provision of fail-over the Session Storage Service could become a single point of failure. A solution could be to start this service on a reliable machine, with no other services running on it: robust hardware, and the absence of any user code on that machine, help minimize the chance of failure. The passivation interval of the stateful session beans can be set; supposing the interval is 5 seconds, the state would be stored in the storage service every five seconds. If the container hosting the bean were to fail, IAS, with the help of the underlying VisiBroker infrastructure, uses another stateful session bean instance and loads the state of the lost bean from the storage service, so the client can continue with its requests. However, this is the state as of the last passivation; any state accumulated between the last passivation and the container crash is lost. A solution would be to make the passivation interval small, but this would add passivation/activation overhead, thereby affecting performance. In comparison with WLS in-memory replication, this method too has its shortcomings.

Entity bean fail-over in IAS differs from WLS. In IAS, fail-over is provided for bean instances as well, not just home lookups. However, IAS does not distinguish between Read-Only and Read-Write entity beans, but instead offers the ability to use the transaction commit options defined in the EJB specification [23].
Commit Option A corresponds in a sense to read-only beans, and caches the bean's data on the server. However, this option is not usable in a cluster, since multiple instances of an entity bean can exist in multiple containers, with different clients accessing those instances, so staleness of the cached data is inevitable.
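Why commit Option A breaks down in a cluster can be illustrated with a minimal sketch (all class names hypothetical): each "container" caches an entity's data between transactions, so an update through one container leaves the other container's cache stale, since nothing invalidates it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: with commit Option A, a container caches entity state
// between transactions. In a cluster, an update through one container leaves
// the other container's cache stale.
class Database {
    final Map<Integer, String> rows = new HashMap<>();
}

class ContainerCache {
    private final Database db;
    private final Map<Integer, String> cache = new HashMap<>();

    ContainerCache(Database db) { this.db = db; }

    // Option A: trust the cache; only hit the database on a miss.
    String read(int key) {
        return cache.computeIfAbsent(key, db.rows::get);
    }

    // Write through to the database, refreshing this container's cache only.
    void write(int key, String value) {
        db.rows.put(key, value);
        cache.put(key, value);
    }
}
```

The second container keeps serving the value it cached before the update, which is exactly the staleness the text describes.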

As for fail-over, if an entity bean instance fails, IAS loads the state of the bean from the database into another instance. This is the state at the end of the last committed transaction, since each transaction ends with the data being persisted to the database, and the replacement instance reads it back via an ejbLoad call. The transaction in which the bean was participating when the failure occurred is rolled back.

5.3.4. Naming service

With IAS, the naming service [29] can be started stand-alone, in-process with the EJB container, or as a centralized naming service within IAS, which caters for the whole server. With the centralized naming service, however, looking up objects involves remote calls, which can affect performance. With each container having its own naming service, lookups are local calls to locally present components. However, in the J2EE application deployed on the cluster, with the web container itself a client of the naming service, load balancing is not possible when each EJB container has its own naming service; a centralized naming service with the clustering feature turned on is required, so that requests from the web container are load balanced across the EJB containers. Therefore a centralized naming service was used in master-slave mode, which also provided for high availability.

5.3.5. Transactions

IAS uses an optimistic concurrency transaction model. This means that the container does not use just one instance of an entity bean and serialize concurrent calls to that instance, but instead has an entity bean instance for each concurrent client, so client calls are processed concurrently. In this case the responsibility for maintaining data integrity is delegated to the database; e.g. if strict transaction isolation is required, the bean can be used with the isolation level TRANSACTION_SERIALIZABLE [30]. This causes the database to serialize updates to the data, maintaining data integrity in the face of concurrent access. Optimistic concurrency does not cause the container to become a bottleneck, and allows for better performance as well as scalability. In addition, optimistic concurrency behaviour also merges closely with clustering scenarios.
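The optimistic model can be sketched with a version check. This is a hypothetical illustration, not the IAS implementation (IAS delegates conflict detection to the database via isolation levels such as TRANSACTION_SERIALIZABLE): each client reads a row together with its version, and an update succeeds only if no other client has committed in between.

```java
// Hypothetical sketch of optimistic concurrency: updates carry the version
// read earlier and fail if another client committed in between, instead of
// serializing all access through a single bean instance.
class VersionedRow {
    private String data;
    private int version = 0;

    VersionedRow(String data) { this.data = data; }

    synchronized int currentVersion() { return version; }
    synchronized String read() { return data; }

    // Compare-and-set style update: succeeds only if no one else committed.
    synchronized boolean update(int expectedVersion, String newData) {
        if (version != expectedVersion) {
            return false;  // conflict: caller must re-read and retry
        }
        data = newData;
        version++;
        return true;
    }
}
```

Concurrent clients proceed without blocking each other; only the losing writer has to re-read and retry, which is why the container itself never becomes the serialization point.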

5.3.6. Vertical and horizontal scaling with IAS

As mentioned before, IAS lends itself to both vertical and horizontal scaling. Vertical scaling, even though it does not provide availability, results in better utilization of the hardware and more throughput. However, there is a limit to vertical scaling as well. In our configuration, multiple EJB containers were started on each machine, though the optimum number can be found only by testing. It was found that starting more than 3 EJB containers on one machine (2 processors, 1 GB RAM) actually decreased overall performance (measured by the overall response time of the deployed application). Vertical scaling also has an impact on horizontal scaling, in that the installation on each separate machine can be configured according to the power of that machine. A more powerful machine can therefore have more services configured, whereas a less powerful machine can have correspondingly fewer services. Thus even a simple load balancing scheme can be effective, since less powerful machines are configured for only as much load as they are capable of handling, and therefore do not hinder overall processing. In addition, it was found that having the web container (JSP/servlet engine) on a separate machine (more akin to Case 3 in the last chapter) gave better overall response times, even though this involved remote calls to the EJBs on the two machines where EJB containers were configured. The reason for this is not fully understood, though for this particular application the increased throughput due to better load balancing may have offset the overhead of the remote calls.

5.4. Tuning and testing

To test the clustering solutions of WLS and IAS, as mentioned in the previous section, a number of tests were performed. The same J2EE application [7] was deployed on different cluster configurations of both application servers, so as to enable testing of the scalability and availability offered by both. A number of enterprise-level tools exist that can be used to test an application's behaviour. Among these are profiling tools, which are useful in identifying bottlenecks in application behaviour, as well as measuring the time for individual method executions, the number of objects created, etc. For profiling the application, the tool used was the JProbe profiler from Sitraka (described in Appendix A). This tool can be configured to run the application server from within the tool's environment, so as to profile the application server's behaviour as well as the application's execution. JProbe comes with built-in integration with WLS, though a number of parameters still have to be configured, and it can be used with any other application server after the necessary configuration. The configuration of JProbe with WLS and IAS is described in [31].

For testing the scalability of an application, the application must be subjected to high loads. This is best done using load generation tools, a number of which are available. These tools generate load by simulating a large number of clients, and a number of performance metrics are available from the tools after the test. LoadRunner from Mercury Interactive is one such tool offering very rich functionality. It was used in the beginning, but extensive tests could not be made with it for administrative reasons. However, there are a number of other tools that offer similar functionality, among them WebLoad from RadView Software and e-Test Suite from RSW Software. In addition, a free tool from Microsoft is also available and offers good testing and reporting features. More details of the above-mentioned tools are available in Appendix A.

5.4.1. Testing and Profiling

The application deployed on the clusters was profiled using the JProbe profiler on both application servers. There were no bottlenecks, and even though the main time-consuming methods were related to JDBC, the time taken by these methods was within acceptable limits (i.e. such that the overall response time stayed in the 1 – 2 seconds range). The use of the Oracle XML Utility also did not show any significant overhead, and hence proved to be a suitable choice for the generation of XML. The time taken by XSLT processing in the servlet was not a source of overhead either, justifying its choice as the transformation solution. The main time-consuming methods, with the same impact on both application servers, were related to JDBC, such as getting a JDBC connection, executing a statement etc., but the method durations and object creation were found not to impact the performance of the application as a whole. The impact of JDBC methods on application performance can be reduced by using a robust JDBC driver, and by limiting the number of JDBC connections opened by the application server, so that no unnecessary JDBC connections are opened.
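Limiting the number of JDBC connections is typically done with a connection pool. The sketch below is a hypothetical minimal pool, with a generic factory standing in for a real call such as DriverManager.getConnection: it caps the number of connections ever opened and hands out released connections before creating new ones.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical sketch of a bounded connection pool: at most maxSize
// connections are ever created; released connections are reused.
class SimplePool<C> {
    private final Supplier<C> factory;   // e.g. () -> a new JDBC connection
    private final int maxSize;
    private final Deque<C> idle = new ArrayDeque<>();
    private int created = 0;

    SimplePool(Supplier<C> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    synchronized C acquire() {
        if (!idle.isEmpty()) {
            return idle.pop();           // reuse before creating
        }
        if (created >= maxSize) {
            throw new IllegalStateException("pool exhausted");
        }
        created++;
        return factory.get();
    }

    synchronized void release(C conn) {
        idle.push(conn);
    }

    synchronized int connectionsCreated() { return created; }
}
```

Application servers of the kind discussed here expose the pool bound as a configuration parameter, which serves the same purpose as maxSize above.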

5.4.1.1. Testing Hardware

For testing the application, the following setup was used for the load generation tools.

CPU (No. of Processors and Speed)    RAM      Operating System
1 x PII 400 MHz                      256 MB   Windows NT 4 SP6

Table 1

The load that can be simulated depends on the capacity of the machine on which the load generating software is running; disproportionate load generation can lead to incorrect statistics about the load generated against an application, and about the performance parameters being measured by the load generating tool. The configuration described above was found to be optimal under the desired test conditions. The rest of the setup, for the cluster of application servers, the proxy server and the database, is shown in Table 2 below.

Purpose                       CPU (No. of Processors and Speed)   RAM      Operating System
Cluster machine 1             2 x 500 MHz                         1 GB     Windows NT 4 SP6
Cluster machine 2             2 x 500 MHz                         1 GB     Windows 2000 Advanced Server
Oracle 8.1.6                  1 x 300 MHz                         256 MB   Windows NT 4 SP6
iPlanet 4.0 as proxy server   1 x 300 MHz                         128 MB   Windows NT 4 SP6 (Server)

Table 2

Configuration: Using the hardware listed in Table 2, the clustering of application servers was done as described in section 5.1 of this chapter. The application was deployed so as to form a homogeneous cluster, due to the advantages of this clustering configuration. To check for proper configuration of the clusters, the application was first subjected to very light loads. This helped in determining whether the clusters were properly configured. It was also possible to determine a suitable value for the heap sizes of the application servers, so as to minimize the effect of garbage collection during the test runs; the heap value chosen for the cluster nodes was 256 MB for this particular application. In addition to testing the expected behaviour of the application when deployed on the cluster, it was checked whether the load was being shared among the cluster members. Since IAS allows for vertical scaling, meaning each cluster machine had more than one EJB container, it was checked whether all EJB containers were sharing the load. The problems with using WLS itself as a proxy server, and IIS as a proxy server, as mentioned before, were also revealed here. In addition, the application servers were tuned to offer maximum

performance. This tuning was done according to the recommendations available in the WLS and IAS documentation. The application was also deployed with the possible optimisations [7].

Scalability: To test the scalability of the application, the load generation tools were used to simulate a large number of clients. Since the deployed application was accessed by a browser, the tools were configured to simulate browser clients. The main performance metric of interest was the response time of the application when subjected to a large number of clients. In addition, the processor and disk characteristics of the hardware were also measured to ascertain the machine usage pattern. All measurements were made with warm servers. The overhead associated with the database was not significant in the measurements, since the database statistics showed it performing within acceptable limits, and the read-only data was cached in the application servers, avoiding database hits for this data. It was observed that with an increase in the number of clients, the response time of the application increased linearly, as expected. However, beyond a certain number of clients the response time increased dramatically to an unacceptably high value. This is

Figure 5.4: Response time (seconds) against the number of clients (x 15); beyond a certain load the web server becomes a bottleneck.
shown in figure 5.4, and is a natural outcome of the single web server becoming a bottleneck, both with IAS and WLS. With WLS the proxy server machine becomes the bottleneck, with %Processor Time greater than 90 percent throughout the duration of the test, and a high Processor Queue Length (> 4). With IAS the cluster machine hosting the web server showed similar behaviour. Overall the performance patterns were as expected, with the clustering of application servers load balancing client requests across the cluster nodes.

Machine Usage: As mentioned in chapter 4, vertical scaling leads to better hardware utilization. This was also observed during testing: the powerful machines hosting the cluster instances were found to be under-utilized in the case of WLS, with %Processor Time around 50 percent on average, showing insufficient machine utilization. With IAS, however, the machines were better utilized, with %Processor Time around 80 percent on average. It was also observed that using a weaker machine with fewer services did not have any negative impact on the response time with IAS, but it did with WLS. With WLS, using a weaker machine in the cluster tended to have a significant negative effect on the response time, for the reasons outlined in chapter 4. However, the optimal number of services that can be started on a machine in the case of IAS depends on the processing capability of the machine, as well as on the requirements of the application. It was found that on the particular hardware described in Table 2, configuring more than 3 EJB containers on one machine caused the response time to degrade significantly, with the %Total Processor Time remaining well above 90 percent. In fact, even going from 2 EJB containers to 3 brought no more than a 10 – 20 percent improvement in response time for this application under different load conditions.
On the machine where IAS was also configured to run the web container, the optimum number of EJB containers was found to be 2.

Failover: To test failover, different scenarios were created to simulate a service failing, and to see whether the particular clustering solution provides adequate failover. Since WLS supports in-memory replication, sessions existing on one machine in the cluster are replicated on another machine in the cluster, so that the failure of the first machine can be compensated by the second one. One cluster node was abruptly shut down to test this, and the failing over of sessions was found to work correctly. There were intermittent failures when the fallen cluster instance re-joined the cluster after being brought up under different conditions, but this was found to be a limitation of the product, which has mostly been patched in the regularly released service packs for WLS. For IAS, which does not provide session replication but instead offers SFSB failover, quite a bit of configuration is required to get SFSB failover to work properly. In general, however, SFSB failover does work correctly.

5.5. Conclusion

As described in this chapter and the last, there are a number of options for configuring a clustering solution. With respect to clustering of J2EE application servers for scalability and high availability, a number of additional issues relevant to the J2EE environment also have to be taken care of. In addition, different application server vendors offer different clustering solutions, and the differences can be significant. It has been shown how different clustering options can be chosen for each application server, and how proper deployment of an application can minimize the overhead associated with clustering. It has also been shown how testing using specialized tools can help in deploying an application for maximum scalability and high availability.

Appendix A

Tools for testing and profiling enterprise applications

This appendix briefly describes the tools that were used in the course of the thesis work, as discussed in Chapter 5. These include load generation tools to test the scalability of applications running on an application server, as well as profiling tools, which can prove valuable in the performance tuning of Java based applications. Further details about these tools are available in the references mentioned in this appendix.

Load Generation Tools

The tools used offer, in their most basic form, the facility to generate load against a particular application and to measure the performance of the application under such load using pre-defined metrics. The performance results are presented in tabular as well as graphical form, and it is also possible to compare the performance results obtained from different performance tests. It needs to be stressed, however, that the accuracy of the test results depends on proper configuration of the tools, and on the correct build-up of the test scenarios that are then run against the application to be tested. Also, the hardware on which the load generation software runs should have a sufficient configuration to be able to run a realistic test scenario.

Figure A.1: Browser clients emulated by the load generation tool access the application being tested, which could be deployed on a cluster of application servers (represented as a black box); the load generation tool also provides different measurements as the result of the test runs.

Figure A.1 shows a simple illustration of the load generation tool simulating the desired number of browser clients to load-test an application.
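The virtual-client idea can be sketched in a few lines of Java. This is a hypothetical illustration, not how the commercial tools are built: N concurrent workers each issue a request (here a stand-in task rather than a real HTTP call), the per-request latency is timed, and an average response time is reported.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a load generator: run N virtual clients concurrently,
// time each simulated request, and report the average response time.
class MiniLoadGenerator {
    static double averageLatencyMillis(int clients, Runnable request) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < clients; i++) {
            results.add(pool.submit((Callable<Long>) () -> {
                long start = System.nanoTime();
                request.run();   // a real tool would issue an HTTP request here
                return (System.nanoTime() - start) / 1_000_000;
            }));
        }
        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();
        }
        pool.shutdown();
        return (double) total / clients;
    }
}
```

Real tools add scenario recording, ramp-up schedules and much richer metrics, but the core loop of concurrent timed requests is the same.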

Description of Load Generation Tools

The following sections briefly describe the load generation tools used.

WebLoad 4.0 (RadView Software)

WebLoad verifies the scalability of web applications by generating load composed of Virtual Clients that simulate real-world traffic. It is possible to create test scenarios that define the behaviour of the virtual clients. WebLoad uses JavaScript for these test scenarios, though the easiest way of creating them is for the user to record them: the user accesses the desired pages in the web application, and WebLoad records them in the order they are accessed. This then constitutes the test scenario, which can be repeated using a large number of Virtual Clients to stress the application being tested. Different test scenarios can be created in this way, making it possible to test the application in different ways. The load generated on the application being tested is measured in Transactions Per Second (TPS), where each transaction sent by the load generator is considered a unit of work to be performed by the application being tested. The number of transactions per second depends on the machine resources, including the processor speed, the RAM available and the operating system, as well as on the complexity of the application being tested. Also, increasing the number of Virtual Clients takes up resources, leaving fewer resources for generating a higher TPS. Therefore a balance has to be found to generate a realistic load, since increasing the number of Virtual Clients could actually decrease the load being generated. Testing with WebLoad allows the measurement of different performance metrics, for example:

- Throughput: the amount of data being processed by the application being tested, in kilobytes/second.
- Response Time: the time from the end of the HTTP request until the Virtual Client has received the complete item it requested.

Figure A.2 represents a particular view from the WebLoad console.

Figure A.2

WebLoad was found to be quite easy to use, with descriptive reports for the test results, which were also consistent and reliable. The evaluation version of WebLoad allows only 12 simultaneous Virtual Clients, which are equivalent to approximately twice that number of simultaneous users, since the Virtual Clients can be configured to hit the application being tested back-to-back, thus generating non-stop load, which might not be the case with real-life users. Further information about WebLoad is available from [32].

e-Test Suite (RSW Software)

e-Test Suite consists of a set of tools that can be used to test web-based applications. These tools allow for the recording of test scenarios, load generation using simulated users, and extensive visual and textual reports measuring different, configurable performance metrics.

Among the testing components offered by e-Test Suite, the ones used were e-Tester, which allows functional testing of the application as well as the recording of test scenarios, called Virtual Scripts, and e-Load, which emulates the desired number of users running these Virtual Scripts. There is also the facility of generating reports of the test results in different formats, representing different performance metrics, using the e-Reporter functionality available within e-Test Suite. e-Test Suite offers features quite similar in functionality to the ones mentioned previously for WebLoad, though one helpful feature is that it is possible to visually see the web pages being accessed by the virtual clients, which makes it possible to view in real time any errors that might occur during access to these pages. It is also possible to generate graphical reports from the test data, which can then be imported into Microsoft Excel, though these reports were not found to be very intuitive, in that it is not easy to deduce the performance result at just one glance. With e-Test Suite it is also possible to get statistics (such as memory, disk, I/O etc.) of different machines, for example the machine on which the application being tested is running, or the machine running the load-generation software. On the whole, however, it needs to be made clear that even though e-Test Suite offers features similar to WebLoad, and for that matter any other load generation tool, the functionality of the features differs from one tool to the other. Figure A.3 shows a view from the e-Test Suite console and a particular graphical report. Further information about e-Test Suite is available from [33].

Figure A.3: Average performance (sec) vs. number of users, comparing IAS and WLS test runs.

Web Application Stress (WAS) Tool (Microsoft Corporation)

The WAS tool from Microsoft is a free tool that can be used to simulate multiple browsers requesting pages from a web application. Even though this tool is meant to test Active Server Pages (ASP) web sites, it was found quite suitable for web sites that use J2EE technologies, such as JavaServer Pages (JSP) and servlets. In addition, since this tool is free, the load generation capability is not limited to a certain number of emulated users, which makes it a very convenient solution for load testing. As with the other tools, it is possible to record a test scenario by accessing the web pages through the browser (only Internet Explorer) from within the tool, and the tool records the pages accessed. This recorded script can then be played back with the desired number of users. At the end of the test run, the tool generates a number of measurements; for example, it is possible to see, for each web page accessed, how much time it took, after making a request for the page, to receive the first byte of data from that page.

On the whole this tool was found to be easy to use and to have sufficient features, in particular for applications running on the Windows platform. The load generation facility is easy to configure, and the reports generated also cover adequate performance metrics. It is also possible to measure the performance of the machines being used in the test scenario, which is useful for determining the hardware usage pattern. Figure A.4 shows one view from the console of this tool. Further information about WAS is available from [34].

Figure A.4

Profiling Tool

JProbe Profiler (Sitraka Inc. - formerly KL Group)

JProbe profiler is a profiling tool that can be used to collect performance diagnostics of an application with line-by-line precision. It is possible to pinpoint performance bottlenecks and memory leaks in the application, and to find out which method calls are taking an unacceptably long time. JProbe profiler is available in two editions, Developer and ServerSide. The latter can be effectively used for profiling in server-side development environments, and can be integrated with leading application servers. This integration, however, is not always seamless or plug-and-play, and some configuration needs to be done to run an application server from within the profiler.

JProbe provides integration with BEA WebLogic Server (WLS) among others, and after some configuration tuning it was used successfully with WLS. However, considerably more configuration is required to run Inprise Application Server (IAS) from within the JProbe profiler, and given the time constraints, only the EJB container of IAS was configured to be used with it. Given the nature of the application running in the IAS EJB container, and the way services are partitioned in IAS, profiling the IAS EJB container proved sufficient to get the desired results. The visual presentation of results, e.g. memory usage, object creation, garbage collection, method call times etc., and the ability to drill down to the culprit method through all the preceding methods, can be very helpful in tuning the application, and in getting an indication of the behaviour of the application server with respect to the services that it provides (e.g. the making of database connections using JDBC). Figure A.5 shows particular views from the JProbe console.

Figure A.5

Further information about JProbe is available from [35].


Comments: As mentioned previously, tools for load testing and profiling an enterprise application are valuable throughout development and deployment. All of these tools require careful configuration with respect to the purpose for which they are being deployed. In addition, a typical J2EE application consists not only of components at the web tier, such as JSPs and servlets, but also of EJBs, which are used by the web tier components and which, it could be said, do the real work. The performance statistics obtained by the load generation tools should therefore be interpreted with care, since they can only give an indication of the overall application behaviour; it could be argued, however, that this is sufficient in most cases. The use of profiling tools can supplement the analysis and testing of the application during testing and deployment, since it is possible not only to get an indication of the behaviour of the application down to the method level, but also to scrutinize the behaviour of the application server, which can lead to optimal configuration and tuning of the application server for a particular application.


Appendix B Wireless Application Protocol
This appendix provides supplementary information about WAP and lists a number of references.

Components of the WAP architecture
The WAP architecture provides a layered design of the entire protocol stack. The WAP stack is basically divided into five layers, which are:

- Application Layer - Wireless Application Environment (WAE)
- Session Layer - Wireless Session Protocol (WSP)
- Transaction Layer - Wireless Transaction Protocol (WTP)
- Security Layer - Wireless Transport Layer Security (WTLS)
- Transport Layer - Wireless Datagram Protocol (WDP)

Figure B.1

Each layer of the WAP protocol stack shown in figure B.1 specifies a well-defined interface to the layer above, meaning that each layer makes the layers below it invisible to the layer above.


Wireless Session Protocol (WSP)
WSP provides the application layer of WAP with a consistent interface for two session services. The first is a connection-oriented service that operates above the transaction layer protocol (WTP). The second is a connectionless service that operates above a secure or non-secure datagram service (WDP). The Wireless Session Protocol currently consists of services suited for browsing applications (WSP/B). WSP/B provides the following functionality:

- HTTP/1.1 functionality and semantics in a compact over-the-air encoding
- Long-lived session state
- Session suspend and resume with session migration
- A common facility for reliable and unreliable data push
- Protocol feature negotiation

The protocols in the WSP family are optimised for low-bandwidth bearer networks with relatively long latency. WSP/B is designed to allow a WAP proxy to connect a WSP/B client to a standard HTTP server.

Wireless Transaction Protocol (WTP)
WTP is responsible for the control of transmitted and received messages. The Wireless Transaction Protocol runs on top of a datagram service and provides a light-weight transaction-oriented protocol that is suitable for implementation in "thin" clients (mobile stations). WTP operates efficiently over secure or non-secure wireless datagram networks and provides the following features:

- Three classes of transaction service: unreliable one-way requests, reliable one-way requests, and reliable two-way request-reply transactions
- Optional user-to-user reliability - the WTP user triggers the confirmation of each received message
- Optional out-of-band data on acknowledgements
- PDU concatenation and delayed acknowledgement to reduce the number of messages sent
- Asynchronous transactions

Wireless Transport Layer Security (WTLS)
WTLS is a security protocol based upon the industry-standard Transport Layer Security (TLS) protocol, formerly known as Secure Sockets Layer (SSL). WTLS is intended for use with the WAP transport protocols and has been optimised for use over narrow-band communication channels. WTLS provides the following features:

- Data integrity: WTLS contains facilities to ensure that data sent between the terminal and an application server is unchanged and uncorrupted
- Privacy: WTLS contains facilities to ensure that data transmitted between the terminal and an application server is private and cannot be understood by any intermediate parties that may have intercepted the data stream
- Authentication: WTLS contains facilities to establish the authenticity of the terminal and the application server
- Denial-of-service protection: WTLS contains facilities for detecting and rejecting data that is replayed or not successfully verified. WTLS makes many typical denial-of-service attacks harder to accomplish and protects the upper protocol layers

WTLS may also be used for secure communication between terminals, e.g. for authentication during electronic business card exchange. Applications are able to selectively enable or disable WTLS features depending on their security requirements and the characteristics of the underlying network (e.g. privacy may be disabled on networks already providing this service at a lower layer).

Wireless Datagram Protocol (WDP)
The transport layer protocol in the WAP architecture is referred to as the Wireless Datagram Protocol (WDP). The WDP layer operates above the data-capable bearer services supported by the various network types. As a general transport service, WDP offers a consistent service to the upper layer protocols of WAP and communicates transparently over one of the available bearer services. Since the WDP protocols provide a common interface to the upper layer protocols, the Security, Session and Application layers are able to function independently of the underlying wireless network. This is accomplished by adapting the transport layer to the specific features of the underlying bearer. By keeping the transport layer interface and the basic features consistent, global interoperability can be achieved using mediating gateways.

Bearers
The WAP protocols are designed to operate over a variety of different bearer services, including short message, circuit-switched data, and packet data. The bearers offer differing levels of quality of service with respect to throughput, error rate, and delays. The WAP protocols are designed to compensate for or tolerate these varying levels of service. Since the WDP layer provides the convergence between the bearer service and the rest of the WAP stack, the WDP specification [WDP] lists the bearers that are supported and the techniques used to allow the WAP protocols to run over each bearer. The list of supported bearers will change over time, with new bearers being added as the wireless market evolves.

The protocols described above can be used in four different configurations [36]:

- Connectionless mode: this configuration utilizes only WSP on top of WDP. It offers a simple datagram service; sent messages are not acknowledged, hence there is no guarantee of delivery. It resembles a simple send-and-forget model.
- Connectionless mode with security: in addition to the above, WTLS is used to provide authentication, encryption etc.
- Connection mode: this mode uses WTP in addition to WSP and WDP. WTP provides reliable transmission, meaning that sent messages must be acknowledged and may be retransmitted if lost. It also uses a mode of WSP that handles long-lived sessions.
- Connection mode with security: similar to the above, with the additional use of WTLS.


Wireless Markup Language (WML)
WML is WAP's analogue to the HTML used on the WWW. WML is based on XML [37]. WML uses a deck/card metaphor to specify a service. A card is typically a unit of interaction with the user, that is, either the presentation of information or a request for information from the user. A collection of cards is called a deck, which usually constitutes a service. This is to ensure that a suitable amount of information is displayed to the user at once, so that inter-page navigation is avoided to the fullest extent possible. WML can be binary encoded by the WAP gateway in order to save bandwidth in the wireless domain. WMLScript is based on ECMAScript, the same scripting language that JavaScript is based on. It can be used for enhancing services written in WML. However, for the scenario described in this thesis, WMLScript was not used.
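As a hypothetical illustration of the deck/card metaphor (the card names and text below are invented for this example and do not come from the application described in the thesis), a minimal WML deck containing two cards might look like this:

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<!-- One deck with two cards; the WAP browser displays one card at a time,
     and the link navigates between cards without fetching a new deck. -->
<wml>
  <card id="welcome" title="Welcome">
    <p>Welcome to the service.
       <a href="#balance">Check balance</a></p>
  </card>
  <card id="balance" title="Balance">
    <p>Your balance is shown here.</p>
  </card>
</wml>
```

Because both cards travel in one deck, navigating from the welcome card to the balance card requires no further round trip over the wireless link.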

Nokia WAP Server
For the WAP-based application described in the thesis, the Nokia WAP Server was used, as well as the Nokia Toolkit, which includes a development environment plus WAP phone emulators. The emulator for the real-life Nokia 7110 phone was used. The Nokia WAP Server can be used as a gateway to web servers over HTTP. Alternatively, stand-alone applications connecting to various back-end systems can be implemented as Java servlets on top of the Nokia WAP Server API. However, this feature of the Nokia WAP Server was not used.

General
General information about WAP and its usage can be found in [39].


Appendix C Oracle XML Utility
This appendix briefly describes the Oracle XML Utility mentioned previously in Chapter 2. Further details of the utility are available from the Oracle homepage [38]. The Oracle XML-SQL Utility (XSU) incorporates the following functionality:

i. Generating an XML document (String or DOM) given a SQL query or a JDBC ResultSet object (XSU is optimised for Oracle's JDBC drivers)
ii. Extracting data from an XML document, and then inserting the data into a DB table, updating a DB table, or deleting the corresponding data from a DB table

Functionality (i) above was used by incorporating XSU in a J2EE application, as described in [7]. This incorporation, given the particular application, was found to perform satisfactorily, without exceeding the performance limits deemed suitable for the application. These performance limits were determined in terms of the response time to the (browser-based) client of the J2EE application. It needs to be pointed out, though, that using the XSU does have an associated cost (mainly for JDBC access and for converting data into XML format), and it would need to be determined on a per-application basis whether the use of XSU conforms to acceptable performance limits. The self-explanatory diagram in figure C.1 (reproduced from the Oracle documentation) provides a graphical illustration of how the XSU can be put to use.
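To make functionality (i) concrete, the sketch below mimics the ROWSET/ROW element shape that XSU generates from a query result. The class name `XsuShapeDemo`, the method name, and the sample column values are invented for this illustration; plain in-memory data stands in for a live JDBC `ResultSet`, so this is a sketch of the output format rather than a use of the real utility.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: builds an XML string in the ROWSET/ROW style that
// the Oracle XML-SQL Utility produces from a query result. The real
// utility reads a JDBC ResultSet; here the rows are hard-coded.
public class XsuShapeDemo {

    // Each map is one row: column name -> column value (insertion-ordered).
    static String toRowsetXml(List<Map<String, String>> rows) {
        StringBuilder sb = new StringBuilder("<ROWSET>");
        int num = 1;
        for (Map<String, String> row : rows) {
            sb.append("<ROW num=\"").append(num++).append("\">");
            for (Map.Entry<String, String> col : row.entrySet()) {
                // One element per column, named after the column.
                sb.append('<').append(col.getKey()).append('>')
                  .append(col.getValue())
                  .append("</").append(col.getKey()).append('>');
            }
            sb.append("</ROW>");
        }
        return sb.append("</ROWSET>").toString();
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("EMPNO", "7839");
        row.put("ENAME", "KING");
        System.out.println(toRowsetXml(List.of(row)));
        // prints <ROWSET><ROW num="1"><EMPNO>7839</EMPNO><ENAME>KING</ENAME></ROW></ROWSET>
    }
}
```

A servlet in the J2EE application can hand a string of this shape to the XSLT processing described in Chapter 2, which transforms it into WML for the WAP client.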


Figure C.1 (the output is either a String representation of the XML document, or an in-memory XML DOM tree of elements)


Appendix D References

[1] Bill Shannon: Java 2 Enterprise Edition Specification, v1.3. Proposed Final Draft, 20 October 2000.
[2] WAP Forum homepage: http://www.wapforum.org
[3] Sun Microsystems, Inc.: Developing XML Solutions with JSP Technology, White Paper, 2000.
[4] Brokat Infosystems AG: Business goes Mobile - Mobile Business Applications.
[5] Nicholas Kassem and the Enterprise Team: Designing Enterprise Applications with the Java 2 Platform, Enterprise Edition. Version 1.0.1 Final Release, October 3, 2000.
[6] Macalla Software: WAP Banking and Broking - Software System White Paper, V1.04, February 2000.
[7] M. F. Kaleem: Using Oracle XML Utility in a J2EE application, TI5 Technical Document, 2000.
[8] Govind Sheshadri: Understanding JavaServer Pages Model 2 Architecture, JavaWorld, December 1999.
[9] Sun Microsystems, Inc.: Java Pet Store version 1.1.1, sample application, 2000.
[10] Richard Monson-Haefel: Enterprise JavaBeans, 2nd Edition, O'Reilly & Associates, Inc., March 2000.
[11] XSL Transformations (XSLT) version 1.0, http://www.w3.org/TR/xslt
[12] David Geary: Create XML files using JSP, Java Report, August 2000.
[13] Document Object Model (DOM) Level 1 Specification, version 1.0, 1998. http://www.w3.org/TR/REC-DOM-Level-1/
[14] Nokia WAP Server 1.1.1: Administration Guide, May 25, 2000.
[15] Rohit Khare: W* Effect Considered Harmful, 4K Associates, 1999.
[16] Brett McLaughlin: Java and XML, O'Reilly, June 2000.
[17] Cocoon homepage: http://xml.apache.org/cocoon/index.html
[18] Mark Hapner, Rich Burridge, Rahul Sharma: Java Message Service Specification, version 1.0.2, November 9, 1999.
[19] Linda G. DeMichiel, L. Ümit Yalçınalp, Sanjeev Krishnan: Enterprise JavaBeans Specification, Version 2.0, Public Draft 2, 11 September 2000.
[20] Sun Microsystems, Inc.: Java Naming and Directory Interface Application Programming Interface, version 1.2, July 14, 1999.
[21] Richard Monson-Haefel & David Chappell: Java Message Service, O'Reilly & Associates, Inc., December 2000.
[22] M. F. Kaleem: Integrating JMS in a WAP-J2EE application, TI5 Technical Document, December 2000 (planned date).
[23] Vlada Matena & Mark Hapner: Enterprise JavaBeans Specification, v1.1.
[24] Gregory F. Pfister: In Search of Clusters, 2nd Edition. Prentice-Hall Inc., 1998.
[25] Ken Ueno et al.: WebSphere Scalability: WLM and Clustering. IBM Redbooks, 2000.
[26] WebLogic Server Documentation: Using WebLogic Server Clusters, http://www.weblogic.com/docs51/cluster/index.html
[27] Inprise Application Server Documentation: User's Guide, http://www.borland.com/techpubs/books/appserver/appserver41/pdf_index.html
[28] Inprise Application Server Documentation: Enterprise JavaBeans Programmer's Guide, http://www.borland.com/techpubs/books/appserver/appserver41/pdf_index.html
[29] VisiBroker for Java 4.1: Programmer's Guide, http://www.borland.com/techpubs/books/vbj/vbj41/pdf_index.html
[30] White, Fisher, Cattell, Hamilton, Hapner: JDBC API Tutorial and Reference, Second Edition. Addison-Wesley, September 1999.
[31] M. F. Kaleem: Configuring and using JProfiler with WLS and IAS, TI5 Technical Document, 2000.
[32] WebLoad homepage: http://www.radview.com
[33] e-Test Suite homepage: http://www.rswsoftware.com
[34] Microsoft Web Application Stress Tool homepage: http://webtool.rte.microsoft.com
[35] JProbe homepage: http://www.sitraka.com
[36] AU-System Radio: WAP White Paper - ...when time is of the essence, February 1999.
[37] Pekka Niskanen: Inside WAP, IT Press, 2000.
[38] Oracle XML SQL Utility homepage: http://technet.oracle.com/tech/xml/oracle_xsu
[39] SCN Education B.V.: Mobile Networking with WAP, 2000.
[40] Sun Microsystems Inc.: Java Transaction API (JTA), 1999.
[41] Dov Bulka: Java Performance and Scalability, Volume 1: Server-Side Programming Techniques, Addison-Wesley, May 2000.
[42] Jonathan K. Weedon: On Clustering, White Paper, Inprise Corporation, 2000.
[43] M. F. Kaleem: Setting up a cluster of WebLogic Server 5.1.0 and Inprise Application Server 4.1.1, TI5 Technical Document, 2000.


Glossary

EJB : Enterprise JavaBeans
IAS : Inprise Application Server
J2EE : Java 2 Platform, Enterprise Edition
JMS : Java Message Service
JSP : JavaServer Pages
SFSB : Stateful session bean
SLSB : Stateless session bean
WAP : Wireless Application Protocol
WLS : WebLogic Server
WML : Wireless Markup Language
