Nov 25, 2009

How to handle Memory Leaks in Java/J2EE Applications?

In this article, I analyze the various causes that can lead to memory leaks in Java/J2EE applications and discuss how best to handle an 'OutOfMemoryException' once it is thrown.

An OutOfMemoryException (strictly speaking, java.lang.OutOfMemoryError) is thrown when there is not enough memory available to carry out a requested activity. The sections below illustrate the different methods by which memory leaks can be detected and handled in web applications.

Memory Management

When a program is loaded into memory, it is organized into three areas of memory, called segments: the text segment, the stack segment, and the heap segment. The text segment (sometimes also called the code segment) is where the compiled code of the program itself resides. This is the machine-language representation of the program steps to be carried out, including all functions making up the program, both user-defined and system functions.

The remaining two areas of system memory are where storage may be allocated by the compiler for data storage. The stack is where memory is allocated for automatic variables within functions. A stack is a Last In First Out (LIFO) storage device where new storage is allocated and deallocated at only one “end”, called the top of the stack.

When a program begins executing in the function main(), space is allocated on the stack for all variables declared within main(). If main() calls a function, func(), additional storage is allocated for the variables in func() at the top of the stack. It should be clear that memory allocated in this area will contain garbage values left over from previous usage.

The heap segment provides more persistent storage of data for a program; memory allocated in the heap remains in existence for the duration of the program. Global variables (storage class external) and static variables therefore live in this region for the whole run. Memory in this area that is initialized to zero at program start remains zero until the program makes use of it, so the heap area need not contain garbage.

OutOfMemoryException Handling

Once an OutOfMemoryException has been thrown, the following checks should be carried out:

Handling Connection Objects: Check whether all Connection, ResultSet, Statement and PreparedStatement objects have been properly closed in the finally block. Since Connection objects are drawn from a connection pool (whose size depends on the server configuration), creating multiple Connection objects without closing them shrinks the number of available connections and can exhaust the pool. Hence, even though there might be unused connection objects, the pool may be exhausted, resulting in an exception being thrown.
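
For illustration, a minimal sketch of the closing pattern (the dataSource reference and the SQL are placeholders):

Connection con = null;
PreparedStatement ps = null;
ResultSet rs = null;
try {
    con = dataSource.getConnection();
    ps = con.prepareStatement("SELECT ID, NAME FROM PROJECT");
    rs = ps.executeQuery();
    // ... process the result set ...
} finally {
    // close in reverse order of creation; swallow close failures so one does not mask another
    if (rs != null) { try { rs.close(); } catch (SQLException e) { /* log */ } }
    if (ps != null) { try { ps.close(); } catch (SQLException e) { /* log */ } }
    if (con != null) { try { con.close(); } catch (SQLException e) { /* log */ } }
}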

Handling OutputStream Objects: Check whether all OutputStream (and other stream) objects have been properly closed in the finally block. Streams are resource-intensive objects, so every stream that is opened should be closed explicitly; it is imperative to close each stream individually to prevent memory and file-handle leaks.
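
A minimal sketch of the same pattern for streams (the file name and the data variable are placeholders):

OutputStream out = null;
try {
    out = new FileOutputStream("report.dat");
    out.write(data);
} finally {
    if (out != null) {
        try { out.close(); } catch (IOException e) { /* log */ }
    }
}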

Checking for Static Variables: Static variables are stored on the heap, and only one instance of each variable exists throughout the lifetime of the application. Since static variables are not garbage collected until the class is unloaded or the variables are explicitly set to null, check whether all static variables are actually being used or whether some are unnecessarily occupying heap space.
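
A typical (hypothetical) shape of such a leak: nothing ever removes entries from the static list, so it grows for the life of the class loader.

public class RequestAuditor {

    // grows on every request and is never cleared: a classic static-reference leak
    private static final List<String> HISTORY = new ArrayList<String>();

    public static void record(String requestSummary) {
        HISTORY.add(requestSummary);
    }
}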

Checking Third Party APIs: It is necessary to check the memory utilization of third party APIs. If the memory utilization is high (for example, if many static variables have been used), it may lead to OutOfMemoryException.

Usage of Singleton Classes: Adoption of the Singleton pattern results in objects whose memory is allocated on the heap and never freed for the life of the application, which may result in memory leaks. Hence, it is necessary to check the design patterns being followed and, where singletons are used, check how many of them may be holding up space in the heap. The Singleton pattern should not be used unless strictly necessary.
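
A hypothetical example of the concern: the singleton itself is tiny, but the cache it holds is never released.

public class ReportCache {

    private static final ReportCache INSTANCE = new ReportCache();
    private final Map<String, byte[]> cache = new HashMap<String, byte[]>();

    private ReportCache() {}

    public static ReportCache getInstance() {
        return INSTANCE;
    }

    // entries are added but never evicted, so the heap footprint only grows
    public void put(String reportId, byte[] reportData) {
        cache.put(reportId, reportData);
    }
}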

Size of Session Objects: Since it is possible to put large objects into a session, it is necessary to check the session size. When investigating memory leaks, it is therefore important to check the size of the session as well as the objects that have been put into it. To avoid leaks, only objects that genuinely need to be in the session should be placed there, and they should be removed as soon as they are no longer required.
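
As an illustration (the attribute name and the results object are placeholders):

HttpSession session = request.getSession();
session.setAttribute("searchResults", results);    // store only what later pages really need

// ... later, when the user leaves the search flow ...
session.removeAttribute("searchResults");           // release the reference explicitly

// or, at the end of the conversation, discard the whole session:
session.invalidate();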

If the above checks do not reveal the cause of the memory leak, the methods below may be used to track it down:

Using -verbose:gc: GC logging can be configured in startWebLogic.cmd to obtain information about the Java object heap in real time while the application is running.

To activate it, the JVM must be started with the -verbose:gc option.

When using -verbose:gc for this purpose, set the maximum JVM heap size to a small value such as 64 MB so that the limit can be reached without putting load on the server. If the used memory does not drop back to its initial level after a full GC, that indicates a memory leak. This is a crude way of analyzing the heap in the absence of a profiling tool.
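
For example (a sketch: startWebLogic.cmd typically passes JVM options through the JAVA_OPTIONS variable, and myapp.jar below is a placeholder):

set JAVA_OPTIONS=%JAVA_OPTIONS% -verbose:gc -Xms64m -Xmx64m

For a standalone application the equivalent would be:

java -verbose:gc -Xms64m -Xmx64m -jar myapp.jar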

The following piece of code can also be used to track the total, maximum, and free heap memory.

long heapSize = Runtime.getRuntime().totalMemory();
long heapMaxSize = Runtime.getRuntime().maxMemory();
long heapFreeSize = Runtime.getRuntime().freeMemory();

Log.logInfo("LeakAction", "perform", "heapSize in MB: " + heapSize / 1000000);
Log.logInfo("LeakAction", "perform", "heapMaxSize in MB: " + heapMaxSize / 1000000);
Log.logInfo("LeakAction", "perform", "heapFreeSize in MB: " + heapFreeSize / 1000000);


Assuming the Struts 1.1 framework is being used, the above code should be placed in the action class for the corresponding JSP, just before the 'perform' method returns its ActionForward.



Using JProbe with LoadRunner: JProbe is an enterprise-class Java profiler providing intelligent diagnostics on memory usage, performance and test coverage. It allows developers to quickly pinpoint the root cause of performance and stability problems in application code. LoadRunner is a performance and load-testing product from HP and can be used together with JProbe to pinpoint the cause of memory leaks.



Eclipse Test & Performance Tools Platform:



The Eclipse Test and Performance Tools Platform (TPTP) Project provides an open platform supplying powerful frameworks and services that allow software developers to build unique test and performance tools, both open source and commercial, that can be easily integrated with the platform and with other tools.



TPTP addresses the entire test and performance life cycle, from early testing to production application monitoring, including test editing and execution, monitoring, tracing and profiling, and log analysis capabilities. 



CodePro Profiler:



CodePro Profiler™ is an Eclipse-based software development product that enables Java developers to efficiently identify performance issues early in the development cycle. Use CodePro Profiler during development to locate performance bottlenecks, pin down memory leaks and resolve threading issues in a painless and intuitive manner.



Focus on creating business logic, rather than on tedious manual instrumentation of debugging code to identify performance problems. CodePro Profiler is the first enterprise-ready performance analysis tool that is designed specifically for Eclipse development platforms to ensure creation of fast, reliable and high quality applications.



VisualVM



VisualVM is a visual tool integrating several command-line JDK tools with lightweight profiling capabilities. Designed for both production and development-time use, it further enhances the monitoring and performance-analysis capabilities available for the Java SE platform.
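
For example, a running application can be located and inspected with the JDK's bundled tools (the process id 12345 is a placeholder; jvisualvm ships with JDK 6 update 7 and later):

jps                                         # list local JVM process ids
jvisualvm                                   # launch VisualVM and attach to the application
jmap -dump:format=b,file=heap.hprof 12345   # capture a heap dump for offline analysis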



Nov 24, 2009

Genetic Algorithms

A genetic algorithm is a search technique used in computing to find true or approximate solutions to optimization and search problems, and is often abbreviated as GA. Genetic algorithms are categorized as global search heuristics. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).

Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes or the genotype or the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly mutated) to form a new population. The new population is then used in the next iteration of the algorithm.
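
To make the loop concrete, here is a minimal, self-contained Java sketch that evolves 20-bit strings toward the all-ones string (the classic "OneMax" toy problem). The population size, mutation rate, and tournament selection are arbitrary illustrative choices:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class OneMaxGA {

    static final int BITS = 20, POP = 30, GENERATIONS = 100;
    static final double MUTATION_RATE = 0.05;
    static final Random RND = new Random();

    public static void main(String[] args) {
        List<boolean[]> population = new ArrayList<boolean[]>();
        for (int i = 0; i < POP; i++) population.add(randomIndividual());

        for (int g = 0; g < GENERATIONS; g++) {
            List<boolean[]> next = new ArrayList<boolean[]>();
            while (next.size() < POP) {
                // fitness-biased selection, recombination and occasional mutation
                boolean[] child = crossover(select(population), select(population));
                mutate(child);
                next.add(child);
            }
            population = next;   // the new population replaces the current one
        }
        System.out.println("best fitness: " + fitness(best(population)) + " of " + BITS);
    }

    static boolean[] randomIndividual() {
        boolean[] b = new boolean[BITS];
        for (int i = 0; i < BITS; i++) b[i] = RND.nextBoolean();
        return b;
    }

    // fitness = number of ones in the bit string
    static int fitness(boolean[] b) {
        int f = 0;
        for (boolean bit : b) if (bit) f++;
        return f;
    }

    // tournament selection: pick two individuals at random, keep the fitter one
    static boolean[] select(List<boolean[]> pop) {
        boolean[] a = pop.get(RND.nextInt(pop.size()));
        boolean[] b = pop.get(RND.nextInt(pop.size()));
        return fitness(a) >= fitness(b) ? a : b;
    }

    // single-point crossover
    static boolean[] crossover(boolean[] p1, boolean[] p2) {
        int point = RND.nextInt(BITS);
        boolean[] child = new boolean[BITS];
        for (int i = 0; i < BITS; i++) child[i] = i < point ? p1[i] : p2[i];
        return child;
    }

    // flip each bit with a small probability
    static void mutate(boolean[] b) {
        for (int i = 0; i < BITS; i++) if (RND.nextDouble() < MUTATION_RATE) b[i] = !b[i];
    }

    static boolean[] best(List<boolean[]> pop) {
        boolean[] best = pop.get(0);
        for (boolean[] b : pop) if (fitness(b) > fitness(best)) best = b;
        return best;
    }
}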

Genetic algorithms find application in computer science, engineering, economics, chemistry, physics, mathematics and other fields.

Nov 23, 2009

OpenSSO - Open Web SSO [Single Sign-On]

The Open Web SSO project (OpenSSO) provides core identity services to simplify the implementation of transparent single sign-on (SSO) as a security component in a network infrastructure. OpenSSO provides the foundation for integrating diverse web applications that might typically operate against a disparate set of identity repositories and are hosted on a variety of platforms such as web and application servers. This project is based on the code base of Sun Java System Access Manager, a core identity infrastructure product offered by Sun Microsystems.

OpenSSO Enterprise won the 'Security' category of the Developer.com Product of the Year 2009 awards.

OpenSSO provides complete and flexible access management and federation management capabilities in the form of a single, lightweight Java EE application that can scale horizontally and vertically as enterprise security needs change over time.

Sun OpenSSO Builds

Sun offers OpenSSO in two different distributions:

· OpenSSO Enterprise

· OpenSSO Express

The differences between the two distributions are summarized below.

  • Support: OpenSSO Enterprise is the commercially supported version; OpenSSO Express is available as open source as well as with paid support.
  • Release cadence: Enterprise releases new features every 12 months; Express makes new features available every 3 months.
  • Patches: Enterprise receives hot patches and fixes when required; Express receives no patches or fixes.
  • Testing: Enterprise undergoes extensive manual and automated testing by the Sun QA team; Express undergoes extensive automated testing and moderate manual testing.
  • Intended use: Enterprise is suitable for production; Express is suitable for development and staging environments.

Architecture Of OpenSSO

[Figure: OpenSSO architecture diagram]

The following services are provided:

1. Authentication

The Authentication service is based on the Java Authentication and Authorization Service (JAAS). Several authentication modules are supplied out of the box, for example LDAP, RADIUS, SecurID, Windows Desktop, Certificate, and Active Directory. New authentication modules can be added using a JAAS-based SPI.
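
To give a flavour of the underlying contract, the sketch below shows the plain JAAS LoginModule interface that such modules build on. OpenSSO's own SPI adds its plumbing on top of this, so treat the class as an illustrative skeleton rather than a ready OpenSSO module:

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class SampleLoginModule implements LoginModule {

    private Subject subject;
    private boolean succeeded;

    public void initialize(Subject subject, CallbackHandler handler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.subject = subject;
    }

    public boolean login() throws LoginException {
        // collect credentials via the CallbackHandler and validate them here
        succeeded = true;   // placeholder decision
        return succeeded;
    }

    public boolean commit() { return succeeded; }   // attach principals to the subject here
    public boolean abort()  { return false; }
    public boolean logout() { return true; }
}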

2. Authorization (Policy)

The Policy service provides the authorization capability of OpenSSO. It is a rules-based engine. A policy comprises, among other things:

a service name schema for the policy type that describes the syntax of the policy (amPolicy.xml)

3. Session (SSO)

A session also serves as an efficient inter-process communication mechanism to communicate simple attributes related to the specific authenticated user.

4. Auditing/Logging

A common Logging service is invoked by all components - both residing on the server and those on the client. This allows the actual mechanism of logging to be separated from contents of the logs, which are specific to each component.

5. Identity Repository access

The Identity Repository service allows OpenSSO to integrate with an existing user repository, such as the corporate LDAP server. It provides an abstraction for accessing user profiles as well as group and role assignments consumed by clients and other OpenSSO services. This abstraction is capable of spanning multiple repositories, even of different types. The current implementation supports any LDAPv3-compliant repository (certified for Sun Directory Server and Active Directory).

6. Federation

Virtual Federation is a recently added feature of OpenSSO. Virtual Federation addresses two key issues in deploying federation:

(i) More than one federation standard in a Circle of Trust and

(ii) Legacy applications and existing authentication mechanisms.

Policy Agents (PAs) are provided as add-on components, one for each container type, that ease the protection of web-based network resources (enterprise applications and services). PAs consume the public APIs mentioned above and take care of the integration with the specific container, such that their presence is largely transparent to the protected resources.

Features and Benefits of OpenSSO

Sun OpenSSO Enterprise integrates all the capabilities required to handle SSO, authorization, and personalization into a single, comprehensive solution.

Sun OpenSSO Express Builds:

  • Makes it possible to deploy next-generation features developed by the OpenSSO community with the same support and indemnification provided by commercial releases without having to wait.
  • Accelerates time to market for new applications created with next-generation features.

Single WAR File Distribution:

Speeds installation and simplifies configuration by eliminating external dependencies

Simple Product Configuration:

Enables configuration within minutes, no matter how many instances of Sun OpenSSO Enterprise are being deployed

Embedded Directory Server:

  • Simplifies deployment by eliminating the need to configure a directory to support the configuration store.
  • Provides a robust, scalable directory for maintaining information

Common Task Flows:

Makes common features repeatable, scalable, and easy to use

Centralized Agent Configuration Management:

  • Simplifies agent configuration
  • Provides a scalable, repeatable method of centrally establishing agent enforcement policies

Centralized Server Configuration:

Allows configuration and management of complex horizontal deployments from an easy-to-use, central console

Virtual Federation Proxy:

  • Enables multiple legacy products to start federating before addressing internal SSO issues
  • Eliminates need to either federate-enable all existing products or solve SSO problems before federating

Popular Access Management Products

A few of the available access management products are listed below, along with their features.

OpenSSO

  • Open source offering from Sun Microsystems. Same code base as Sun Java System Access Manager.
  • Available as a commercial product (OpenSSO Enterprise) as well as free (OpenSSO Express).
  • Good documentation available.
  • Commercial support available through Sun Microsystems.
  • Policy agents available for BEA WebLogic/Portal, Sun Java System Application Server, Proxy Server and Web Server, IBM WebSphere, Apache Tomcat, IIS and SAP, covering both web and J2EE agents.
  • OpenDS as embedded data store.
  • Code written in Java.
  • Single WAR file distribution.

Acegi Security

  • Based on the Spring Framework.
  • Good documentation.
  • Flexible: most implementations can be replaced, i.e. we can provide custom authentication providers to retrieve credentials from our own schema, replace the access decision manager implementation, and so on.
  • An ACL framework is provided. ACL checks are done using the parameters of the method being called, i.e. the path (these can be configured down to the user level).
  • Suitable for Spring Framework applications.
  • Needs external framework integration for SSO.

JOSSO

  • Transparent single sign-on for J2EE and Spring applications.
  • Runs in Apache Tomcat, JBoss Application Server, BEA WebLogic 9 and 10, and Apache Geronimo.
  • LDAP support for storing user information and credentials.
  • Password recovery support.
  • “Remember me” support.
  • Written in Java.
  • Pluggable framework to allow the implementation of custom identity components using Spring or the built-in IoC container.

Gabriel

  • Access management framework written in Java.
  • APIs available for extension and custom implementation.

Shibboleth

  • Consists of separate Identity provider and service provider packages.
  • Security information travels in SAML.
  • Attribute based access control also available.
  • Can be integrated with other access manager products as identity provider.
  • Policy agent not available. Custom/ third party policy agents will need to be used. This needs to be explored further during evaluation.

Evaluation

OpenSSO is quite a popular open source offering from Sun, with the same code base as SJS Access Manager. Since its commercial and open development happen on the same code base, the quality of the product can be trusted. It has an embedded data store (OpenDS), and good documentation is available. Policy agents are available to work with OpenSSO.

Acegi Security does not look like a likely candidate for evaluation, as it is only for Spring Framework applications.

Gabriel is a framework for securing applications, and from the initial evaluation it looks like a lot of custom development would be needed if we used it.

Shibboleth is another popular open source access management offering. Attribute-based access control is an interesting feature. Policy agents are not available for it; we might need to look at SAML (Security Assertion Markup Language) compliant third-party agents or develop custom agents using SAML.

OpenSSO Vs Others

Feature | Oracle Access Manager | OpenSSO | JOSSO
Policy agents | WebLogic | WebLogic, Sun (Application Server and Web Server), Tomcat, Apache, JBoss | WebLogic, Tomcat, Apache, JBoss
User/group provisioning through access manager | Yes | Yes | No
Pre- and post-operation tasks | Yes | No | No
Centralized policy management | Yes | Yes | No
Application policy management | No | No | Yes
User interface pages for project requirements | Existing pages customized | Development required | Development required
Commercial support | Yes (Oracle) | Yes (Sun Microsystems) | Yes (Atricore)


OpenSSO Installation

Install a GlassFish web container in the global zone of the virtual machine. Then deploy OpenSSO in the web container and verify the deployment.

Tasks are:

  • Install GlassFish Application Server software
  • Deploy OpenSSO

[Figure: OpenSSO deployed in the GlassFish web container in the global zone]

An OpenSSO instance is running in the GlassFish web container on port 8080 in the global zone. The configuration data store, which holds the OpenSSO configuration, also holds the user directory. This deployment scenario is suitable only for very simple test deployments.

Preparation

Navigating Around the Solaris Sandbox

1) Open a terminal window.

2) Run the lab -p command.

The lab -p command prepares the Solaris Sandbox zones for networking and GUI display.

3) Start a web browser:

firefox &

Download the required software from the given link:

Software - GlassFish application server (version 2)

URL – https://glassfish.dev.java.net

Software – OpenSSO Enterprise 8.0

URL – https://opensso.dev.java.net

Copy to /opt/software/

Task 1  -  Installing GlassFish Application Server

1) Install the GlassFish software:

a. Run the following command:

/opt/software/glassfish-v3preview/glassfish-v3-prelude-unix.sh

The Welcome dialog box appears.

b. Click Next.

A dialog box with the GlassFish license appears.

c. Select I Accept the Terms in the License Agreement, and then click Next.

The Installation Directory dialog box appears.

d. Type /opt/glassfish in the Installation Directory field, then click Next.

The Administration Settings dialog box appears.

e. Select Provide Username and Password, and fill out fields in the Administration Settings dialog box as follows:

  • Username – admin
  • Password – cangetin

f. Click Next.

The Update Configuration dialog box appears.

g. Uncheck Install Update Tool, then click Next.

The Ready to Install dialog box appears.

h. Click Install.

Messages appear in the Progress dialog box as the GlassFish installation proceeds.

The Product Registration dialog box appears.

i. Select Skip Registration, then click Next.

The Summary dialog box appears.

j. Click Exit

2) Start the GlassFish domain administration server (DAS):

/opt/glassfish/bin/asadmin start-domain domain1

Do not create an additional GlassFish instance; deploy the OpenSSO software to the DAS, strictly as a convenience for learning purposes.

Task 2  -  Deploying OpenSSO

Deploy the OpenSSO software.

1) Deploy the OpenSSO web archive (WAR) file to the DAS using the asadmin CLI:

/opt/glassfish/bin/asadmin deploy --user admin /opt/software/opensso-ent-8.0/opensso/deployable-war/opensso.war

2) Verify that the OpenSSO WAR file was deployed.

/opt/glassfish/bin/asadmin list-components --user admin

OpenSSO <web> appears in the list of components deployed to the GlassFish instance.

3) In a browser window, navigate to the following URL:

http://example.com:8080/opensso

A page appears with a link that lets you create a new configuration

4) Configure the OpenSSO instance:

a. Click Create New Configuration (in the Custom Configuration section of the page).

The General page appears.

b. Enter data in the Default User Password section of the General page as follows:

  • Default User [amAdmin]: Type cangetin
  • Confirm: Type cangetin

Click Next.

The Server Settings page appears.

Caution - On some systems, when you attempt to scroll down to the Next button, the OpenSSO configuration page refuses to scroll. This is a known problem (OpenSSO issue #1966). The following workaround should fix the problem:

- Press the F11 key to use Firefox in full-screen mode. When you no longer need full-screen mode, press F11 again to leave full-screen mode.

c. Enter data in the Server Settings page as follows:

  • Server URL: Verify that the default value is  http://example.com:8080
  • Cookie Domain: Verify that the default value is .example.com. The cookie domain value should have a period (“.”) as its first character.
  • Configuration Directory : Type /opt/opensso/instance

Click Next.

The Configuration Data Store Settings page appears.

Note - In the OpenSSO configuration pages, the terms configuration directory and configuration data store might be easily confused.

The configuration directory is a file system directory that contains flat files used for system configuration and other purposes. XML schema files, directory server schema files, log files, and debug files are all located in the configuration directory. In Sun Java System Access Manager (Access Manager) 7.1 – the predecessor release to OpenSSO – these files were stored in various locations, depending on operating system platform. For example, on the Solaris Operating System (Solaris OS), these files were located in the /etc and /var directories.

The configuration data store is a Lightweight Directory Access Protocol (LDAP) directory that contains information about OpenSSO realms, authentication, policy, and other configuration. By default, this LDAP directory is an OpenDS directory that is entirely managed by OpenSSO.

d. Click Next.

The User Data Store Settings page appears.

e. Select OpenSSO User Data Store and click Next.

The Site Configuration page appears.

f. Enter data in the Site Configuration page as follows :

  • Will This Instance be deployed behind a Load Balancer?

Select No.

Click Next.

The Default Policy Agent User page appears.

g. Enter data in the Default Policy Agent User page as follows:

  • Password : Type cangetinam
  • Confirm Password : Type cangetinam

Click Next.

The Configuration Summary Details page appears.

Review the values you have entered. If incorrect values appear on the Configuration Summary Details page, make corrections as necessary.

h. Click Create Configuration.

Progress messages inform you of configuration progress.

The Configuration Complete page appears.

i. Click Proceed to Login.

5. The OpenSSO login screen appears.

Log in to OpenSSO as the amAdmin user. The password is cangetin

6. The OpenSSO console start page appears.

7. Log out of the OpenSSO console.

A fully operational OpenSSO instance is now available. Use this instance as needed for experimentation, research, demonstrating features, and so forth.


Spring Flex Integration using SBI

Flex and Spring Integration Architecture

Many applications are implemented with a three-tier architecture wherein data is stored in the database. The web server runs Java services that access the database and retrieve the information. The Java classes are responsible for the business logic that receives a call from the client tier, assembles the information from the database, and returns the result.

The client tier utilizes browser technology that runs a Flash application to provide an interface to the user and calls the Java services for required information. The client workstations call the Tomcat server, which calls the database to gather data and executes the Spring services that contain the application’s business logic.

In this architecture, the compiled Flex and Spring code resides on the Tomcat server. The client workstations download the Flex SWF and associated files, which contain the hooks into the Spring services.

Sample Application

To demonstrate the Spring and Flex integration, I present a sample web-based application with the following architecture.

[Figure: architecture of the sample application]

Spring

Spring addresses Java/Java EE development by organizing your middle-tier objects and taking care of the plumbing that is usually left for you to create. Spring can also work at any architectural layer while running in any runtime environment.

Spring Bean Wiring

The IoC container, also called the core container, wires Spring beans together. It is also responsible for the configuration and creation of objects in Spring.

In most cases, when an object needs references to data, it does a lookup or retrieval from an external data repository. With IoC, the component does not need to know where the data source is located, which cleans up the process of data retrieval by inverting the direction of the lookup.

IoC helps to create layers of abstraction throughout your Spring applications. Spring’s core container provides a central location for accessing objects, called the application context. The application context can be configured with Java 5 annotations or with an XML file that contains the definition of each bean created in the Spring application.

Building Spring Service

To demonstrate, I will present a simple Spring service that gets a list of objects from the database using Hibernate as the ORM technology.

Building a Simple Bean

The first item of interest is to build a bean for our service. This bean is part of the IoC container.

Beans defined in the IoC container are nothing more than a pool of components you have access to by referencing the service with which they are associated.

MyService is a simple POJO interface with a method to retrieve the project list. The bean does not have a constructor, as no arguments need to be passed to it at instantiation.

Bean Interface (MyService.java)

package com.sampleapp.services;

import java.util.List;
import com.sampleapp.domain.MyDomainObject;

public interface MyService {

List<MyDomainObject> getProjects();

}


Bean Implementation (MyServiceImpl.java)



package com.sampleapp.services;

import java.util.List;
import com.sampleapp.domain.MyDomainObject;
import com.sampleapp.dao.MyDao;
import org.springframework.beans.factory.annotation.Autowired;

public class MyServiceImpl implements MyService {

    MyDao myDao;

    @Autowired(required = true)
    public void setConsDbDao(MyDao myDao) {
        this.myDao = myDao;
    }

    public List<MyDomainObject> getProjects() {
        return myDao.getProjects();
    }
}


Building Data Persistence





The Spring Framework provides a layer of abstraction for transaction management to allow you to consistently program across different transaction APIs, such as JDBC, Hibernate, JPA, and Java Data Objects (JDO). Spring provides support for both programmatic and declarative transaction management.



Spring Transaction Managers



Spring provides transaction managers that delegate work to platform-specific transaction implementations through either JTA or the transaction manager’s framework; Spring does not manage transactions directly. Spring provides transaction managers for J2EE Connector Architecture (JCA), JDBC, JMS, Hibernate, JDO, JPA, TopLink, and others.



Since I am going to use Hibernate as the ORM solution here, we will use the HibernateTransactionManager.



To demonstrate how to persist data with Spring, we need a database, so let's create a database table, DOMAINOBJECT.



[Figure: definition of the DOMAINOBJECT table]



Once that is complete, you can go ahead and create a class to support the table.



Domain Object (MyDomainObject.java)



package com.sampleapp.domain;

import java.io.Serializable;
import java.sql.Date;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;
import javax.persistence.Table;

@Entity
@Table(name="DOMAINOBJECT")
public class MyDomainObject implements Serializable {

static final long serialVersionUID = 1L;

private int id;
private String project;
private String ownerName;
private Date lastUpdated;

public MyDomainObject() {}

@Id
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="gen_id")
@SequenceGenerator(name="gen_id", sequenceName = "DOMAINOBJ_SEQ")
@Column(name="ID")
public int getId() {
return this.id;
}

public void setId(int id) {
this.id = id;
}

@Column(name = "PROJECT", nullable = false, length = 255)
public String getProject() {
return project;
}

public void setProject(String project) {
this.project = project;
}

@Column(name = "OWNERNAME", length = 255)
public String getOwnerName() {
return ownerName;
}

public void setOwnerName(String ownerName) {
this.ownerName = ownerName;
}

@Column(name = "LASTUPDATED")
public Date getLastUpdated() {
return lastUpdated;
}

public void setLastUpdated(Date lastUpdated) {
this.lastUpdated = lastUpdated;
}
}


We will need to create a DAO to support database operations, each of which is carried out by a DAO method. Since we have only one operation here, the interface defines a single method.



These methods should be defined in the DAO interface to allow for different implementation technologies such as Hibernate and iBATIS.



Dao Interface (MyDao.java)



package com.sampleapp.dao;

import java.util.List;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.annotation.Propagation;
import com.sampleapp.domain.MyDomainObject;

public interface MyDao {

@Transactional(readOnly=true, propagation=Propagation.SUPPORTS)
List<MyDomainObject> getProjects();

}


Since I am using Hibernate as the implementation technology, let's create the Hibernate implementation of the DAO interface defined above.



Dao Implementation (MyDaoImpl.java)



package com.sampleapp.dao;

import java.util.List;
import com.sampleapp.domain.MyDomainObject;
import org.springframework.orm.hibernate3.support.HibernateDaoSupport;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(propagation=Propagation.SUPPORTS, readOnly=true)
public class MyDaoImpl extends HibernateDaoSupport implements MyDao {

    @SuppressWarnings("unchecked")
    @Transactional(propagation=Propagation.SUPPORTS, readOnly=true)
    public List<MyDomainObject> getProjects() {
        // HibernateDaoSupport provides a HibernateTemplate bound to the injected SessionFactory.
        return getHibernateTemplate().find("from MyDomainObject");
    }
}
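

As a side note, if you later need a filtered query, HibernateTemplate also accepts positional HQL parameters. The extra finder below is purely illustrative (it is not part of the sample application) and would be added inside MyDaoImpl:

// Hypothetical additional finder, shown only to illustrate positional HQL parameters.
@SuppressWarnings("unchecked")
@Transactional(propagation=Propagation.SUPPORTS, readOnly=true)
public List<MyDomainObject> getProjectsByOwner(String ownerName) {
    return getHibernateTemplate().find(
            "from MyDomainObject o where o.ownerName = ?", ownerName);
}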


Once that is done, we can start registering the beans in the Spring IoC container configuration, applicationContext.xml.



For simplicity, I am going to keep the database/Hibernate-related properties in a separate file called hibernate.properties. The advantage is that if the database changes in the future, only this one file needs to be updated, without touching the rest of the application. Note that for the ${...} placeholders used in the bean definitions below to resolve, hibernate.properties must be loaded into the context, for example via Spring's PropertyPlaceholderConfigurer (or the <context:property-placeholder> element).



Hibernate SessionFactory Configuration (hibernate.properties)



# Hibernate 3 configuration
hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
hibernate.connection.url=jdbc:oracle:thin:@servername:port:DB
hibernate.connection.username=username
hibernate.connection.password=password
hibernate.show_sql=true
hibernate.format_sql=false
hibernate.transaction.factory_class=org.hibernate.transaction.JDBCTransactionFactory
hibernate.dialect=org.hibernate.dialect.Oracle9Dialect
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=5
hibernate.c3p0.max_statements=50
hibernate.c3p0.timeout=1800


Now let's register the various components in the Spring IoC container.



1. Configure Hibernate Session Factory



<bean id="sessionFactory" 
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="annotatedClasses">
<list>
<!-- Add annotated domain objects here-->
<value>com.sampleapp.domain.MyDomainObj</value>
</list>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.show_sql">${hibernate.show_sql}
</prop>
<prop key="hibernate.format_sql">${hibernate.format_sql}
</prop>
<prop key="hibernate.transaction.factory_class">${hibernate.transaction.factory_class}
</prop>
<prop
key="hibernate.dialect">${hibernate.dialect}
</prop>
<prop key="hibernate.c3p0.min_size">${hibernate.c3p0.min_size}
</prop>
<prop key="hibernate.c3p0.max_size">${hibernate.c3p0.max_size}</prop>
<prop key="hibernate.c3p0.timeout">${hibernate.c3p0.timeout}</prop>
<prop key="hibernate.c3p0.max_statements">${hibernate.c3p0.max_statements}</prop>
<prop key="hibernate.connection.driver_class">${hibernate.connection.driver_class}</prop>
<prop key="hibernate.connection.url">${hibernate.connection.url}</prop>
<prop key="hibernate.connection.username">${hibernate.connection.username}</prop>
<prop key="hibernate.connection.password">${hibernate.connection.password}</prop>
</props>
</property>
</bean>


2. Autowiring and Annotation Support



<!-- enable the configuration of transactional behavior based on annotations -->

<tx:annotation-driven transaction-manager="txManager"/>

<!-- enable autowiring -->
<context:annotation-config />

<bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/>


3. Transaction Manager Registration



<bean id="txManager"  class="org.springframework.orm.hibernate3.HibernateTransactionManager">    
<property name="sessionFactory"><ref local="sessionFactory" />
</property>
</bean>


4. Dao Registration



<bean id="myDao" 
class="com.sampleapp.dao.MyDaoImpl">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>


5. Registration of Services



<bean id="myService" 
class="com.sampleapp.services.MyServiceImpl"/>


That completes the server-side development of our sample application.
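

Before wiring up Flex, you can optionally verify the server side with a small standalone check like the one below. This is a hypothetical throwaway class, not part of the original article; it assumes applicationContext.xml and hibernate.properties are reachable on the test classpath (in the web application the context file lives under /WEB-INF/app-config/, so you may need to copy it or adjust the path) and that the configured database is up.

package com.sampleapp;

import java.util.List;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import com.sampleapp.dao.MyDao;
import com.sampleapp.domain.MyDomainObject;

// Hypothetical smoke test for the Spring/Hibernate layer.
public class ServerSideSmokeTest {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        try {
            // Look up the DAO registered as "myDao" and run the query.
            MyDao myDao = (MyDao) ctx.getBean("myDao");
            List<MyDomainObject> projects = myDao.getProjects();
            System.out.println("Fetched " + projects.size() + " project(s) from DOMAINOBJECT");
        } finally {
            ctx.close();
        }
    }
}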



To expose the Spring services to Flex, we need to set up the BlazeDS integration.



To do this, download the BlazeDS JARs and the Spring BlazeDS Integration (SBI) JARs and put them on your application's classpath.



The SBI suite is designed as a best practice solution for integrating Flex and Spring. Its main goal is to simplify communication between Flex and Spring by providing Remoting and messaging capabilities in combination with BlazeDS.



Setting Up the BlazeDS MessageBroker



The MessageBroker is the SBI component responsible for handling HTTP messages from the Flex client. The MessageBroker is managed by Spring instead of BlazeDS, and messages are routed to the Spring-managed MessageBroker via the Spring DispatcherServlet.



Let's modify web.xml with the configuration required to handle the requests from Flex.



web.xml



<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>

<listener>
<listener-class>flex.messaging.HttpFlexSession</listener-class>
</listener>

<servlet>
<servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
<servlet-class>
org.springframework.web.servlet.DispatcherServlet
</servlet-class>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/app-config/applicationContext.xml</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
<servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
<url-pattern>/spring/*</url-pattern>
</servlet-mapping>

<welcome-file-list>
<welcome-file>index.html</welcome-file>
<welcome-file>index.htm</welcome-file>
</welcome-file-list>

</web-app>


Since the MessageBroker is managed by Spring, we have to register it in the Spring IoC container, applicationContext.xml:



<bean 
class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
<property name="mappings">
<value>
/*=springManagedMessageBroker
</value>
</property>
</bean>

<!-- Dispatches requests mapped to a MessageBroker -->
<bean
class="org.springframework.flex.servlet.MessageBrokerHandlerAdapter"/>

<!-- Bootstraps and exposes the BlazeDS MessageBroker -->
<bean id="springManagedMessageBroker"
class="org.springframework.flex.core.MessageBrokerFactoryBean" />


We also need to expose our service beans to Flex remoting:



<bean id="myServiceRO" class="org.springframework.flex.remoting.RemotingDestinationExporter">
<property name="messageBroker"
ref="springManagedMessageBroker"/>
<property name="service" ref="consDbService"/>
<property name="destinationId" value="myService" />
<property name="channels" value="my-amf, my-secure-amf"/>
</bean>


Now let's register the my-amf channel in the BlazeDS configuration files (remoting-config.xml and services-config.xml).



remoting-config.xml



<?xml version="1.0" encoding="UTF-8"?>
<service id="remoting-service"
class="flex.messaging.services.RemotingService">
<adapters>
<adapter-definition id="java-object" class="flex.messaging.services.remoting.adapters.JavaAdapter" default="true"/>
</adapters>
<default-channels>
<channel ref="my-amf"/>
</default-channels>
</service>


services-config.xml



<channels>

<channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
<endpoint url="http://{server.name}:{server.port}/{context.root}/spring/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/>
<properties>
<polling-enabled>false</polling-enabled>
</properties>
</channel-definition>
</channels>


Deploy the application on your favourite web server.



I am using Tomcat.



Flex



In this application, Flex makes an RPC call to the BlazeDS server to consume the getProjects service we created in Spring.



So let's create the Flex application that will make RPC calls to the Spring services we defined.



SampleApp.mxml



<?xml version="1.0" encoding="utf-8"?>
<mx:Application
xmlns:mx="http://www.adobe.com/2006/mxml"
pageTitle="Projects List"
layout="absolute"
width="100%"
height="100%"
initialize="initializeHandler()" borderColor="#009DFF"
creationComplete="initializeHandler()">

<mx:RemoteObject
id="ro"
destination="myService"
endpoint="http://localhost:8080/SampleApp/spring/messagebroker/amf"
result="resultHandler(event)"
fault="faultHandler(event)"
showBusyCursor="true"
/>

<mx:Script>
<![CDATA[
import mx.collections.ArrayCollection;
import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;
import mx.utils.ObjectUtil;
import mx.controls.Alert;
import mx.utils.StringUtil;


[Bindable]
private var systemsData:ArrayCollection = new ArrayCollection();

private var selectedItem:Object;

private function initializeHandler():void
{
ro.getProjects();
}

private function resultHandler(event:ResultEvent):void
{
systemsData = ArrayCollection(event.result);
}

private function faultHandler(event:FaultEvent):void
{
Alert.show(ObjectUtil.toString(event.fault) );
}

]]>
</mx:Script>


<mx:VBox width="100%" height="100%" y="10" x="10">
<mx:ApplicationControlBar width="100%" height="81" borderColor="#F7FAFC" fillColors="[#009DFF, #009DFF]" fillAlphas="[0.53, 0.53]" autoLayout="true" themeColor="#F6F8F9">
<mx:Text text="Project List" fontSize="26" fontWeight="bold" fontStyle="italic" themeColor="#9E5814" alpha="0.52" color="#F4FAFB" width="310"/>
</mx:ApplicationControlBar>
<mx:HBox x="10" y="10" width="100%" height="100%" id="hbox1">

<mx:Panel id="systemStatus" width="100%" height="100%" layout="absolute" cornerRadius="0" borderColor="#009DFF" fontSize="13" backgroundAlpha="0.5" backgroundColor="#009DFF">
<mx:VBox label = 'Report' borderColor="#009DFF" borderStyle="solid" borderThickness="6" backgroundColor="#009DFF" x="33" y="9">
<mx:DataGrid
id="dgrid"
updateComplete=""
dataProvider="{systemsData}"
width="622" height="362"
bottom="0" right="0" alternatingItemColors="[#F7F7F7, #BDE7E8]" themeColor="#32C582" borderThickness="5" borderStyle="solid" borderColor="#009DFF">
<mx:columns>
<mx:DataGridColumn fontSize="9" headerText="Project Name" dataField="project" />
<mx:DataGridColumn fontSize="9" headerText="Last Updated" dataField="lastUpdated">
<mx:itemRenderer>
<mx:Component>
<mx:VBox>
<mx:DateFormatter id="formatDateTime" formatString="MM/DD/YY" />
<mx:Label text="{formatDateTime.format(data.lastUpdated)}"/>
</mx:VBox>
</mx:Component>
</mx:itemRenderer>
</mx:DataGridColumn>
<mx:DataGridColumn fontSize="9" headerText="Owner Name" dataField="ownerName" />
</mx:columns>
</mx:DataGrid>
</mx:VBox>
</mx:Panel>
</mx:HBox>
</mx:VBox>
</mx:Application>


Once done, copy SampleApp.html, SampleApp.swf, AC_OETags.js and playerProductInstall.swf into your web application, and you should be able to view the application deployed on Tomcat as shown below:



[Image: the Project List Flex application running on Tomcat, showing the DataGrid of projects]


