Category Archives: Java

Add Maven dependencies to your Arquillian micro-deployments

Arquillian is a testing framework which lets you write real integration tests that run inside the container of your choice. With Arquillian you will be writing micro-deployments for your tests, that is, small Java artifacts containing the bare minimum of classes and resources needed for your test to be executed within your container. To build these artifacts you will be using the ShrinkWrap API. You don't have to adopt the micro-deployment strategy of course, but do you really want your whole project classpath available for the test of e.g. a single EJB component? Micro-deployments will isolate your test scenario and deploy to the container much faster than if you were to bring in the entire classpath of your project.

When writing integration tests with Arquillian and ShrinkWrap you will probably, sooner or later, run into a use case where your test depends on a third party library. Since it would be very inconvenient to declare all the necessary classes in the third party library by hand, ShrinkWrap provides a way to attach complete libraries to your micro-deployment. If you're using Maven, the Maven dependency resolver feature is very convenient. The 1.0.0.Final version of Arquillian uses a 1.0 beta version of the ShrinkWrap resolver module. The resolver API has in my opinion improved a lot in the latest 2.0 version (currently in alpha) and I really recommend anyone using the resolver API to use the 2.0 version instead.

If you want to use the latest resolver API, you have to add it to the dependencyManagement tag in your Maven pom.xml before the actual Arquillian BOM, to make sure the 2.0 version of the resolver module will be loaded first:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.shrinkwrap.resolver</groupId>
            <artifactId>shrinkwrap-resolver-bom</artifactId>
            <version>2.0.0-alpha-1</version>
            <scope>test</scope>
            <type>pom</type>
        </dependency>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>1.0.0.Final</version>
            <scope>test</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

To add third party libraries from your Maven dependencies to your micro-deployment, use the ShrinkWrap resolver API to attach the necessary libraries to your artifact. In the example below, the commons-io and json libraries are specified with their respective Maven coordinates:

@Deployment
public static Archive createDeployment() {
    return ShrinkWrap.create(WebArchive.class, "fileviewer.war")
            .addClass(EnableRest.class)
            .addClass(FileViewer.class)
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
            .addAsLibraries(
                DependencyResolvers.use(MavenDependencyResolver.class)
                    .artifact("commons-io:commons-io:2.1")
                    .artifact("org.json:json:20090211")
                    .resolveAsFiles());
}

As you can see, it's very easy to add libraries from your Maven dependencies to your ShrinkWrap micro-deployments. An example project which uses the ShrinkWrap Maven dependency resolver can be found on GitHub. It is worth noting that in the example above, any transitive dependencies of the specified artifacts will be included as well. The behaviour of the resolver API can however be customized far beyond what was shown in the example, and transitivity is one aspect which can be tuned further through the API.
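For instance, if you want to keep a specific transitive dependency out of your deployment, exclusions can be declared per artifact. A hedged sketch: I'm assuming the exclusion(..) method of the 2.0 alpha MavenDependencyResolver here, and the excluded coordinates are made up, so verify against the resolver version you actually use:

@Deployment
public static Archive createDeployment() {
    return ShrinkWrap.create(WebArchive.class, "fileviewer.war")
            .addAsLibraries(
                DependencyResolvers.use(MavenDependencyResolver.class)
                    .artifact("commons-io:commons-io:2.1")
                    .exclusion("some.group:unwanted-lib") // hypothetical transitive dependency to exclude
                    .resolveAsFiles());
}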

 

Tommy Tynjä
@tommysdk

Bind a string to JNDI in JBoss 7.1

In JBoss 7.0.x there was no convenient way to bind an arbitrary java.lang.String to JNDI. This is however possible in JBoss 7.1.x. Just add your binding to the naming subsystem in standalone.xml such as:

<subsystem xmlns="urn:jboss:domain:naming:1.1">
    <bindings>
        <simple name="java:global/3rdparty/connection-url" value="http://localhost:12345"/>
    </bindings>
</subsystem>

You can then inject the value of your binding into your Java code through:

@Resource(mappedName = "java:global/3rdparty/connection-url")
private String serviceRemoteAddress = null;
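If injection is not an option in your context, you can look up the same binding programmatically instead. A minimal sketch using the standard JNDI API:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ConnectionUrlLookup {

    public String lookupConnectionUrl() throws NamingException {
        // Looks up the string bound in the naming subsystem above
        return (String) new InitialContext().lookup("java:global/3rdparty/connection-url");
    }
}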

To verify your JNDI bindings in JBoss 7.1.1 you can use the jboss-cli located in your $JBOSS_HOME/bin. Start the jboss-cli and type “/subsystem=naming:jndi-view”, such as:

[standalone@localhost:9999 /] /subsystem=naming:jndi-view
{
    "outcome" => "success",
    "result" => {
        "java: contexts" => {
            "java:" => {
                "TransactionManager" => {
                    "class-name" => "com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate",
                    "value" => "com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate@68fc12"
                },
...
Tommy Tynjä
@tommysdk

Asynchronous method invocations in Java EE 6

Lately I have been looking into asynchronous method invocations in Java EE 6. My use case was to execute multiple search engine queries in parallel and then collect the results afterwards, instead of having to wait for each query to finish before executing the next one. In Java EE 6 (EJB 3.1) you can implement asynchronous methods on your session beans, which means that the EJB container returns control to the client before executing the actual method. It is then possible to use the Java SE concurrency API to retrieve the results of this method. The asynchronous method must either have a void return type or return a java.util.concurrent.Future<V>, where V is the result value type. This is very convenient for my use case, but also for executing tasks where it is unimportant whether they complete before the application continues, e.g. sending an e-mail or putting a message on a message queue for processing by a third party application.

In my use case I first execute the asynchronous methods which run the actual queries, retrieve a Future reference to each invocation and then collect the results later on through these references. Here is a very simple example of an asynchronous method invocation:

@Stateless
public class AsyncService {
    @Asynchronous
    public void execute() {
        doAsyncStuff();
    }
    ...
}
@Stateless
public class AnotherService {
    @EJB AsyncService async;

    public void doAsyncCall() {
        async.execute();
    }
}
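The example above is fire-and-forget. To actually collect results, as in the search engine use case, the asynchronous method instead returns a Future. Below is a minimal sketch using the standard javax.ejb.AsyncResult wrapper; the query execution itself is a made-up placeholder:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class AsyncSearchService {

    @Asynchronous
    public Future<List<String>> search(String query) {
        List<String> results = executeQuery(query); // hypothetical query execution
        return new AsyncResult<List<String>>(results);
    }

    private List<String> executeQuery(String query) {
        return new ArrayList<String>(); // placeholder for the actual search engine call
    }
}

The caller invokes search(..) once per query, holds on to the returned Future references and collects the results with Future.get(), which blocks until the corresponding invocation has completed.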

You basically annotate your asynchronous business method with @Asynchronous, and the EJB container handles the rest. I've created a small Java EE 6 web app which demonstrates the behaviour; the source code can be found on GitHub. The example exposes a RESTful web service, which can be used to invoke the asynchronous service. The example comes with a JBoss 7.1.1 distribution and Arquillian integration tests, so you can just execute the test cases in AsyncTest.java to see the example in action. One of the test cases is executed within the actual application server, thus invoking the asynchronous service directly, while the other test case runs as a client and makes an HTTP GET request to the exposed REST interface. The only things you need to run the example are Maven and Java 7.

Please note that asynchronous method invocations are not supported in EJB 3.1 Lite, which is included in the Java EE 6 Web Profile.

 

Tommy Tynjä
@tommysdk

Writing integration tests in Java with Arquillian

Arquillian is a JBoss project that focuses on integration testing for Java. It's an Open Source project I'm contributing to and using on the current project I'm working on. It lets you write integration tests just as you would write unit tests, but adds some very important features into the mix. Arquillian actually lets you execute your test cases inside a target runtime, such as your application server of choice! It also lets you make use of dependency injection to test your services directly in your integration test. Arquillian comes with a whole bunch of adapters for different runtimes, such as JBoss 7, 6 and 5, GlassFish 3, Weld, WebSphere and even Selenium. So if you're working with a common runtime, it is probably already supported! If your runtime is not supported, you could contribute an adapter for it! What Arquillian basically needs to know is how to manage your runtime: how to start and stop it, and how to do deployments and undeployments. Just put the adapter for your runtime on your classpath and Arquillian will handle the rest.

Let's get down to business. What does an Arquillian test look like? Imagine that we have a very basic EJB which we want to integration test:

@Stateless
public class WeekService {
  public String weekOfYear() {
      return Integer.toString(Calendar.getInstance().get(Calendar.WEEK_OF_YEAR));
  }
}

This EJB provides one single method which returns what week of the year it is. Sure, we could just as well unit test the functionality of this EJB, but imagine that we might want to extend this bean with more functionality later on and that we want to have a proper integration test already in place. What would an Arquillian integration test look like that asserts the behaviour of this bean? With JUnit, it would look something like this:

@RunWith(Arquillian.class)
public class WeekServiceTest {

    @Deployment
    public static Archive deployment() {
        return ShrinkWrap.create(JavaArchive.class, "week.jar")
                .addClass(WeekService.class);
    }

    @EJB WeekService service;

    @Test
    public void shouldReturnWeekOfYear() {
        Assert.assertNotNull(service.weekOfYear());
        Assert.assertEquals("" + Calendar.getInstance().get(Calendar.WEEK_OF_YEAR),
                service.weekOfYear());
    }
}

The first thing you notice is the @RunWith annotation at the top. This tells JUnit to let Arquillian manage the test. The next thing you notice is the static method annotated with @Deployment. This method creates a Java Archive (jar) representation containing the WeekService class which we want to test. The archive is created using ShrinkWrap, which lets you assemble archives through a fluent Java API. Arquillian will take this archive and deploy it to the target runtime before executing the test cases, and will undeploy the same archive from the runtime after the test execution. You are not forced to have a deployment method, as you might as well want to test something that is already deployed or available in your runtime.

If you have a runtime that is already running somewhere, you can set up Arquillian to run in remote mode, which means that Arquillian expects the runtime to already be running, and just does deployments and undeployments in the current environment. You could also tell Arquillian to run in managed mode, where Arquillian expects to be able to start the runtime before the test execution, and will also shut down the runtime when the test execution completes. Some runtimes also come with an embedded mode, which means that Arquillian will run its own isolated runtime instance during the test execution.
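Which mode you get is determined by the container adapter artifact you put on your test classpath, while container-specific settings go into an arquillian.xml file. Below is a hedged sketch for a managed JBoss AS 7 container; the available property names depend on the adapter you use, so treat the jbossHome property as an assumption to verify:

<arquillian xmlns="http://jboss.org/schema/arquillian"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://jboss.org/schema/arquillian
                http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <container qualifier="jboss" default="true">
        <configuration>
            <!-- Assumed property name for a managed JBoss AS 7 adapter -->
            <property name="jbossHome">/path/to/jboss-as-7</property>
        </configuration>
    </container>
</arquillian>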

Probably the biggest difference compared to a standard unit test in the above example is the @EJB annotation on the WeekService property. It actually lets you specify that you want an EJB dependency injected into your test case when executing your tests, and Arquillian will handle that for you! If you want to make sure that is the case, just remove the @EJB annotation and you will indeed get a NullPointerException when calling service.weekOfYear() in your test method. A feature which is going to be added to a future version of Arquillian is to fail fast if dependencies can't be fulfilled (instead of throwing NPEs).

The @Test annotated method handles the actual test logic, just like a unit test. The difference here is that the test is actually executed within your target runtime!

This was just a basic example of what you can do with Arquillian. The more you work with it, the more you will start to recognize different use cases where integration tests become a natural way of testing the intended behaviour.

The full source code of the above example is available on GitHub: https://github.com/tommysdk/showcase/tree/master/arquillian/. It also contains a more sophisticated example which uses CDI and JPA within an integration test. The examples run on a JBoss 7.0.2 runtime, which comes bundled with the example. For more details and a deeper dive into Arquillian, visit the project website at http://arquillian.org.

 

Tommy Tynjä
@tommysdk

Configure datasources in JBoss AS 7.1

JBoss released their latest application server, 7.1.0, last week. It is a full-fledged Java EE 6 server, certified for the full Java EE 6 profile. As I wanted to start working with the application server right away, I downloaded it, and the first thing I needed to do was to configure my datasource. Setting up the datasources was simple and I'll cover the steps in the process in this blog post.

I’ll use Oracle as an example in this blog post, but the same procedure worked with the other JDBC datasources I tried. First, add your driver as a module to the application server (if it does not already exist). Go to JBOSS_ROOT/modules and add a directory structure appropriate for your driver, such as: com/oracle/ojdbc14/main (note the main directory at the end) which gives you the path: JBOSS_ROOT/modules/com/oracle/ojdbc14/main.

In the main directory, add your driver jar-file and create a new file named module.xml. Add the following content to the module.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="com.oracle.ojdbc14">
    <resources>
        <resource-root path="the_name_of_your_driver_jar_file.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

In the module.xml file, the module name attribute has to match the directory structure, and the resource-root path should point to the name of the driver jar-file.

As I wanted to run JBoss in standalone mode, I edited JBOSS_ROOT/standalone/configuration/standalone.xml and added my datasource under the datasources subsystem, such as:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  <datasources>
    <datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS" enabled="true" jta="true" use-java-context="true" use-ccm="true">
      <connection-url>jdbc:oracle:thin:@my_url:my_port</connection-url>
      <driver>oracle</driver>
      ... other settings
    </datasource>
  </datasources>
  ...
</subsystem>

Within the same subsystem, in the drivers tag, specify your driver module:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  ...
  <drivers>
    <driver name="oracle" module="com.oracle.ojdbc14">
      <driver-class>oracle.jdbc.OracleDriver</driver-class>
    </driver>
  </drivers>
</subsystem>

Note that the driver tag in your datasource should point to the driver name in standalone.xml, and that your driver module should point to the name of the module in the appropriate module.xml file.

Please also note that if your JDBC driver is not JDBC4 compatible, you need to specify your driver-class within the driver tag, otherwise it can be omitted.

Now you have your datasources set up, and when you start your JBoss AS 7.1 instance you should see it start without any datasource related errors, with logging output along the lines of the following:
org.jboss.as.connector.subsystems.datasources (ServerService Thread Pool — 27) JBAS010403: Deploying JDBC-compliant driver class oracle.jdbc.OracleDriver
org.jboss.as.connector.subsystems.datasources (MSC service thread 1-1) JBAS010400: Bound data source [java:jboss/datasources/OracleDS]
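Once the server is up, the datasource can be injected into your components through its JNDI name. A minimal sketch using the standard @Resource annotation; the repository class itself is made up for illustration:

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class OracleRepository {

    @Resource(mappedName = "java:jboss/datasources/OracleDS")
    private DataSource dataSource;

    public void doWork() throws SQLException {
        Connection connection = dataSource.getConnection();
        try {
            // ... use the connection
        } finally {
            connection.close();
        }
    }
}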

Tommy Tynjä
@tommysdk

Obey the DRY principle with code generation!

In the current project I'm in, we mostly use XML for data interchange between systems. Some of the XML schemas which have been handed to us (final versions) unfortunately violate the DRY (Don't repeat yourself) principle. As we make use of Apache XMLBeans to generate the corresponding Java representations, this gives us a lot of object duplicates. ArbitraryElement in schema1.xsd and ArbitraryElement in schema2.xsd look identical in XML, but are defined once in each XSD, which makes XMLBeans generate duplicate objects, one for each occurrence in the schemas. This is of course the expected outcome, but it's not what we want on our hands. We have a lot of business logic surrounding the assembly of the XML content, so we would like to use the same code to assemble ArbitraryElement, whether it's the one from schema1.xsd or schema2.xsd. Implementing the business logic with Java's type safety would inevitably lead to Java code duplication. The solution I crafted for this was to use a simple code generator to generate the duplicate code.

First, I refactored the code to be duplicated into its own class file, which only contains logic that is to be duplicated. I then wrote a small code generator using Apache Commons IO to read the contents of the class and replace the package names, specific method parameters and other appropriate parts. Here is a simple example:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.commons.io.IOUtils;

public class GenerateDuplicateXmlCode {
  private static final String PATH = "src/my/source/path/"; // hypothetical path and class names,
  private static final String SOURCE = "ArbitraryElementAssembler"; // adjust to your project
  private static final String TARGET = "ArbitraryElementAssembler2";

  public static void main(final String args[]) throws IOException {
    String code = IOUtils.toString(new FileInputStream(new File(PATH + SOURCE + ".java")), "UTF-8");
    code = code.replace("my.package.schema1.", "my.package.schema2."); // Replace imports
    code = code.replace("public class " + SOURCE, "public class " + TARGET); // Replace class name
    code = code.replace("Schema1TYPE", "Schema2TYPE"); // Replace method parameter type

    if (code.contains("/**")) // Add Javadoc if present
      code = code.replace("/**", "/**\n * Code generated from " + SOURCE + " by " + GenerateDuplicateXmlCode.class.getSimpleName() + "\n * ");

    IOUtils.write(code, new FileOutputStream(new File(PATH + TARGET + ".java")), "UTF-8");
  }
}

The example above can easily be extended to include more advanced operations and to produce more output files.

Now we just need to implement the business logic once, and for each addition/bug fix etc in the template class (SOURCE in the example above), we just run the code generator and get the equivalent classes with business logic for the duplicate XML object classes, which we check into source control. And yes, the generated classes are test covered!

 

Tommy Tynjä
@tommysdk

Metrics, metrics everywhere with Graphite

What useful metrics does your application provide and how accessible are they?
In my experience, metrics are often bolted onto an application by operations just before going live, or maybe even afterwards, when you start experiencing strange problems and realize that the only way of knowing how the application performs is looking at CPU usage and the like. Even though CPU, IO and memory usage can be very helpful for ops, they are probably not very useful metrics when looking at how your application performs in business terms.

You need to build metrics into your application, and it should be as natural and common as any other logging you put there. Live metrics and stats presented in appealing graphs are priceless feedback for practically everybody in the organisation, from operations and development to marketing, sales and even executives. Since all those people have very different views on what useful metrics are, you need to start pushing out metrics for everything. You never know when you need it, and since it is so easy there's really no excuse for not doing it. With very little effort you can be the graphing hero, and hopefully cool dashboards with customized live metrics graphs will start to pop up everywhere.

Install Graphite

Graphite is a cool little project that allows you to collect/aggregate metrics and, in a very easy and flexible way, create customized real time graphs on demand. It is a Python/Django app with a web front end that hooks into Apache. The data aggregator is called Carbon and is essentially a Python daemon that slurps data from a UDP port. The whole "package" can be a bit tricky to install (at least when you are on RHEL); it depends on some image processing libraries and such, but you will get it done in an hour or two at the most, just follow the install instructions. Needless to say, it must be installed on a server that is accessible from where the applications are running so they can push metrics to it on a UDP port, but I'm sure there's one lying around running some old monitoring tools or something. There are default examples of all config files, so once all the Python packages and dependencies are installed you will be up n' running in no time and can start to push metrics to Carbon.

Start pushing metrics

The way you push data to Carbon is extremely easy: just send a UDP packet (UDP for low cost fire-and-forget communication) like this:

node-123.myCoolApplication.environment.activeSessions 87 1320316143

The first part is a unique metric key, which in a clustered environment also should include the node identifier. The second part is the actual metric value, so in this case there are 87 active sessions. The last part is a timestamp.

This kind of metric should preferably be pushed regularly with some scheduling utility, like Quartz or similar (see the sketch below for a plain JDK variant), but you can of course also push metrics as events of business transactions, like this:

node-123.myCoolApplication.service.buyBook.success 1 1320316143

In this case I push the metric for the event of 1 book being sold successfully. These metrics will be scattered in time but nevertheless very useful when you look at them cumulatively for trends or compare them with other technical metrics.
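For the regularly pushed kind of metric, a plain ScheduledExecutorService from the JDK does the trick if you don't want to pull in Quartz. A minimal sketch, using the GraphiteLogger utility class shown further down; the session counting is a made-up placeholder:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MetricsPusher {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final GraphiteLogger graphiteLogger = GraphiteLogger.getDefaultLogger();

    public void start() {
        // Push the current number of active sessions once a minute
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                graphiteLogger.logMetric("myCoolApplication.environment.activeSessions",
                        countActiveSessions()); // hypothetical gauge
            }
        }, 0, 1, TimeUnit.MINUTES);
    }

    private long countActiveSessions() {
        return 0L; // placeholder for the actual session count
    }
}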

It is also very important that you measure failures, since they can provide powerful insights compared to other metrics. So in the buyBook service I would also push this metric every time it for some reason failed:

node-123.myCoolApplication.service.buyBook.failed 1 1320316143

My advice is to take a few minutes to think about a good naming convention for your metric keys, since it will have some impact on the way you can aggregate data and graph it later, and you don't want to change a key once you have started to measure it.

Here's a simple Java utility class that would do the trick:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GraphiteLogger {
    private static final Logger LOGGER = LoggerFactory.getLogger(GraphiteLogger.class);
    private String graphiteHost;
    private int graphitePort;
    private boolean enabled;
    private String nodeIdentifier;

    public static GraphiteLogger getDefaultLogger() {
        String gHost = "localhost"; // get it from application startup properties or something
        int gPort = 2003; // get it from application startup properties or something
        boolean enabled = true; // good thing to have an on/off switch in application config
        return new GraphiteLogger(gHost, gPort, enabled);
    }

    public GraphiteLogger(String graphiteHost, int graphitePort, boolean enabled) {
        this.enabled = enabled;
        this.graphiteHost = graphiteHost;
        this.graphitePort = graphitePort;
        try {
            this.nodeIdentifier = java.net.InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException ex) {
            LOGGER.warn("Failed to determine host name", ex);
        }
        if (this.graphiteHost == null || this.graphiteHost.length() == 0 ||
            this.nodeIdentifier == null || this.nodeIdentifier.length() == 0 ||
            this.graphitePort < 0 || !logToGraphite("connection.test", 1L))
        {
            LOGGER.warn("Failed to create GraphiteLogger, graphiteHost, graphitePort or nodeIdentifier could not be defined properly: " + about());
            this.enabled = false;
        }
    }

    public final String about() {
        return new StringBuffer().append("{ graphiteHost=").append(this.graphiteHost)
                .append(", graphitePort=").append(this.graphitePort)
                .append(", nodeIdentifier=").append(this.nodeIdentifier).append(" }").toString();
    }

    public void logMetric(String key, long value) {
        logToGraphite(key, value);
    }

    public boolean logToGraphite(String key, long value) {
        Map<String, Long> stats = new HashMap<String, Long>();
        stats.put(key, value);
        return logToGraphite(stats);
    }

    public boolean logToGraphite(Map<String, Long> stats) {
        if (stats.isEmpty()) {
            return true;
        }

        try {
            logToGraphite(nodeIdentifier, stats);
        } catch (Throwable t) {
            LOGGER.warn("Can't log to graphite", t);
            return false;
        }
        return true;
    }

    private void logToGraphite(String nodeIdentifier, Map<String, Long> stats) throws Exception {
        Long curTimeInSec = System.currentTimeMillis() / 1000;
        StringBuffer lines = new StringBuffer();
        for (Entry<String, Long> stat : stats.entrySet()) {
            String key = nodeIdentifier + "." + stat.getKey();
            lines.append(key).append(" ").append(stat.getValue()).append(" ").append(curTimeInSec).append("\n"); // newline even on the last line, as Graphite expects
        }
        logToGraphite(lines);
    }

    private void logToGraphite(StringBuffer lines) throws Exception {
        if (this.enabled) {
            LOGGER.debug("Writing [{}] to graphite", lines.toString());
            byte[] bytes = lines.toString().getBytes();
            InetAddress address = InetAddress.getByName(graphiteHost);
            DatagramPacket packet = new DatagramPacket(bytes, bytes.length, address, graphitePort);
            DatagramSocket dsocket = new DatagramSocket();
            try {
                dsocket.send(packet);
            } finally {
                dsocket.close();
            }
        }
    }
}

Just as easily as you log info and debug messages to your logging framework of choice, you can now use this to push technical and business metrics to Graphite from anywhere in your app:

public class BookService {
    private static final GraphiteLogger GRAPHITELOGGER = GraphiteLogger.getDefaultLogger();

    public void buyBook(..) {
        try {
            // do your service stuff
        } catch (ServiceException e) {
            // do your exception handling
            GRAPHITELOGGER.logMetric("bookstore.service.buyBook.failed", 1L);
            return; // don't also count this invocation as a success
        }
        GRAPHITELOGGER.logMetric("bookstore.service.buyBook.success", 1L);
    }
}

Start Graphing

Now when you have got Graphite up n' running and your app is pushing all sorts of useful metrics to it, you can start with the fun part: graphing! Graphite comes with a web front end for elaborating with graphs; just browse to it on the installed Apache (it defaults to the document root). There you can browse your metric keys and create graphs in a graph composer, apply misc functions and rendering options etc. From here you can also access the documentation and some experimental features for flot and events.

However, the really useful interface Graphite provides is the URL for rendering a graph on demand, e.g. this URL:

http://localhost:8000/render?target=keepLastValue(integral(sum(usbeta13.epsos-web.service.*.failed.*)))&target=keepLastValue(integral(sum(usbeta13.epsos-web.service.*.success.*)))&from=20111024

This gives you a PNG image of a graph of the sum of all service calls (success and failed) accumulated over time from 2011-10-24.

Yes, it is that easy!

There's also a great number of functions you can apply to your data, e.g. integral, cumulative, sum, average, max, min, etc., and there are also a lot of parameters to customize the graph with colors, fonts, texts etc. So just go crazy and define all the graphs you can think of, put them on a self-refreshing webpage, embed them in a wiki or in some other dashboard mash-up you may already have.

And if you find the graphs a bit crude and want to do something more fancy, you can just pull the raw data by adding these parameters to the URL:

&rawData=true&format=csv

And then use your favorite graph tool and do whatever cool tricks you want. The formats available are raw | csv | json. A cool thing to try would be to pull the raw data in json format into a Grails app and do some cool eye-candy charts with Google Charts. I'll put that on my list of cool-things-to-try.

Find the useful graphs

Now you have all the tools in place to make really useful dashboards about your application's real business performance, in addition to its technical performance. You can graph all kinds of interesting stuff in real time and compare metrics that can give you very valuable insight. Let's say you are running a business with a site of some sort and you want to see the business impact of newly released features: make sure you push a metric to Graphite when you deploy, and then graph deploys versus whatever business metric you are interested in (e.g. sold books). Hopefully you will see a boost after each deploy that contains new cool features, and if not, maybe you have something to think about. Like this you can combine technical metrics and business value metrics to see patterns and trends, which can be really useful for a lot of people in the organisation.

Make them visible

Put the graphs on the biggest displays you can find, in a place where as many people as possible can see them. Make sure they are updated frequently enough to provide real-time information, and continuously improve: create new graphs and remove old ones that weren't really useful. If you don't have access to big dashboard displays, maybe instead write a small script that will pick useful graphs on a daily basis and email them throughout the company; just be sure to spread the knowledge that the graphs provide.

And again, don't forget to measure failures. Many times, just visualizing the problems to everyone, in a sometimes painful way, will give a boost in quality, because nobody wants to be the bad guy and everybody wants to be a hero like you!

Andreas Rehn
@andreasrehn

An introduction to Java EE 6

Enterprise Java is really taking a giant leap forward with its latest specification, Java EE 6. What earlier required (more or less) third party frameworks to achieve is now available straight out of the box in Java EE. EJBs for example have gone from being cumbersome and complex to easy and lightweight, without compromises in functionality. For the last years, every single project I've been working on has in one way or another incorporated the Spring framework, especially its dependency injection (IoC) framework. One of the best things with Java EE 6 in my opinion is that Java EE now provides dependency injection straight out of the box, through the CDI (Contexts and Dependency Injection) API. With this easy to use, standardized and lightweight framework I can now see how many projects can actually move away from being dependent on Spring for this simple reason. CDI is not enabled by default; to enable it you need to put a beans.xml file in the META-INF folder of your module (WEB-INF for web archives; the file can be empty though). With CDI enabled you can just inject your dependencies with the javax.inject.Inject annotation:

@Stateless
public class MyArbitraryEnterpriseBean {

   @Inject
   private MyBusinessBean myBusinessBean;

}

Also note that the above POJO is actually a stateless session bean thanks to the @Stateless annotation! No mandatory interfaces or ejb-jar.xml are needed.

Working with JSF and CDI is just as simple. Imagine that you have the following bean, where the javax.inject.Named annotation marks it as a CDI bean:

@Named("controller")
public class ControllerBean {

   public void happy() { ... }

   public void sad() { ... }
}

You could then invoke the methods from a JSF page like this:

<h:form id="controllerForm">
   <fieldset>
      <h:commandButton value=":)" action="#{controller.happy}"/>
      <h:commandButton value=":(" action="#{controller.sad}"/>
   </fieldset>
</h:form>

Among the other nice features of Java EE 6 is that EJBs are now allowed to be packaged inside a war archive. That alone can definitely save you from packaging headaches. Another step in making Java EE lightweight.

If you are working with servlets, there is good news for you. The notorious web.xml is now optional, and you can declare a servlet as easily as:

@WebServlet("/myContextRoot")
public class MyServlet extends HttpServlet {
   ...
}

To start playing with Java EE 6 with the use of Maven, you could just do mvn archetype:generate and select one of the jee6-x archetypes to get yourself a basic Java EE 6 project structure, e.g. jee6-basic-archetype.

Personally I believe Java EE 6 is breaking new ground in terms of enterprise Java. Java EE 6 is what J2EE was not: easy, lightweight, flexible and straightforward, and it has a promising future. Hopefully Java EE will from now on be the natural choice when building applications, over the option of depending on a wide selection of third party frameworks, as has been the case in the past.

 

Tommy Tynjä
@tommysdk

Application startup order in IBM WebSphere Application Server

If you are hosting an application server with multiple applications deployed and one of them is dependent on another, you might want to configure in what order they start. Typical use cases would be assuring that e.g. the server side of a web service is up before a client is available, or to assure that resources have been initialized into JNDI.

In IBM WebSphere Application Server (6.1) this has to be specified through container specific configuration. You need to make sure the application dependent on another has a higher startup order value than the one it depends on. You can set this through the management console under Applications > Enterprise Applications > MY APPLICATION > Startup behaviour > General Properties > Startup order. It is also possible to specify this in the IBM WebSphere deployment.xml deployment descriptor, by adding a startingWeight XML attribute to the deployedObject tag of the application with dependencies. Below is an example where the startup order has been set to the arbitrary value of 97:

<appdeployment:Deployment xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI"
         xmlns:appdeployment="http://www.ibm.com/websphere/appserver/schemas/5.0/appdeployment.xmi"
         xmi:id="Deployment_1">
   <deployedObject xmi:type="appdeployment:ApplicationDeployment" xmi:id="ApplicationDeployment_1"
            deploymentId="0" startingWeight="97" binariesURL="$(APP_INSTALL_ROOT)/node/myapplication.ear"
            useMetadataFromBinaries="false" enableDistribution="true" createMBeansForResources="true"
            reloadEnabled="false" appContextIDForSecurity="href:node/myapplication"
            backgroundApplication="false" filePermission=".*\.dll=755#.*\.so=755#.*\.a=755#.*\.sl=755"
            allowDispatchRemoteInclude="false" allowServiceRemoteInclude="false">
      ... other configuration omitted
   </deployedObject>
   <deploymentTargets xmi:type="appdeployment:ServerTarget" xmi:id="ServerTarget_1" nodeName="node"/>
</appdeployment:Deployment>

After the configuration has been saved, the next time you restart your server, the applications will be started in the desired order.

 

Tommy Tynjä
@tommysdk

BTrace can save your day

Today I had a problem with a scheduled job in an application deployed on GlassFish. The execution time was too long, but I could not find out how long. The system was deployed in a test environment and making changes to the code to log execution times was possible, but the roundtrip time to do the changes, rebuild and deploy is always a little bit too long.

So I saw the chance to use some BTrace scripts instead. BTrace is a great tool for instrumenting your classes at runtime, without any restart of your application. It simply connects to the JVM process, installs the BTrace javaagent at runtime, compiles your BTrace script and instruments the classes specified in it.

So I downloaded BTrace from here and installed it on the test server.

Created the Java class MethodTimer.java:

import static com.sun.btrace.BTraceUtils.*;
import com.sun.btrace.annotations.*;

@BTrace public class MethodTimer {
   // store entry time in thread local
   @TLS private static long startTime;

   @OnMethod(
     clazz="foo.bar.TheClass",
     method="doWork"
   )
   public static void onMethodEntry() {
       startTime = timeMillis();
   }

   @OnMethod(
    clazz="foo.bar.TheClass",
    method="doWork",
    location=@Location(Kind.RETURN)
   )
   public static void onMethodReturn() {
       println(strcat("Time taken (msec) ", str(timeMillis() - startTime)));
       println("==========================");
   }
}

Then I just issued the command:

btrace <pid> MethodTimer.java

Then my little BTrace script prints the execution time for every invocation of foo.bar.TheClass.doWork, without any recompilation or redeployment of the application.