Tag Archives: java

JavaOne Latin America summary

Last week I attended the JavaOne Latin America software development conference in São Paulo, Brazil. This was a joint event with Oracle Open World, a conference focused on Oracle solutions. I was accepted as a speaker for the Java, DevOps and Methodologies track at JavaOne. This article summarizes my main takeaways from the event.

The main point Oracle made during the opening keynote of Oracle Open World was its commitment to cloud technology. Oracle CEO Mark Hurd predicted that in 10 years, most production workloads will be running in the cloud. Oracle currently engineers all of its solutions for the cloud, which its on-premise solutions can benefit from as well. Security is a very important aspect of all of its solutions. Quite a few sessions during JavaOne showcased Java applications running on Oracle's cloud platform.

The JavaOne opening keynote demonstrated Java Flight Recorder, which enables you to profile your applications with near-zero overhead. The aim of the flight recorder is to keep overhead as low as possible so that it can be enabled in production systems by default. It has a user interface which provides valuable data when you want to understand how your Java applications perform.
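
I did not note the exact commands used in the demo, but on the Oracle JDK of that era a recording could typically be started along these lines (MyApplication and the file name are placeholders, and the flags may differ between JDK versions):

java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder MyApplication
jcmd <pid> JFR.start duration=60s filename=recording.jfr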

The next version of Java, Java 9, will feature long-awaited modularity through project Jigsaw. Today, all of the public APIs in your source code packages are exposed and can be used by anyone who has access to the classes. With Jigsaw, you as a developer can decide which packages are exported and exposed. This gives you more control over which APIs are internal and which are intended for external use. This was quickly demonstrated on stage. You will also have the possibility to decide which parts of the JDK your application depends on. For instance, it is quite unlikely that you need UI-related libraries such as Swing, or audio support, if you develop back-end software. I once tried to dissect the Java rt.jar for this particular purpose (just for fun), so it is nice to finally see this becoming a reality. The keynote also mentioned project Valhalla (e.g. value types for classes) and project Panama, but at this point it is still uncertain whether they will be included in Java 9.
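
As a rough sketch of the concept (module and package names are made up, and the final Jigsaw syntax was still subject to change at the time), a module declaration could look something like this:

module com.example.orders {
    requires java.sql;              // explicit dependency on another module
    exports com.example.orders.api; // only this package is part of the public API
    // com.example.orders.internal is not exported and stays internal to the module
}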

Mark Heckler from Pivotal had an interesting session on the Spring framework and the Spring Cloud projects. Pivotal works closely with Netflix, a well-known open source contributor and one of the industry leaders when it comes to developing and adopting new technologies. However, since Netflix is committed to running its applications on AWS, Spring Cloud aims to build on Netflix's work to create portable solutions for a wider community by introducing appropriate abstraction layers. This enables Spring Cloud users to avoid changing their code if there is a need to change the underlying technology. Spring Cloud has support for Netflix tools such as Eureka, Ribbon, Zuul and Hystrix, among others.

Arquillian, considered by many to be the best integration testing framework for Java EE projects, got a dedicated one-hour session at the event. As a long-time contributor to the project, it was nice to see a presentation about it and how its Cube extension can make use of Docker containers to execute your tests against.

One interesting session was a non-technical one, also given by Mark Heckler, which focused on financial equations and what factors drive business decisions. The purpose was to give an understanding of how businesses calculate paybacks and return on investment, and when and why it makes sense for a company to invest in new technology. For instance, the payback period for an initiative should be as short as possible, preferably less than a year. The presentation also covered net present values and quantifications. Transforming a monolithic architecture to a microservice-style approach was the example used for the calculations. The average release cadence for key monolithic applications in our industry is just one release per year, which in many cases is even optimistic! Compare these numbers with Amazon, who have developed their applications with a microservice architecture since 2011. Their average release cadence is 7448 releases a day, which means that they perform a production release roughly once every 11.6 seconds! This presentation certainly laid a good foundation for my own session on continuous delivery.

Then it was finally time for my presentation! When I arrived 10 minutes before the talk there was already a long line outside of the room, and it filled up nicely. In my presentation, The Road to Continuous Delivery, I talked about how a Swedish company revamped its organization to implement a completely new distributed system running on the Java platform, using continuous delivery ways of working. I started with a quick poll of how many in the room were able to perform production releases every day, and I was apparently the only one. So I'm glad that there was a lot of interest in this topic at the conference! If you are interested in having me present this or another talk on the topic at your company or at an event, please contact me! It was great fun to give this presentation and I got some good questions and interesting discussions afterwards. You can find the slides for my presentation here.

Java Virtual Machine (JVM) architect Mikael Vidstedt had an interesting presentation about JVM insights. It is apparently not uncommon that String objects take up a large share of the Java heap footprint (25-50% is not unusual), and many of them have the same value. JDK 8 however introduced an important JVM improvement to address this memory footprint concern: string deduplication in the G1 garbage collector.
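
If I recall correctly, string deduplication is available from JDK 8u20 and must be enabled explicitly together with G1, along these lines (double-check the flags against your JDK version; MyApplication is a placeholder):

java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics MyApplication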

I was positively surprised that many sessions were quite non-technical. A few sessions talked about possibilities with the Raspberry Pi embedded device, but there were also a couple of presentations that touched on software development in a broader sense. I think it is good and important for conferences of this size to have such a good balance in content.

All in all I thought it was a good event. JavaOne and Oracle Open World combined drew around 1300 attendees and featured a big exhibition hall. JavaOne was multi-track, but most sessions were in Portuguese. However, there was usually one track dedicated to sessions in English, which obviously limited the available content somewhat for non-Portuguese-speaking attendees. An interesting feature was that the sessions held in English were translated live into Portuguese for those who borrowed headphones for that purpose.

The best part of the conference was that I got to meet a lot of people from many different countries. It is great fun to meet colleagues from all around the world who share my enthusiasm and passion for software development. The effort to make this industry better continues!

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

Jfokus 2015 main takeaways

Jfokus, the biggest Java conference in Sweden, wrapped up its three-day run on Wednesday. This was my fifth consecutive Jfokus and I thought it was the best in recent years. In this blog post I'll summarize my main takeaways.

During the first day, the tutorial day, I truly enjoyed Ken Sipe from Mesosphere and his talk “Docker at Production Scale”. It was supposed to be a tutorial but it wasn't focused on one subject, which one might expect from the title. The talk was divided into several parts, where he first talked about infrastructure in general and what's problematic in today's datacenters regarding resource consumption etc. He then gave a good intro to Docker, where even I, who have been using Docker on a day-to-day basis during the last two months, learned a couple of new things. The last part of the talk focused on Apache Mesos, what problems it solves and how. He also briefly touched on subjects such as service discovery. For me, the main takeaway from this talk is that the way we structure our datacenters and our applications is definitely changing, and that there are a lot of exciting technologies taking shape in this area right now.

Christian Heilmann held an interesting opening keynote where he talked a lot about mobile apps and their profitability. He pointed out that many think apps are a sure way to make good money, but in reality that is not the case. Very few of all the apps in the marketplace are actually that profitable. Your company will have a greater chance of making money from apps if you target enterprise customers instead of individuals. Enterprises are constantly looking for ways for their employees to be more efficient, and certain apps could possibly help them achieve that. Individuals are also very conservative nowadays when it comes to installing new apps. Most people just go with the apps they have and install at most one new app per month. He had some interesting figures backing up those statements.

A great talk was “Thinking fast and slow in software development” by Daniel Bryant. His talk was based on the book Thinking, Fast and Slow by Daniel Kahneman, but applied to software development. One good quote from this talk was “look for actual problems instead of solutions”. As developers we tend to start elaborating solutions instead of trying to understand the actual problem. Are we really understanding what the actual problem is? Or do we just think we know what the problem is, when in fact we do not? He also mentioned that some of the most common factors behind failures in software projects (source: IEEE) are poor communication among customers, developers and users, the use of immature technology and sloppy development practices. These are things that we definitely could do better at in our industry! There is no reason why we should accept this happening. We should all care more about software craftsmanship.

Jeremy Deane held an interesting presentation on concurrent processing techniques using e.g. plain java.util.concurrent utilities and actors. He had a few good tips on how to decouple a web service which under the hood depends on a slow-responding third-party web service, by making the communication between them asynchronous; a sketch of the general idea follows below. All of his examples can be found in his GitHub repo.
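
These are not his examples, but a minimal sketch of the general idea using plain java.util.concurrent from Java 8 (the ThirdPartyClient interface and all names here are made up for illustration):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncQuoteService {

    // Stand-in for the slow third-party web service client (made up)
    public interface ThirdPartyClient {
        String fetchQuote(String isbn);
    }

    private final ExecutorService pool = Executors.newFixedThreadPool(10);
    private final ThirdPartyClient client;

    public AsyncQuoteService(ThirdPartyClient client) {
        this.client = client;
    }

    // The caller gets a future back immediately instead of being blocked by the slow call
    public CompletableFuture<String> fetchQuoteAsync(String isbn) {
        return CompletableFuture.supplyAsync(() -> client.fetchQuote(isbn), pool);
    }
}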

As usual at Jfokus, Arun Gupta presented what's to come in the next Java EE version, in this case Java EE 8. The focus will be on improving the new features introduced in Java EE 7 to make them more usable. Support for HTTP/2 and HTML5 will be added to help those technologies gain traction. Servlet 4.0 will be introduced, as well as MVC 1.0. The JAX-RS, JMS and JSON APIs will also get facelifts. The Batch processing API will not be tied to Java EE only but will in the future be available in conjunction with Java SE as well. Obviously Java EE 8 will contain improvements for the new language features introduced in Java 8, such as functional interfaces etc.

The second day kicked off with a talk by Jez Humble called “21st Century Software Delivery”. He really put emphasis on how important continuous delivery is, along with continuous experimentation. As an industry, we are bad at experimenting. We try to build this big thing that we think our customers want (but which we don't really know). Instead we should try out the thing we're building in a small scale first, not only as a prototype but as a fully functional, nice and shiny feature that the customers will appreciate, but with very limited scope. We can then measure how well this experiment turns out by conducting A/B testing and comparing the analytics for this feature. Should we continue to build on top of that experiment or not? You wouldn't go out and construct a very large, complex building without building it in small scale first to assure that techniques and functionality match the expectations. We should do that in software development as well.

Jez Humble's second talk, on automated acceptance testing, was valuable. For me, being passionate about delivering quality software, there wasn't much new in his content, but it was still refreshing to be reminded of why we actually want to do testing in a certain way, even if you might do it already. Most people have probably encountered flaky tests: those tests that for one reason or another go red with or without an apparent reason, just to go green in a later build containing a totally unrelated code change. Flaky tests lead to less confidence and trust in the tests, which eventually leads to people ignoring them or not even running them at all. One interesting technique to battle flaky tests is to move known flaky tests to a separate suite, which is run separately (preferably after) your actual suite; a sketch of how that could look follows below. Then you can remain confident about your test suite always being reliable. When a flaky test has become stable again, it should move back into the main suite. It is important to remember that a test suite shouldn't be static. Tests should come and go and be moved appropriately alongside the codebase development. Do also not be afraid to actually delete tests that no longer bring value. We are horrible at identifying and actually removing those tests.
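
He did not prescribe a specific mechanism, but with JUnit 4 a quarantine could be modelled roughly like this (the Flaky marker and the test classes are made up for illustration):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

public class FlakyTestQuarantine {

    // Marker interface used to tag known flaky tests
    public interface Flaky {}

    public static class StableTest {
        @Test public void shouldAlwaysPass() {}
    }

    @Category(Flaky.class) // quarantined until it has been stabilized
    public static class UnreliableTest {
        @Test public void sometimesFailsForNoApparentReason() {}
    }

    // Main suite: everything except the quarantined tests
    @RunWith(Categories.class)
    @ExcludeCategory(Flaky.class)
    @SuiteClasses({StableTest.class, UnreliableTest.class})
    public static class StableSuite {}

    // Quarantine suite, run separately (preferably after) the stable suite
    @RunWith(Categories.class)
    @IncludeCategory(Flaky.class)
    @SuiteClasses({StableTest.class, UnreliableTest.class})
    public static class QuarantineSuite {}
}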

In between talks I also enjoyed meeting a lot of people I've worked with in various projects throughout my career. Always a real pleasure to catch up with old friends as well as getting to know a few new ones. I also had a brief chat with Jez Humble regarding continuous delivery and strategies for balancing between maintenance and development of new features in highly autonomous teams with end-to-end responsibility, a discussion which was very much appreciated. Even though the vast majority of sessions were awesome, I almost wish there had been a few more breaks as well, so I could have had the opportunity for even more socializing.

All in all, a great event. Thanks to everyone involved in organizing this event and to all the speakers and attendees. See you in 2016!

Tommy Tynjä
@tommysdk

Writing integration tests with an in-memory Mongo DB

As I mentioned in my previous post, I've been working closely with the document-oriented NoSQL database Mongo DB lately. As an advocate of sustainable systems development (test-driven development, that is), I took the lead in our team in designing tests for our business logic towards Mongo. For relational databases there are a lot of options; a common solution for testing against relational databases is to use an in-memory database such as H2. For NoSQL databases the options are not always as generous from an automated test perspective. Luckily, we found an in-memory version of Mongo DB from Flapdoodle which is easy to use and fits our use case perfectly. If your (Java) code base relies on Maven, just add the following dependency:

<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <version>1.27</version>
    <scope>test</scope>
</dependency>

Then, in your test class (here based on JUnit), use the provided classes to start an in-memory Mongo DB, preferably in a @Before method, and tear down the instance in an @After method, as in the example below:

public class TestInMemoryMongo {

    private static final String MONGO_HOST = "localhost";
    private static final int MONGO_PORT = 27777;
    private static final String IN_MEM_CONNECTION_URL = MONGO_HOST + ":" + MONGO_PORT;

    private MongodExecutable mongodExe;
    private MongodProcess mongod;
    private Mongo mongo;

    /**
     * Start in-memory Mongo DB process
     */
    @Before
    public void setup() throws Exception {
        MongodStarter runtime = MongodStarter.getDefaultInstance();
        mongodExe = runtime.prepare(new MongodConfig(Version.V2_0_5, MONGO_PORT, Network.localhostIsIPv6()));
        mongod = mongodExe.start();
        mongo = new Mongo(MONGO_HOST, MONGO_PORT);
    }

    /**
     * Shutdown in-memory Mongo DB process
     */
    @After
    public void teardown() throws Exception {
        if (mongod != null) {
            mongod.stop();
            mongodExe.stop();
        }
    }

    @Test
    public void shouldAssertSomeInteractionWithMongo() {
        // Create a connection to Mongo using the IN_MEM_CONNECTION_URL property
        // Execute some business logic towards Mongo
        // Assert the expected behaviour
    }

    protected MongoPersistenceContext getMongoPersistenceContext() {
        // Returns an instance of the class containing business logic towards Mongo
        return new MongoPersistenceContext();
    }
}

… then you have an in-memory Mongo DB available for your test cases. The private member named “mongo” in the example above is of type com.mongodb.Mongo, which you can use for interaction with the Mongo database. As you notice, all you need is basically JUnit and Mongo, nothing more. But life is not always as easy as the simplest examples. In our case, we make use of EJBs and other components which rely on running inside a container. As I'm a contributor to the JBoss Arquillian project, a Java framework for integration tests, I was obviously curious about making the in-memory Mongo DB available when executing tests inside the container. If you use Arquillian already like we do, the transition is smooth. Just make sure to put the in-memory Mongo and the Java driver on your classpath. The following Arquillian test extends the JUnit test class above, but with a bean injection for the business class under test:

@RunWith(Arquillian.class)
public class TestInMemoryMongoWithArquillian extends TestInMemoryMongo {

    @Deployment
    public static Archive getDeployment() {
        return ShrinkWrap.create(WebArchive.class, "mongo.war")
                .addClass(MongoPersistenceContext.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml")
                .addAsWebInfResource(new StringAsset("<web-app></web-app>"), "web.xml")
                .addAsLibraries(DependencyResolvers.use(MavenDependencyResolver.class)
                        .artifact("de.flapdoodle.embed:de.flapdoodle.embed.mongo:jar:1.27")
                        .artifact("org.mongodb:mongo-java-driver:jar:2.9.1")
                        .resolveAs(JavaArchive.class));
    }

    @Inject MongoPersistenceContext mpc;

    @Override
    protected MongoPersistenceContext getMongoPersistenceContext() {
        return mpc;
    }
}

I’ve uploaded a full example project, based on Java 7, Maven, Mongo DB, Arquillian and Apache TomEE (embedded) on GitHub to demonstrate how to use an in-memory Mongo DB in a unit test as well as with Arquillian inside TomEE. This should serve as a good base when starting to write automated tests for your business logic towards Mongo.

 

Tommy Tynjä
@tommysdk

Writing integration tests in Java with Arquillian

Arquillian is a JBoss project that focuses on integration testing for Java. It's an Open Source project I'm contributing to and using on the current project I'm working on. It lets you write integration tests just as you would write unit tests, but it adds some very important features into the mix. Arquillian actually lets you execute your test cases inside a target runtime, such as your application server of choice! It also lets you make use of dependency injection to test your services directly in your integration test. Arquillian comes with a whole bunch of adapters for different runtimes, such as JBoss 7, 6 and 5, GlassFish 3, Weld, WebSphere and even Selenium. So if you're working with a common runtime, it is probably already supported! If your runtime is not supported, you could contribute an adapter for it! What Arquillian basically needs to know is how to manage your runtime: how to start and stop it and how to do deployments and undeployments. Just put the adapter for your runtime on your classpath and Arquillian will handle the rest.

Let's get down to business. What does an Arquillian test look like? Imagine that we have a very basic EJB which we want to integration test:

@Stateless
public class WeekService {
  public String weekOfYear() {
      return Integer.toString(Calendar.getInstance().get(Calendar.WEEK_OF_YEAR));
  }
}

This EJB provides one single method which returns what week of the year it is. Sure, we could just as well unit test the functionality of this EJB, but imagine that we might want to extend this bean with more functionality later on and that we want to have a proper integration test already in place. What would an Arquillian integration test look like that asserts the behaviour of this bean? With JUnit, it would look something like this:

@RunWith(Arquillian.class)
public class WeekServiceTest {

    @Deployment
    public static Archive deployment() {
        return ShrinkWrap.create(JavaArchive.class, "week.jar")
                .addClass(WeekService.class);
    }

    @EJB WeekService service;

    @Test
    public void shouldReturnWeekOfYear() {
        Assert.assertNotNull(service.weekOfYear());
        Assert.assertEquals("" + Calendar.getInstance().get(Calendar.WEEK_OF_YEAR),
                service.weekOfYear());
    }
}

The first thing you notice is the @RunWith annotation at the top. This tells JUnit to let Arquillian manage the test. The next thing you notice is the static method that is annotated with @Deployment. This method creates a Java Archive (jar) representation containing the WeekService class which we want to test. The archive is created using ShrinkWrap, which lets you assemble archives through a fluent Java API. Arquillian will take this archive and deploy it to the target runtime before executing the test cases, and will undeploy the same archive from the runtime after the test execution. You are not forced to have a deployment method, as you might as well want to test something that is already deployed or available in your runtime. If you have a runtime that is already running somewhere, you can set up Arquillian to run in remote mode, which means that Arquillian expects the runtime to already be running, and just does deployments and undeployments in the current environment. You could also tell Arquillian to run in managed mode, where Arquillian expects to be able to start the runtime before the test execution, and will also shut down the runtime when the test execution completes. Some runtimes also come with an embedded mode, which means that Arquillian will run its own isolated runtime instance during the test execution.
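
The mode and other adapter settings typically go in an arquillian.xml file on the test classpath. As a rough sketch (the qualifier and the jbossHome property are from memory for the JBoss AS 7 managed adapter; property names vary between adapters, so check the adapter documentation):

<arquillian xmlns="http://jboss.org/schema/arquillian">
    <container qualifier="jbossas-managed" default="true">
        <configuration>
            <property name="jbossHome">/opt/jboss-as-7.0.2.Final</property>
        </configuration>
    </container>
</arquillian>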

Probably the biggest difference to a standard unit test in the above example is the @EJB annotation on the WeekService property. This actually lets you specify that you want an EJB dependency injected into your test case when executing your tests, and Arquillian will handle that for you! If you want to make sure that is the case, just remove the @EJB annotation and you will indeed get a NullPointerException when calling service.weekOfYear() in your test method. A feature which is going to be added in a future version of Arquillian is to fail fast if dependencies can't be fulfilled (instead of throwing NPEs).

The @Test annotated method handles the actual test logic, just like a unit test. The difference here is that the test is actually executed within your target runtime!

This was just a basic example of what you can do with Arquillian. The more you work with it, the more you will start to recognize different use cases where integration tests become a natural way of testing the intended behaviour.

The full source code of the above example is available on GitHub: https://github.com/tommysdk/showcase/tree/master/arquillian/. It also contains a more sophisticated example which uses CDI and JPA within an integration test. The examples run on a JBoss 7.0.2 runtime, which comes bundled with the example. For more details and a deeper dive into Arquillian, visit the project website at http://arquillian.org.

 

Tommy Tynjä
@tommysdk

Configure datasources in JBoss AS 7.1

JBoss released their latest application server, 7.1.0, last week, which is a full-fledged Java EE 6 server (certified for the full Java EE 6 profile). As I wanted to start working with the application server right away, I downloaded it, and the first thing I needed to do was to configure my datasource. Setting up the datasources was simple and I'll cover the steps in the process in this blog post.

I’ll use Oracle as an example in this blog post, but the same procedure worked with the other JDBC datasources I tried. First, add your driver as a module to the application server (if it does not already exist). Go to JBOSS_ROOT/modules and add a directory structure appropriate for your driver, such as: com/oracle/ojdbc14/main (note the main directory at the end) which gives you the path: JBOSS_ROOT/modules/com/oracle/ojdbc14/main.
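
Once the driver jar and the module.xml described below are in place, the layout will be along these lines (the jar file name is just a placeholder):

JBOSS_ROOT/modules/com/oracle/ojdbc14/main/module.xml
JBOSS_ROOT/modules/com/oracle/ojdbc14/main/the_name_of_your_driver_jar_file.jar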

In the main directory, add your driver jar-file and create a new file named module.xml. Add the following content to the module.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="com.oracle.ojdbc14">
    <resources>
        <resource-root path="the_name_of_your_driver_jar_file.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

In the module.xml file, the module name attribute has to match the directory structure, and the resource-root path should point to the driver jar file name.

As I wanted to run JBoss in standalone mode, I edited the JBOSS_ROOT/standalone/configuration/standalone.xml, where I added my datasource under the datasource subsystem, such as:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  <datasources>
    <datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS" enabled="true" jta="true" use-java-context="true" use-ccm="true">
      <connection-url>jdbc:oracle:thin:@my_url:my_port</connection-url>
      <driver>oracle</driver>
      ... other settings
    </datasource>
  </datasources>
  ...
</subsystem>

Within the same subsystem, in the drivers tag, specify your driver module:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  ...
  <drivers>
    <driver name="oracle" module="com.oracle.ojdbc14">
      <driver-class>oracle.jdbc.OracleDriver</driver-class>
    </driver>
  </drivers>
</subsystem>

Note that the driver tag in your datasource should point to the driver name in standalone.xml, and your driver module should point to the name of the module in the appropriate module.xml file.

Please also note that if your JDBC driver is not JDBC4 compatible, you need to specify your driver-class within the driver tag, otherwise it can be omitted.

Now you have your datasources set up, and when you start your JBoss AS 7.1 instance you should see it start without any datasource-related errors, with logging output along the lines of:
org.jboss.as.connector.subsystems.datasources (ServerService Thread Pool — 27) JBAS010403: Deploying JDBC-compliant driver class oracle.jdbc.OracleDriver
org.jboss.as.connector.subsystems.datasources (MSC service thread 1-1) JBAS010400: Bound data source [java:jboss/datasources/OracleDS]

Tommy Tynjä
@tommysdk

Obey the DRY principle with code generation!

In the current project I'm in, we mostly use XML for data interchange between systems. Some of the XML schemas which have been handed to us (final versions) are unfortunately violating the DRY (Don't repeat yourself) principle. As we make use of Apache XMLBeans to generate the corresponding Java representations, this gives us a lot of object duplicates. ArbitraryElement in schema1.xsd and ArbitraryElement in schema2.xsd seem identical in XML, but are defined once in each XSD, which makes XMLBeans generate duplicate objects, one for each occurrence in the schemas. This is of course the expected outcome, but it's not what we want to work with. We have a lot of business logic surrounding the assembly of the XML content, so we would like to use the same code to assemble ArbitraryElement, whether it's the one from schema1.xsd or schema2.xsd. Implementing the business logic with Java's type safety would inevitably lead to Java code duplication. The solution I crafted for this was to use a simple code generator to generate the duplicate code.

First, I refactored the code to be duplicated into its own class file, which only contains the logic that is to be duplicated. I then wrote a small code generator using Apache Commons IO to read the contents of the class and replace the package names, specific method parameters and other appropriate stuff. Here is a simple example:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.commons.io.IOUtils;

public class GenerateDuplicateXmlCode {
  // Example values, adjust to your own project layout and class names
  private static final String PATH = "src/main/java/my/package/";
  private static final String SOURCE = "ArbitraryElementAssembler";
  private static final String TARGET = "ArbitraryElementAssemblerForSchema2";

  public static void main(final String args[]) throws IOException {
    String code = IOUtils.toString(new FileInputStream(new File(PATH + SOURCE + ".java")), "UTF-8");
    code = code.replaceAll("my.package.schema1.", "my.package.schema2."); // Replace imports
    code = code.replaceAll("public class " + SOURCE, "public class " + TARGET); // Replace class name
    code = code.replaceAll("Schema1TYPE", "Schema2TYPE"); // Replace method parameter type

    if (code.contains("/**")) { // Add Javadoc if present
      code = code.replace("/**", "/**\n * Code generated from " + SOURCE + " by " + GenerateDuplicateXmlCode.class.getSimpleName() + "\n * ");
    }

    IOUtils.write(code, new FileOutputStream(new File(PATH + TARGET + ".java")), "UTF-8");
  }
}

The example above can easily be extended to include more advanced operations and to produce more output files.

Now we just need to implement the business logic once, and for each addition/bug fix etc in the template class (SOURCE in the example above), we just run the code generator and get the equivalent classes with business logic for the duplicate XML object classes, which we check into source control. And yes, the generated classes are test covered!

 

Tommy Tynjä
@tommysdk

Metrics, metrics everywhere with Graphite

What useful metrics does your application provide, and how accessible are they?
In my experience, metrics for an application are often bolted on by operations just before going live, or maybe even afterwards when you start experiencing strange problems and realize that the only way of knowing how the application performs is looking at CPU usage and stuff like that. Even though CPU, IO and memory usage can be very helpful for ops, they are probably not very useful metrics when looking at how your application performs in business terms. You need to build metrics into your application, and it should be as natural and common as any other logging you put there. Live metrics and stats presented in appealing graphs are priceless feedback for practically everybody in the organisation, ranging from operations, development, marketing and sales to executives. Since all those people have very different views on what useful metrics are, you need to start pushing out metrics for everything. You never know when you will need them, and since it is so easy there's really no excuse for not doing it. With very little effort you can be the graphing hero, and hopefully cool dashboards with customized live metrics graphs will start to pop up everywhere.

Install Graphite

Graphite is a cool little project that allows you to collect/aggregate metrics and, in a very easy and flexible way, create customized real-time graphs on demand. It is a Python/Django app with a web front end that hooks into Apache. The data aggregator is called Carbon and is essentially a Python daemon that slurps data from a UDP port. The whole “package” can be a bit tricky to install (at least when you are on RHEL); it depends on some image processing libraries and stuff, but you will get it done in an hour or two at the most, just follow the install instructions. Needless to say it must be installed on a server that is accessible from where the applications are running so they can push metrics to it on a UDP port, but I'm sure there's one lying around running some old monitoring tools or something. There are default examples of all config files, so once all the Python packages and dependencies are installed you will be up n' running in no time and can start to push metrics to Carbon.

Start pushing metrics

The way you push data to Carbon is extremely easy: just send a UDP packet (UDP for low-cost, fire-and-forget communication) like this:

node-123.myCoolApplication.environment.activeSessions 87 1320316143

The first part is a unique metric key which in a clustered environment also should include the node identifier. The second part is the actual metric value so in this case there are 87 active sessions. The last part is a timestamp.

This kind of metric should preferably be pushed regularly with some scheduling utility like Quartz or similar (a small scheduling sketch follows further down), but you can of course also push metrics as events of business transactions like this:

node-123.myCoolApplication.service.buyBook.success 1 1320316143

In this case I push the metric for the event of 1 book being sold successfully. These metrics will be scattered in time but are nevertheless very useful when you look at them cumulatively for trends or compare them with other technical metrics.

It is also very important that you measure failures, since they can provide powerful insights compared to other metrics. So in the buyBook service I would also push this metric every time it failed for some reason:

node-123.myCoolApplication.service.buyBook.failed 1 1320316143

My advice is to take a few minutes to think about a good naming convention for your metric keys, since it will have some impact on the way you can aggregate data and graph it later, and you don't want to change a key once you have started to measure it.

Here’s a simple java utility class that would do the trick:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GraphiteLogger {
    private static final Logger LOGGER = LoggerFactory.getLogger(GraphiteLogger.class);
    private String graphiteHost;
    private int graphitePort;
    private boolean enabled;
    private String nodeIdentifier;

    public static GraphiteLogger getDefaultLogger() {
        String gHost = "localhost"; // get it from application startup properties or something
        int gPort = 2003; // get it from application startup properties or something
        boolean enabled = true; // good thing to have an on/off switch in application config
        return new GraphiteLogger(gHost, gPort, enabled);
    }

    public GraphiteLogger(String graphiteHost, int graphitePort, boolean enabled) {
        this.enabled = enabled;
        this.graphiteHost = graphiteHost;
        this.graphitePort = graphitePort;
        try {
            this.nodeIdentifier = java.net.InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException ex) {
            LOGGER.warn("Failed to determin host name",ex);
        }
       if (this.graphiteHost==null || this.graphiteHost.length()==0 ||
           this.nodeIdentifier==null || this.nodeIdentifier.length()==0 ||
           this.graphitePort<0 || !logToGraphite("connection.test", 1L))
       {
            LOGGER.warn("Faild to create GraphiteLogger, graphiteHost graphitePost or nodeIdentifier could not be defined properly: " + about());
            this.enabled=false;
        }
    }

    public final String about() {
        return new StringBuffer().append("{ graphiteHost=").append(this.graphiteHost).append(", graphitePort=").append(this.graphitePort).append(", nodeIdentifier=").append(this.nodeIdentifier).append(" }").toString();
    }

    public void logMetric(String key, long value) {
        logToGraphite(key,value);
    }

    public boolean logToGraphite(String key, long value) {
        Map<String, Long> stats = new HashMap<String, Long>();
        stats.put(key, value);
        return logToGraphite(stats);
    }

    public boolean logToGraphite(Map<String, Long> stats) {
        if (stats.isEmpty()) {
            return true;
        }

        try {
            logToGraphite(nodeIdentifier, stats);
        } catch (Throwable t) {
            LOGGER.warn("Can't log to graphite", t);
            return false;
        }
        return true;
    }

    private void logToGraphite(String nodeIdentifier, Map<String, Long> stats) throws Exception {
        Long curTimeInSec = System.currentTimeMillis() / 1000;
        StringBuffer lines = new StringBuffer();
        for (Entry<String, Long> stat : stats.entrySet()) {
            String key = nodeIdentifier + "." + stat.getKey();
            lines.append(key).append(" ").append(stat.getValue()).append(" ").append(curTimeInSec).append("\n"); // newline-terminate every line, even the last one, as Carbon expects
        }
        logToGraphite(lines);
    }
    private void logToGraphite(StringBuffer lines) throws Exception {
        if (this.enabled) {
            LOGGER.debug("Writing [{}] to graphite", lines.toString());
            byte[] bytes = lines.toString().getBytes();
            InetAddress address = InetAddress.getByName(graphiteHost);
            DatagramPacket packet = new DatagramPacket(bytes, bytes.length,address, graphitePort);
            DatagramSocket dsocket = new DatagramSocket();
            try {
                dsocket.send(packet);
            } finally {
                dsocket.close();
            }
        }
    }
}

Just as easily as you log info and debug messages to your logging framework of choice, you can now use this to push technical and business metrics to Graphite everywhere in your app:

public class BookService {
    private static final GraphiteLogger GRAPHITELOGGER = GraphiteLogger.getDefaultLogger();

    public void buyBook(/* ... */) {
        try {
            // do your service stuff
            GRAPHITELOGGER.logMetric("bookstore.service.buyBook.success", 1L);
        } catch (ServiceException e) {
            // do your exception handling
            GRAPHITELOGGER.logMetric("bookstore.service.buyBook.failed", 1L);
        }
    }
}
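
For the regularly pushed metrics mentioned earlier (like the active sessions example), a simple scheduler will do. Here is a minimal sketch assuming the GraphiteLogger above; the SessionRegistry interface and the one-minute interval are made up for illustration:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MetricsPusher {
    private static final GraphiteLogger GRAPHITELOGGER = GraphiteLogger.getDefaultLogger();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final SessionRegistry sessionRegistry; // hypothetical source of the gauge value

    public MetricsPusher(SessionRegistry sessionRegistry) {
        this.sessionRegistry = sessionRegistry;
    }

    public void start() {
        // Push the current number of active sessions once a minute
        // (GraphiteLogger prepends the node identifier to the key)
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                GRAPHITELOGGER.logMetric("myCoolApplication.environment.activeSessions",
                        sessionRegistry.activeSessions());
            }
        }, 0, 1, TimeUnit.MINUTES);
    }

    // Stand-in for wherever your application keeps track of active sessions (made up)
    public interface SessionRegistry {
        long activeSessions();
    }
}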

Start Graphing

Now when you have got graphite up n’ running and your app is pushing all sorts of useful metrics to it you can start with the fun part, graphing!Graphite comes with a web front for elaborating with graphs, just brows to it on the installed Apache (defaults as document root). There you can browse your metric keys and create graphs in a graph composer, apply misc functions and rendering options etc. From here you can also access the documentation and some experimental feature for flot and events.
However, the really useful interface graphite provides is the url for rendering a graph on demand. This url e.g.:

http://localhost:8000/render?target=keepLastValue(integral(sum(usbeta13.epsos-web.service.*.failed.*)))&target=keepLastValue(integral(sum(usbeta13.epsos-web.service.*.success.*)))&from=20111024

will give you a png image of a graph of the sum of all service calls (success and failed) accumulated over time from 2011-10-24.

Yes, it is that easy!

There's also a great number of functions you can apply to your data, e.g. integral, cumulative, sum, average, max, min etc, and there are also a lot of parameters to customize the graph with colors, fonts, texts etc. So just go crazy and define all the graphs you can think of and put them on a self-refreshing webpage, embed them in a wiki or some other dashboard mash-up you may already have.

And if you find the graphs a bit crude and want to do something more fancy, you can just pull the raw data by adding these parameters to the URL:

&rawData=true&format=csv

And then use your favorite graph tool and do whatever cool tricks you want. The formats available are raw | csv | json. A cool thing to try would be to pull the raw data in json format into a Grails app and do some eye-candy charts with Google Charts… I'll put that on my list of cool things to try.

Find the useful graphs

Now you have all the tools in place to make really useful dashboards about your application's real business performance in addition to the technical performance. You can graph all kinds of interesting stuff in real time and compare metrics that can give you very valuable insight. Let's say you are running a business with a site of some sort and you want to see the business impact of newly released features: make sure you push a metric to Graphite when you deploy, and then graph deploys versus whatever business metric you are interested in (e.g. sold books). Hopefully you will see a boost after each deploy that contains new cool features, and if not, maybe you have something to think about. Like this you can combine technical metrics and business value metrics to see patterns and trends, which can be really useful for a lot of people in the organisation.
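
A deploy event could for example be pushed as just another metric in the same plain text format (the key is made up):

node-123.myCoolApplication.deploys 1 1320316143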

Make them visible

Put the graphs on the biggest displays you can find in a place where as many people as possible can see them. Make sure they are updated frequently enough to provide real-time information, and continuously improve, create new and remove old graphs that weren't really useful. If you don't have access to big dashboard displays, maybe instead write a small script that will pick useful graphs on a daily basis and email them throughout the company; just be sure to spread the knowledge that the graphs provide.

And again, don't forget to measure failures. Many times, just visualizing the problems for everyone, in a sometimes painful way, will give a boost in quality, because nobody wants to be the bad guy and everybody wants to be a hero like you!

Andreas Rehn
@andreasrehn

An introduction to Java EE 6

Enterprise Java is really taking a giant leap forward with its latest specification, Java EE 6. What earlier required (more or less) third party frameworks to achieve is now available straight out of the box in Java EE. EJBs, for example, have gone from being cumbersome and complex to easy and lightweight, without compromises in functionality. For the last years, every single project I've been working on has in one way or another incorporated the Spring framework, and especially its dependency injection (IoC) framework. One of the best things with Java EE 6 in my opinion is that Java EE now provides dependency injection straight out of the box, through the CDI (Contexts and Dependency Injection) API. With this easy to use, standardized and lightweight framework I can now see how many projects can actually move away from being dependent on Spring just for this simple reason. CDI is not enabled by default; to enable it you need to put a beans.xml file in the META-INF or WEB-INF folder of your module (the file can be empty though). With CDI enabled you can just inject your dependencies with the javax.inject.Inject annotation:

@Stateless
public class MyArbitraryEnterpriseBean {

   @Inject
   private MyBusinessBean myBusinessBean;

}

Also note that the above POJO is actually a stateless session bean thanks to the @Stateless annotation! No mandatory interfaces or ejb-jar.xml are needed.

Working with JSF and CDI is just as simple. Imagine that you have the following bean, where the javax.inject.Named annotation marks it as a CDI bean:

@Named("controller")
public class ControllerBean {

   public void happy() { ... }

   public void sad() { ... }
}

You could then invoke the methods from a JSF page like this:

<h:form id="controllerForm">
   <fieldset>
      <h:commandButton value=":)" action="#{controller.happy}"/>
      <h:commandButton value=":(" action="#{controller.sad}"/>
   </fieldset>
</h:form>

Among the other nice features of Java EE 6 is that EJBs are now allowed to be packaged inside a war package. That alone can definitely save you from packaging headaches. Another step in making Java EE lightweight.

If you are working with servlets, there is good news for you. The notorious web.xml is now optional, and you can declare a servlet as easily as:

@WebServlet("/myContextRoot")
public class MyServlet extends HttpServlet {
   ...
}

To start playing with Java EE 6 with the use of Maven, you could just do mvn archetype:generate and select one of the jee6-x archetypes to get yourself a basic Java EE 6 project structure, e.g. jee6-basic-archetype.

Personally I believe Java EE 6 is breaking new ground in terms of enterprise Java. Java EE 6 is what J2EE was not: easy, lightweight, flexible and straightforward, and it has a promising future. Hopefully Java EE will from now on be the natural choice when building applications, over the option of depending on a wide selection of third party frameworks, which has been the case in the past.

 

Tommy Tynjä
@tommysdk

BTrace can save your day

Today I had a problem with a scheduled job in an application deployed on GlassFish. The execution time was too long, but I could not find out exactly how long. The system was deployed in a test environment, and changing the code to log the execution time was possible, but the roundtrip time to make the changes, rebuild and deploy is always a little bit too long.

So I saw the chance to use some BTrace scripts instead. BTrace is a great tool for instrumenting your classes at runtime without any restart of your application. It connects to the JVM process, installs the BTrace java agent at runtime, compiles your BTrace script and instruments the classes specified in it.

So I downloaded BTrace from here and installed it on the test server.

Created the Java class MethodTimer.java:

import static com.sun.btrace.BTraceUtils.*;
import com.sun.btrace.annotations.*;

@BTrace public class MethodTimer {
   // store entry time in thread local
   @TLS private static long startTime;

   @OnMethod(
     clazz="foo.bar.TheClass",
     method="doWork"
   )
   public static void onMethodEntry() {
       startTime = timeMillis();
   }

   @OnMethod(
    clazz="foo.bar.TheClass",
    method="doWork",
    location=@Location(Kind.RETURN)
   )
   public static void onMethodReturn() {
       println(strcat("Time taken (msec) ", str(timeMillis() - startTime)));
       println("==========================");
   }
}

Then I just issued the command:

btrace <pid> MethodTimer.java

Then my little BTrace script printed the execution time for every invocation of foo.bar.TheClass.doWork, without any recompilation or redeployment of the application.

ShrinkWrap together with Maven

I have lately been looking a bit into the JBoss ShrinkWrap project, which is a simple framework for building archives such as JARs, WARs and EARs with Java code through a straightforward API. You can assemble a simple JAR with a single line of Java code:

JavaArchive jar = ShrinkWrap.create(JavaArchive.class, "myJar.jar")
       .addClasses(MyClass.class, MyOtherClass.class)
       .addAsResource("my_application.properties");

With ShrinkWrap you basically have the option to skip the entire build process if you want to. What if you want to integrate ShrinkWrap with your existing Maven-based project? I found myself in a situation where I had a somewhat small web application project set up with Maven, with a single pom.xml which specified war packaging. Due to some external factors I suddenly found myself needing to expose one of my Spring beans as an EJB. One of the application servers the application should run on demanded that my application was packaged as an ear, with the EJB in its own jar together with the ejb-jar.xml descriptor file. In this case it would be too much trouble to refactor the Spring bean/EJB into its own module with ejb packaging and then assemble an ear through the maven-ear-plugin. I also wanted to remain independent of the application server and be able to deploy the application as a war artifact on another application server if I wanted to. So I thought I could use ShrinkWrap to help me out.

Add a ShrinkWrap packaging to Maven
I will now describe what I needed to do to add “ShrinkWrap awareness” to my Maven build. I tried to keep this example as simple as possible, therefore I have omitted “unnecessary” configuration, error handling, reusability aspects etc. This example shows you how to build a simple custom EJB jar file with Maven and ShrinkWrap, e.g. together with the build of a war module. First, I obviously had to add the maven dependencies:

<dependency>
   <groupId>org.jboss.shrinkwrap</groupId>
   <artifactId>shrinkwrap-api</artifactId>
   <version>1.0.0-alpha-12</version>
   <scope>compile</scope>
</dependency>
<dependency>
   <groupId>org.jboss.shrinkwrap</groupId>
   <artifactId>shrinkwrap-impl-base</artifactId>
   <version>1.0.0-alpha-12</version>
   <scope>provided</scope>
</dependency>

I added the api dependency with compile scope as I only need it when building my application. I then added the exec-maven-plugin which basically allows me to execute Java main classes in my Maven build process:

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.1.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>java</goal>
          </goals>
          <configuration>
            <mainClass>se.diabol.example.MyPackager</mainClass>
            <arguments>
              <argument>${project.build.directory}</argument>
            </arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Notice that I specified the package execution phase, which tells Maven to execute the plugin at the package phase of the build process. The plugin will execute the se.diabol.example.MyPackager class with the project build directory as an argument.

Let's take a look at the se.diabol.example.MyPackager class, with comments at the end of the lines:

public class MyPackager{
   public static void main(final String args[]){
      String buildDir = args[0];          // The build directory, passed as an argument from the exec-maven-plugin
      String jarName = "my_ejb_archive.jar";                   // Chosen jar name
      File actualOutFile = new File(buildDir + "/" + jarName); // The actual output file as a java.io.File
      JavaArchive ejbJar = ShrinkWrap.create(JavaArchive.class, jarName)
            .addClasses(MyEjbClass.class)                      // Add my EJB class and the ejb-jar.xml
            .addAsResource("ejb-jar.xml");                     // These exist on classpath so ShrinkWrap will find them
      ejbJar.as(ZipExporter.class).exportTo(actualOutFile);    // Create the physical file
   }
}

Now when you have successfully built your project with e.g. mvn clean package, you will see that ShrinkWrap created the my_ejb_archive.jar archive in the project build directory, containing MyEjbClass.class and ejb-jar.xml! Once you've started off with a simple example like this, you can move on to more sophisticated solutions and take advantage of ShrinkWrap's features.

Tommy Tynjä
@tommysdk