All posts by Tommy Tynjä

Twitter: @tommysdk

Configure datasources in JBoss AS 7.1

JBoss released version 7.1.0 of their application server last week, a full-fledged Java EE 6 server (certified for the full Java EE 6 profile). As I wanted to start working with the application server right away, I downloaded it, and the first thing I needed to do was to configure my datasource. Setting up the datasources was simple, and I'll cover the steps in this blog post.

I'll use Oracle as an example in this blog post, but the same procedure worked with the other JDBC datasources I tried. First, add your driver as a module to the application server (if it does not already exist). Go to JBOSS_ROOT/modules and add a directory structure appropriate for your driver, such as com/oracle/ojdbc14/main (note the main directory at the end), which gives you the path JBOSS_ROOT/modules/com/oracle/ojdbc14/main.
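
For example, on a Unix-like system the directory structure can be created like this (assuming JBOSS_ROOT is where you unpacked the server):

$ cd JBOSS_ROOT/modules
$ mkdir -p com/oracle/ojdbc14/main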

In the main directory, add your driver jar-file and create a new file named module.xml. Add the following content to the module.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="com.oracle.ojdbc14">
    <resources>
        <resource-root path="the_name_of_your_driver_jar_file.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

In the module.xml file, the module name attribute has to match the directory structure, and the resource-root path should point to the name of the driver jar-file.

As I wanted to run JBoss in standalone mode, I edited JBOSS_ROOT/standalone/configuration/standalone.xml and added my datasource under the datasources subsystem, such as:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  <datasources>
    <datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS" enabled="true" jta="true" use-java-context="true" use-ccm="true">
      <connection-url>jdbc:oracle:thin:@my_url:my_port</connection-url>
      <driver>oracle</driver>
      ... other settings
    </datasource>
  </datasources>
  ...
</subsystem>

Within the same subsystem, in the drivers tag, specify your driver module:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
  ...
  <drivers>
    <driver name="oracle" module="com.oracle.ojdbc14">
      <driver-class>oracle.jdbc.OracleDriver</driver-class>
    </driver>
  </drivers>
</subsystem>

Note that the driver tag in your datasource should point to the driver name in standalone.xml, and that your driver module should point to the name of the module in the appropriate module.xml file.

Please also note that if your JDBC driver is not JDBC4 compliant, you need to specify your driver class within the driver tag; for a JDBC4 compliant driver it can be omitted.
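
Once the datasource is bound in JNDI, components running inside the server can have it injected. Below is a minimal sketch of what that could look like; the bean and method names are hypothetical, and the JNDI name is the one from the configuration above:

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class DatabaseHealthBean { // hypothetical bean name

    // Inject the datasource bound under the JNDI name from standalone.xml
    @Resource(lookup = "java:jboss/datasources/OracleDS")
    private DataSource dataSource;

    public boolean isDatabaseReachable() {
        try (Connection connection = dataSource.getConnection()) {
            return connection.isValid(2); // wait at most two seconds
        } catch (SQLException e) {
            return false;
        }
    }
}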

Now you have your datasources set up, and when you start your JBoss AS 7.1 instance you should see it start without any datasource related errors, with logging output along the lines of the following:
org.jboss.as.connector.subsystems.datasources (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class oracle.jdbc.OracleDriver
org.jboss.as.connector.subsystems.datasources (MSC service thread 1-1) JBAS010400: Bound data source [java:jboss/datasources/OracleDS]

Tommy Tynjä
@tommysdk

Obey the DRY principle with code generation!

In my current project we mostly use XML for data interchange between systems. Some of the XML schemas which have been handed to us (in their final versions) unfortunately violate the DRY (don't repeat yourself) principle. As we use Apache XMLBeans to generate the corresponding Java representations, this gives us a lot of duplicate objects. ArbitraryElement in schema1.xsd and ArbitraryElement in schema2.xsd look identical in XML, but are defined once in each XSD, which makes XMLBeans generate one object per occurrence in the schemas. This is of course the expected outcome, but it is not what we want to work with. We have a lot of business logic surrounding the assembly of the XML content, so we would like to use the same code to assemble ArbitraryElement, whether it is the one from schema1.xsd or schema2.xsd. Implementing the business logic with Java's type safety would inevitably lead to Java code duplication. The solution I crafted for this was a simple code generator which generates the duplicate code for us.

First, I refactored the code to be duplicated into its own class file, which only contains logic that is to be duplicated. I then wrote a small code generator using Apache Commons IO, which reads the contents of the class and replaces the package names, specific method parameters and other appropriate details. Here is a simple example:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.commons.io.IOUtils;

public class GenerateDuplicateXmlCode {
  private static final String PATH = "src/main/java/";      // Source directory, adjust to your project
  private static final String SOURCE = "Schema1Assembler";  // Name of the template class
  private static final String TARGET = "Schema2Assembler";  // Name of the class to generate

  public static void main(final String args[]) throws IOException {
    String code = IOUtils.toString(new FileInputStream(new File(PATH + SOURCE + ".java")), "UTF-8");
    code = code.replace("my.package.schema1.", "my.package.schema2."); // Replace imports
    code = code.replace("public class " + SOURCE, "public class " + TARGET); // Replace class name
    code = code.replace("Schema1TYPE", "Schema2TYPE"); // Replace method parameter type

    if (code.contains("/**")) { // Add a note to the Javadoc if present
      code = code.replace("/**", "/**\n * Code generated from " + SOURCE + " by " + GenerateDuplicateXmlCode.class.getSimpleName() + "\n * ");
    }

    IOUtils.write(code, new FileOutputStream(new File(PATH + TARGET + ".java")), "UTF-8");
  }
}

The example above can easily be extended to include more advanced operations and to produce more output files.

Now we only need to implement the business logic once, and for each addition or bug fix in the template class (SOURCE in the example above), we just run the code generator and get the equivalent classes with business logic for the duplicate XML object classes, which we check into source control. And yes, the generated classes are test covered!
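
As a hypothetical example of a one-off run, assuming the project builds with Maven, the exec-maven-plugin is available and the generator class lives in the default package:

$ mvn compile exec:java -Dexec.mainClass=GenerateDuplicateXmlCode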


Tommy Tynjä
@tommysdk

Distributed version control systems

Distributed version control systems (DVCS) have been around for many years already and are increasing in popularity all the time. There are however many projects that still use a traditional, centralized version control system (VCS), such as Subversion. Until recently, I had only been working with Subversion as a VCS. Subversion sure has its flaws and problems, but it mostly got the job done over the years I worked with it. I started contributing to the JBoss ShrinkWrap project early this spring, where they use a DVCS in the form of Git. The more I have been working with Git, the more aware I have become of the problems imposed by Subversion. The biggest transition for me has been to adopt the new mindset that DVCS brings. Suddenly I realized that my daily work had many, many times been shaped by the way the VCS worked, rather than by doing things the way that feels natural to me as a developer. I think this is one of the key benefits of DVCS, and you start noticing it as soon as you start using one.

While a traditional VCS can be sufficient in many projects, DVCSs bring new interesting dimensions and possibilities to version control.

What is a distributed version control system?
The fundamental idea of a DVCS is that each user keeps a self-contained repository on his or her own computer. There is no need for a central master repository, even if most projects have one, e.g. to allow continuous integration. This allows for the following characteristics:

* Rapid startup. Install the DVCS of choice and start committing instantly into your local repository (see the example after this list).
* As there is no need for a central repository, you can pull individual updates from other users. They do not have to be checked into a central repository (even if you use one) like in Subversion.
* A local repository gives you the flexibility to try out new things without having to send them to a central repository and make them available to others just to get them under version control. E.g. it is not necessary to create a branch on a central server for these kinds of operations.
* You can select which updates you wish to apply to your repository.
* Commits can be cherry-picked, which means that you can select individual patches/fixes from other users as you like.
* Your repository is available offline, so you can check in, view project history etc. regardless of your Internet connection status.
* A local repository allows you to check in often, even though your code might not even compile, to create checkpoints of your current work. This without interfering with other people's work.
* You can change history: modify, reorder and squash commits locally as you like before other users get access to your work. This is called rebasing.
* DVCSs are far more fault-tolerant as there are many copies of the actual repository available. If a central/master repository is used, it should be backed up though.
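
For example, getting a brand new local Git repository up and running only takes a couple of commands:

$ git init myproject
$ cd myproject
$ touch README
$ git add README
$ git commit -m "First commit"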

One of the biggest differences between Git and Subversion which I've noticed is not listed above: the speed of the version control system. The speed of Git has really blown me away; in terms of speed, it feels like comparing a Bugatti Veyron (Git) with an old Beetle (Subversion). A project which would take minutes to download from a central Subversion repository literally takes seconds with Git. Once, I actually had to verify that my file system really contained all the files Git told me it had downloaded, as it went so incredibly fast! I want to emphasize that Git is not only faster when downloading/checking out source code the first time; the same applies to committing, retrieving history etc.

Squashing commits with Git
Being able to change history is something I've longed for during all these years working with Subversion. With a DVCS, it is possible! When working on a new feature, I have usually wanted to commit my progress (as checkpoints, mentioned above), but in a Subversion environment this would screw things up for other team members. Git gives me the freedom to do what I've wanted to do all these years: commit small incremental changes to the code base without disturbing other team members in their work. For example, I could add a new method to an interface, commit it, start working on the implementation, commit often, work some more on the implementation, commit some more, then realize that I need to rethink parts of the implementation, revert a couple of commits, redo the implementation, commit, and so on. All this without disturbing my colleagues working on the same code base. When I feel like sharing my work, I don't necessarily want to bring in all the small commits I made at development time, e.g. a commit which just adds Javadoc to a method. With Git I can do something called squashing, which means that I can bunch commits together, e.g. bunch my latest 5 commits into a single one, which I can then share with other users. I can even modify the commit message, which I think is a very neat feature.

Example: Squash the latest 5 commits on the current branch
$ git rebase -i HEAD~5

This will launch your editor, by default vi (here I assume you are familiar with it), listing the commits with the oldest at the top. Leave the first commit as pick and change the command of the rest to squash, such as:

pick c528b95 Added new interface method
pick 368b3c0 Began stubbing out interface implementation
pick 7508090 More work on implementation
pick 032aab2 Refactored
pick 79f4edb Work done on new feature

to:

pick c528b95 Added new interface method
squash 368b3c0 Began stubbing out interface implementation
squash 7508090 More work on implementation
squash 032aab2 Refactored
squash 79f4edb Work done on new feature

On the next screen, comment out or delete all message lines you don't want to keep, and write a proper message for the combined commit:

# This is a combination of 5 commits.
# The first commit's message is:
Added new interface method
# This is the 2nd commit message:
Began stubbing out interface implementation
...

to:

# This is a combination of 5 commits.
# The first commit's message is:
Finished work on new feature
#Added new interface method
# This is the 2nd commit message:
#Began stubbing out interface implementation
...

Save and exit to execute the squash. This will leave you with a single commit with the message you provided. Now you can share this single commit with other users, e.g. by pushing it to the master repository (if one is used).
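
Assuming the master repository is set up as a remote named origin, sharing the squashed commit is a single command:

$ git push origin master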

Another interesting aspect of DVCSs is that if you use a master repository, it won't get hit as often, since you execute your commits locally and squash things together before sending them upstream. This makes DVCSs more attractive from a scalability point of view.

Summary
A DVCS does not force you to have a central repository, and every user has his or her own local repository with full history. Users can work and commit locally before sharing code with other users. If you haven't tried out a DVCS yet, do it! It is actually as easy as stated earlier: download, install and create your first repository! The concepts of DVCS may be confusing for a non-DVCS user at first, but there are a lot of tutorials and cheat sheets out there which cover the most basic (and more advanced) tasks. You will soon discover many nice features in the DVCS of your choice, making it harder and harder to go back to a traditional VCS. If you have experience with DVCSs, please share your experiences!

Tommy Tynjä
@tommysdk

An introduction to Java EE 6

Enterprise Java is really taking a giant leap forward with its latest specification, Java EE 6. What earlier required (more or less) third party frameworks to achieve is now available straight out of the box in Java EE. EJBs, for example, have gone from being cumbersome and complex to easy and lightweight, without compromises in functionality. For the last few years, every single project I've been working on has in one way or another incorporated the Spring framework, and especially its dependency injection (IoC) framework. One of the best things with Java EE 6, in my opinion, is that Java EE now provides dependency injection straight out of the box, through the CDI (Contexts and Dependency Injection) API. With this easy to use, standardized and lightweight framework I can see how many projects could actually move away from being dependent on Spring for this simple reason alone. CDI is not enabled by default; to enable it you need to put a beans.xml file in the META-INF folder of your jar module or the WEB-INF folder of your war module (the file can be empty though). With CDI enabled you can just inject your dependencies with the javax.inject.Inject annotation:

@Stateless
public class MyArbitraryEnterpriseBean {

   @Inject
   private MyBusinessBean myBusinessBean;

}

Also note that the above POJO is actually a stateless session bean thanks to the @Stateless annotation! No mandatory interfaces or ejb-jar.xml are needed.
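
For reference, the beans.xml marker file can be completely empty, or contain just the schema declaration, like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                           http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
</beans>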

Working with JSF and CDI is just as simple. Imagine that you have the following bean, where the javax.inject.Named annotation marks it as a CDI bean:

@Named("controller")
public class ControllerBean {

   public void happy() { ... }

   public void sad() { ... }
}

You could then invoke the methods from a JSF page like this:

<h:form id="controllerForm">
   <fieldset>
      <h:commandButton value=":)" action="#{controller.happy}"/>
      <h:commandButton value=":(" action="#{controller.sad}"/>
   </fieldset>
</h:form>

Among other nice features of Java EE 6 is that EJBs are now allowed to be packaged inside a war package. That alone can definitely save you from packaging headaches. Another step in making Java EE lightweight.

If you are working with servlets, there is good news for you. The notorious web.xml is now optional, and you can declare a servlet as easily as:

@WebServlet("/myContextRoot")
public class MyServlet extends HttpServlet {
   ...
}

To start playing with Java EE 6 using Maven, you can just run mvn archetype:generate and select one of the jee6-x archetypes to get a basic Java EE 6 project structure, e.g. jee6-basic-archetype.

Personally I believe Java EE 6 is breaking new ground in enterprise Java. Java EE 6 is what J2EE was not: easy, lightweight, flexible and straightforward, and it has a promising future. Hopefully Java EE will from now on be the natural choice when building applications, rather than depending on a wide selection of third party frameworks, as has been the case in the past.


Tommy Tynjä
@tommysdk

Application startup order in IBM WebSphere Application Server

If you are hosting an application server with multiple applications deployed and one of them depends on another, you might want to configure the order in which they start. Typical use cases would be assuring that e.g. the server side of a web service is up before a client is available, or assuring that resources have been initialized into JNDI.

In IBM WebSphere Application Server (6.1) this has to be specified through container specific configuration. You need to make sure that the application which depends on another has a higher startup order value than the one it depends on. You can set this either through the management console, under Applications > Enterprise Applications > MY APPLICATION > Startup behaviour > General Properties > Startup order, or through the IBM WebSphere deployment.xml deployment descriptor, by specifying a startingWeight XML attribute on the deployedObject tag for the application with dependencies. Here is an example where the startup order has been set to the arbitrary value of 97:

<appdeployment:Deployment xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI"
         xmlns:appdeployment="http://www.ibm.com/websphere/appserver/schemas/5.0/appdeployment.xmi"
         xmi:id="Deployment_1">
   <deployedObject xmi:type="appdeployment:ApplicationDeployment" xmi:id="ApplicationDeployment_1"
            deploymentId="0" startingWeight="97" binariesURL="$(APP_INSTALL_ROOT)/node/myapplication.ear"
            useMetadataFromBinaries="false" enableDistribution="true" createMBeansForResources="true"
            reloadEnabled="false" appContextIDForSecurity="href:node/myapplication"
            backgroundApplication="false" filePermission=".*\.dll=755#.*\.so=755#.*\.a=755#.*\.sl=755"
            allowDispatchRemoteInclude="false" allowServiceRemoteInclude="false">
      ... other configuration omitted
   </deployedObject>
   <deploymentTargets xmi:type="appdeployment:ServerTarget" xmi:id="ServerTarget_1" nodeName="node"/>
</appdeployment:Deployment>

After the configuration has been saved, the next time you restart your server, the applications will be started in the desired order.


Tommy Tynjä
@tommysdk

ShrinkWrap together with Maven

I have lately been looking a bit into the JBoss ShrinkWrap project, which is a simple framework for building archives such as JARs, WARs and EARs with Java code through a straightforward API. You can assemble a simple JAR with a single statement of Java code:

JavaArchive jar = ShrinkWrap.create(JavaArchive.class, "myJar.jar")
       .addClasses(MyClass.class, MyOtherClass.class)
       .addAsResource("my_application.properties");

With ShrinkWrap you basically have the option to skip the entire build process if you want to. But what if you want to integrate ShrinkWrap with your existing Maven based project? I found myself in a situation where I had a somewhat small web application project set up with Maven, with a single pom.xml which specified war packaging. Due to some external factors, I suddenly needed to be able to expose one of my Spring beans as an EJB. One of the application servers the application should run on demanded that the application was packaged as an ear, with the EJB in a jar of its own along with the ejb-jar.xml descriptor file. In this case it would have been too much trouble to refactor the Spring bean/EJB into a module of its own with ejb packaging, and then assemble an ear through the maven-ear-plugin. I also wanted to remain independent of the application server, and to be able to deploy the application as a war artifact on another application server if I wanted to. So I thought I could use ShrinkWrap to help me out.

Add a ShrinkWrap packaging to Maven
I will now describe what I needed to do to add "ShrinkWrap awareness" to my Maven build. I have tried to keep this example as simple as possible, and have therefore omitted "unnecessary" configuration, error handling, reusability aspects etc. This example shows you how to build a simple custom EJB jar file with Maven and ShrinkWrap, e.g. alongside the build of a war module. First, I obviously had to add the Maven dependencies:

<dependency>
   <groupId>org.jboss.shrinkwrap</groupId>
   <artifactId>shrinkwrap-api</artifactId>
   <version>1.0.0-alpha-12</version>
   <scope>compile</scope>
</dependency>
<dependency>
   <groupId>org.jboss.shrinkwrap</groupId>
   <artifactId>shrinkwrap-impl-base</artifactId>
   <version>1.0.0-alpha-12</version>
   <scope>provided</scope>
</dependency>

I added the api dependency with compile scope as I only need it when building my application. I then added the exec-maven-plugin which basically allows me to execute Java main classes in my Maven build process:

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.1.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>java</goal>
          </goals>
          <configuration>
            <mainClass>se.diabol.example.MyPackager</mainClass>
            <arguments>
              <argument>${project.build.directory}</argument>
            </arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Notice that I specified the package execution phase, which tells Maven to execute the plugin at the package phase of the build process. The plugin will execute the se.diabol.example.MyPackager class with the project build directory as an argument.

Let's take a look at the se.diabol.example.MyPackager class, with comments at the end of the lines:

import java.io.File;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.exporter.ZipExporter;
import org.jboss.shrinkwrap.api.spec.JavaArchive;

public class MyPackager {
   public static void main(final String args[]) {
      String buildDir = args[0];          // The build directory, passed as an argument from the exec-maven-plugin
      String jarName = "my_ejb_archive.jar";                   // Chosen jar name
      File actualOutFile = new File(buildDir + "/" + jarName); // The actual output file as a java.io.File
      JavaArchive ejbJar = ShrinkWrap.create(JavaArchive.class, jarName)
            .addClasses(MyEjbClass.class)                      // Add my EJB class and the ejb-jar.xml
            .addAsResource("ejb-jar.xml");                     // These exist on the classpath so ShrinkWrap will find them
      ejbJar.as(ZipExporter.class).exportTo(actualOutFile);    // Create the physical file
   }
}

When you have successfully built your project with e.g. mvn clean package, you will see that ShrinkWrap created the my_ejb_archive.jar archive in the project build directory, containing MyEjbClass.class and ejb-jar.xml! Once you have started off with a simple example like this, you can move on to more sophisticated solutions and take advantage of ShrinkWrap's other features.
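
To verify the result, you can list the contents of the generated archive with the JDK's jar tool (assuming the default Maven build directory):

$ jar tf target/my_ejb_archive.jar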

Tommy Tynjä
@tommysdk

Get started with AWS Elastic Beanstalk

Amazon Web Services (AWS) launched a beta of their new concept Elastic Beanstalk in January. AWS Elastic Beanstalk allows you to set up a new environment in a few clicks, where you can then deploy your application. Say you have a development team developing a web app running on Tomcat and you need a test server where you can test your application. In a few simple steps you can set up a new machine with a fresh installation of Tomcat, where you can deploy your application. You can even use the AWS Elastic Beanstalk command line client to deploy your application, as simply as:

$ elastic-beanstalk-update-application -a my_app.war -d "Application description"

I had the opportunity to try it out and I would like to share how you get started with the service.

As with other AWS cloud based services, you only pay for the resources your application consumes, and if you're only interested in a short evaluation, it will probably only cost you a couple of US dollars. This first release of Elastic Beanstalk is targeted at Java developers who are familiar with the Apache Tomcat software stack. The concept is simple: you upload your application (e.g. a war-packaged web application) to an Elastic Beanstalk instance through the AWS web interface, called the Management Console. The Management Console allows you to handle versioning of your applications, monitoring and log viewing straight through the web interface. It also provides load balancing and scaling out of the box. The Elastic Beanstalk service is currently running on a 32-bit Amazon Linux AMI using Apache Tomcat 6.0.29.

But what if you would like to customize the software stack your application runs on? Common tasks you might want to perform are adding jar-files to the Tomcat lib directory, configuring connection pooling capabilities, editing the Tomcat server.xml or even installing third party products such as ActiveMQ. Fortunately, all of this is possible! You first have to create your custom AMI (Amazon Machine Image). Go to the EC2 tab in the Management Console, select your default Elastic Beanstalk instance and select Instance Actions > Create Image (EBS AMI). You should then see your custom image under Images / AMIs in the left menu, with a custom AMI ID. Back in the Elastic Beanstalk tab, select Environment Details for the environment you want to customize and select Edit Configuration. Under the Server tab, you can specify a custom AMI ID for your instance, which should refer to the AMI ID of your newly created custom image. After applying the changes, your environment will "reboot", running your custom AMI.

Now you might ask yourself: what IP address does my instance actually have? Well, you have to assign an IP address to it first. You can then use either this IP or the DNS name to log in to your instance. AWS uses a concept called Elastic IPs, meaning that they provide a pool of IP addresses from which you can get a free IP address to bind to your instance. Releasing the IP address from your instance is just as easy. All of this is done straight from the Management Console, in the EC2 tab under Elastic IPs. You just select Allocate Address and then bind this Elastic IP to your instance. To discourage users from keeping unused IP addresses lying around, AWS charges your account for every unbound Elastic IP, which might end up a costly experience. So make sure you release your Elastic IP address back to the pool if you're not using it.

To be able to log in to your machine, you will have to generate a key pair which is used as an authentication token. You generate a key pair in the EC2 tab under Networking & Security > Key Pairs. Then go back to the Elastic Beanstalk tab, select Environment Details for your environment and attach your key pair by providing it in the Server > Existing Key Pair field. You then need to download the key file (with a .pem extension by default) to the machine you will actually connect from. You also need to open the firewall of your instance to allow connections from the IP address you are connecting from. Do that by creating a security group under Networking & Security > Security Groups on the EC2 tab. Make sure to allow SSH over tcp on port 22 for the IP address you are connecting from. Then attach the security group to your Elastic Beanstalk environment by going to the Elastic Beanstalk tab and selecting Environment Details > Edit Configuration for your environment. Add the security group in the EC2 Security Group field on the Server tab.

So now you're running a custom image (which is actually a replica of the default one) which has an IP address, a security group and a key pair assigned to it. Now it is time to log in to the machine and do some actual customization! Log in to your machine using ssh and the key you downloaded from your AWS Management Console, e.g.:

$ ssh -i /home/tommy/mykey.pem ec2-user@ec2-MY_ELASTIC_IP.compute-1.amazonaws.com

… where MY_ELASTIC_IP is the IP address of your machine, such as:

$ ssh -i /home/tommy/mykey.pem ec2-user@ec2-184-73-226-161.compute-1.amazonaws.com

Please note the hyphens instead of dots in the IP address! Then you're good to go and can start customizing your machine! If you would like to save these configurations to a new image (AMI), just copy the current AMI through the Management Console. You can then use this image when booting up new environments. You will find Tomcat under /usr/share/tomcat6/. See below for a login example:

tommy@linux /host $ ssh -i /home/tommy/mykey.pem ec2-user@ec2-184-73-226-161.compute-1.amazonaws.com
The authenticity of host 'ec2-184-73-226-161.compute-1.amazonaws.com (184.73.226.161)' can't be established.
RSA key fingerprint is 13:7d:aa:31:5c:3b:17:ed:74:6d:87:07:23:ee:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-184-73-226-161.compute-1.amazonaws.com,184.73.226.161' (RSA) to the list of known hosts.
Last login: Thu Feb  3 23:48:38 2011 from 72-21-198-68.amazon.com

 __|  __|_  )  Amazon Linux AMI
 _|  (     /     Beta
 ___|\___|___|

See /usr/share/doc/amzn-ami/image-release-notes for latest release notes.
[ec2-user@ip-10-122-194-97 ~]$  sudo su -
[root@ip-10-122-194-97 /]# ls -al /usr/share/tomcat6/
total 12
drwxrwxr-x  3 root tomcat 4096 Feb  3 23:50 .
drwxr-xr-x 64 root root   4096 Feb  3 23:50 ..
drwxr-xr-x  2 root root   4096 Feb  3 23:50 bin
lrwxrwxrwx  1 root tomcat   12 Feb  3 23:50 conf -> /etc/tomcat6
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 lib -> /usr/share/java/tomcat6
lrwxrwxrwx  1 root tomcat   16 Feb  3 23:50 logs -> /var/log/tomcat6
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 temp -> /var/cache/tomcat6/temp
lrwxrwxrwx  1 root tomcat   24 Feb  3 23:50 webapps -> /var/lib/tomcat6/webapps
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 work -> /var/cache/tomcat6/work

Tommy Tynjä
@tommysdk