Adding columns to join tables (in the context of JPA/Hibernate)

At some point in a @ManyToMany relationship I had to add some extra columns in the join table (the middle table).

Here's what Gavin King says in Java Persistence with Hibernate (a notable book on the subject):

Adding columns to join tables
You can use two common strategies to map such a structure to Java
classes. The first strategy requires an intermediate entity class for
the join table and is mapped with one-to-many associations. The second
strategy utilizes a collection of components, with a value-type class
for the join table.

Later in that chapter for the first approach (the extra entity for the middle table):

The primary advantage of this strategy is the possibility for
bidirectional navigation: You can get all items in a category {...} and
also navigate from the opposite direction {...}. A disadvantage is
the more complex code needed to manage {...} entity instances to create
and remove associations—they have to be saved and deleted
independently, and you need some infrastructure, such as the composite
identifier. However, you can enable transitive persistence with
cascading options on the collections {...}, as explained {...}, “Transitive persistence.”

Later in that chapter for the second approach (the collection of components approach):

That’s it: You’ve mapped a ternary association with annotations. What looked
incredibly complex at the beginning has been reduced to a few lines of
annotation metadata, most of it optional.

Naively enough, I chose the second approach. Who cares that there's a
Hibernate dependency in my JPA data access layer? I already have a few
(a Hibernate interceptor, for one).

In this approach I had to use the @CollectionOfElements annotation. @CollectionOfElements
works like this: it maps a collection (set, map, list) of something to
a table. This table has no entity attached to it. It can work with
value types, Strings and @Embeddables. In my case it had to be an @Embeddable.

Let me give you an example - it will clear things up: there are classes
and there are students - two entities. There can be two classes with
many students, some of whom are the same - so the relationship is @ManyToMany. The extra column in the join table would be the grade of the student in that class.

So the approach with @CollectionOfElements works like this: one of the entities holds the relationship - let it be the class entity - so it has something like this:


public class Class {
    private int version;
    private Set<GradedStudent> students;
    ...
}

Student is a simple entity, no code needed. Let's call the student with the grade a GradedStudent:


public class GradedStudent {
    @OneToOne( ..., cascade = { MERGE, PERSIST, REFRESH } )
    private Student student;
    @Column( nullable = false, ... )
    private int grade;
    ...
}

That's pretty much it. Simple and straightforward, you would think.


Here's what goes wrong:

  1. Every time a class entity gets queried, its version gets incremented. This makes updating a detached entity far more difficult and makes the @Version kind of useless.
    Solution: none, I couldn't find anything remotely connected to this problem on the net.
  2. The primary key in the join table (with a name like 'class_gradedstudent') is not [class_id, student_id] but [class_id,
    student_id, grade]. If you put extra columns in the join table and they
    are nullable = false, they become part of the primary key.
  3. Cascading fails. You have to create and persist a Student first in order for it to become part of a certain class entity, even though GradedStudent is declared to cascade to Student.
    Solution: none, I tried everything I could think of - no luck. I
    couldn't find anything remotely connected to this problem on the net.

Regarding 2: a quotation from the same book:

There is only one change to the database tables: The {...} table now has a primary
key that is a composite of all columns, not only the ids of the two objects, as in
the previous section. Hence, all properties should never be nullable—otherwise
you can’t identify a row in the join table.

Well, what if I don't want that? It doesn't say.

So, actually the second approach is not an option.

@CollectionOfElements, JPA, the documentation and the problems it raises

In a project that I mention a lot, there's a persistence model using JPA as an interface to Hibernate.

There was a case where I wanted to put extra columns in a @ManyToMany relationship. JPA cannot do that.

So I had to use Hibernate's @CollectionOfElements.
It works like this: if there are the entities Class and Student, the extra columns go into a wrapper class, EnhancedStudent. EnhancedStudent has a property of type Student.

Now I have a few bugs related to it:

  1. Causes the @Version of the containing object (Class) to increment on EntityManager.find(Class, Object)
  2. The creation of an EnhancedStudent cannot propagate (cascade) to the inner Student - no matter what.

The funniest thing is that the documentation of @CollectionOfElements is a single line:

Annotation used to mark a collection as a collection of elements or a collection of embedded objects



Some time ago I noticed that Mtel (my mobile service operator) is using VoIP when I'm calling an Mtel number while roaming.
When I'm calling another operator's number while still roaming, there's no VoIP.

CHEAP BASTARDS. For these prices you could at least provide a decent service - a VoIP call has bad quality, and the callee cannot see who's calling; he sees some service number instead.


The cherry on top is that when Mtel is using VoIP they clearly cannot see whether the other side answers or not, so no matter whether you make a successful call or not - while it's ringing you get billed. That's not right, is it?!

I found all of that out today while checking my bill. Two months ago, while in Brno, I made a call and was quite sure I would get billed for it even though nobody answered.


Update: another case of proven cheating:
Mtel made me pay 200 bucks for half a megabyte of internet while roaming. Before that I had explicitly asked for the price. It turned out that the actual price was 10 times higher, and that they charge every GPRS session for a full megabyte (at 10 times the price) without saying so.

A few months ago I caught them billing me for an international call to a friend I hadn't spoken with for at least a couple of years.

Mocking an EJB Container

There are a lot of tests for some EJBs.

Starting a J2EE server is slow. I wanted to test the business logic faster.

Most of the beans reference only each other and a persistence context.

So the mock has to worry about transaction demarcation and dependency injection. And... exception wrapping and unwrapping.

  1. First, just add some setters or relax the access modifiers (from private to protected, for example).
  2. Know your transaction authority.
  3. Count nested transaction entries: on entry, if the count is 0, begin; on exit, when it drops back to 0, commit. Rollback when necessary.
  4. Listen for all exceptions and wrap and unwrap them.
  5. Reuse as much of the existing code as possible.
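Points 3 and 4 above can be sketched in a few lines. This is a minimal sketch - every name here is made up for illustration, no real container API is involved - showing how a nesting counter decides when to begin and when to commit the pretend transaction:

```java
// Made-up mock: a nesting counter decides when to begin and when to
// commit the pretend transaction (points 3 and 4 from the list above).
class MockTxManager {
    private int depth;        // nesting level of "transactional" calls
    private boolean active;   // is the pretend transaction running?

    // Called on entry to every business method the mock intercepts.
    void enter() {
        if (depth == 0) {
            active = true;    // outermost call: begin
        }
        depth++;
    }

    // Called on exit; commits only when the outermost call unwinds.
    // A real version would roll back instead when an exception passed through.
    void exit() {
        depth--;
        if (depth == 0) {
            active = false;   // commit
        }
    }

    boolean isActive() {
        return active;
    }
}
```

Exception handling (point 4) would sit in the same interceptor: catch everything around the business call, mark the transaction for rollback, and rethrow wrapped or unwrapped, the way the container would.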

If I have some spare time left, I'll write some code.

Serialization, cyclic references (via hashmaps) and overriding hashCode()

I'll try to simplify it.
There's an object model with cyclic references (one object references a second one, the second one a third, and the third the first one).
Some of the cyclic references go through aggregations - one object has a map of other objects.
Some of the objects have meaningful hashCode() and equals() overrides, which depend on some properties of the object itself.

Some of the objects get serialized/deserialized (travel through a stream).

Now here comes the problem: deserialization first sees the cyclic reference, makes instances of all the objects, initializes all the primitive fields, does not initialize the other fields, and then links the objects.

Linking two objects (one of which has a map of the other) requires hashCode(). That requires some specific properties of the object that are not yet initialized - which causes a NullPointerException (or, in my case, an AssertionError).

If hashCode() returns a default value when the properties are not there, another serious problem is caused: there are objects in the map in the wrong buckets - they entered the map with the default hash, but once they got completely initialized they have a different hash. I think that is really bad - the map has to be rehashed.

Here's a bug detail:

Here's what some of the guys say on the subject:

The problem is that HashMap's readObject() implementation, in order to re-hash the map, invokes the hashCode() method of some of its keys, regardless of whether those keys have been fully deserialized.


The fix for this is actually quite easy: Modify the readObject() and writeObject() of HashMap so that it also saves the original hash code. (I am currently using this fix in production code for a large web site.) That way, when the map is reconstructed, you don't have to recompute the hashcode: the problem is caused by recomputing the hashcode at a moment when it is not computable.

What you *give up* with this fix is that HashMaps containing Objects that don't override hashCode() and equals() will not be deserialized properly.

So basically, you have a choice: either it will be robust for classes that implement hashCode(), or it will work for bare Objects. One or the other. I prefer the former, because people are supposed to implement hashCode().

But not all my objects have an overridden equals() (of course I can check with reflection which ones do and which ones don't, but...). This would also mean that I'm using a customized collection.

There's another proposition - to cache the hash code.


The hashcode is a primitive type, so it would get initialized first and the problem would be solved. This would mean having a hashCode() and equals() which check which one is available - the cached hash or the properties. Isn't that UGLY?
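A sketch of what that would look like - Key is a hypothetical class, and the fallback check is exactly the ugly part (assuming, as above, that the primitive cached hash is restored before the reference fields are linked):

```java
import java.io.Serializable;

// Hypothetical key class caching its own hash code so it can answer
// hashCode() even before its reference fields are linked.
class Key implements Serializable {
    private String name;           // may still be null mid-deserialization
    private final int cachedHash;  // primitive: available early

    Key(String name) {
        this.name = name;
        this.cachedHash = name.hashCode();
    }

    @Override
    public int hashCode() {
        // The ugly check: fall back to the cached value while 'name'
        // has not been linked yet.
        return name != null ? name.hashCode() : cachedHash;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Key)) return false;
        Key other = (Key) obj;
        if (name != null && other.name != null) return name.equals(other.name);
        return cachedHash == other.cachedHash;   // same ugly fallback
    }
}
```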

I'll investigate more.

MySQL, I hate you so much [Installing mysql as a service]

The first idea was to have the DB (MySQL) start as part of the script that launched the tests. Google couldn't find anything good. MySQL is one of the most unintuitive things I have ever seen, so I decided not to try to figure it out on my own.

The next thing was to install MySQL as a service. That turned out to be difficult.

I tried to install MySQL as a service:

> mysqld --install

As easy as that. The problem is IT DID NOT WORK. The service did not want to start.

After some reading, it turned out that the RIGHT way to do it is:

> <<path>>\mysqld.exe --install MySQL --defaults-file="<<path>>\my.ini"

Why do I have to supply the mysql directory explicitly?
Why do I have to supply the my.ini directory explicitly?

Why doesn't mysqld give me an error message when I install it without the needed parameters?
Why doesn't mysqld give me an error message when I install it without the needed explicit paths? It's beyond my understanding.

MySQL, you are a disgrace.
MySQL, not only are you not a real RDBMS, but you can't even start without making your users' lives miserable.

Google mail servers fail?! All of them?

Connection timed out on all their servers?!

Technical details of temporary failure:
TEMP_FAILURE: The recipient server did not accept our requests to connect.
[ (1): Connection timed out]
[ (5): Connection timed out]
[ (5): Connection timed out]
[ (10): Connection timed out]
[ (10): Connection timed out]
[ (10): Connection timed out]
[ (10): Connection timed out]

The sources for hibernate-entitymanager.jar, version 3.2.1GA

Yeah, it was difficult. So difficult that I had to extract the sources from a zipped project file, put them into an archive, and rename it to a jar file. And hope that the versions I'm using are matching...

The JBoss jar says it's implemented by JBoss.

Manifest-Version: 1.0
Product: Hibernate EntityManager
Specification-Title: JBoss
Created-By: 1.5.0_09-b03 (Sun Microsystems Inc.)
Specification-Version: 4.2.2.GA
Implementation-Vendor-Id:   WTF?
Version: 3.2.1.GA
Ant-Version: Apache Ant 1.6.5
Implementation-Title: JBoss [Trinity]
Specification-Vendor: JBoss (
Implementation-Version: 4.2.2.GA (build: SVNTag=JBoss_4_2_2_GA date=20
Implementation-Vendor: JBoss Inc.

However, the JBoss source distribution does not contain the sources.
WTF number 2?

In the Hibernate downloads, the source is missing as a separate download.
WTF number 3?

Finally, I found it in the zip file in the last link. I was desperate, already thinking of using the source repositories.
I had to extract it, re-zip it and hope the versions match. So far so good.

Why does it have to be so difficult?!

Overriding a method with a raw type, want to use generics in the override

I want to override org.hibernate.Interceptor#postFlush(java.util.Iterator), which is declared with a raw Iterator.
I want to do it like this:

public void postFlush( Iterator<?> entities ) throws CallbackException {...}

Does not work - the method does not have the same signature?!


public void postFlush( Iterator<Object> entities ) throws CallbackException {...}

Does not work either - the method does not have the same signature?! WTF?

The only thing that does work (without a silly warning) is this:

public void postFlush( @SuppressWarnings( "unchecked" ) Iterator entities ) throws CallbackException {...}
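The behavior can be reproduced without Hibernate. Callback below is a made-up stand-in for the library interface; the point is that a method declared with a raw parameter type can only be overridden with exactly that raw type:

```java
import java.util.Iterator;

// Made-up stand-in for a library interface declared with a raw Iterator
// (like org.hibernate.Interceptor#postFlush).
interface Callback {
    @SuppressWarnings("rawtypes")
    void postFlush(Iterator entities);
}

class CountingCallback implements Callback {
    int seen;

    // Declaring the parameter as Iterator<?> or Iterator<Object> would be a
    // *different* parameter type, so the compiler would not treat the
    // method as an override of the raw-typed one.
    @SuppressWarnings("rawtypes")
    public void postFlush(Iterator entities) {
        while (entities.hasNext()) {
            entities.next();
            seen++;   // handle the entity here
        }
    }
}
```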


Iterating over iterator contents with a for loop

How an iterator works:

Iterator i = collection.iterator();
while( i.hasNext() ) {
    Object o = i.next();
    //do stuff
}
Well, I want to move the Object o = i.next(); line into the loop init, so it would not mingle with the loop body, where only business logic should reside.

Unfortunately, that cannot happen. A for loop first runs the init, then checks the condition. With an iterator, one has to first check whether there are elements, and only then get one.
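The closest the for loop gets is scoping the iterator in its init section - the next() call still has to be the first statement of the body (a plain sketch, no library types involved):

```java
import java.util.Iterator;
import java.util.List;

class IterateDemo {
    // The iterator is confined to the loop header, but next() cannot move
    // into the init: hasNext() must be checked before every next() call.
    static int countNonNull(List<?> collection) {
        int count = 0;
        for (Iterator<?> i = collection.iterator(); i.hasNext(); ) {
            Object o = i.next();
            if (o != null) count++;   // only business logic below the next() call
        }
        return count;
    }
}
```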

And why someone would give you an iterator instead of a collection, my mind cannot comprehend. (org.hibernate.Interceptor#postFlush(java.util.Iterator))

JUnit, exceptions in @Before and @After methods

The JUnit spec (not very easy to find) states that if there's an exception in the @Before method, the test is not called. True. BUT

the JUnit spec does not state that in this case the @After method is still called.

The spec also does not state that if there's an exception in both the
@Before and @After methods, the second exception (guess which one)
overrides the first one. In my case the first one causes the second one
(the first one is a ConnectionClosed or similar, the second one is a
NullPointerException because the resource is not initialized).

So the real reason for the problem is lost. One will say: avoid exceptions in
the @After method. That's what I just did (and the real exception got
printed), but finding the reason for that was not easy.

I'm using JUnit 4.4; the default runner is JUnit4ClassRunner, and JUnit4ClassRunner calls ClassRoadie.runProtected(), which looks roughly like this:

public void runProtected() {
    try {
        runBefores();
        runUnprotected();
    } catch (FailedBefore e) {
    } finally {
        runAfters();
    }
}
This is the reason for the mentioned override: an exception thrown from the finally block replaces the one already propagating. I hate it when finally does that.
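The override can be reproduced in isolation - an exception thrown from a finally block discards the one already in flight from the try block:

```java
// Minimal reproduction of the finally-overrides-exception behavior.
class FinallyOverride {
    // Throws IllegalStateException from try (the "real reason"), but the
    // NullPointerException thrown from finally replaces it in flight.
    static void failingCall() {
        try {
            throw new IllegalStateException("real reason");   // like the @Before failure
        } finally {
            throw new NullPointerException("later failure");  // like the @After failure
        }
    }

    public static void main(String[] args) {
        try {
            failingCall();
        } catch (RuntimeException e) {
            // Only the NullPointerException survives; the real reason is lost.
            System.out.println(e.getClass().getSimpleName()); // prints NullPointerException
        }
    }
}
```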