Mendix & Java: part II – common errors

Once your application starts performing poorly, becomes unstable or, even worse, crashes, the first thing to do is check your application log for hints on what could be causing this. If there are any “FATAL” or “CRITICAL” log lines in there, start working on resolving them immediately. Any “ERROR” log line should be treated with the same urgency, so you should always strive to get rid of them.

Some of the more common errors you can find in the application log that can cause your application to go down are the topic of this article. Let’s dive right in.

java.lang.OutOfMemoryError: PermGen space

If you were using lots of large Java libraries in Java 6 (Mendix 4) or Java 7 (Mendix 5) you could run into this error, which will cause your application to crash. It is easily solved by adding more memory to the PermGen (something our CloudOps team can do for you) or by replacing some of the Java libraries with smaller ones (if they exist, that is).
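For reference: on the Java 6/7 HotSpot JVM, the PermGen size is controlled with a startup flag. The value below is only an example; in the Mendix Cloud this is a setting CloudOps adjusts for you:

    -XX:MaxPermSize=256m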

java.lang.StackOverflowError

Your application is not going to recover from one of these bad boys. When you encounter one while running your Mendix application, it is practically always caused by unbounded recursion: a microflow or Java action that keeps calling itself. You can easily recreate this by creating a microflow called “Microflow” with a single action: call microflow and select the microflow called “Microflow”. Something like this:

 

Now place it in a page somewhere behind an action button, start your application, press that button and… enjoy.
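The plain-Java version of the same mistake is a method that keeps calling itself. A minimal sketch (the class and method names below are made up purely for illustration):

    // Illustrative only: a call without a base case keeps pushing frames onto
    // the call stack until the JVM throws java.lang.StackOverflowError.
    public class StackOverflowDemo {

        static void microflow() {
            microflow(); // same idea as a microflow that calls itself
        }

        public static void main(String[] args) {
            microflow();
        }
    }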

java.lang.OutOfMemoryError: Java heap space

This is the error you run into when the JVM Heap tells you: “Enough is enough. I can’t fit all of this into my memory.” That usually translates to: this application has become unstable and should be restarted before it crashes. Oh, and you also have a real problem to solve.

The following items can cause this error:

  • Memory leak
    • Introduced by the developer’s custom code (see the sketch after this list).
    • Bug in the Mendix Runtime.
    • Bug in a Java library used by the developer’s custom code or by the Mendix Runtime.
    • Bug in the Java Runtime.
  • Massive creation of objects (e.g. by retrieving 1 trillion entities in a single microflow, all at once).
  • Configuration issue / sizing issue.
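The first of those, a leak introduced in your own custom Java code, usually boils down to objects that stay reachable forever. A minimal sketch, assuming nothing about the Mendix APIs (the class and names below are purely illustrative):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: anything added to a static collection stays reachable,
    // so the garbage collector can never reclaim it and the heap keeps growing.
    public class LeakyCache {

        private static final List<byte[]> CACHE = new ArrayList<byte[]>();

        public static void remember(byte[] payload) {
            CACHE.add(payload); // added, never removed: a memory leak
        }

        public static void main(String[] args) {
            while (true) {
                remember(new byte[1024 * 1024]); // roughly 1 MB per iteration
            }
        }
    }

Run long enough, this ends in java.lang.OutOfMemoryError: Java heap space, which is exactly what a leaking Java action does to your application, only slower.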

A memory leak will look as if the garbage collector has stopped running: memory usage keeps climbing and never drops back down. See the first half of the graph here, for example:

It is advisable to always take a look at the Object Cache (=Mendix objects in the Heap) graph to see if it resembles the Heap. For example:

This looks quite healthy.

If you see the object cache going up indefinitely (i.e. the lines never go back down), you might have introduced a memory leak yourself, and it would be best to immediately analyse your application to see if that could be the case.

On the other hand, if it looks like this:

There is a much larger chance that you are dealing with a bug outside of your control (e.g. in the Mendix Runtime) that is somehow causing a memory leak.

java.lang.OutOfMemoryError: GC overhead limit exceeded

Such a cryptic description. But it is quite simple really: this is the JVM telling you: “I am spending an excessive amount of time garbage collecting (by default more than 98% of all CPU time) and recovering very little memory each time (by default less than 2% of the total Heap size). Let me just stop your application now, so you can figure out what’s wrong before it crashes.”
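For reference, those defaults map onto HotSpot JVM flags, listed here purely for information (raising or disabling them only hides the real problem):

    -XX:GCTimeLimit=98        percentage of total time spent in GC before the error is thrown (default 98)
    -XX:GCHeapFreeLimit=2     percentage of heap that must be free after a full GC (default 2)
    -XX:-UseGCOverheadLimit   turns the check off entirely (not recommended)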

The most common causes for this error are:

  1. Mostly: creating lots of objects in a short amount of time.
  2. Sometimes: creating lots of objects in rapid succession.
  3. Rarely: something else.

If you want to replicate this error, do something like this:
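In plain Java, the idea behind this example – creating a huge number of Account objects and keeping them all alive – looks roughly like the sketch below. The Account class here is purely illustrative, not a generated Mendix entity:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: keep creating objects that remain reachable, so every
    // garbage collection works harder and harder while freeing almost nothing.
    public class GcOverheadDemo {

        static class Account {
            String name = "account";
            byte[] details = new byte[4096];
        }

        public static void main(String[] args) {
            List<Account> accounts = new ArrayList<Account>();
            while (true) {
                accounts.add(new Account()); // every Account stays alive
            }
        }
    }

Depending on heap size and GC settings you may see either “GC overhead limit exceeded” or plain “Java heap space”, but the mechanism is the same.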

Eventually memory will run low due to all the accounts being created, and the GC will try to free up memory. But it won’t be able to, as all these Account objects are still alive. After a while it will return the error above and by now, you hopefully understand why.

That concludes the list of some of the more common errors you can find in the application log that can cause your application to go down. But there is one more item to share. While it is not an error in the log, it might match some of the symptoms outlined above, and it is easy to check for, so it is never a bad idea:

Lack of resources on the application server

If you see the grey “committed” line peak into the white part of the “Application node operating system memory” graph, your app node needs more memory, which means upgrading to a larger container should be strongly considered. See the following graph for an example of this problem:

 
