Developers running Java applications in Docker containers often come across OOM (out-of-memory) errors and are baffled, because most of their applications are stateless services such as REST APIs.
So why do Docker containers throw OOM errors even when there are no long-lived objects in memory and all short-lived objects should be garbage collected?
The answer lies in how the JVM calculates the max heap size that should be allocated to the Java process.
By default, the JVM assigns 1/4 of the total physical memory as the max heap size for the Java runtime. But this is the physical memory of the server (or VM), not of the Docker container.
So let's say your server has 16 GB of memory and runs 8 Docker containers, each configured with a 2 GB limit. Your JVM has no understanding of the Docker memory limit, so it will allocate 1/4 of 16 GB = 4 GB as the max heap size. Hence garbage collection may not run until the full heap is utilized, and your JVM's memory use may grow beyond 2 GB.
When this happens, Docker or Kubernetes kills your JVM process with an OOM error.
You can simulate the above scenario and print the Java heap memory stats to understand the issue. Print runtime.freeMemory(), runtime.maxMemory() and runtime.totalMemory() (via Runtime.getRuntime()) to observe the patterns of memory allocation and garbage collection.
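As a minimal sketch (the class name and output labels are my own), something like this prints the three figures:

public class MemoryStats {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory(): the heap ceiling the JVM has derived (or the -Xmx you passed)
        System.out.println("Max heap   : " + (runtime.maxMemory() / mb) + " MB");
        // totalMemory(): heap currently reserved from the operating system
        System.out.println("Total heap : " + (runtime.totalMemory() / mb) + " MB");
        // freeMemory(): unused portion of the currently reserved heap
        System.out.println("Free heap  : " + (runtime.freeMemory() / mb) + " MB");
    }
}

Running this inside a memory-limited container, you would expect "Max heap" to reflect the host's memory, not the container's limit.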
The diagram below gives a good illustration of the JVM's memory stats.
So how do we solve the problem? There is a very simple solution: just explicitly set the max heap size of the JVM when launching the program, e.g. java -Xmx2000m -jar myjar.jar
This ensures that garbage collection runs appropriately within the container limit and an OOM kill does not occur.
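For example, assuming a container capped at 2 GB (the image and jar names below are placeholders), leave some headroom below the limit for Metaspace and thread stacks rather than giving the whole 2 GB to the heap:

docker run -m 2g my-java-image java -Xmx1500m -jar myjar.jar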
A good article with more details on this is available here.
Also, it is important to understand that the total memory consumed by a Java process (its Resident Set Size, or RSS) is equal to the heap size + PermGen/Metaspace size + native memory (required for thread stacks, file handles, etc.). If you have a large number of threads, native memory consumption can also be high (number of threads * -Xss).
Max memory = [-Xmx] + [-XX:MaxMetaspaceSize (or -XX:MaxPermSize on Java 7 and earlier)] + [number_of_threads * -Xss]
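As a rough worked example with assumed numbers: a service started with -Xmx1024m and -XX:MaxMetaspaceSize=256m that runs about 200 threads at the default 1 MB stack size needs roughly 1024 + 256 + 200 ≈ 1.5 GB of resident memory, so a container limit of 1.5 GB would leave it no headroom at all.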
Since JDK 8u131, you can also use the following -XX parameters (an example command follows the list):
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap // Lets the JDK honor the cgroup memory limit imposed on the container when sizing the heap
-XX:MaxRAM // Caps the total memory the JVM assumes is available; the default heap size is derived from it, leaving the remainder for off-heap usage
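For example, on JDK 8u131 or later (the jar name is a placeholder), you can either let the heap be derived from the cgroup limit or pin the assumed RAM explicitly:

java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar myjar.jar
java -XX:MaxRAM=2g -jar myjar.jar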
Java 10 made the JVM container-aware out of the box (container support is enabled by default), so on Java 10 and later you generally do not need to specify these parameters at all.
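On those newer versions (Java 10+, also backported to 8u191) you can still tune what fraction of the container's memory goes to the heap with -XX:MaxRAMPercentage, for example:

java -XX:MaxRAMPercentage=75.0 -jar myjar.jar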
Other articles worth perusing are:
http://trustmeiamadeveloper.com/2016/03/18/where-is-my-memory-java/
https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/
https://dzone.com/articles/how-to-decrease-jvm-memory-consumption-in-docker-u
https://plumbr.io/blog/memory-leaks/why-does-my-java-process-consume-more-memory-than-xmx
http://stas-blogspot.blogspot.com/2011/07/most-complete-list-of-xx-options-for.html
If you wish to check all the -XX options for a JVM, you can run the following command:
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version
If you have a debug build of Java, then you can also try:
java -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal -XX:+PrintFlagsWithComments -version
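The output is very long, so it helps to filter it; for example, to see just the heap-sizing defaults:

java -XX:+PrintFlagsFinal -version | grep -i heapsize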
There are also tools you can use to work out the best memory settings, such as the Cloud Foundry Java Buildpack Memory Calculator:
https://github.com/cloudfoundry/java-buildpack-memory-calculator
https://github.com/dsyer/spring-boot-memory-blog/blob/master/cf.md