java - Why is Direct ByteBuffer ever increasing on HornetQ server leading to OOM?


Configuration

I have set up a standalone HornetQ (2.4.7-Final) cluster on Ubuntu 12.04.3 LTS (GNU/Linux 3.8.0-29-generic x86_64). The instance has 16GB of RAM and 2 cores, and I have allocated -Xms5G -Xmx10G to the JVM.

The following address settings are in the HornetQ configuration:

    <address-settings>
       <address-setting match="jms.queue.pollingqueue">
          <dead-letter-address>jms.queue.dlq</dead-letter-address>
          <expiry-address>jms.queue.expiryqueue</expiry-address>
          <redelivery-delay>86400000</redelivery-delay>
          <max-delivery-attempts>10</max-delivery-attempts>
          <max-size-bytes>1048576000</max-size-bytes>
          <page-size-bytes>10485760</page-size-bytes>
          <address-full-policy>page</address-full-policy>
          <message-counter-history-day-limit>10</message-counter-history-day-limit>
       </address-setting>
       <address-setting match="jms.queue.offerqueue">
          <dead-letter-address>jms.queue.dlq</dead-letter-address>
          <expiry-address>jms.queue.expiryqueue</expiry-address>
          <redelivery-delay>3600000</redelivery-delay>
          <max-delivery-attempts>25</max-delivery-attempts>
          <max-size-bytes>1048576000</max-size-bytes>
          <page-size-bytes>10485760</page-size-bytes>
          <address-full-policy>page</address-full-policy>
          <message-counter-history-day-limit>10</message-counter-history-day-limit>
       </address-setting>
       <address-setting match="jms.queue.smsqueue">
          <dead-letter-address>jms.queue.dlq</dead-letter-address>
          <expiry-address>jms.queue.expiryqueue</expiry-address>
          <redelivery-delay>3600000</redelivery-delay>
          <max-delivery-attempts>25</max-delivery-attempts>
          <max-size-bytes>1048576000</max-size-bytes>
          <page-size-bytes>10485760</page-size-bytes>
          <address-full-policy>page</address-full-policy>
          <message-counter-history-day-limit>10</message-counter-history-day-limit>
       </address-setting>
       <!--default catch all-->
       <!-- delay redelivery of messages 1hr -->
       <address-setting match="#">
          <dead-letter-address>jms.queue.dlq</dead-letter-address>
          <expiry-address>jms.queue.expiryqueue</expiry-address>
          <redelivery-delay>3600000</redelivery-delay>
          <max-delivery-attempts>25</max-delivery-attempts>
          <max-size-bytes>1048576000</max-size-bytes>
          <page-size-bytes>10485760</page-size-bytes>
          <address-full-policy>page</address-full-policy>
          <message-counter-history-day-limit>10</message-counter-history-day-limit>
       </address-setting>
    </address-settings>

There are 10 other queues bound to the default address specified by the wildcard.

Problem

Over a period of time, the direct ByteBuffer memory gradually increases in size and occupies swap space, eventually throwing an OutOfMemoryError ("Direct buffer memory").

I have tried a lot of JVM and JMS tuning, all in vain. Specifying -XX:MaxDirectMemorySize=4G on the JVM resulted in an OOME for the same reason. It seems that either the ByteBuffers aren't being read and released, or the GC isn't reclaiming the unreferenced memory.
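
For reference, direct-buffer usage can be watched from inside the JVM via the standard java.lang.management.BufferPoolMXBean (available since Java 7). The sketch below only illustrates that API; the class name and polling interval are arbitrary choices:

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;

    // Periodically prints the size of the JVM's buffer pools.
    public class DirectBufferMonitor {
        public static void main(String[] args) throws InterruptedException {
            List<BufferPoolMXBean> pools =
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            while (true) {
                for (BufferPoolMXBean pool : pools) {
                    // "direct" covers ByteBuffer.allocateDirect(); "mapped" covers memory-mapped files
                    System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                            pool.getName(), pool.getCount(),
                            pool.getMemoryUsed(), pool.getTotalCapacity());
                }
                Thread.sleep(10_000); // arbitrary 10-second polling interval
            }
        }
    }

The same figures are also exposed over JMX (object name java.nio:type=BufferPool,name=direct), so they can be watched from a JMX console attached to the broker as well.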

Has anyone faced the same issue before?

Any suggestions are welcome. Thanks in advance.

I don't know HornetQ's internals, but this answer covers direct ByteBuffers (DBBs) in general:

  • It's an ordinary leak: the DBB objects are still reachable and therefore never freed. This arises from either a bug in, or incorrect usage of, the application.
    The usual approach here is to take a heap dump and determine what keeps the objects alive (see the heap-dump sketch after this list).

  • The buffers become unreachable, but the garbage collector performs old-gen collections so rarely that it takes a long time until they are collected and the native memory gets freed. This gets worse if the server runs with -XX:+DisableExplicitGC, which suppresses the last-ditch full GC that is attempted when the MaxDirectMemorySize limit is reached.
    Tuning the GC to run more often, to ensure timely release of DBBs, would solve this case (the allocation demo after this list illustrates the effect).
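
For the first case, a heap dump of the broker can be captured with jmap (jmap -dump:live,format=b,file=heap.hprof <pid>) and then inspected in a tool such as Eclipse MAT, looking at java.nio.DirectByteBuffer instances and their paths to GC roots. If you can run code inside the affected JVM, the HotSpot diagnostic MXBean offers a programmatic route; a minimal sketch (class and file names are arbitrary):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.io.IOException;
    import java.lang.management.ManagementFactory;

    // Dumps the heap of the JVM this code runs in, so it must execute inside the
    // broker's JVM to be useful; for a separate process, use jmap against the PID.
    public class HeapDumper {
        public static void main(String[] args) throws IOException {
            HotSpotDiagnosticMXBean diag =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // The second argument limits the dump to live (strongly reachable) objects.
            diag.dumpHeap("hornetq-direct-buffer-leak.hprof", true);
        }
    }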
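
For the second case, the effect is easy to reproduce outside HornetQ. The contrived demo below (names and sizes are made up for illustration) allocates direct buffers that immediately become unreachable; their native memory is only returned once the GC actually collects the DirectByteBuffer objects, so a run with a small -XX:MaxDirectMemorySize and -XX:+DisableExplicitGC will typically hit "Direct buffer memory" much sooner than one where the allocator's fallback System.gc() is allowed:

    import java.nio.ByteBuffer;

    // Allocates direct buffers and drops the references straight away.
    // Suggested run: java -XX:MaxDirectMemorySize=256m DirectBufferChurn
    // Adding -XX:+DisableExplicitGC suppresses the fallback System.gc(), so the
    // native memory is released only when an ordinary GC happens to run.
    public class DirectBufferChurn {
        public static void main(String[] args) {
            long allocatedMb = 0;
            while (true) {
                ByteBuffer.allocateDirect(8 * 1024 * 1024); // 8 MB, no reference kept
                allocatedMb += 8;
                if (allocatedMb % 1024 == 0) {
                    System.out.println("allocated " + allocatedMb + " MB of direct buffers so far");
                }
            }
        }
    }

This mirrors the large-heap situation described above: with a 10 GB heap, old-gen collections can be far apart, so the buffers' cleaners run late and native memory piles up in the meantime.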

