
Preventing Cassandra from dumping hprof files

I would like to stop Cassandra from dumping hprof files as I do not require the use of them.

I also have very limited disk space (50 GB out of 100 GB is used for data), and these files swallow up all the disk space before I can say "stop".

How should I go about it?

Is there a shell script that I could use to erase these files from time to time?

Answer

This happens because Cassandra starts with the -XX:+HeapDumpOnOutOfMemoryError JVM option, which is useful if you want to analyze those dumps. Also, if you are getting lots of heap dumps, that indicates you should probably tune the memory available to Cassandra.
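To confirm the flag is actually set on a running node, you can query the JVM directly. A quick check, assuming the standard CassandraDaemon main class and that the JDK's jinfo tool is on your PATH:

pid=$(pgrep -f CassandraDaemon)                  # pid of the Cassandra JVM
jinfo -flag HeapDumpOnOutOfMemoryError "$pid"    # prints -XX:+HeapDumpOnOutOfMemoryError if enabled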

I haven't tried it, but to disable this option, comment out the following line in $CASSANDRA_HOME/conf/cassandra-env.sh:

JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
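After the edit it would simply read:

#JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"

Remember that JVM options are only read at startup, so restart the node for the change to take effect.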

Optionally, you may comment out this block as well, but I don't think it is really required. The block appears in version 1.0+, I guess; I can't find it in 0.7.3.

# set jvm HeapDumpPath with CASSANDRA_HEAPDUMP_DIR
if [ "x$CASSANDRA_HEAPDUMP_DIR" != "x" ]; then
    JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=$CASSANDRA_HEAPDUMP_DIR/cassandra-`date +%s`-pid$$.hprof"
fi
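As for your question about erasing these files from time to time, here is a minimal cleanup sketch. It assumes the dumps land in Cassandra's working directory (/var/lib/cassandra here; adjust the path and retention to your setup):

#!/bin/sh
# remove Cassandra heap dumps older than one day
find /var/lib/cassandra -name '*.hprof' -mtime +1 -delete

You could run it daily from cron, e.g. with a crontab entry like 0 3 * * * /usr/local/bin/clean-hprof.sh (the script name and schedule are just placeholders).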

Let me know if this worked.


Update

...I guess it is the JVM throwing it out when Cassandra crashes / shuts down. Any way to prevent that one from happening?

If you want to disable JVM heap dumps altogether, see: how to disable creating java heap dump after VM crashes?
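For the OutOfMemoryError dump specifically, note that boolean JVM flags can also be switched off explicitly with the minus form, so instead of removing the line you could flip it (a sketch for cassandra-env.sh):

JVM_OPTS="$JVM_OPTS -XX:-HeapDumpOnOutOfMemoryError"

Since later flags override earlier ones, this disables the dump even if another fragment of the script enables it first.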
