Wednesday, July 25, 2012

Managing Oracle JDBC Memory Usage - Executive Summary

Keywords:
Oracle JDBC memory leak usage java.lang.OutOfMemoryError fetchSize white paper summary 10g 11.2

Problem:
So there appear to be memory issues with your system, and the Oracle JDBC drivers look like the prime candidate (based on the heap dumps). The Oracle JDBC Memory Management white paper says as much:
The Oracle JDBC drivers can use large amounts of memory. This is a conscious design choice, to trade off large memory use for improved performance. For the most part and for most users this has proved to be a good choice. Some users have experienced problems with the amount of memory the JDBC drivers use.

The white paper is only a dozen or so pages, but it seems each JDBC version introduces new connection properties or changes the meaning of previously used properties. Is there an executive-summary/cheat-sheet/quick-reference that would essentially say what you can do for a given version?

Solution:
There doesn't seem to be, but I'll have a go ...

First, some summary points:
  • From 9i to 10g, the performance of the JDBC drivers improved: "... on average about 30% faster".
  • This was achieved by greater use of caches, the justification being that "memory is relatively cheap".
  • In large-scale applications with complex (and batch) data usage, there seem to be two key caches that may cause memory usage issues - the "Implicit Statement Cache" and the "Internal Buffer Cache".
  • Tuning the caches can be done via connection properties (all of which can also be defined as system "-D" properties).
    Tuning should be done in consideration of:
    • the table design (column types and sizes)
    • the query design (columns needed / batches)
    • the fetch size. Setting this incorrectly will definitely cause OutOfMemoryErrors - as seen in JCR-2892. A sketch of bounding the fetch size follows this list.
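
On that last point, here is a minimal sketch of bounding the fetch size on a plain JDBC query. The table, column and processing are made up for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FetchSizeExample {
        // Reads a large table while keeping the driver's row buffers small.
        static void readAll(Connection conn) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT payload FROM big_table"); // hypothetical table/column
            try {
                // The driver sizes its buffers at roughly fetchSize x max-row-size,
                // so an over-large fetch size on wide rows can exhaust the heap.
                ps.setFetchSize(100);
                ResultSet rs = ps.executeQuery();
                try {
                    while (rs.next()) {
                        String payload = rs.getString(1);
                        // ... process payload ...
                    }
                } finally {
                    rs.close();
                }
            } finally {
                ps.close();
            }
        }
    }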

Now, for the connection properties:

The versions considered are those covered by the white paper: 10.2.0.4, 11.1.0.6.0, 11.1.0.7.0 and 11.2. Each property is listed below with its default, notes, and per-version applicability.

oracle.jdbc.freeMemoryOnEnterImplicitCache
  • Default: false
  • Notes: When true, row-data buffers are cleared when a PreparedStatement is cached (in the "Implicit Statement Cache").
  • 10.2.0.4: Yes (new)
  • 11.1.0.6.0 / 11.1.0.7.0 / 11.2: Ignored (from 11.1.0.6.0 onwards, buffers are put in a new "Internal Buffer Cache")
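
For the 10.2.0.4 driver this can be set programmatically as well as via -D. A minimal sketch, with placeholder URL and credentials:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class FreeMemoryOnEnterCacheExample {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver"); // pre-JDBC-4 drivers need explicit loading

            Properties props = new Properties();
            props.setProperty("user", "scott");     // placeholder credentials
            props.setProperty("password", "tiger");
            // Release row-data buffers when a statement enters the Implicit
            // Statement Cache (honoured by 10.2.0.4 only; ignored from
            // 11.1.0.6.0 onwards).
            props.setProperty("oracle.jdbc.freeMemoryOnEnterImplicitCache", "true");

            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", props); // placeholder URL
            // ... run workload ...
            conn.close();
        }
    }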

oracle.jdbc.maxCachedBufferSize
  • Default: Integer.MAX_VALUE (2,147,483,647 ~ 2Gb)
  • Notes: Sets an upper limit on the "Internal Buffer Cache".
    Look out for the change in meaning from 11.1.0.7.0 to 11.2 - though if you use a value >30 in 11.2, it will revert to treating it as a plain integer. Oracle recommends: "... start with 18. If you have to set the value to less than 16, you probably need more memory."
    Each connection will (may?) have its own buffer cache, so in a connection-pool setup multiply the pool size by the maxCachedBufferSize.
  • 10.2.0.4: N/A (no "Internal Buffer Cache")
  • 11.1.0.6.0: No (no way to set the size)
  • 11.1.0.7.0: Yes - set as an integer value, e.g. 102400 ~ 100Kb
  • 11.2: Yes - set as a log2 value, e.g. 18 = 2^18 = 262,144 ~ 256Kb
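
To see what the recommended 18 means for a pooled application, a back-of-the-envelope sketch (the pool size is an assumption):

    public class BufferCacheSizing {
        public static void main(String[] args) {
            int poolSize = 50;             // assumed connection-pool size
            int maxCachedBufferSize = 18;  // 11.2 meaning: a log2 value
            long perConnection = 1L << maxCachedBufferSize; // 2^18 = 262,144 bytes (~256Kb)
            long worstCase = poolSize * perConnection;      // ~12.5Mb across the pool
            System.out.println("Worst-case buffer cache use: " + worstCase + " bytes");
        }
    }

The equivalent command-line settings would be -Doracle.jdbc.maxCachedBufferSize=18 on 11.2, or -Doracle.jdbc.maxCachedBufferSize=262144 on 11.1.0.7.0 (where the value is a plain byte count).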

oracle.jdbc.useThreadLocalBufferCache
  • Default: false
  • Notes: Storing the buffer cache in a ThreadLocal instead of on the Connection saves memory if (and only if) there are fewer Threads than Connections. Avoid it if your code uses Connections across Threads.
  • 10.2.0.4: N/A
  • 11.1.0.6.0: N/A
  • 11.1.0.7.0: Yes (new)
  • 11.2: Yes
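
Since all of these can also be set as system "-D" properties, a one-line sketch (assuming the property is set before the driver creates the connection):

    // Equivalent to -Doracle.jdbc.useThreadLocalBufferCache=true on the command line.
    System.setProperty("oracle.jdbc.useThreadLocalBufferCache", "true");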

oracle.jdbc.implicitStatementCacheSize
  • Default: 0
  • Notes: Sets an initial size for the "Implicit Statement Cache". Setting it to 0 or -1 does not seem to disable the cache, so it can perhaps be ignored for memory-management purposes - though it may improve performance.
  • 10.2.0.4: N/A
  • 11.1.0.6.0: N/A
  • 11.1.0.7.0: N/A
  • 11.2: Yes (new)
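
Putting it together for an 11.2 driver, a sketch of a tuned connection setup - the values and URL are placeholders, not recommendations:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TunedConnectionExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "scott");      // placeholder credentials
            props.setProperty("password", "tiger");
            // 11.2: a log2 value, so 18 => 2^18 = 262,144 bytes per buffer cache.
            props.setProperty("oracle.jdbc.maxCachedBufferSize", "18");
            // Only worthwhile when there are fewer threads than connections.
            props.setProperty("oracle.jdbc.useThreadLocalBufferCache", "true");
            // Pre-sizes the Implicit Statement Cache (a performance tweak, not a memory one).
            props.setProperty("oracle.jdbc.implicitStatementCacheSize", "10");

            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", props); // placeholder URL
            // ... run the application's workload ...
            conn.close();
        }
    }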