Thursday, October 31, 2013

Quick Start to tracing JDBC SQL operations using log4jdbc

database logging show SQL log4jdbc tomcat JNDI log4j trace performance timing

Given a complex web application (with many moving parts), there are performance issues that, on first impression, seem to point at environment-specific database latency. Is there any way to trace/profile/monitor all the SQL statements being made to the database (preferably with timings)?

log4jdbc lets you seamlessly configure a logger between your application and the JDBC connection. The documentation on the log4jdbc site is very good and has just about all you need to know; below is a "quick start" for the impatient.

Step 1: Jar File(s)

Drop the log4jdbc4-[version].jar file into [webapp]/WEB-INF/lib.
You'll also need:
  • a supported logging framework (log4j: log4j-[version].jar)
  • the SLF4J API (slf4j-api-[version].jar - already had it)
  • the SLF4J jar to use this logging framework (slf4j-log4j12-[version].jar - already had it)

Step 2: Driver Class and URL

  1. Change the database resource definition to use the driver class net.sf.log4jdbc.DriverSpy ('spy' as it will work out the correct 'real' driver class to use).
  2. Insert log4jdbc: after the jdbc: prefix of the existing URL (so it starts with jdbc:log4jdbc:).
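For example, with hypothetical PostgreSQL connection details, the change is just in the URL prefix:

before: jdbc:postgresql://dbhost:5432/mydb
after:  jdbc:log4jdbc:postgresql://dbhost:5432/mydb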

For a PostgreSQL database connection configured as a JNDI/JDBC Resource in Tomcat, this will look like (resource name, host and database name are placeholders):
    <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
            driverClassName="net.sf.log4jdbc.DriverSpy"
            url="jdbc:log4jdbc:postgresql://dbhost:5432/mydb"
            username="postgres" password="***" 
            maxActive="20" maxIdle="10" maxWait="-1" 
            removeAbandoned="true" removeAbandonedTimeout="120" logAbandoned="true"
            charset="UTF-8" />

Step 3: Logging Config

There are detailed notes on the five key loggers and an example is available. But if you just want to see SQL + timings, the following log4j.properties entries will log it all to the console:

log4j.logger.jdbc.sqltiming=INFO,Log4JDBC
log4j.additivity.jdbc.sqltiming=false
log4j.appender.Log4JDBC=org.apache.log4j.ConsoleAppender
log4j.appender.Log4JDBC.layout=org.apache.log4j.PatternLayout
log4j.appender.Log4JDBC.layout.ConversionPattern=%-5p [%d{DATE} %c]: %m%n

That's it. Start up the application and it should function exactly as it did before, but all SQL operations (made via the JNDI connection pool resource in my example) will be logged in the form:
INFO  [01 Nov 2013 12:00:00,000 jdbc.sqltiming]: SELECT NAME FROM PERSON WHERE ID=54321
{executed in 1 msec}


Bonus Step: Trace back to the Application Code

If your issue is working out which part of the application code triggered the execution of a given SQL statement, a neat feature of the log4jdbc loggers is including the class and method name of the code that invoked the JDBC driver. As there may be a few layers between your code and the JDBC driver (hibernate > application server connection pool, etc.) you can tell it which layer you want to capture from.
  1. change the log level to DEBUG
  2. set the system property log4jdbc.debug.stack.prefix to your application's base package
Log statements will then be in the form:
DEBUG [01 Nov 2013 12:30:00,000 jdbc.sqltiming]:  com.example.PersonDAO.findPerson(
{executed in 53 msec}
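For a Tomcat deployment, the system property from step 2 could be set at startup via CATALINA_OPTS (the com.example prefix is an assumption - use your own application's base package):

CATALINA_OPTS="$CATALINA_OPTS -Dlog4jdbc.debug.stack.prefix=com.example"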

NB: this extra logging adds a noticeable overhead to the database usage (and timings).

Friday, August 02, 2013

Tomcat IIS Connector "request entity is too large"

tomcat IIS IIS7 jk connector max_packet_size packetSize "The page was not displayed because the request entity is too large"

With IIS successfully configured with tomcat using the Apache Tomcat Connector (aside: if you haven't got that far the IIS Admin Blog - How To Configure IIS 7.0 and Tomcat is a good reference - with screenshots) you find that some users can access the web-apps ok, others cannot. They get a plain error page saying:
The page was not displayed because the request entity is too large
How do you fix it?

The issue is with attributes in the request exceeding the AJP default packet size of 8KB (for me, the ISAPI redirector was logging that the error was with the 'Authorization' attribute). You can increase this to a maximum of 65536 bytes (64KB).

This needs to be done in the tomcat-connector and tomcat itself.

Step 1: Set max_packet_size in the worker definition

In the file referenced by the tomcat-connection definition, set the packet size to the maximum:
worker.<worker name>.max_packet_size=65536
for example, for a worker named 'ajp13w':
worker.ajp13w.max_packet_size=65536
Check the workers documentation for more information.
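A fuller workers.properties sketch for context (the worker name ajp13w, host and port are assumptions - match them to your existing definition):

worker.list=ajp13w
worker.ajp13w.type=ajp13
worker.ajp13w.host=localhost
worker.ajp13w.port=8009
worker.ajp13w.max_packet_size=65536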

Step 2: Set packetSize in the AJP Connector definition

In the server.xml configuration file for tomcat, set the packet size to the maximum:
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" packetSize="65536"
        tomcatAuthentication="false" />
Check the AJP Connector documentation for more information.

You'll then need to restart tomcat and IIS Site for the changes to take effect (then hope for the best).

Monday, July 01, 2013

Querying JSON using JSONPath

JSON query filter evaluate JSONPath expression

Dealing with code that does a fair bit of JSON processing (traversing object structures and collating the values) leads to the question - is there a standard way to look up or filter data from a JSON structure using a query or expression?

The same question has already been asked on StackOverflow: Is there a query language for JSON? and the answer is essentially that there are many approaches. There's another list of approaches with examples in 8 ways to query json structures.

So if there's no 'standard' approach (for example, backed by an RFC as is the case with JSON), which approach should I pick?

JSONPath proposed by Stefan Goessner allows for XPath-like expressions to be evaluated against JSON. What's possible with the expression syntax seems to cover most of the traversal and collating scenarios I was interested in and the JavaScript implementation is remarkably lightweight (implemented as just one function jsonPath(<object>, <expression>, <optional arguments object>)).

Here are some extra expression examples (beyond those shown in the JSONPath articles and jsonpath on Google Code), evaluated against a root that is an array of book objects:

Expression Result
$ The root element (the same JSON back out - i.e. an identity transform).
$.* All child elements of the root.
$[*] All book elements.
$[1] The second book element.
$..price The price of every element - the results will be Numbers.
$[?(@.isbn=='0-553-21311-3')] The book element where isbn is '0-553-21311-3'.
$[?(@.category=='fiction' && @.price > 10)] The book elements where category is 'fiction' and price is more than 10.
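To make the filter expressions concrete, here is what the last expression computes, written as plain JavaScript against a book array (the sample books are borrowed from Goessner's JSONPath article; no library involved - this is just the equivalent logic):

```javascript
// Root of the JSON document: an array of books.
var books = [
    { category: "reference", author: "Nigel Rees", title: "Sayings of the Century", price: 8.95 },
    { category: "fiction", author: "Evelyn Waugh", title: "Sword of Honour", price: 12.99 },
    { category: "fiction", author: "Herman Melville", title: "Moby Dick",
      isbn: "0-553-21311-3", price: 8.99 }
];

// Equivalent of $[?(@.category=='fiction' && @.price > 10)]:
// '@' is the current element under test, so each book is checked in turn.
var result = books.filter(function (book) {
    return book.category === 'fiction' && book.price > 10;
});

console.log(result.length);   // 1
console.log(result[0].title); // Sword of Honour
```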

It seems like there's still some discussion around what's to be supported and there isn't exactly high activity on the project. So if using it, treat it as beta and evaluate whether it does what you need. It's likely to be one of those things that once it gets past v0 there will be few (if any) changes to expect.

Some things to look out for:
  • Where's the latest version?
    Although the downloads page says the latest version is 0.8.0, this is a few years old (2007). You can access later versions (of the JavaScript source) either via the SVN path to the trunk or via the Google Code browser. The javascript implementation lists the version as 0.8.5 - I'm not sure what the cycle is for getting this to a 'release'.
  • No results returned as false
    I'd expect this to be simply an empty array [].
  • Results as Arrays of Arrays
    If querying a nested JSON structure of objects and the matching results come from different parts of the tree, what you get back is a nested array structure indicating the relative location of the matches. I'd prefer to get this back as a flattened array. I guess the logic is that you have the context of where the items were found and, if you want, you can post-process (and flatten) the Arrays of Arrays - some useful approaches are discussed on StackOverflow: Merge/flatten an Array of Arrays in JavaScript?.
  • the Object and its properties in the results?
    In using the latest code mentioned above, it seems that - in some scenarios - where an object matches the expression, the result includes the object as well as its property values as additional items in the result. For example, instead of:
    [
        {
            "category": "fiction",
            "author": "Herman Melville",
            "title": "Moby Dick",
            "isbn": "0-553-21311-3",
            "price": 8.99
        }
    ]
    You get:
    [
        {
            "category": "fiction",
            "author": "Herman Melville",
            "title": "Moby Dick",
            "isbn": "0-553-21311-3",
            "price": 8.99
        },
        "fiction",
        "Herman Melville",
        "Moby Dick",
        "0-553-21311-3",
        8.99
    ]
    This wasn't the case with 0.8.0 and may change in future releases?
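On the Arrays of Arrays point above, a common one-level flatten in JavaScript (one of the approaches from that StackOverflow discussion) uses concat.apply:

```javascript
// A nested result, as JSONPath might return when matches sit at
// different depths of the queried structure.
var nested = [[8.95, 12.99], [8.99], [22.99]];

// Flatten one level: concat is applied with the sub-arrays as its arguments.
var flat = [].concat.apply([], nested);

console.log(flat); // [ 8.95, 12.99, 8.99, 22.99 ]
```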

Tuesday, June 04, 2013

Stop Chrome from Prerendering URLs

Google Chrome prefetch POST prerender random timeout speculative sockets preconnections browser learning predict network actions

Chrome has become my browser of choice - it seems faster and is less reliant on the disk, which is critical if you're on a machine that's continually doing disk-intensive work (such as builds).

That speed can come at a cost. Perhaps it was an update, or perhaps it was the repetition of certain actions in a web-wizard style application in development, but all of a sudden using Chrome was giving random, unexpected results that weren't reproducible in other browsers.

Tracing the server side, it seems that URLs are being called - with GET requests - independently of my actions in the browser. These phantom requests include my credentials (HTTP BASIC auth) and session information (as cookies), but on the client side (tracking Network in Developer Tools) there's no explanation for how/why/when these URLs are being hit.

Is my browser possessed?

Short answer? YES, yes it is (kind of) possessed! What I'm seeing is Chrome predicting my next action in the web-application and hitting those URLs (on my behalf) in the interests of speed. It was initially hard to find good references for this because it seems to be referenced with various terms. Officially, it's "Prerender" and the Google Whitepaper "Web Developer's Guide to Prerendering in Chrome" describes it as:

Prerendering extends the concept of prefetching. Instead of just downloading the top-level resource, it does all of the work necessary to show the page to the user—without actually showing it until the user clicks. Prerendering behaves similarly to if a user middle-clicked on a link on a page (opening it in a background tab) and then later switched to that tab. However, in prerendering, that "background tab" is totally hidden from the user, and when the user clicks, its contents are seamlessly swapped into the same tab the user was viewing. From the user’s perspective, the page simply loads much faster than before.

As mentioned in the quote, it's separate to "Prefetching" - which is the mechanism where a page can explicitly flag a URL as safe/recommended to begin fetching.

If this was a chess program, it's like the browser is thinking a few moves ahead but instead of just anticipating moves, it's actually making them.

As raised (and debated) in Chromium Issue 85229 - this time under the term "speculative sockets" - there's no way for the server/application to opt-out or ask it not to do this.

You can view diagnostics for which URLs are being Prerendered by opening chrome://net-internals/#prerender in a tab (if you're experiencing erroneous/random behaviour it can make for interesting reading).
Shouldn't GET requests be safe? As discussed in On Chrome and URL Pre-fetching, yes - but surely not always 100% safe. If you have an app with URLs that only ever receive POST requests, there's nothing stopping Chrome from hitting these with Prerender GETs - and if those URLs (for convenience) also handle GET requests you could be in trouble. Which leads to some inconsistencies in the above mentioned whitepaper for "Situations in which prerendering is aborted" ("...Chrome may run into a situation that could potentially lead to user-visible behavior that is incorrect"):
  • The URL is only ever requested via POST
    False: although in your usage of an application/site a URL may only ever be invoked via POST, you can't stop it attempting the same URL with a GET, just minus the parameters.
  • Developer Tools is open
    False: In my usage anyway, using the Developer tools (trying to track Network requests) will not stop the Prerender requests being made.

Which (finally) leads to the solution - turn off Prerendering. The whitepaper has a section titled "Ensuring you have prerender turned on" (despite it being enabled by default), so disabling it involves the reverse:
  1. Click the 'three bars' icon in the top right of Chrome.
  2. Click 'Settings'
  3. Click 'Show advanced settings...'
  4. Under 'Privacy'
    • un-check the option 'Predict network actions to improve page load performance'
    • un-check 'Use a prediction service to help complete searches and URLs typed in the address bar'
      (this specifically relates to prefetch/prerender of URLs typed into the 'Omnibox' address bar - Chrome will actually be calling up the suggested sites in the background if they happen to be URLs you commonly select)
  5. Open a new tab and paste in "chrome://net-internals/#prerender"
    Confirm that it lists:
    • Prerender Enabled: false
    • Prerender Omnibox Enabled: false

There is some confusion about the requests being made for Prerender and reports of Chrome making multiple (and unnecessary) background requests for the favicon - e.g. Issue 39402. At the server-side it will be clear that Prerender requests are for URLs that you tend to access, before you actually access them rather than URLs ending in /favicon.ico. There doesn't seem to be a way to disable favicon 'polling' but the above mentioned issue seems to indicate improvements were being considered to reduce the server load.

Sunday, April 07, 2013

Capture console output in ant - for any command or task - using Record

ant capture task console output stdout stderr javac jspc "Problems opening file using a recorder entry" absolute path

Many tasks include options to capture output, each of them different: exec has the redirector option; java has the @output attribute. What if you want to capture the output (stderr and stdout) from a task that doesn't support output capture (such as javac), or you'd like more control - start|stop|start appending output from many commands to shared file(s)?

Record is the solution. Just BEWARE: the @name attribute is treated as a plain file path rather than a File attribute as in many other ant tasks. This means if supplied with a relative path it will be relative to where you're running ant from - a problem if you have ant scripts invoking other ant scripts. When path issues occur, you'll get this error:
Problems opening file using a recorder entry

Run ant with the -v option to get the full stack trace of the IOException. You'll see what path it's attempted - and failed - to use.

ALWAYS use absolute paths and you'll (hopefully) be fine.

To start recording:
<!-- (1) guarantee that you have an absolute path to the log file by setting a property with @location as the relative reference -->
<property name="build.logpath" location="${basedir}/sub_folder/log_file.log"/>
<!-- (2) start recording -->
<record action="start" name="${build.logpath}" loglevel="verbose"/>

All tasks that follow will have the console output (and logging if you set the loglevel attribute) go to the capture file:
<jasper2 ... />
<javac ... />

Then to stop:
<record action="stop" name="${build.logpath}"/>
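Putting the pieces together, a minimal target wrapping a compile might look like this (the target name and paths are illustrative):

<target name="compile-logged">
    <!-- absolute path to the log file, resolved via @location -->
    <property name="build.logpath" location="logs/build.log"/>
    <record action="start" name="${build.logpath}" loglevel="verbose"/>
    <javac srcdir="src" destdir="classes"/>
    <record action="stop" name="${build.logpath}"/>
</target>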

Easy as that.

Friday, March 29, 2013

Use javascript to set text (with newlines) into a textarea - for IE and Firefox

javascript jquery textarea DOM innerHTML newlines carriage return whitespace IE firefox chrome

Setting text with newlines into a textarea using the innerHTML attribute:
<textarea id="source" rows="4" cols="50"></textarea>
<button onclick="copyText();return false;">copy -></button>
<textarea id="target" rows="4" cols="50"></textarea>

<script type="text/javascript">
    function copyText() {
        var sourceField = document.getElementById('source');
        var targetField = document.getElementById('target');
        targetField.innerHTML = sourceField.value;
    }
</script>
Works in "most" browsers but in IE the text put into the textarea has the newline characters stripped out (replaced with a single space character). Is there a way to make this work in IE? ... and most other browsers?

To make it work in IE, setting the text via inputField.setAttribute("value", [your text]); will preserve the newlines (i.e. the "\n" character in javascript):
var sourceField = document.getElementById('source');
var targetField = document.getElementById('target');
targetField.setAttribute("value", sourceField.value);

Only problem is that setting the attribute alone is not enough for Firefox (& Chrome - all WebKit?). For the above example, the target field will not appear to have the value from the source (though the DOM will have the value). To get them all to work? Contrive the order in which you set things:
var sourceField = document.getElementById('source');
var targetField = document.getElementById('target');
var text = sourceField.value;
targetField.innerHTML = text; // now Firefox (& Chrome) are happy - but IE has the text as one line
targetField.setAttribute("value", text); // now IE has the text as multiple lines - the change should be imperceptible 
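The working order can be wrapped in a small helper. To show the calls outside a browser, the sketch below uses a minimal stand-in object in place of a real textarea element (in a page you'd pass the element from getElementById):

```javascript
// Sets text into a field using both mechanisms described above:
// innerHTML for Firefox/Chrome, setAttribute("value", ...) for IE.
function setFieldText(field, text) {
    field.innerHTML = text;            // Firefox & Chrome pick the value up here
    field.setAttribute("value", text); // IE keeps the \n characters via this call
}

// Minimal stand-in for a textarea element - just enough to see both calls land.
var fakeField = {
    innerHTML: "",
    attributes: {},
    setAttribute: function (name, value) { this.attributes[name] = value; }
};

setFieldText(fakeField, "line one\nline two");
console.log(fakeField.attributes["value"]); // newlines intact
```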

The same code using jQuery (note that jQuery's html() function will still ultimately use innerHTML, so is subject to the same IE issue with newlines being stripped):
    var text = $(sourceField).val();
    $(targetField).html(text);
    $(targetField).attr("value", text);

Use of innerHTML for managing textarea content - and keeping it in-synch with the DOM - was discussed in the previous post: Firefox does not reflect input form field values via innerHTML. I'll update that post accordingly ...