The DB2 Health Monitor is a real boon to the lazy DBA. However, I wish it were possible to filter out attention alerts for certain well-known states. In the case of ts.ts_op_status, it is useful to know when a tablespace is in a state that prohibits or limits access, but when db2hmon cries wolf every time an online backup is run, for every tablespace in the database, important notices easily drown in the spam.
It is possible to tailor specific actions to specific states, but to control the triggering of alert messages in the same way is not straightforward.
However, to simply turn off alerts for a state-based health indicator, in our case, the ts.ts_op_status indicator for all the tablespaces in all the databases in an instance, run:
db2 update alert cfg for tablespaces using ts.ts_op_status set thresholdschecked no
Good riddance.
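Should you later change your mind, the same knob turns the checking back on, and you can inspect the current settings first. A small sketch using the same CLP commands:

```shell
# list the current health alert configuration for tablespaces
db2 get alert cfg for tablespaces
# re-enable threshold checking for the operational-state indicator
db2 update alert cfg for tablespaces using ts.ts_op_status set thresholdschecked yes
```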


PuTTY stores its settings in the registry, and there is no built-in import/export functionality. Thanks to Ryan’s Tech Blog I now have a simple solution to the problem.

  • On the old computer, open up a command prompt (not cygwin), and run:
    regedit /ea putty_saved_sessions.reg HKEY_CURRENT_USER\Software\SimonTatham\PuTTY
  • Copy putty_saved_sessions.reg onto the new computer
  • On the new computer, open up a command prompt (not cygwin), and run:
    regedit /s putty_saved_sessions.reg
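  • If you want only the saved sessions and not the rest of the PuTTY configuration (colours, window settings and so on), the same trick works one level deeper in the registry tree:

```shell
regedit /ea putty_saved_sessions.reg HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions
```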
    In line with previous experiences from (failing at) setting up the Glassfish server on a zLinux system, the nuxeo system was not an ‘out-of-the-box’ experience on our Red Hat zLinux under z/VM system. The problem seems largely to be that the Sun JDK is not 100% compatible with the IBM JDK. And although the differences may be small, they are significant enough. And since Sun Java is ported to neither AIX nor zLinux, we are stuck in the mud.

    Nuxeo is bundled with a JBoss EJB application server, and after downloading and unpacking nuxeo-dm, JBoss starts up just fine (way to go, JBoss!), but deployment of the nuxeo-dm application fails.

    We are faced with a ClassNotFoundException on an XML parser class, XMLSerializer. These classes are part of an internal Sun XML parser implementation. Compiling the original sources actually gives quite explicit warnings that these com.sun.* packages should not be used, and points in the direction of the org.apache.xml.* packages, which are part of the Apache Xerces XML parser implementation.

    Refactoring the source code would have been trivial, but source code management was based on Mercurial and project management on Maven, neither of which was directly available on Red Hat Enterprise Linux 5 on the s390x architecture.

    Instead of starting two additional porting projects, I decided to try building the thing on my PC. Possibly not the best idea, especially since I only had the Sun JDK installed.

    I installed Mercurial in Cygwin, set the http_proxy environment variable, and checked out the source following the guidelines for getting the 5.2 branch.

    So far, so good. I edit the few Java files containing the com.sun.* import statements, download the Xerces jars from Apache, and drop them into the lib/ext directory of the JRE underneath the J2SDK v6 directory.

    I download and unpack Maven. I have to set some proxy settings in %USERPROFILE%\.m2\settings.xml, since Maven downloads a lot of plugins during the build process. The classes compile, but the tests after compilation fail with javax.xml.parsers.FactoryConfigurationError: Provider org.apache.xerces.jaxp.DocumentBuilderFactoryImpl not found. This looks like a configuration mismatch, and it turns out the original sources fail with this error as well when the Xerces jars are present in the classpath.
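    For reference, the proxy stanza in settings.xml looks roughly like this (host and port are placeholders for your own proxy):

```xml
<settings>
  <proxies>
    <proxy>
      <id>corporate</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>
</settings>
```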

    I ignore the errors and build the jar files myself. I transfer these to the zLinux box and replace the appropriate jars with my own.

    Voila! The nuxeo webapp deploys without error messages.

    Now I want to access the webapp from my PC, and only port 80 is open in the firewall. Since it is holiday season, the chance of getting a firewall change through before autumn is minimal. I figure I might as well do a port redirection on the server itself, so I use mod_proxy in the Apache server to proxy and reverse-proxy requests for /nuxeo/ to http://localhost:8080/nuxeo/.

    /etc/httpd/conf.d/proxy_jboss.conf looks like this:

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    ProxyPass /jboss/ http://localhost:8080/
    ProxyPassReverse /jboss/ http://localhost:8080/
    ProxyPass /nuxeo/ http://localhost:8080/nuxeo/
    ProxyPassReverse /nuxeo/ http://localhost:8080/nuxeo/

    Reload the apache server and we are ready to start some testing.

    /sbin/service httpd reload

    I strongly suspect there might be a few sections more on this subject before nuxeo is production ready on zLinux. Time will tell.

    The sed command can be quite handy for doing global substitutions. The surrounding loop also handles filenames with spaces and odd characters.

    # (example.com is a placeholder; substitute the absolute address you want to relativize)
    grep -l 'http://www.example.com/' *.html | while read FILE ; \
    do mv "$FILE" "$FILE.bac" ; \
    sed -e 's|http://www\.example\.com/|/|g' "$FILE.bac" > "$FILE" ; \
    done

  • Grep searches HTML files in the current directory for matches on some web address.
  • Only matching relative filenames are sent to stdout (for absolute filenames, ‘find’ might be a better choice).
  • Names are fed into a line-reader command.
  • Move matching file to .bac.
  • Replace all occurrences of web-address with relative address and overwrite original file.
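The steps above can be exercised on throwaway data. A small sketch, with example.com standing in for the address being relativized, and a filename with a space to show the quoting at work:

```shell
set -e
# work in a scratch directory so no real files are touched
DIR=$(mktemp -d)
cd "$DIR"
printf '<a href="http://www.example.com/page.html">x</a>\n' > "my page.html"
# the loop from above: back up each matching file, then rewrite the links
grep -l 'http://www.example.com/' *.html | while read FILE ; do
  mv "$FILE" "$FILE.bac"
  sed -e 's|http://www\.example\.com/|/|g' "$FILE.bac" > "$FILE"
done
cat "my page.html"   # href is now site-relative: <a href="/page.html">x</a>
```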
    Remind myself to look into this IBM technote for recommended AIX Virtual Memory Manager settings for DB2. It is always a question of who is going to dig into whose internals, but it really can’t hurt to build some AIX muscle.

    From time to time we encounter the dreaded deadlock situation in production, and of course the developers scratch their heads and wonder why this never turned up in their tests, and what magic the DBA can wield to untie the knots, or at least find out what the *¤% is going on.
    Although one might wonder sometimes where IBM is going with their overly pessimistic locking model, I can to some degree appreciate that it is better to be safe than sorry, and with good and consistent coding you will avoid getting entangled in deadlocks.
    Anyway, IBM has become very generous in supplying interfaces to the overwhelming amount of internal metrics of db2, and I will most probably return to variations on that subject many times in the future. This time, it is all about the deadlock event monitor.

    connect to mydldb;
    create event monitor dlmon1 for deadlocks with details history
    write to file 'dlmon1' maxfiles 20 maxfilesize 4000 buffersize 1000
    nonblocked replace manualstart;
    set event monitor dlmon1 state 1;

    The above creates a deadlock event monitor for the database and activates it. In case of a deadlock, the binary trace data is written to the directory [DBPATH]/db2evmon/dlmon1/, where [DBPATH] is the database directory path, which you can deduce from the output of list active databases. The trace data can be parsed into neatly shaped reports simply by running the command

    db2evmon -db mydldb -evm dlmon1
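    If you are unsure whether the monitor is actually collecting, its state can be queried from the catalog; a quick sanity check (1 means active):

```shell
db2 connect to mydldb
db2 "select evmonname, event_mon_state(evmonname) as state from syscat.eventmonitors"
```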

    Now you can impress your developers with details, not only on the connections involved in the deadlock situation, but also on the table over which the connections are fighting. You are also presented with a list of active locks held by the transaction, and in case of dynamic sql, the actual sql statement text of both transactions. But the real treat comes from the history keyword. This produces in clear text the entire statement history of each of the involved transactions. Line the sequences of statements up beside each other, and you will hopefully get a pretty clear picture of what is going on. Still confused? Then you can even get the actual values from the involved statements and drill down to the actual rows involved in the lock by adding the keyword values to the create event monitor statement.

    Now for a simple real life example:

    This is starting up as a sort of technical notebook on my day-to-day work with IBM DB2 Universal Database, or IBM Data Server, as the new branding has it. So I bid myself welcome, and who knows, maybe this will turn into something useful for others besides myself?