From IDS 11.10 onwards, stored procedures named “sysdbopen” are executed automatically when a user connects to the database containing them, as documented here. The owner of such a procedure has a special interpretation: it is invoked only for connections by that user, except for owner “public”, whose procedure applies to all users who don’t have their own.
This makes it possible for us to save details to a file immediately after each successful connection, using the following code, which must be executed in each database to be monitored:
- The ON EXCEPTION block ensures that the calling session is unaffected if anything goes wrong.
- A placeholder is provided for inserting a list of user names to be excluded as necessary.
- The DBINFO function and “syssessions” system view provide the details we want to record.
- The “program” value is likely to be empty for APIs compiled with Client SDK earlier than 3.70.
- Change the output location as preferred, or create a symbolic link pointing elsewhere.
- If separate logs per database are preferred, include “dbname” in the output file name.
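A procedure along these lines would implement the points above. This is a hypothetical sketch rather than the article’s exact code: the log path, the pipe-delimited format, and the excluded-user list (“informix”) are assumptions to adjust as needed.

```sql
CREATE PROCEDURE public.sysdbopen()

    DEFINE v_user    VARCHAR(32);
    DEFINE v_host    VARCHAR(256);
    DEFINE v_pid     INTEGER;
    DEFINE v_program LVARCHAR(1024);
    DEFINE v_cmd     LVARCHAR(2048);

    -- Swallow any error so the calling session is unaffected
    ON EXCEPTION
    END EXCEPTION;

    -- Details of the connecting session
    SELECT username, hostname, pid, feprogram
      INTO v_user, v_host, v_pid, v_program
      FROM sysmaster:syssessions
     WHERE sid = DBINFO('sessionid');

    -- Placeholder: list user names to be excluded as necessary
    IF v_user IN ('informix') THEN
        RETURN;
    END IF;

    -- Append one pipe-delimited line; change the path as preferred,
    -- or make it a symbolic link pointing elsewhere
    LET v_cmd = 'echo "' || CURRENT YEAR TO SECOND ||
                '|' || DBINFO('dbname') ||
                '|' || DBINFO('sessionid') ||
                '|' || v_user ||
                '|' || v_host ||
                '|' || v_pid ||
                '|' || NVL(v_program, '') ||
                '" >> /path/to/connections.log';
    SYSTEM v_cmd;

END PROCEDURE;
```

Creating it under owner “public”, as here, logs every user without their own “sysdbopen”; to log only one particular user, create it under that owner instead. Including DBINFO('dbname') in each line means a single shared log still shows which database was opened.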
Make sure the file exists and is readable and writable by all users. For example, shell commands to create a connection log in the same location as the IDS message log are:
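Something like the following would do it. The directory is an assumption here (“/tmp” as a placeholder): set LOGDIR to wherever your MSGPATH onconfig parameter points.

```shell
# LOGDIR should be the directory holding the IDS message log (see MSGPATH);
# /tmp is only a placeholder default for this sketch
LOGDIR=${LOGDIR:-/tmp}

touch "$LOGDIR/connections.log"
chmod 666 "$LOGDIR/connections.log"   # writable by all connecting users
```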
You will need to repeat the above to create empty writable log files on HDR/RSS/SDS replica servers. The “sysdbopen” procedure, once created on the primary, will already exist on the secondaries and will try to record connections to each replica instance in its local file system. An advantage of the solution in this article is that it works just fine with UPDATABLE_SECONDARY 0 in the IDS configuration, as the data is not being written to a standard table.
Opening the database with “dbaccess” as user “informix” in Docker container “ifx” appended this line:
Note that the process ID will be “-1” if not applicable (typically for Java) and that the “program” field can be very long. For example, connecting with AGS Server Studio running on IP address 172.17.0.1 appended this line:
You can create an external table to make the log file accessible in SQL, perhaps in the “sysadmin” database:
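A definition along these lines would work, assuming the pipe-delimited layout sketched earlier; the table and column names, path, and types are assumptions, and the DATETIME column relies on your locale’s datetime format matching the logged values.

```sql
DATABASE sysadmin;

CREATE EXTERNAL TABLE connection_log
(
    when_connected DATETIME YEAR TO SECOND,
    dbname         VARCHAR(128),
    sid            INTEGER,
    username       VARCHAR(32),
    hostname       VARCHAR(256),
    pid            INTEGER,
    program        LVARCHAR(1024)
)
USING
(
    DATAFILES ("DISK:/path/to/connections.log"),
    FORMAT "DELIMITED",
    DELIMITER "|"
);
```

The file is read at query time, so new connections appear in the table as soon as they are logged, with no load step.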
For long-term monitoring, you may need to cycle round a set of files to limit disk space usage. A complete solution using Linux “logrotate” follows:
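A configuration in this spirit might look as follows; the path, ownership, and exact options are assumptions consistent with keeping nine previous weekly files.

```
/path/to/connections.log {
    weekly
    rotate 9
    missingok
    notifempty
    create 0666 informix informix
}
```

The “create” directive recreates the log with permissions writable by all connecting users after each rotation, so no manual “touch”/“chmod” is needed.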
That configuration keeps 9 previous files per week plus the current one. Resulting files might be:
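Assuming logrotate’s default numeric suffixes (no “dateext”), the set would be named along these lines:

```
connections.log
connections.log.1
connections.log.2
...
connections.log.9
```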
- The above is based on an actual production system.
- The external table would need to include all 10 files in the DATAFILES clause to cover the whole history.
Once you have an external table, it can be combined with the SQLTRACE facility described here to provide important information that would otherwise be missing about SQL statements of interest whose sessions have already disconnected and so no longer have an entry in “syssessions”. The main system pseudo-table “syssqltrace” contains the columns listed here but doesn’t include the following, which we do have in our connection log:
- User name
Note that “syssqltrace” only contains an integer user ID column, so you would have to look up the user name in /etc/passwd, which will only work for local operating system accounts: the user ID is that of the “mapped user” when relevant, as described here, and does not distinguish between multiple names mapped to it. Our log has the more meaningful actual user name specified in the connection process.
Our external table adds the missing data. For example, the following lists recent failed SQL statements:
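A query in this vein would do it, assuming an external table “sysadmin:connection_log” whose lines include the session ID (“sid”) and user name; those column names are assumptions. It joins on the session ID recorded at connection time, and treats a non-zero SQL error code as a failure.

```sql
SELECT FIRST 20
       DBINFO('utc_to_datetime', t.sql_finishtime) AS finished,
       c.username,
       t.sql_sqlerror,
       t.sql_isamerror,
       t.sql_statement
  FROM sysmaster:syssqltrace t,
       sysadmin:connection_log c
 WHERE t.sql_sid = c.sid
   AND t.sql_sqlerror <> 0
 ORDER BY 1 DESC;
```

Bear in mind that session IDs are reused after an engine restart, so for history spanning restarts you may also want to compare the trace and connection timestamps.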