InformixHQ (HQ) is the administration Web GUI now provided by IBM with Informix Dynamic Server (IDS). The previous article InformixHQ Setup contains recommendations on getting started, including how to specify for each HQ agent (one per IDS instance) which Informix database should store sensor data.

This article follows on, describing in detail how that data is stored in the InformixHQ repository, so you can get what you need from these tables: either directly, using BSON functions, or through views that transform BSON elements into standard columns.


The Informix database you choose to contain HQ sensor data does not have to be dedicated to this purpose, although we recommend that it is, preferably in a separate dbspace. It can be in the instance that the agent is monitoring, or in another entirely: for example, you could have three instances, each with an HQ agent, on different machines, but store the tables for all of them in one database. This is possible because the table names include the Server ID (not the $ONCONFIG SERVERNUM) that HQ allocates sequentially for each IDS instance added.

For the first one you configure, the Server ID is 1. The following tables are created, shown here as simplified and reformatted “dbschema” output:

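As an illustration, the sensor tables broadly share one shape: a BIGINT millisecond timestamp, a BSON payload, and (in some tables) a key identifying the object measured. This is a simplified sketch only; the actual column names, types, and indexes may differ:

```sql
-- Sketch of the common table shape (column names and types assumed).
-- An index on the timestamp column is assumed for the indexed searches
-- discussed later in this article.
create table s_1_memory
  (
    "timestamp" bigint,
    data bson
  );

-- Tables keyed by object (e.g. one row per dbspace) also carry "pkey":
create table s_1_backups
  (
    "timestamp" bigint,
    pkey lvarchar(128),
    data bson
  );
```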

The following generates SQL to sample the contents of the “data” column for all these tables:

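A generator along these lines can be run in the repository database (a sketch: the Server ID pattern and the `data` column name are assumptions, and the final extraneous UNION ALL must be removed by hand):

```sql
-- Build one SELECT per sensor table for Server ID 1, sampling the BSON
-- "data" column converted to readable JSON text.
SELECT 'SELECT FIRST 1 ''' || TRIM(tabname) || ''' tab,'
    || ' data::JSON::LVARCHAR FROM ' || TRIM(tabname)
    || ' UNION ALL'
FROM systables
WHERE tabname MATCHES 's_1_*'
ORDER BY tabname;
```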

That produces lines like this (indented for readability):

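Each generated statement takes roughly this form (table name assumed):

```sql
SELECT FIRST 1
    's_1_memory' tab,
    data::JSON::LVARCHAR
FROM s_1_memory
UNION ALL
```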

Note above that BSON columns are unreadable unless converted to JSON.

Example output produced by the whole of the generated SQL (with the final extraneous UNION ALL removed) is as follows (with spaces inserted after commas for word wrapping):

s_1_backups{"name":"rootdbs", "level0":1572465887, "level1":0, "level2":0}
s_1_checkpoint{"intvl":8822, "caller":"CKPTINTVL", "crit_time":3.309509318604807e-05, "flush_time":0.2151060843226551, "cp_time":0.2464686880353708, "n_dirty_buffs":115, "ckpt_logid":728, "ckpt_logpos":2842648, "physused":139, "logused":64, "n_crit_waits":0, "tot_crit_wait":0, "longest_crit_wait":0, "block_time":0}
s_1_chunkwrites{"chunkwrites_10K":0, "chunkwrites_12K":0, "chunkwrites_14K":0, "chunkwrites_16K":0, "chunkwrites_4K":0, "chunkwrites_6K":0, "chunkwrites_8K":0}
s_1_dbspace_usage{"size":102400000, "free_size":0, "percent_free":0, "percent_used":1}
s_1_diskio{"dskreads_4K":0, "dskreads_6K":0, "dskreads_8K":0, "dskreads_10K":0, "dskreads_12K":0, "dskreads_14K":0, "dskreads_16K":0, "dskwrites_4K":0, "dskwrites_6K":0, "dskwrites_8K":0, "dskwrites_10K":0, "dskwrites_12K":0, "dskwrites_14K":0, "dskwrites_16K":0, "flushes_4K":0, "flushes_6K":0, "flushes_8K":0, "flushes_10K":0, "flushes_12K":0, "flushes_14K":0, "flushes_16K":0, "bufreads_4K":0, "bufreads_6K":0, "bufreads_8K":0, "bufreads_10K":0, "bufreads_12K":0, "bufreads_14K":0, "bufreads_16K":0, "bufwrites_4K":0, "bufwrites_6K":0, "bufwrites_8K":0, "bufwrites_10K":0, "bufwrites_12K":0, "bufwrites_14K":0, "bufwrites_16K":0, "bufwaits_4K":0, "bufwaits_6K":0, "bufwaits_8K":0, "bufwaits_10K":0, "bufwaits_12K":0, "bufwaits_14K":0, "bufwaits_16K":0}
s_1_fgwrites{"fgwrites_10K":0, "fgwrites_12K":0, "fgwrites_14K":0, "fgwrites_16K":0, "fgwrites_2K":0, "fgwrites_4K":0, "fgwrites_6K":0, "fgwrites_8K":0, "fgwrites_total":0}
s_1_lruwrites{"lruwrites_10K":0, "lruwrites_12K":0, "lruwrites_14K":0, "lruwrites_16K":0, "lruwrites_2K":0, "lruwrites_4K":0, "lruwrites_6K":0, "lruwrites_8K":0, "lruwrites_total":0}
s_1_memory{"allocated_mem":252612608, "used_mem":210223104, "free_mem":42389504}
s_1_memory_segments{"seg_class":1, "seg_size":5107712, "seg_used":5107712, "seg_free":0}
s_1_online_log{"sequence":1, "message":"Thu Apr 30 09:20:55 2020"}
s_1_os_cpu{"user":0.393, "system":0.023, "idle":0.581, "wait":0.003}
s_1_os_disk_usage{"filesystem":"/dev/sda1", "size":19195224064, "used":16404992000, "free":1775730688, "percent_used":0.855}
s_1_os_diskio{"dk_reads":0, "dk_rkb":0, "dk_writes":0, "dk_wkb":0, "dk_busy":0, "dk_busy_read":0, "dk_busy_write":0}
s_1_os_memory{"mem_total":1044127744, "mem_used":851374080, "mem_free":192753664, "swap_total":1201995776, "swap_used":100368384, "swap_free":1101627392}
s_1_os_networkio{"receive_bytes":829.035, "receive_packets":6.823, "transmit_bytes":829.035, "transmit_packets":6.823}
s_1_session_stats{"session_count":8, "max_session_memory":626688, "average_session_memory":316416}
s_1_thread_counts{"running":2, "ready":1, "mutex_wait":0, "join_wait":0, "cond_wait":1, "detach_terminated":0, "terminated":0, "sleeping":29, "memsync_wait":0}
s_1_vps{"count":1, "usercpu":5.23, "syscpu":10.46, "totalcpu":15.69}

The following generates SQL to sample the contents of the “pkey” column for all tables that have it:

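A generator along these lines can be used (a sketch: it joins the system catalog to find only tables having a “pkey” column, with names and the cast size assumed):

```sql
-- Build one SELECT per table that has a "pkey" column, casting it to
-- VARCHAR because the underlying type varies between tables.
SELECT 'SELECT FIRST 1 ''' || TRIM(t.tabname) || ''' tab,'
    || ' pkey::VARCHAR(128) FROM ' || TRIM(t.tabname)
    || ' UNION ALL'
FROM systables t, syscolumns c
WHERE c.tabid = t.tabid
  AND c.colname = 'pkey'
  AND t.tabname MATCHES 's_1_*'
ORDER BY t.tabname;
```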

That produces lines like this (indented for readability):

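Each generated statement takes roughly this form (table name assumed):

```sql
SELECT FIRST 1
    's_1_backups' tab,
    pkey::VARCHAR(128)
FROM s_1_backups
UNION ALL
```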

Note above that “pkey” is converted to VARCHAR to cope with mixed actual data types.

Example output produced by the whole of the generated SQL (with the final extraneous UNION ALL removed) is as follows:


We need to decode the BIGINT millisecond timestamp columns into readable DATETIME values, which is done by this function:

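One possible implementation is sketched below. The function name and the fraction handling are assumptions; DBINFO('utc_to_datetime') does the work for the whole-seconds part:

```sql
-- Convert a BIGINT UTC timestamp in milliseconds to DATETIME YEAR TO
-- FRACTION(3). Note that "/" on integers in Informix returns a DECIMAL,
-- so TRUNC is needed before passing whole seconds to DBINFO.
CREATE FUNCTION millis_to_datetime(millis BIGINT)
    RETURNING DATETIME YEAR TO FRACTION(3);
    RETURN EXTEND(DBINFO('utc_to_datetime', TRUNC(millis / 1000, 0)),
                  YEAR TO FRACTION(3))
        + (MOD(millis, 1000) / 1000) UNITS FRACTION;
END FUNCTION;
```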

The following shows this in use compared with standard DBINFO conversion:

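A comparison can be made using s_1_backups, whose “level0” element is a plain seconds-based UNIX time suitable for DBINFO (column and function names here are assumptions):

```sql
-- Millisecond "timestamp" decoded by the function above, alongside the
-- seconds-based "level0" BSON element decoded with plain DBINFO.
SELECT FIRST 3
    b."timestamp",
    millis_to_datetime(b."timestamp") AS ts,
    BSON_VALUE_INT(b.data, 'level0') AS level0,
    DBINFO('utc_to_datetime', BSON_VALUE_INT(b.data, 'level0')) AS level0_ts
FROM s_1_backups b;
```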

Example results are:

1587225565507  2020-04-18 16:59:25.507  1572465887  2019-10-30 20:04:47
1587454103056  2020-04-21 08:28:23.056  1587396282  2020-04-20 16:24:42
1587540507403  2020-04-22 08:28:27.403  1587482682  2020-04-21 16:24:42

The following SQL puts it all together and creates functions (including the one above) and views so that HQ tables can be read in traditional fashion:

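For each table, the script follows broadly this pattern (an abridged sketch; view, column, and function names are assumptions):

```sql
-- Pattern per table: decode the timestamp, keep the raw BIGINT value for
-- indexed searches, and expose each BSON element as a typed column.
CREATE VIEW v_1_backups (ts, "timestamp", name, level0, level1, level2) AS
SELECT millis_to_datetime(t."timestamp"),
    t."timestamp",
    BSON_VALUE_LVARCHAR(t.data, 'name'),
    BSON_VALUE_INT(t.data, 'level0'),
    BSON_VALUE_INT(t.data, 'level1'),
    BSON_VALUE_INT(t.data, 'level2')
FROM s_1_backups t;

SELECT FIRST 1 * FROM v_1_backups;  -- first row selected as a test
```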

Use this shell script to generate SQL for views with different names or server IDs other than 1:

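A minimal sketch of such a script follows, assuming the server-1 view SQL is in a file called “hq_views_1.sql” (the script, file, and parameter names here are all assumptions):

```sh
#!/bin/sh
# Rewrite the server ID and view-name prefix in the server-1 view SQL,
# then run the result against the chosen repository database.
# Usage: hq_views.sh <server_id> <name> <database>
id=$1
name=$2
db=$3
sed -e "s/_1_/_${id}_/g" \
    -e "s/v_${id}_/v_${name}_/g" \
    hq_views_1.sql > hq_views_${name}.sql
dbaccess ${db} hq_views_${name}.sql
```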

For example, to create views for server ID 2 known as “two” with repository database name “hq”, use this shell command:

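Assuming the generator script is named “hq_views.sh”, the invocation would be:

```sh
sh hq_views.sh 2 two hq
```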

Views are created with the first letter of the name changed to “v” and the first row selected as a test. A typical example view is:

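For instance, the memory view would look broadly like this (a sketch; the quoted “timestamp” column name assumes DELIMIDENT is set):

```sql
CREATE VIEW v_1_memory (ts, "timestamp", allocated_mem, used_mem, free_mem) AS
SELECT millis_to_datetime(t."timestamp"),   -- decoded DATETIME value
    t."timestamp",                          -- raw BIGINT milliseconds
    BSON_VALUE_BIGINT(t.data, 'allocated_mem'),
    BSON_VALUE_BIGINT(t.data, 'used_mem'),
    BSON_VALUE_BIGINT(t.data, 'free_mem')
FROM s_1_memory t;
```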

The original timestamp is passed through unchanged, as well as the decoded DATETIME value. (The remaining fields depend on what was found in the BSON content.) This makes it possible to perform indexed searches using another function, which does the opposite of the one already described. Two solutions follow to select data for last month, each with a summary of the resulting query plan:

Solution 1
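A sketch of the first approach, filtering directly on the decoded DATETIME column (view and column names assumed; because the predicate is on a derived value, a sequential scan is likely):

```sql
-- Last calendar month, filtered on the computed "ts" column.
SELECT ts, allocated_mem, used_mem, free_mem
FROM v_1_memory
WHERE ts >= EXTEND(MDY(MONTH(TODAY), 1, YEAR(TODAY)), YEAR TO FRACTION(3))
            - 1 UNITS MONTH
  AND ts <  EXTEND(MDY(MONTH(TODAY), 1, YEAR(TODAY)), YEAR TO FRACTION(3));
```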
Query plan summary
Solution 2
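A sketch of the second approach, which filters on the raw BIGINT column so that an index on the timestamp can drive the query (“datetime_to_millis” is an assumed name for the inverse conversion function):

```sql
-- Last calendar month, filtered on the raw millisecond column.
SELECT ts, allocated_mem, used_mem, free_mem
FROM v_1_memory
WHERE "timestamp" >= datetime_to_millis(
        EXTEND(MDY(MONTH(TODAY), 1, YEAR(TODAY)), YEAR TO FRACTION(3))
        - 1 UNITS MONTH)
  AND "timestamp" < datetime_to_millis(
        EXTEND(MDY(MONTH(TODAY), 1, YEAR(TODAY)), YEAR TO FRACTION(3)));
```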
Query plan summary

Rows from solution 2 will also be in date order, whereas solution 1 might need “ORDER BY”.

A sample of the output in both cases follows (memory figures in bytes):

18/04/2020 16:59:24.453  252612608  210223104  42389504
18/04/2020 17:04:24.830  252612608  213602304  39010304
18/04/2020 17:09:24.848  252612608  211992576  40620032

Note additional logic in this view to decode the integer segment class:

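A sketch of the decoding, using CASE expressions (the mapping of class 3 to “M”/“Message” is an assumption; the sample output below only shows classes 1, 2, and 4):

```sql
CREATE VIEW v_1_memory_segments
    (ts, seg_class, class_letter, class_name, seg_size, seg_used, seg_free) AS
SELECT millis_to_datetime(t."timestamp"),
    BSON_VALUE_INT(t.data, 'seg_class'),
    CASE BSON_VALUE_INT(t.data, 'seg_class')
        WHEN 1 THEN 'R' WHEN 2 THEN 'V' WHEN 3 THEN 'M' WHEN 4 THEN 'B'
    END,
    CASE BSON_VALUE_INT(t.data, 'seg_class')
        WHEN 1 THEN 'Resident' WHEN 2 THEN 'Virtual'
        WHEN 3 THEN 'Message'  WHEN 4 THEN 'Buffer'
    END,
    BSON_VALUE_BIGINT(t.data, 'seg_size'),
    BSON_VALUE_BIGINT(t.data, 'seg_used'),
    BSON_VALUE_BIGINT(t.data, 'seg_free')
FROM s_1_memory_segments t;
```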

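A query against such a view might look like this (view and column names are assumptions):

```sql
SELECT FIRST 3 ts, seg_class, class_letter, class_name,
    seg_size, seg_used, seg_free
FROM v_1_memory_segments;
```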

2020-04-18 16:59:24.364  1  R  Resident  5107712    5107712    0
2020-04-18 16:59:24.364  2  V  Virtual   134217728  91701248   42516480
2020-04-18 16:59:24.364  4  B  Buffer    113287168  113287168  0


The schema of the InformixHQ repository tables allows for future expansion in scope: rather than hard-coding column names, it holds data in BSON form, so tables need not be altered if elements are added in future. We expect that the provided views will need extending over time to include new elements, but they should continue to work as currently defined.


Armed with the information and code in this article, you should be able to select data directly from an InformixHQ sensor repository in a usable form with simple SQL statements. Techniques are also provided for sampling the BSON structures after future releases, so that new elements can be included.


Suggestions above are provided “as is” without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.

Contact us

If you have any questions about InformixHQ Repository or would like to find out more, simply contact us.

