This new command, documented here, was introduced with IBM Informix Dynamic Server (IDS) version 14.10.xC2 in September 2019. In theory, it reduces the effort involved in creating a new active-active replica instance to a single command. It uses Enterprise Replication (ER) which supports participating servers being on different Informix versions or operating systems, providing a unique route for platform migration with almost zero downtime. The complex work to achieve this with ER was previously a major disincentive. This article looks at the current reality of “cdr migrate server” (CMS), showing that it can achieve its goal if some pitfalls are avoided.
CMS should not be confused with the earlier ifxclone command, which automates the creation of another node in a high-availability cluster and requires the clone to be on the same platform and version as the source server.
Links in this article not shown in full are for pages within the new IBM Documentation site:
When you first run a new container with that image, a single instance “informix” is created and started. This was subsequently customised with the following dbspaces (see onspaces commands and/or sysadmin:task equivalents):
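By way of illustration, a dbspace can be added with commands like these (the name, path, and size here are assumed for illustration, not the actual values used):

```shell
# Run as user informix. Name, path, and size are assumed examples.
touch /opt/ibm/data/spaces/datadbs1
chmod 660 /opt/ibm/data/spaces/datadbs1
onspaces -c -d datadbs1 -p /opt/ibm/data/spaces/datadbs1 -o 0 -s 1024000
# Roughly equivalent via the SQL admin API:
#   EXECUTE FUNCTION sysadmin:task('create dbspace',
#     'datadbs1', '/opt/ibm/data/spaces/datadbs1', '1024000', '0');
```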
Note that the IDS version is in fact special build 14.10.FC5X4DE: a patch was applied so that, if "ONSTAT_LEGACY=ON" is set in the environment, the timestamp inserted after the "onstat" heading from 14.10.xC5 onwards is suppressed. This fix will be included in 14.10.xC6. See defect details here:
The standard “stores_demo” database was created in the “informix” instance as follows (with unbuffered logging, specifying the dbspace, without Time Series examples):
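For illustration, the invocation would have been of this form (the dbspace name is assumed):

```shell
# -log     unbuffered logging
# -dbspace target dbspace (name assumed here)
# -nots    skip the Time Series examples
dbaccessdemo stores_demo -log -dbspace datadbs1 -nots
```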
A second instance "informix2" was needed to demonstrate CMS, which assumes the source and target database names will be the same, so source and target cannot be the same instance. If you are evaluating CMS, you probably already have enough experience to create a new similarly configured instance. However, the outline steps were:
Copy “$ONCONFIG” to “$ONCONFIG.2” and edit ROOTPATH, MSGPATH, SERVERNUM, DBSERVERNAME, and DBSERVERALIASES.
Add new entries in the “sqlhosts” file similar to those already there but with different names and port numbers.
Create empty chunks in new directory “/opt/ibm/data/spaces.2” as user “informix” with same names and permissions (“chmod 660”).
Create shell aliases to switch between the settings of INFORMIXSERVER and ONCONFIG for the two instances.
Back up "informix" using "ontape -s -L 0 -d" and restore "informix2" as shown below.
Restore commands on the “informix2” instance:
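In outline, with renamed chunk paths, the restore looks like this (a sketch; the ONCONFIG file name and archive device settings are assumed):

```shell
# Switch to the informix2 environment (file name assumed):
export INFORMIXSERVER=informix2 ONCONFIG=onconfig.2
# Full restore from the level-0 archive, renaming each chunk
# according to the mapping file:
ontape -r -rename -f chunk_map.0-2
# Bring the restored instance online:
onmode -m
```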
File “chunk_map.0-2” contents:
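The rename file format is one line per chunk: old path, old offset (KB), new path, new offset (KB). The chunk names and offsets below are assumed for illustration:

```
/opt/ibm/data/spaces/rootdbs   0  /opt/ibm/data/spaces.2/rootdbs   0
/opt/ibm/data/spaces/datadbs1  0  /opt/ibm/data/spaces.2/datadbs1  0
/opt/ibm/data/spaces/sbspace1  0  /opt/ibm/data/spaces.2/sbspace1  0
```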
The stated prerequisites for CMS are:
A storage pool must already exist on both the source and target servers if required storage spaces are to be created automatically.
The SQLHOSTS files on both the source and target servers must already be configured with ER group information.
A trusted-host configuration must already be established between the source and target servers.
The source server must be on version 11.70.xC1 or later.
In a real scenario, dbspaces on the target server – which must have the same names as in the original instance – are likely to be more precisely tailored to your preferences and infrastructure if created manually. We have already assumed that in this article, and prerequisite 1 above is therefore not relevant. You will most likely want full control over sizes and names of chunks, particularly when using raw devices rather than file system chunks (“cooked” files). If you are not already using ER, you will also need to create two new dbspaces that it needs on both source and target servers, and update configuration parameters accordingly. The following was run on both test instances:
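For illustration, the two ER spaces can be created with commands of this shape (names, paths, and sizes are assumed):

```shell
# Run as user informix on both instances; names, paths, and
# sizes below are assumed examples.
# Dbspace for the syscdr database:
onspaces -c -d cdr_dbs -p /opt/ibm/data/spaces/cdr_dbs -o 0 -s 102400
# Sbspace for the ER queue data:
onspaces -c -S cdr_sbs -p /opt/ibm/data/spaces/cdr_sbs -o 0 -s 512000
```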
Dbspaces of the above sizes are created and configured automatically by CMS if not already defined, but the file names are not pretty, and will not necessarily be in the intended location. CMS also runs the following on both servers if not already set:
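Such settings can be applied dynamically with "cdr change config", for example (the space names here are assumed):

```shell
# Apply on both servers; space names are assumed examples.
cdr change config "CDR_DBSPACE cdr_dbs"
cdr change config "CDR_QDATA_SBSPACE cdr_sbs"
```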
The second prerequisite is to insert additional ER group information in the "sqlhosts" file, as described here and shown below:
group1    group    -    -    i=1
informix  onsoctcp *ifx 9088 g=group1
group2    group    -    -    i=2
informix2 onsoctcp *ifx 9090 g=group2
Each asterisk instructs IDS to listen on all IP addresses available when started, which is not an ER requirement, but is often useful.
The third prerequisite is that the servers running the source and target instances must allow trusted connections with each other. This is most often done by adding the host names to both "/etc/hosts.equiv" files, but using the configuration parameters S6_USE_REMOTE_SERVER_CFG and/or REMOTE_SERVER_CFG is more secure. See Trusted-host information. You can test afterwards on each with:
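For example, a passwordless "dbaccess" connection from one server to the other confirms trust is working (server names as used in this article):

```shell
# Run on the source server as an ordinary trusted user: a version
# string should be returned without any password prompt.
echo 'select first 1 dbinfo("version","full") from systables;' \
  | dbaccess sysmaster@informix2 -
```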
The above is not an issue in our test as both instances are on the same server, so will trust each other by default.
The fourth documented prerequisite is that the source server must be on IDS 11.70.xC1 or later. Should you wish to use "cdr migrate server" when moving from an earlier version, the best option would be, as a first phase, to perform an in-place upgrade on the source server to the latest IDS version supported on that operating system.
A fifth and undocumented prerequisite is that SHMTOTAL must not be set too low. The Docker test instance had it set to 796000 (KB), which resulted in the following obscure error when CMS ran "cdr define server" and then aborted:
The Informix message log gave more information:
After restarting Informix with SHMTOTAL changed to 2097152 (2GB), CMS ran fine. Not much more shared memory was actually allocated once ER had been started, but it apparently needs to know that it can allocate up to 655360 KB more memory for ER if necessary. This is unlikely to be a problem on a production system which would usually have a much larger value set anyway.
Once you have completed all the above or equivalent steps, you should now be ready to test with a command such as this on the IDS 14.10 target server (“export INFORMIXSERVER=informix2” for this test):
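Reconstructed from the options discussed next, the command takes this form (using this article's server and database names):

```shell
# -s source server, -t target server, -p phase(s) to run,
# -d database, -e execute rather than just generate a script
cdr migrate server -s informix -t informix2 -p all -d stores_demo -e
```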
Those option letters can be spelt out in full as "--source", "--target", "--phase", and "--database". Without "-e" or "--exec", it generates a shell script without actually running anything. The full syntax is explained here and listed by the program as follows:
Although we are including all phases in one run for this article, there may be some that would be easier run separately or in advance for real scenarios, as explained in context. Where possible, if any phase has already been run, CMS will detect that and skip those steps if “all” is specified.
If no database is specified (we specified one), CMS will attempt to migrate all non-system databases.
The shell script generated by the CMS command for all phases first runs any necessary “cdr change config” commands as already described. The next section defines ER in both instances for the first time if not already done:
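The first-time definition is done with "cdr define server" commands of roughly this shape (a sketch using the group names above; the generated commands may carry additional options):

```shell
# Declare the source server as the first node of the ER domain:
cdr define server -c informix  -I group1
# Declare the target, synchronising its ER catalog from group1:
cdr define server -c informix2 -S group1 -I group2
```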
Next is a “dbaccess” session on the source server to ADD ERKEY to any table without a unique index. For “stores_demo”, the SQL is:
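The statements are simple ALTERs, one per table lacking a unique index, of this form (the table names below are illustrative only, not the actual list CMS produces):

```shell
# Run against the source database; table names are examples only.
dbaccess stores_demo - <<'EOF'
ALTER TABLE state    ADD ERKEY;
ALTER TABLE manufact ADD ERKEY;
EOF
```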
Not surprisingly, some form of unambiguous ID is necessary for replication of individual rows. Three hidden shadow columns are added as described here. These are not “in-place” but “slow” alters that rebuild the whole table and its indexes while the table is exclusively locked. This is the only step in the migration that may well need applications stopped. You will probably want to run these statements separately in advance at convenient times, but CMS can still construct the SQL for you.
Next is a “dbschema” command such as the following:
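The command is presumably of this shape (arguments assumed; the output file naming is described next):

```shell
# -d database, -ss server-specific information, -noindex suppress
# index creation; "$$" stands in for the PID-based file name.
dbschema -d stores_demo -ss -noindex $INFORMIXDIR/tdbschm_$$_1
```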
Full paths actually precede every "dbschema" and "dbimport" invocation but are omitted in this article for clarity.
Output files are named $INFORMIXDIR/tdbschm_PID_counter and are only removed automatically if running with "--exec".
It then creates the empty database(s) to be replicated in the target instance ("stores_demo" in our test) with the same dbspace and logging mode as in the source instance. If the database already exists, "Schema Phase 1" is assumed to have happened already and is skipped. The database therefore cannot be created beforehand as an empty database (for example, in a different dbspace), or the load phase fails.
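In our test that amounts to a statement of this form (the dbspace name is assumed):

```shell
# Dbspace name assumed; "WITH LOG" gives unbuffered logging,
# matching the source database.
dbaccess - - <<'EOF'
CREATE DATABASE stores_demo IN datadbs1 WITH LOG;
EOF
```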
Empty tables are then created in the target instance with “dbimport” using the “dbschema” output already saved above:
When the above is actually executed, the output per table will look like this:
The final steps generated are:
That contains one more undocumented option, described by the program's usage text only as "Referential constraints only".
Much of the detail above is only of interest if you wish to understand more about how CMS works. A quick guide is:
Create the new IDS 14.10 instance with the same dbspaces but without the databases to be migrated.
Add ER dbspaces and configuration settings to both old and new systems.
Add ER group information into both “sqlhosts” files.
Ensure connections are trusted both ways between the instances.
Extract “ADD ERKEY” statements from a dummy CMS run and apply in advance on the old instance.
Run and execute CMS.
Reconnect applications to the new instance.
Should you need to revert to the original system, simply reconnect applications accordingly: CMS configures ER to be active-active by default, so updates applied to either instance will have been replicated to the other.
IT36349: cdr migrate server: dbschema -noindex is not suppressing constraint
IT36350: documentation required for dbschema and dbimport arguments
Downtime when migrating Informix to a different operating system can be almost eliminated using "cdr migrate server". It hides the complexity of Enterprise Replication by generating the commands for you. There are some steps that either must be done in advance or are better done by hand, but everything you need is given in this article. There are some restrictions, most notably that the contents of tables containing "smart blobs" need copying separately via flat files before synchronisation.
Suggestions above are provided “as is” without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.