This tool can be used to move data from various data sources to DB2, including DB2 in a pureScale environment.
Beginning with DB2 V9.7 for Linux, UNIX, and Windows, the Migration Toolkit (MTK) is no longer required to enable applications written for Oracle (and, after Fix Pack 3, Sybase) to run on DB2 products. This tool replaces that MTK functionality with a greatly simplified workflow.
For all other scenarios, for example, moving data from a database to DB2 for z/OS, this tool complements the MTK, particularly in the area of high-speed data movement. Using this tool, as much as 4 TB of data has been moved in just three days.
A GUI provides an easy-to-use interface for the novice, while the command line interface is often preferred by the advanced user.
First, download the tool from the Download section to your target DB2 server. Additional steps are required to move data to DB2 for z/OS.
Once you have downloaded the IBMDataMovementTool.zip file, extract the files into a directory called IBMDataMovementTool on your target DB2 server. A server-side install (on the DB2 server) is strongly recommended to achieve the best data movement performance.
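For example, on a UNIX target the download can be unpacked as follows (the directory path is illustrative):

```
$ cd /home/db2inst1                                     # any directory owned by the DB2 instance owner
$ unzip IBMDataMovementTool.zip -d IBMDataMovementTool  # extract into the IBMDataMovementTool directory
$ cd IBMDataMovementTool
```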
- DB2 V9.7 should be installed on your target server if you are enabling an Oracle application to run on DB2 for Linux, UNIX, and Windows.
- Java™ version 1.5 or higher must be installed on your target server. To verify your current Java version, run the java -version command. By default, Java is installed as part of DB2 for Linux, UNIX, and Windows in \SQLLIB\java\jdk (Windows) or /opt/ibm/db2/V9.7/java/jdk (Linux).
The JDBC drivers required for each source database are listed below:

Database | JDBC drivers |
---|---|
Oracle | ojdbc5.jar, ojdbc6.jar, or ojdbc14.jar, plus xdb.jar and xmlparserv2.jar; or classes12.jar or classes111.jar for Oracle 7 or 8i |
SQL Server | sqljdbc5.jar or sqljdbc.jar |
Sybase | jconn3.jar, and antsjconn2.jar for the DB2 SKIN feature |
MySQL | mysql-connector-java-5.0.8-bin.jar or the latest driver |
PostgreSQL | postgresql-8.1-405.jdbc3.jar or the latest driver |
DB2 for Linux, UNIX, and Windows | db2jcc.jar and db2jcc_license_cu.jar, or db2jcc4.jar and db2jcc4_license_cu.jar |
DB2 for z/OS | db2jcc.jar and db2jcc_license_cisuz.jar, or db2jcc4.jar and db2jcc4_license_cisuz.jar |
DB2 for i | jt400.jar |
Teradata | terajdbc4.jar and tdgssconfig.jar |
MS Access | Access_JDBC30.jar (optional) |
- UNIX: Log in to your server as the DB2 instance owner.
- Windows: Launch a DB2 Command Window.
- Change to the IBMDataMovementTool directory. The tool is a JAR file with two driver scripts to run it:
IBMDataMovementTool.cmd - Command script to run the tool on Windows.
IBMDataMovementTool.sh - Command script to run the tool on UNIX.
IBMDataMovementTool.jar - JAR file of the tool.
Pipe.dll - A DLL required on Windows if the pipe option is used.
Since a database connection to the target is required to run the tool, the DB2 database must be created first. On DB2 V9.7, we recommend that you use the default automatic storage and choose a 32 KB page size. When enabling applications to run on DB2 V9.7, the instance and the database must be operating in compatibility mode. It is also recommended that you adjust the rounding behavior to match that of Oracle. You can deploy objects out of dependency order by setting the revalidation semantics to deferred_force.
On UNIX systems
$ db2set DB2_COMPATIBILITY_VECTOR=ORA
$ db2set DB2_DEFERRED_PREPARE_SEMANTICS=YES
$ db2stop force
$ db2start
$ db2 "create db testdb automatic storage yes on /db2data1, /db2data2, /db2data3 DBPATH ON /db2system PAGESIZE 32 K"
$ db2 update db cfg for testdb using auto_reval deferred_force
$ db2 update db cfg for testdb using decflt_rounding round_half_up
On Windows systems
C:\> db2set DB2_COMPATIBILITY_VECTOR=ORA
C:\> db2set DB2_DEFERRED_PREPARE_SEMANTICS=YES
C:\> db2stop force
C:\> db2start
C:\> db2 "create db testdb automatic storage yes on C:,D: DBPATH ON E: PAGESIZE 32 K"
C:\> db2 update db cfg for testdb using auto_reval deferred_force
C:\> db2 update db cfg for testdb using decflt_rounding round_half_up
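To confirm that both registry variables took effect after the restart, you can list the DB2 registry with db2set; the output below is illustrative:

```
$ db2set -all
[i] DB2_COMPATIBILITY_VECTOR=ORA
[i] DB2_DEFERRED_PREPARE_SEMANTICS=YES
```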
Before you run the tool, have the following information for your source and DB2 server ready:
- IP Address or Host Name of the source and DB2 servers
- Port numbers to connect
- Names of the databases, SID, subsystem name, and so on, as required
- A User ID with DBA privileges on the source database
- Password for that user
- Location of your source database and DB2 JDBC drivers
- Enough space, and the volume/mount point information for where the data will be stored
Run IBMDataMovementTool.cmd on Windows or ./IBMDataMovementTool.sh on UNIX. The tool will start a GUI if the server is capable of displaying graphics. Otherwise it will switch to the interactive command line mode to gather input.
On Windows:
IBMDataMovementTool
On UNIX:
chmod +x IBMDataMovementTool.sh
./IBMDataMovementTool.sh
You will now see a GUI window. Some messages should also appear in the shell window. Please look through these messages to ensure no errors were logged before you start using the GUI.
If you have not set DB2_COMPATIBILITY_VECTOR, the tool reports a warning. Follow the steps in the warning to set the compatibility vector if you have not done so.
[2010-01-10 17.08.58.578] INPUT Directory = .
[2010-01-10 17.08.58.578] Configuration file loaded: 'jdbcdriver.properties'
[2010-01-10 17.08.58.593] Configuration file loaded: 'IBMExtract.properties'
[2010-01-10 17.08.58.593] appJar : 'C:\IBMDataMovementTool\IBMDataMovementTool.jar'
[2010-01-10 17.08.59.531] DB2 PATH is C:\Program Files\IBM\SQLLIB
[2010-01-10 17.35.30.015] *** WARNING ***. The DB2_COMPATIBILITY_VECTOR is not set.
[2010-01-10 17.35.30.015] To set compatibility mode, discontinue this program and run the following commands
[2010-01-10 17.35.30.015] db2set DB2_COMPATIBILITY_VECTOR=FFF
[2010-01-10 17.35.30.015] db2stop force
[2010-01-10 17.35.30.015] db2start
The GUI screen, as shown in Figure 1, has fields for specifying the source and DB2 database connection information. The sequence of events in this screen is:
- Specify source and DB2 connection information.
- Click on Connect to Oracle to test the connection.
- Click on Connect to DB2 to test the connection.
- Specify the working directory where DDL and data are to be extracted to.
- Choose if you want DDL and/or DATA. If you only select DDL, an additional genddl script will be generated.
- Click on the Extract DDL/Data button. You can monitor progress in the console window.
- After the data extraction is completed successfully, go through the result output files for the status of the data movement, warnings, errors and other potential issues.
- Optionally, you can click on the View Script/Output button to check the generated scripts, DDL, data or the output log file.
- Click on the Deploy DDL/Data button to create tables and indexes in DB2 and load the data that was extracted from the source database.
- You can use Execute DB2 Script to run the generated DB2 scripts instead of running them from the command line. The data movement is an iterative exercise. If you need to drop all tables before you start fresh, you can select the drop table script and execute it. You can also use this button to execute the scripts in the order you want them to be executed.
After clicking on the Extract DDL/Data button, you will see the tool's messages in the View File tab, as shown in Figure 2:
After the extraction of DDL and DATA completes, you will notice several new files created in the working directory. You can use these files from the command line to run the movement into DB2.
The following command scripts are regenerated each time you run the tool in GUI mode. However, you can use these scripts to perform all data movement steps without the GUI; a sample non-GUI run is shown after the table. This is helpful when you want to embed this tool in a batch process to accomplish automated data movement.
File name | Description |
---|---|
IBMExtract.properties | This file contains all input parameters that you specified through your GUI or command line input values. You can edit this file manually to modify or correct parameters. Note: This file is overwritten each time you run the GUI. |
unload | This script is created by the tool. It unloads data from the source database server to flat files if you check the DDL and Data options. The same script moves data from the source database to DB2 through pipes if you check the pipe option in the GUI to eliminate intermediate flat files. The pipe option is controlled through the usePipe option in the IBMExtract.properties file. |
rowcount | This script is created by the tool. You can run it after deploying data to verify row counts in the source and DB2 databases. |
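For example, a fully scripted run on UNIX might chain the generated scripts as follows. The .sh suffixes and the db2gen deploy step are assumptions based on the script names used elsewhere in this article:

```
$ ./unload.sh    # extract DDL and data from the source into the working directory
$ ./db2gen.sh    # create tables and indexes in DB2 and load the extracted data
$ ./rowcount.sh  # verify row counts between the source and DB2
```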
You can run the tool in command line mode, particularly when GUI capability is not available. The tool switches modes automatically if it is not able to start the GUI. If you want to force the tool to run in command line interactive mode, specify the -console option to the IBMDataMovementTool command.
On Windows:
IBMDataMovementTool -console
On UNIX:
./IBMDataMovementTool.sh -console
You will be presented with interactive options to specify the source and DB2 database connection parameters in a step-by-step process. A sample output from the console window is shown below:
[2010-01-10 20.08.05.390] INPUT Directory = .
[2010-01-10 20.08.05.390] Configuration file loaded: 'jdbcdriver.properties'
[2010-01-10 20.08.05.390] Configuration file loaded: 'IBMExtract.properties'
[2010-01-10 20.08.05.390] appJar : 'C:\IBMDataMovementTool\IBMDataMovementTool.jar'
Debug (Yes) : 1
Debug (No) : 2
Enter a number (Default=2) :
IS TARGET DB2 LOCAL (YES) : 1
IS TARGET DB2 REMOTE (NO) : 2
Enter a number (Default=1) :
Extract DDL (Yes) : 1
Extract DDL (No) : 2
Enter a number (Default=1) :
Extract Data (Yes) : 1
Extract Data (No) : 2
Enter a number (Default=1) :
Enter # of rows limit to extract. (Default=ALL) :
Enter # of rows limit to load data in DB2. (Default=ALL) :
Compress Table in DB2 (No) : 1
Compress Table in DB2 (YES) : 2
Enter a number (Default=1) :
Compress Index in DB2 (No) : 1
Compress Index in DB2 (YES) : 2
Enter a number (Default=1) :
******* Source database information: *****
Oracle : 1
MS SQL Server : 2
Sybase : 3
MS Access Database : 4
MySQL : 5
PostgreSQL : 6
DB2 z/OS : 7
DB2 LUW : 8
Enter a number (Default 1) :
DB2 Compatibility Feature (DB2 V9.7 or later) : 1
No Compatibility feature : 2
Enter compatibility feature (Default=1) :
After extraction of the DDL and DATA, you have three different ways of deploying the extracted objects in DB2.
- Click the Deploy DDL/DATA button from the GUI screen
- Go to the Interactive Deploy tab and deploy objects in a step-by-step process
- Deploy DDL/DATA using command line script db2gen
Which option to choose depends on your data and object movement requirements. If you are migrating only non-PL/SQL DDL objects and DATA, using the db2gen script or clicking the Deploy DDL/DATA button in the GUI will suffice.
The interactive deploy option is likely your better choice when you are also deploying PL/SQL objects such as triggers, functions, procedures, and PL/SQL packages.
The GUI screen, as shown in Figure 4, is used for interactive deployment of DDL and other database objects. The sequence of events in this screen is:
- Ensure you are connected to DB2 using the Extract/Deploy tab.
- Click on the Interactive Deploy tab.
- Use the Open Directory button to select the working directory containing the previously extracted objects. The objects are read and listed in a tree view.
- You can deploy all objects by pressing the Deploy All Objects button on the toolbar. Most objects will deploy successfully, while others may fail.
- When you click on an object which failed to deploy in the tree view, you can see the source of the object in the editor window. The reason for the failure is listed in the deployment log below.
- The Oracle compatibility mode generally allows deployment of objects as is. However, there may still be unsupported features that prevent successful deployment of some objects out of the box. Using the editor you can adjust the source code of these objects to work around any issues. When you deploy the changed object, the new source is saved with a backup of the old source.
- You can select one or more objects using the CTRL key and click the Deploy Selected Objects button on the toolbar to deploy objects after they have been edited. Deployment failures often cascade, which means that once one object is successfully deployed, others that depend on it will also deploy.
- Repeat steps 5 through 7 until all objects have been successfully deployed.
- Go to the root directory of the data movement and run the rowcount script.
- You should see a report comparing row counts in the source and DB2 databases, similar to the following:

oracle : db2
"TESTCASE"."CALL_STACKS" : 123      "TESTCASE"."CALL_STACKS" : 123
"TESTCASE"."CLASSES" : 401          "TESTCASE"."CLASSES" : 401
"TESTCASE"."DESTINATION" : 513      "TESTCASE"."DESTINATION" : 513
When the source database is too large and there is not enough space to hold intermediate data files, using a pipe is the recommended way to move the data.
On Windows systems
The tool uses Pipe.dll to create Windows pipes; make sure that this DLL is placed in the same directory as the IBMDataMovementTool.jar file.
On UNIX systems
The tool creates UNIX pipes with the mkfifo command and uses them to move data from the source to DB2.
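Conceptually, the pipe option replaces the intermediate flat file with a FIFO: the unload process writes rows into the pipe while DB2 LOAD reads them concurrently. The hand-rolled sketch below only illustrates the idea; it is not the tool's actual implementation, and the writer command is hypothetical:

```
$ mkfifo /tmp/t1.pipe                           # create a named pipe (FIFO)
$ extract_t1_as_delimited > /tmp/t1.pipe &      # hypothetical unload process writes in the background
$ db2 "LOAD FROM /tmp/t1.pipe OF DEL INSERT INTO TESTCASE.T1"   # LOAD consumes rows as they arrive
$ rm /tmp/t1.pipe                               # remove the pipe when done
```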
Before you can use a pipe between the source and DB2 databases, the table definitions must already be created in DB2. Follow this procedure (a sample properties fragment follows the list):
- Specify # Extract Rows=1 in the GUI, or set LimitExtractRows=1 in IBMExtract.properties if you are using the command line window.
- Click on the Extract DDL/Data button to unload the data, or run the unload script from the command line window.
- Click on the Deploy DDL/Data button, or run the db2gen script from the command line window.
- Select Use Pipe, or set usepipe=true in IBMExtract.properties if you are using the command line window.
- Click on the Extract / Deploy through Pipe Load button, or run the unload script from the command line window.
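Assuming the property names used in the steps above, the relevant IBMExtract.properties entries for the two passes would look something like this (a sketch; the exact default values may differ):

```
# Pass 1: extract a single row per table so that DDL and LOAD scripts are generated
LimitExtractRows=1
usepipe=false

# Pass 2: stream the full data through pipes, with no intermediate flat files
LimitExtractRows=ALL
usepipe=true
```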
You can use this tool from z/OS to do the data movement from a source database to DB2 for z/OS. However, the following additional steps are required.
- Download and install JZOS.
- This zip file contains a file named jzos.pax. FTP this file using Unix System Services in binary mode to the directory where you would like JZOS installed.
- Change to the directory where you saved the .pax file.
- Run the command pax -rvf jzos.pax. This creates a subdirectory called jzos in your current working directory. This subdirectory will be referred to as JZOS_HOME.
- In the user's home directory, create a file named .profile based upon the template given below, making changes as appropriate for your z/OS DB2 installation.
export JZOS_HOME=$HOME/jzos
export JAVA_HOME=/usr/lpp/java/J1.5
export PATH=$JAVA_HOME/bin:$PATH
export CLPHOME=/usr/lpp/db2/db2910/db2910_base/lib/IBM
export CLASSPATH=$CLASSPATH:/usr/lpp/db2/db2910/db2910_base/lib/clp.jar
export CLPPROPERTIESFILE=$HOME/clp.properties
export LIBPATH=$LIBPATH:$JZOS_HOME

- CLPHOME and CLASSPATH may have to be modified depending on your environment. Replace the paths with the appropriate directories for your installation.
- In the user's home directory, create a file named clp.properties based upon the template given below:
#Specify the value as ON/OFF or leave them blank
DisplaySQLCA=ON
AutoCommit=ON
InputFilename=
OutputFilename=
DisplayOutput=
StopOnError=
TerminationChar=
Echo=
StripHeaders=
MaxLinesFromSelect=
MaxColumnWidth=20
IsolationLevel=
<alias>=<hostname>:<port>/<location>,USER,PASSWD

Replace the placeholder items on the last line (shown in angle brackets) as appropriate.
- Run the command chmod 777 $JZOS_HOME/*.so
- Run the IBMDataMovementTool.sh -console command and specify the parameter values through the interactive user responses.
- The IBMExtract.properties, geninput, and unload scripts are created for you.
- The zdb2tableseries parameter in IBMExtract.properties specifies the name of the series for the PS datasets. For example, if your TSO ID is DNET770 and this parameter is set to R, the name of the PS dataset created for the first table will be DNET770.TBLDATA.R0000001.
- The znocopypend parameter adds the NOCOPYPEND parameter to the LOAD statement. With this parameter, the z/OS DB2 DBA can perform the backup, because the table will not be put in COPY pending mode.
- The zoveralloc parameter specifies by how much you want to oversize your file allocation requests. A value of 1 means that you are not oversizing at all; in an environment with sufficient free storage, this might work. In a realistic environment, 15/11 (1.3636) is a good estimate. It is recommended that you start at 1.3636 and lower the value gradually until you get file write errors, and then increase it a little. If you know the value of the SMS parameter REDUCE SPACE UP TO, you can calculate the ideal value of zoveralloc as 1 / (1 - (X/100)), where X is the value of REDUCE SPACE UP TO, a percentage given as an integer between 0 and 100.
- The zsecondary parameter is used to allocate fixed secondary extents. Start with a value of 0 and increase it slowly until file errors occur, and then bring it back down.
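Putting the z/OS-specific parameters together, an IBMExtract.properties fragment might look like the following sketch. The values are the starting points suggested above, not defaults, and the true/false format for znocopypend is an assumption:

```
zdb2tableseries=R    # PS dataset series; first table becomes <TSOID>.TBLDATA.R0000001
znocopypend=true     # add NOCOPYPEND to LOAD so tables are not left in COPY pending mode
zoveralloc=1.3636    # 15/11; e.g. REDUCE SPACE UP TO = 27 gives 1/(1 - 27/100) = 1.37
zsecondary=0         # fixed secondary extents; raise slowly until file errors occur
```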
- Run geninput script to create an input file for the unload process.
- Run unload script to generate DDL and DATA.
- Run the generated script to create the DDL and load the data into DB2 for z/OS.
- After data loading into the DB2 tables on z/OS is complete, you may find intermediate datasets that need to be deleted; DSNUTILS will fail if you do not delete those datasets. The following Java program deletes those intermediate datasets as part of cleanup:

java -cp /u/dnet770/migr/IBMDataMovementTool.jar:$JZOS_HOME/ibmjzos.jar \
 -Djava.ext.dirs=${JZOS_HOME}:${JAVA_HOME}/lib/ext ibm.Cleanup
Create a script jd as shown below:
JZOS_HOME=$HOME/jzos
JAVA_HOME=/usr/lpp/java/J1.5
CLASSPATH=$HOME/migr/IBMDataMovementTool.jar:$JZOS_HOME/ibmjzos.jar
LIBPATH=$LIBPATH:$JZOS_HOME
$JAVA_HOME/bin/java -cp $CLASSPATH \
 -Djava.ext.dirs=${JZOS_HOME}:${JAVA_HOME}/lib/ext ibm.Jd $1

Change the file permission to 755 and run it; you will then see output like the following:
DNET770:/u/dnet770/migr: >./jd
USAGE: ibm.Jd
USAGE: ibm.Jd "DNET770.TBLDATA.**"
USAGE: ibm.Jd "DNET770.TBLDATA.**.CERR"
USAGE: ibm.Jd "DNET770.TBLDATA.**.LERR"
USAGE: ibm.Jd "DNET770.TBLDATA.**.DISC"

So, if you want to delete all datasets under "DNET770.TBLDATA", use the following command:
DNET770:/u/dnet770/migr: >./jd "DNET770.TBLDATA.**"
The strength of this tool is large scale data movement. This tool has been used to move 4 TB of Oracle data in just three days with good planning and procedures. Here are tips and techniques that will help you achieve large scale data movement within the time window constraints that you might have.
It is out of the scope of this article to discuss hardware requirements and database capacity planning, but it is important to keep in mind the following considerations when estimating the time to complete large scale data movement.
- You need a good network connection between the source and DB2 servers, preferably 1 Gbps or higher. The network bandwidth will limit how quickly you can complete the data movement.
- The number of CPUs on the source server determines how many tables you can unload in parallel. For a database size greater than 1 TB, you should have a minimum of 4 CPUs on the source server.
- The number of CPUs on the DB2 server will determine the speed of the LOAD process. As a rule of thumb, loading the data into DB2 takes 1/4 to 1/3 of the total time, and the rest is consumed by the unload process.
- Plan the DB2 database layout ahead of time. Consult IBM's best practices papers for DB2.
- Gain an understanding of the tool in command line mode. Use the GUI to generate the data movement scripts (geninput and unload), and practice data unload by running the unload script from the command line.
- Extract only the DDL from the source by setting GENDDL=true and UNLOAD=false in the unload script. Use the generated DDL to plan the table space and table mapping. Use a separate output directory for the generated DDL and data by specifying the target directory with the -DOUTPUT_DIR parameter in the unload script. Generate the DDL ahead of the final data movement.
- Use the geninput script to generate the list of tables to be moved from the source to DB2. Use the SRCSCHEMA=ALL and DSTSCHEMA=ALL parameters in the geninput script to generate a list of all tables. Edit the file to remove unwanted tables, and split it into several input files for a staggered movement approach in which you perform the unload from the source and the load into the target in parallel.
- After breaking the table input file (generated by the geninput script) into several files, copy the unload script into an equivalent number of files, change the name of the input file in each, and specify a different output directory for each unload process. For example, you could create 10 unload scripts, each unloading 500 tables, for a total of 5000 tables.
- Make sure that you do DDL and DATA in separate steps. Do not mix the two into a single step for such a large movement of data.
- The tool unloads data from the source tables in parallel, controlled by the NUM_THREADS parameter in the unload script. The default value is 5; you can increase it to a level where CPU utilization on your source server is around 90%.
- Pay attention to the order of the tables listed in the input tables file. The geninput script has no intelligence to put the tables in a particular order, but you need to order them in a way that minimizes unload time. The tables listed in the input files are fed to a pool of threads in round-robin fashion, and it can happen that all threads but one have finished while that one is still unloading. To keep all threads busy, order the tables in the input file by increasing number of rows.
- It may still happen that all tables have unloaded while a few threads are still busy unloading very large tables. You can unload the same table in multiple threads if you specify the WHERE clause properly in the input file. For example:

"ACCOUNT"."T1":SELECT * FROM "ACCOUNT"."T1" WHERE id between 1 and 1000000
"ACCOUNT"."T1":SELECT * FROM "ACCOUNT"."T1" WHERE id between 1000001 and 2000000
"ACCOUNT"."T1":SELECT * FROM "ACCOUNT"."T1" WHERE id between 2000001 and 3000000
"ACCOUNT"."T1":SELECT * FROM "ACCOUNT"."T1" WHERE id between 3000001 and 4000000

Make sure that you use the right keys in the WHERE clause, preferably either the primary key or a unique index. The tool takes care of generating proper DB2 LOAD scripts to load data from the multiple files it generates. No other setup is required to unload the same table in multiple threads, other than adding the different WHERE clauses as explained.
- After breaking your unload process into several steps, you can start putting data into DB2 as soon as a batch has finished unloading. The key here is a separate output directory for each unload batch; all files necessary to put the data into DB2 are generated in that output directory. For DDL, you will use the generated db2ddl script to create the table definitions. For data, you will use the db2load script to load the data into DB2. If you combine DDL and data in a single step, the name of the script will be db2gen.
- Automate the whole process in your shell scripts so that the unload and load processes are synchronized, as sketched after this list. Each large data movement from Oracle or other databases to DB2 is unique, and your skills will be tested in determining how to automate all of these jobs. Save the output of the jobs to a file by using the tee command, so that you can watch the progress while the output is also saved in a log file.
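As a concrete illustration of the staggered approach, the sketch below splits a geninput-generated table list into batches and overlaps unload and load. All file and script names are assumptions: unload1.sh through unload10.sh are pre-edited copies of the generated unload script, each pointing at its own batch file and output directory, and out&lt;n&gt;/db2load is the load script generated in that batch's output directory. A real run would also throttle how many batches run at once:

```
#!/bin/sh
# Split the table list into batches of 500 tables each (one table per line,
# ordered by increasing row count as recommended above).
split -l 500 tables.input batch_

i=0
for f in batch_*; do
  i=`expr $i + 1`
  # Unload one batch, then immediately load it into DB2 while other batches
  # continue unloading; tee keeps a live log of each job.
  ( sh ./unload$i.sh 2>&1 | tee unload$i.log && \
    sh ./out$i/db2load 2>&1 | tee load$i.log ) &
done
wait   # block until every batch has been unloaded and loaded
```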
Do not skip a mock movement to test your automation and validate your planned staggered unload from the source and load into DB2. The only customization required is creating the shell scripts to run these tasks in the right order. Follow these steps to run the mock tests:
- Copy your data movement scripts and automation shell scripts to a mock directory.
- Estimate your time by unloading a few large tables in a few threads, and accordingly stagger the movement of the data.
- Add a WHERE clause to limit the number of rows when testing the movement of data; a one-liner for this is sketched after the list. For example, you can add a ROWNUM clause to limit the number of rows in Oracle, or use the TOP clause for SQL Server.

"ACCOUNT"."T1":SELECT * FROM "ACCOUNT"."T1" WHERE rownum < 100
"ACCOUNT"."T2":SELECT * FROM "ACCOUNT"."T2" WHERE rownum < 100
"ACCOUNT"."T3":SELECT * FROM "ACCOUNT"."T3" WHERE rownum < 100
"ACCOUNT"."T4":SELECT * FROM "ACCOUNT"."T4" WHERE rownum < 100

- Practice your scripts, make changes as necessary, and prepare for the final run.
- You have already extracted DDL and made the required manual changes for the mapping between tables and tablespaces if required.
- Take a downtime for the movement of the data.
- Make sure the open cursors setting is around 10,000 for the Oracle database if it is the source.
- Watch the output from the log file.
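For the mock run with an Oracle source, a quick way to cap every table in the input file at 100 rows is to append the ROWNUM predicate mechanically. This sketch assumes each input line ends with a plain SELECT and has no existing WHERE clause, as in the examples above; tables.input and tables.mock.input are hypothetical file names:

```
# Append "WHERE rownum < 100" to every SELECT in the table input file
sed 's/$/ WHERE rownum < 100/' tables.input > tables.mock.input
```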
Large data movement is much more about planning, discipline, and the ability to automate jobs. The tool provides all the capability that you require for such movement; this little tool has moved very large databases from their sources to DB2.
This tool is not supported by the IBM support organization. However, you can report bugs, issues, suggestions, enhancement requests in the support forum.
Question/Issue | Answer/Solution |
---|---|
Do I need to install anything on my source database server in order for this tool to work? | You do not need to install anything on your source database for this tool. |
What are the supported platforms for this tool? | Windows, z/OS, AIX, Linux, UNIX, HP-UX, Solaris, Mac and any other platform that has a JVM on it. |
I am running this tool from a secure shell window on my Linux/UNIX platform; I see a few messages in the command line shell, but I do not see the GUI and it seems that the tool has hung. | Depending on your DISPLAY settings, the GUI window may have opened on a display-capable server. You need to properly export your DISPLAY settings; consult your UNIX system administrator. |
I am trying to move data from PostgreSQL and I do not see a PostgreSQL JDBC driver shipped with the tool. | No JDBC drivers are provided with the tool due to licensing considerations. Get your database's JDBC driver from your licensed software. |
It is not possible to grant DBA to the user extracting data from Oracle database. How can I use the tool? | You will at least need SELECT_CATALOG_ROLE granted to the user and SELECT privileges on tables used for migration. |
What are the databases to which this tool can connect? | Any database that has a type-IV JDBC driver. So, you can connect to MySQL, PostgreSQL, Ingres, SQL Server, Sybase, Oracle, DB2, and others. It can also connect to a database through an ODBC-JDBC connector, so you can also move data from an Access database. |
What version of Java do I need to run this tool? | You need a minimum of Java 1.5 to run the tool. The dependency on Java 1.5 is basically due to the GUI portion of the tool. If you really need support for Java 1.4.2, send me a note and I will compile the tool for Java 1.4.2, but the GUI will not run to create the data movement driver scripts. You can determine your version of Java by running the java -version command. |
How do I check the version of the tool? | Run IBMDataMovementTool -version on Windows or ./IBMDataMovementTool.sh -version on Linux/UNIX. |
I am getting the error "Unsupported major.minor version 49.0" or "(.:15077): Gtk-WARNING **: cannot open display: " when I run the tool. What does it mean? | You are using a version of Java less than 1.5. Install Java 1.5 or higher to overcome this problem. We prefer that you install IBM Java. |
What information do I need about the source and DB2 database servers in order to run this tool? | You need to know the IP address, port number, database name, user ID, and password for the source and DB2 databases. The user ID for the source database should have DBA privileges, and the DB2 user ID should have SYSADM privilege. |
I am running this tool from my Windows workstation and it is running extremely slowly. What can I do? | The default memory allocated to this tool by the IBMDataMovementTool.cmd or IBMDataMovementTool.sh command script is 990 MB, via the -Xmx switch for the JVM. Try reducing this value, as your workstation might have less memory. |
I am doing a data movement from SQL Server to DB2. How do I get my TEXT field to go to VARCHAR in DB2. | Specify mssqltexttoclob=true in IBMExtract.properties file. |
I am doing a data movement from Sybase to DB2 and it did not move my T-SQL procedures to DB2. | The purpose of this tool is only DDL and DATA movement. You will have to use the MTK for procedure and trigger movement. |
I am doing a DDL movement from Sybase to DB2 and I have my Sybase objects in a file. I do not see a way to specify a DDL file as a data source. | The purpose of this tool is high speed data movement, which is why there is no capability to transform a DDL file from a database to DB2. You can, however, use the MTK to transform DDL from a source database to a target. |
I am doing a data movement from MS Access to DB2 and I do not see all indexes, etc. in the generated DDL. | We use a basic ODBC-JDBC connector to connect to the MS Access database. You will need a commercial JDBC driver to obtain the complete set of DDL. You can try the HXTT JDBC driver for MS Access; if you use the HXTT driver, you will have to specify DBVENDOR=hxtt instead of access in the generated unload script. |
I am doing a data movement from Sybase to DB2 using this tool and I am getting tons of errors. | It is quite possible that your Sybase database is not enabled for the required JDBC support. Consult your Sybase DBA to ensure that the correct JDBC stored procedures are installed in your Sybase database. |
I am doing a data movement from MySQL to DB2 and I am running out of memory. | Try different values of FETCHSIZE=nnn in the generated unload script and run the data movement from the command line. If you use the GUI tool, it will overwrite the unload script. |
I am doing a data movement from Oracle to DB2 and I notice that three JAR files are required for the data movement. My understanding is that we only need a JDBC driver for data movement. Why the additional JAR files? | The additional JAR files are mainly required for Oracle XML data types. Get those files from your Oracle installation directory. |
I want Oracle data type of CLOB to go as DBCLOB in DB2. | Go to IBMExtract.properties file and set DBCLOB=true. |
I am using this tool to move data from Oracle to DB2 and I am getting many Oracle SQL errors saying that a table was not found. | The user ID connecting to Oracle should have SELECT_CATALOG_ROLE granted to it and SELECT privileges on the tables. |
I do not want NCHAR and NVARCHAR2 to go as GRAPHIC or VARGRAPHIC in DB2. I want them to go as CHAR and VARCHAR2 since I created DB2 database as UTF-8. | Go to IBMExtract.properties file and set GRAPHIC=false. |
Can I do data movement from Oracle database to DB2 version less than V9.7/V9.5? | Yes, go to IBMExtract.properties and set db2_compatibility=false |
I noticed that your tool moved Oracle's NUMBER(38) to NUMBER(31) and I understand that DB2 supports only up to 31. I do not want to round down and I want to convert this to DOUBLE. | Go to IBMExtract.properties and set roundDown_31=false. |
I am getting lots of data rejected. How do I get the rejected data into a file so that I can analyze the reason for the rejection? | Go to IBMExtract.properties and set dumpfile=true. |
I am trying to load data from a workstation to a DB2 server and I am getting errors. Do I have to run the tool from the server only? | It is preferable to run this tool from the DB2 server to extract data from the source database and avoid an intermediate server. However, if you want to run this tool from an intermediate server, you can specify REMOTELOAD=TRUE in the generated unload script. Remember that the DB2 LOAD utility requires BLOB/CLOB/XML data to be available on the server, so you will need to mount those directories with the same naming convention on the target DB2 server. |
I can only log in to my DB2 server through an SSH shell and we do not allow X-Windows to run on the DB2 server. How do I run this GUI tool to move DDL and DATA? | Run IBMDataMovementTool.sh from your SSH shell; if there is no graphics support, the tool will switch to command line input automatically. If it does not switch for some reason, specify the -console option to the IBMDataMovementTool.sh command to force the tool to run in interactive command line mode. The command line mode is just a way to gather the input and generate the necessary scripts for the data movement; the GUI likewise just generates the scripts, and the actual work is done by the scripts. |
Why does the tool not create the DB2 database through its scripts, since it asks for the name of the database? | DBAs normally like to create their databases according to their own storage path information. We do, however, create the necessary table spaces so that tables are automatically placed in the right table space by DB2. Consider reading IBM's best practices papers to carefully plan your database. It is recommended that you create the DB2 database with a 32 KB default page size. |
Why do I need xdb.jar and xmlparserv2.jar in addition to the Oracle JDBC driver? | The xdb.jar and xmlparserv2.jar files are required if your Oracle data contains XML data. You can locate xdb.jar in the server/RDBMS/jlib folder and xmlparserv2.jar in the lib folder. If you are unable to locate these, you can download the Oracle XDK for Java. |
I am getting java.lang.UnsatisfiedLinkError: Pipe Pipe.dll is not a valid Win32 application. How do I fix this? | This error occurs if you are running the tool on a 64-bit Windows platform with a 32-bit Java JVM. Install a 64-bit Java JVM on your Windows platform and rerun the tool. |
Many IBMers from around the world provided valuable feedback on the tool; without their feedback, the tool in its present shape would not have been possible. I acknowledge significant help, feedback, suggestions, and guidance from the following people:
- Jason A Arnold
- Serge Rielau
- Marina Greenstein
- Maria N Schwenger
- Patrick Dantressangle
- Sam Lightstone
- Barry Faust
- Vince Lee
- Connie Tsui
- Raanon Reutlinger
- Antonio Maranhao
- Max Petrenko
- Kenneth Chen
- Masafumi Otsuki
- Neal Finkelstein
This article contains a tool. IBM grants you ("Licensee") a non-exclusive, royalty free, license to use this tool. However, the tool is provided as-is and without any warranties, whether EXPRESS OR IMPLIED, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM AND ITS LICENSORS SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED BY LICENSEE THAT RESULT FROM YOUR USE OF THE SOFTWARE. IN NO EVENT WILL IBM OR ITS LICENSORS BE LIABLE FOR ANY LOST REVENUE, PROFIT OR DATA, OR FOR DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF THE USE OF OR INABILITY TO USE SOFTWARE, EVEN IF IBM HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Note
- A new build of the tool is uploaded frequently, after bug fixes and new enhancements. Click Help > Check New Version in the GUI, or enter the command ./IBMDataMovementTool.sh -check, to check whether a new build is available for download. You can find the tool's build number from the Help > About menu option or by entering the ./IBMDataMovementTool.sh -version command. This tool uses the JGoodies Forms 1.2.1, JGoodies Looks 2.2.2, and JSyntaxPane 0.9.4 packages for the GUI interface.
Learn
- "Migrate from MySQL or PostgreSQL to DB2 Express-C" (developerWorks, June 2006) was the first article written for this tool.
- "DB2 Viper 2 compatibility features" (developerWorks, July 2007) is the article that explains compatibility features.
- You can also use the MTK for the migration of data and procedures.
Get products and technologies
- Download DB2 Express-C 9.7, a no-charge version of DB2 Express database server for the community.
- Download a free trial version of DB2 9.7 for Linux, UNIX, and Windows.
- Download IBM product evaluation versions and get your hands on application development tools and middleware products from DB2, Lotus®, Rational®, Tivoli®, and WebSphere®.
Discuss
- Participate in the discussion forum.
- Check out developerWorks blogs and get involved in the developerWorks community.
Vikram S Khatri works for IBM in the Sales and Distribution Division and is a member of the DB2 Migration team. Vikram has 24 years of IT experience and specializes in enabling non-DB2 applications to DB2. Vikram supports the DB2 technical sales organization by assisting with complex database migration projects as well as with database performance benchmark testing.