• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 767

Migrate an ArcSDE 9.2 SP6 database from Oracle 10gR2 to DB2 v8 or DB2 v9 on z/OS

Hello all, I am trying to replace the current ArcSDE 9.2 database backend, moving from Oracle 10gR2 to DB2 on z/OS (mainframe). Are there any steps I need to follow? I used the IBM Migration Toolkit, but I had to replace and clean up so much that I doubt ArcSDE 9.2 will pass the connection test. Is there any other means of migrating the backend database from Oracle to DB2 on z/OS using ESRI tools?
CuteDBAAsked:
1 Solution
 
momi_sabagCommented:
this is one hard migration process
but the IBM Migration Toolkit should get you headed in the right direction
i don't know the ESRI tools, but if you have specific problems with the installation just post your questions here
i believe IBM representatives would be able to help you with such issues
 
Kent OlsenData Warehouse Architect / DBACommented:
Hi Cute,

Changing an application's connection from Oracle to DB2 is completely different from converting a database from Oracle to DB2.

The Toolkit should make moving the data a lot easier than any brute-force method.  If the underlying database isn't too large I'd think seriously about moving it to DB2/LUW, but since you appear to already have a mainframe, that move makes a lot of sense.

Moving the application connection may prove more challenging.  When connecting to Oracle, the application talks over Net8 or Oracle Net to the Oracle back end.  Depending on how deeply either protocol is ingrained in the application, it might not move to DB2 easily.

That said, I would hope that you can easily connect the application to DB2 using the DB2 client or an appropriate ODBC/JDBC driver.

Test basic connectivity by connecting to the DB2 database, from the machine where ArcSDE will run, using almost any DB2-compatible application.  (The DB2 client or any GUI development tool will do nicely and should connect with little or no trouble.)  Once you've confirmed connectivity, then try configuring ArcSDE.
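A minimal smoke test from the DB2 command line processor might look like this (the database name and credentials here are placeholders, not your actual values):

```sql
-- Connect to the target DB2 database (names are hypothetical)
CONNECT TO GISDB USER sdeadmin USING password;

-- A trivial query against the always-present dummy table proves the session works
SELECT CURRENT SERVER, CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1;

CONNECT RESET;
```

If that succeeds from the ArcSDE host, any remaining connection failures are in the ArcSDE configuration, not the network or the DB2 client.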


Good Luck,
Kent
 
CuteDBAAuthor Commented:
Hello,
Thank you all. I tried to migrate the data using the IBM Migration Toolkit (database structure only); then, with an Oracle database link between the two databases, I can run INSERT ... SELECT statements. The conversion took a lot of modification, such as table data types and character sets.

So far I have been testing the database structure migration only. I am a database administrator, not a GIS specialist, but ArcSDE (the spatial database engine) has the ability to export and import spatial data from the SDE database. I am wondering: if I installed the ArcSDE engine on a server connected to the DB2 backend, then used the export/import utility shipped with ArcSDE, would it work out?

I am trying to migrate the structure first, then data, later test application connectivity with the new migrated database.

Regards,
 
Kent OlsenData Warehouse Architect / DBACommented:
Hi Cute,

How much data is there?

If you can federate the systems, you can "drive" from the DB2 host and a lot of things can be made a lot cleaner.

  CREATE TABLE db2database.tablename AS (SELECT * FROM oracledatabase.tablename) DEFINITION ONLY;

DB2 understands the Oracle types and will select appropriate types on the DB2 side.  The only issue that I've ever had with this is that Oracle NUMBER types typically convert to DECIMAL types instead of integers.  I prefer integers because they're smaller and faster.
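Where the DECIMAL mapping is unwanted, an explicit cast in the defining query forces an integer type. For example (the schema, table, and column names here are made up for illustration):

```sql
-- Force a NUMBER-derived column to INTEGER instead of the default DECIMAL
CREATE TABLE db2schema.parcels AS
  (SELECT CAST(parcel_id AS INTEGER) AS parcel_id,
          parcel_name
     FROM oradb.parcels)
  DEFINITION ONLY;
```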


Kent
 
CuteDBAAuthor Commented:
Hello Kent,

Some tables reach 7,000+ records.
I am going to try this SQL statement and see what results I get. What happened during the migration is that I had to convert VARCHAR(4000) to CLOB, and NVARCHAR to VARGRAPHIC. Also, NUMBER data types are mapped as DECIMAL. I got errors when using the IBM Migration Toolkit: it did not recognize a VARCHAR definition such as VARCHAR(205 BYTE) (the BYTE keyword), so I had to clean the BYTE keyword out of a lot of table definitions. I haven't yet started on other database objects such as indexes and referential constraints.
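As a sketch of the kind of hand edits involved (a hypothetical table, not one from the actual schema):

```sql
-- Oracle source, as the Toolkit choked on it:
--   CREATE TABLE features (
--     fid    NUMBER(10),
--     name   NVARCHAR2(100),
--     notes  VARCHAR2(4000 BYTE)   -- the BYTE keyword had to be stripped
--   );

-- Hand-cleaned DB2 equivalent:
CREATE TABLE features (
  fid    DECIMAL(10),      -- Toolkit default; INTEGER may be preferable
  name   VARGRAPHIC(100),  -- NVARCHAR2 mapped to VARGRAPHIC
  notes  CLOB(4000)        -- VARCHAR(4000) would not fit a 4K block
);
```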
Kindly advise.

Regards,
CuteDBA
 
Kent OlsenData Warehouse Architect / DBACommented:
Hi Cute,

Those are probably not the data types that you want to use.

VARCHAR is the generic character type.  Both Oracle and DB2 support it, though they manage the contents slightly differently.  I suspect that you had to convert VARCHAR(4000) to CLOB because you're using the default block size of 4K on the DB2 side.  (DB2 requires that a row fit into a single block, except for the xLOB objects.  Since the block is 4096 bytes and DB2 reserves some of them for its own use, a 4K VARCHAR item leaves no room for anything else.)  Try creating the database with 16K blocks.  That should solve the issue.
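On DB2 for LUW, 16K blocks can be set up with a dedicated buffer pool and table space, for example (the names and buffer-pool size below are hypothetical; on z/OS you would instead assign the table space to one of the predefined 16K buffer pools such as BP16K0):

```sql
-- A 16K buffer pool and a table space that uses it
CREATE BUFFERPOOL bp16k SIZE 5000 PAGESIZE 16K;

CREATE TABLESPACE ts_sde16k
  PAGESIZE 16K
  BUFFERPOOL bp16k;

-- Wide-row tables then go into the 16K table space:
-- CREATE TABLE ... IN ts_sde16k;
```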

NVARCHAR is used to store Unicode data.  DB2 doesn't have a special type for that; it stores Unicode data in the VARCHAR types and controls the contents via the character set / Unicode settings.



Kent
 
CuteDBAAuthor Commented:
Hello Kent,

I am sorry to have been away these last few days; I was really trying all the possibilities. I took your advice that the block size needed to be increased to 16K, and it worked like a charm. With heterogeneous database connectivity, a copy statement as SELECT from the source database works; most of the data has now been migrated.
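For anyone following along, that heterogeneous copy can be sketched with DB2 federation on the LUW side (the server, nickname, and credentials below are illustrative, and the Oracle client must already be configured on the DB2 host):

```sql
-- Register the Oracle wrapper and the remote server
CREATE WRAPPER net8;
CREATE SERVER ora_src TYPE oracle VERSION '10' WRAPPER net8
  OPTIONS (NODE 'ORA_TNS_ALIAS');

-- Map the local DB2 user to an Oracle account
CREATE USER MAPPING FOR sdeadmin SERVER ora_src
  OPTIONS (REMOTE_AUTHID 'sde', REMOTE_PASSWORD 'password');

-- Expose the Oracle table locally, then copy the rows
CREATE NICKNAME ora_features FOR ora_src.sde.features;
INSERT INTO sde.features SELECT * FROM ora_features;
```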

For the geospatial data and ArcSDE, I am thinking of installing the software and connecting it to the DB2 database as a back end, then using the client GIS application (ArcCatalog) to update the data, which will simply go to the mainframe.

Thank you very much.
CuteDBA
 
Kent OlsenData Warehouse Architect / DBACommented:

Glad to be able to help, and that it's a pretty easy fix.  :)


Kent
 
Kent OlsenData Warehouse Architect / DBACommented:
Hi Cute,

One more thing about DB2 block sizes.  I chose 16K but that may not be "best" for your application.  Several factors need to be considered when selecting the block size.

You've obviously got some large rows, as they won't fit in a 4K block.  Are the rows slightly more than 4K?  6K?  8K?  When the data is read (not necessarily just the rows returned via a SELECT statement), are multiple rows often required to complete the query?

Start with the row size.  If it's up to about 4,500 characters, you can fit 3 rows in a 16K block, but 7 in a 32K block.  Using a 32K block is roughly a 15% reduction in the total storage required to hold the table, and a 57% reduction in the number of blocks needed to hold it.  If you've got a lot of data, there could be a significant performance improvement in using a 32K block.

The other side of the discussion starts by examining how the data is read.  When you select a single row from the table using an indexed column, DB2 locates the row from the index and reads only 1 block from the table.  If this is the way the table is normally accessed, 32K blocks can be a slight performance degradation, since 32K is read from disk every time a row is fetched when an 8K or 16K block would suffice.  If you do a lot of updates or inserts, the smaller block size means you write a smaller footprint to disk, which can be a big performance help.

And of course, blocks are stored in the DB2 buffer pool (memory cache) so on a well tuned system that is used primarily for reporting (a lot more queries than updates/inserts) the large block is probably the better choice as it will mean more rows are in the buffer pool.

And you thought this was simple, huh?  :)


Good Luck,
Kent
