Solved

DB2 UDB Loads / Backup / Restore

Posted on 2007-03-28
882 Views
Last Modified: 2008-09-14
I'm well versed in DB2 for z/OS. I'm new to UDB. I'm working with DB2 UDB in a J2EE environment.
a) The table I'm referring to needs to be online 24x7. Every night a load needs to be done into the table. The table itself contains a large number of rows. Nightly, about 9 million rows need to be updated/inserted as follows: if the record exists, it needs to be updated; if it isn't there, it needs to be inserted. Is there any way to accomplish this while the table is still online and can be queried? Currently the Unix 'diff' utility is used to read a CSV file and update the table, but it is too lengthy a procedure. Is there a load utility to do this?
b) What is the normal backup & restore procedure in DB2 UDB? Are there any examples, or places I can learn this from?

An early response will be appreciated.

Thanking you in advance!!!
Question by:hellomdp
8 Comments
 
LVL 37

Expert Comment

by:momi_sabag
Hi

First of all, you can find all the documentation here:
http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp

The utility used to load data into tables from a CSV file is called LOAD.
Unfortunately, you cannot ask it to implement the upsert (update or insert) logic you want.
You can keep the data read-accessible during the load, but the load itself cannot update existing rows.
A possible workaround is to delete the rows that will be updated in advance and then use the LOAD command, but I think the best way to accomplish what you need is to write a program that does it. A sketch of a read-allowing LOAD is below.
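A minimal sketch, assuming a comma-delimited file /data/nightly.csv and a target table MYSCHEMA.MYTABLE (both names are invented):

    db2 "LOAD FROM /data/nightly.csv OF DEL INSERT INTO MYSCHEMA.MYTABLE ALLOW READ ACCESS"
    -- ALLOW READ ACCESS lets queries keep reading the pre-load data while the load runs
    -- (it is only valid with INSERT, not REPLACE)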

momi
 
LVL 37

Expert Comment

by:momi_sabag
Come to think of it, maybe the best solution for you will be:
1) unload the source table
2) sort both files (the unloaded table and the file to load)
3) run a merge program that combines the files into a new file containing all the data: every row from the file to load, plus every row from the source table that did not appear in the file to load
4) define a new table with a temporary name
5) load that table
6) rename the source table to a different name
7) rename the new table to the original name

The best way to accomplish the last steps is to use a synonym: you maintain 2 tables, and each day you load one of them and then redirect the synonym to the freshly loaded table (see the sketch below).
All you need to make sure of is to put the table in read-only access when you start the unload, so you won't have any lost updates.
This is actually a method we use here for 5 tables in the mainframe environment.
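In DB2 LUW the closest equivalent of a mainframe synonym is an alias. A rough sketch of the swap, with all names invented:

    db2 "CREATE ALIAS APP.ORDERS FOR APP.ORDERS_A"
    -- one-time setup; the application always queries APP.ORDERS

    db2 "LOAD FROM /data/merged.csv OF DEL REPLACE INTO APP.ORDERS_B"
    -- nightly: rebuild the inactive copy, then repoint the alias
    db2 "DROP ALIAS APP.ORDERS"
    db2 "CREATE ALIAS APP.ORDERS FOR APP.ORDERS_B"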

If you have further questions, I'll be glad to answer.
 
LVL 37

Accepted Solution

by: momi_sabag (earned 125 total points)
Regarding the backup and restore procedure:

You have 2 commands - BACKUP DATABASE and RESTORE DATABASE.
Backup / restore can be done at the tablespace or database level.
You can find all about them in the first link I sent you.
Just like on the mainframe, you can take a SHRLEVEL CHANGE-style (online) backup. The difference is that by default UDB uses circular logging; in order to take online backups, you have to switch the database to mainframe-like logging (with archiving) and then take a one-time offline (SHRLEVEL NONE-style) backup.
You will find some examples of using the commands in the link I gave you.
I recommend using the COMPRESS parameter when you back up - I got some serious disk savings when I started using it.
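A sketch of the sequence, assuming a database named SAMPLE and a backup directory /backups (the v8-style LOGRETAIN switch is shown; later releases use LOGARCHMETH1 instead):

    db2 "UPDATE DB CFG FOR SAMPLE USING LOGRETAIN RECOVERY"
    -- the database is now in backup-pending state; take the one-time offline backup
    db2 "BACKUP DATABASE SAMPLE TO /backups COMPRESS"
    -- from then on, online backups are allowed
    db2 "BACKUP DATABASE SAMPLE ONLINE TO /backups COMPRESS"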

Do you have any specific questions regarding backup and restore?

momi
 
LVL 13

Expert Comment

by:ghp7000
Your best bet is to do exactly what you are doing, that is to say, read the data file from a C-type program line by line, compare that line to the database, then apply your insert/update logic. Since you want concurrent access to the table, you may still run into lock contention on a specific row, but there isn't much you can do about that. If the table in question is read-only for all other users, you don't have to worry about lock contention. You will find that a well-written C program that 'chases a cursor' will perform very well for 9 million rows; I would estimate about two to three hours to get it done.
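The per-row logic, expressed as plain SQL for illustration (table and column names are invented; the host program binds the ? markers and checks SQLCODE):

    UPDATE MYSCHEMA.MYTABLE SET COL1 = ?, COL2 = ? WHERE KEYCOL = ?
    -- if the UPDATE found no row (SQLCODE +100), insert instead:
    INSERT INTO MYSCHEMA.MYTABLE (KEYCOL, COL1, COL2) VALUES (?, ?, ?)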

Regarding the backup and restore, there isn't really any such thing as a 'normal' backup and restore. Since you are 24x7, you have no choice but to use online backups; that means taking care of the archive logs and deciding on your backup schedule. The restore procedure for online backups is quite straightforward: you restore the last known good backup and then roll forward through all or some of the archive logs to bring the database to a consistent point in time.
The only gotcha with online restore is that you must be careful when rolling forward to a point in time that is not the end of the logs, because if you do so and then start using the database, the previous archive log files will be overwritten by new ones with the same archive log numbers. That makes restoring to the original end of logs impossible (if you later discover that the point in time you rolled forward to was the wrong one!).
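In command terms the sequence looks like this (the database name, directory, and timestamps are placeholders):

    db2 "RESTORE DATABASE SAMPLE FROM /backups TAKEN AT 20070328120000"
    -- replay all archive logs...
    db2 "ROLLFORWARD DATABASE SAMPLE TO END OF LOGS AND COMPLETE"
    -- ...or, subject to the caveat above, stop at a specific point in time:
    db2 "ROLLFORWARD DATABASE SAMPLE TO 2007-03-28-23.00.00 AND COMPLETE"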
Whatever you do, make sure you thoroughly test your plan and understand all its implications.
 
LVL 5

Expert Comment

by:ocgstyles
Just a thought...  and probably a bad one ;)

You could use the IMPORT command with the INSERT_UPDATE method. If there's a primary key match, the record is updated; otherwise it is inserted. That of course depends on the primary key being how you determine whether to update or insert.
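A minimal sketch, with invented file and table names (the target table needs a primary key):

    db2 "IMPORT FROM /data/nightly.csv OF DEL COMMITCOUNT 1000 INSERT_UPDATE INTO MYSCHEMA.MYTABLE"
    -- COMMITCOUNT commits every 1000 rows to keep locking and log growth in check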

The only problem is... performance. You'll have additional logging, triggers may fire when you don't want them to, and index updates will be slow...

 
LVL 50

Assisted Solution

by: Lowfatspread (earned 125 total points)
I'd suggest that you load the CSV file into a staging table; then you may be able to use UPSERT/MERGE logic to actually perform the desired updates on the main table.

Alternatively, you can update via stored procedures and cursors (still working from the staging table).
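For example, with invented staging and target table names (MERGE is available in DB2 LUW 8.x and later):

    db2 "LOAD FROM /data/nightly.csv OF DEL REPLACE INTO MYSCHEMA.STAGE"
    db2 "MERGE INTO MYSCHEMA.MYTABLE T
         USING MYSCHEMA.STAGE S
         ON T.KEYCOL = S.KEYCOL
         WHEN MATCHED THEN UPDATE SET T.COL1 = S.COL1, T.COL2 = S.COL2
         WHEN NOT MATCHED THEN INSERT (KEYCOL, COL1, COL2) VALUES (S.KEYCOL, S.COL1, S.COL2)"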

