

DB2 UDB Loads / Backup / Restore

Posted on 2007-03-28
Medium Priority
Last Modified: 2008-09-14
I'm well versed with DB2 z/OS. I'm new to UDB. I'm working with DB2 UDB in a J2EE environment.
a) The table I'm referring to needs to be online 24x7. Every night a load is to be done to the table. The table itself contains a large number of rows. Nightly, about 9 million rows need to be updated/inserted as follows: if the record exists, it needs to be updated; if not, it needs to be inserted. Is there any way to accomplish this while the table is still online and can be queried? Currently, the Unix 'diff' utility is used to read a CSV file and update the table. It is too lengthy a procedure. Is there any load utility to do this?
b) What is the normal backup and restore procedure in DB2 UDB? Are there any examples, or places I can learn this from?

An early response will be appreciated.

Thank you in advance!
Question by: hellomdp
LVL 37

Expert Comment

ID: 18814409

First of all, you can find all the documentation here.

The utility that is used to load data into tables from a CSV file is called LOAD. Unfortunately, you cannot ask it to implement the upsert (update or insert) logic you want. You can keep the data read-accessible during the load, but the load cannot update existing rows.
A possible workaround is to delete the rows that will be updated in advance and then use the LOAD command, but I think the best way to accomplish your need is to write a program that does that.
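For reference, a basic LOAD invocation from a delimited file that keeps the table readable while the load runs might look like the following sketch (database, schema, table, and file names are placeholders):

```shell
db2 "load from /tmp/nightly.csv of del insert into MYSCHEMA.SALES allow read access"
```

ALLOW READ ACCESS lets queries see the pre-load data while the load is in progress, which matches the behavior described above: readers are fine, but the utility cannot update existing rows.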

LVL 37

Expert Comment

ID: 18814453
Come to think of it, maybe the best solution for you will be:
1) Unload the source table
2) Sort both files (the unloaded table and the file to load)
3) Run a merge program that combines the two files into a new file containing all the data: every row from the file to load, plus every row from the source table that does not appear in the file to load
4) Define a new table with a temporary name
5) Load that table
6) Rename the source table to a different name
7) Rename the new table to the original name
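Steps 2 and 3 can be sketched with standard Unix tools, assuming both files are comma-delimited with the key in the first column (file names and sample data below are hypothetical):

```shell
# Hypothetical sample data standing in for the unloaded table and the nightly file;
# the key is the first comma-delimited column.
printf '1,old-a\n2,old-b\n3,old-c\n' > unloaded_table.csv
printf '2,new-b\n4,new-d\n' > new_load.csv

# Step 2: sort both files on the key.
sort -t, -k1,1 unloaded_table.csv > old_sorted.csv
sort -t, -k1,1 new_load.csv > new_sorted.csv

# Step 3: keep only the old rows whose key does NOT appear in the new file...
join -t, -v1 old_sorted.csv new_sorted.csv > old_only.csv

# ...then merge them with every row from the new file to get the file to load.
sort -t, -k1,1 -m old_only.csv new_sorted.csv > merged.csv
cat merged.csv
```

Here row 2 comes from the new file (the "update") and rows 1 and 3 survive from the old table, so merged.csv is exactly what step 5 loads.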

The best way to accomplish the last steps is to use a synonym.
You maintain two tables; each day you load one of them and then redirect the synonym to the new table.
All you need to make sure of is to put the table in read-only access when you start the unload, so you won't have any lost updates.
This is actually a method we use here for five tables in the mainframe environment.
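In UDB the synonym role is played by an alias; the swap in steps 4-7 might look like this sketch (schema and table names are placeholders):

```shell
db2 "load from merged.csv of del replace into MYSCHEMA.SALES_B"   # refill the standby table
db2 "drop alias MYSCHEMA.SALES"                                   # repoint the alias
db2 "create alias MYSCHEMA.SALES for MYSCHEMA.SALES_B"
```

Applications query through the alias, so the switch is near-instant; the next night you load MYSCHEMA.SALES_A and repoint the alias back.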

If you have further questions, I'll be glad to answer.
LVL 37

Accepted Solution

momi_sabag earned 500 total points
ID: 18814493
Regarding the backup and restore procedure:

You have two commands: BACKUP DATABASE and RESTORE DATABASE. Backup/restore can be done at the tablespace or database level. You can find all about them in the first link I sent you.
Just like on the mainframe, you can have a share-level-change backup, but the difference is that by default UDB uses circular logging; in order to use share-level-change (online) backups, you have to switch the database to mainframe-like logging (with archiving) and then take a one-time offline (share-level-none) backup.
You will find some examples of using the command in the link I gave you.
I recommend using the COMPRESS parameter when you back up; I got some serious disk savings when I started using that parameter.
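The switch from circular to archive logging and a subsequent online compressed backup might look like this sketch (database name and archive path are placeholders; LOGARCHMETH1 is the parameter on DB2 8.2 and later, while older releases used LOGRETAIN):

```shell
# One-time: enable archive logging.
db2 "update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archlogs"

# The database is now in backup-pending state; one offline backup is required:
db2 "backup database MYDB compress"

# From then on, online backups are possible while applications stay connected:
db2 "backup database MYDB online compress"
```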

Do you have any specific questions regarding backup and restore?


LVL 13

Expert Comment

ID: 18816318
Your best bet is to do exactly what you are doing: read the data file from a C-type program line by line, compare each line to the database, and then apply your insert/update logic. Since you want concurrent access to the table, you may still run into lock contention on a specific row, but there isn't much you can do about that. If the table in question is read-only for all other users, you don't have to worry about lock contention. You will find that a well-written C program that 'chases a cursor' performs very well for 9 million rows; I would estimate about two to three hours to get it done.

Regarding the backup and restore, there isn't really any such thing as a 'normal' backup and restore. Since you are 24x7, you have no choice but to use online backups; that means taking care of the archive logs and deciding on your backup schedule. The restore procedure for online backups is quite straightforward: you restore the last known good backup and then roll forward through all or some of the archive logs to bring the database to a consistent point in time.
The only gotcha with online restore is that you must be careful when rolling forward to a point in time that is not at the end of the logs, because if you do so and then start using the database, the previous archive log files will be overwritten by new ones with the same archive log numbers. That makes restoring to the original end of logs impossible (if you later discover that the point in time you rolled forward to was the wrong one!).
Whatever you do, make sure you thoroughly test your plan and understand all its implications.
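A restore-and-rollforward sequence of the kind described might look like this sketch (timestamps and the database name are hypothetical):

```shell
db2 "restore database MYDB taken at 20070328213000"

# Either replay everything...
db2 "rollforward database MYDB to end of logs and complete"

# ...or stop at a point in time, keeping the overwrite caveat above in mind:
db2 "rollforward database MYDB to 2007-03-28-23.00.00 using local time and complete"
```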

Expert Comment

ID: 18816487
Just a thought...  and probably a bad one ;)

You could use the IMPORT command with the INSERT_UPDATE method. If there is a primary key match, the record is updated; otherwise it is inserted. That would of course depend on how you determine whether to update or insert.
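A minimal sketch of that IMPORT, assuming the target table has a primary key (file, schema, and table names are hypothetical):

```shell
db2 "import from /tmp/nightly.csv of del commitcount 10000 insert_update into MYSCHEMA.SALES"
```

COMMITCOUNT keeps the transaction log from filling up on a 9-million-row run.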

The only problem is performance. You'll have additional logging, triggers may fire when you don't want them to, and index updates will be slow...

LVL 50

Assisted Solution

Lowfatspread earned 500 total points
ID: 18817327
I'd suggest that you load the CSV file into a staging table; you may then be able to use UPSERT/MERGE logic to actually perform the desired updates on the main table.

Alternatively, you can update via stored procedures and cursors (still working from the staging table).
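If your UDB release supports the SQL MERGE statement (DB2 LUW 8.2 and later), the staging-table approach might be sketched as follows; table and column names are hypothetical:

```sql
MERGE INTO myschema.sales AS t
USING (SELECT id, amount FROM myschema.sales_stage) AS s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN
  INSERT (id, amount) VALUES (s.id, s.amount);
```

This does the whole upsert in one set-based statement instead of chasing a cursor row by row.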

