DB2 UDB Loads / Backup / Restore

I'm well versed in DB2 for z/OS, but I'm new to UDB. I'm working with DB2 UDB in a J2EE environment.
a) The table I'm referring to needs to be online 24x7. Every night a load must be run against it. The table itself contains a large number of rows: roughly 9 million rows need to be updated or inserted each night, as follows: if a record exists it must be updated; if not, it must be inserted. Is there any way to accomplish this while the table is still online and can be queried? Currently a Unix 'diff'-based utility is used to read a CSV file and update the table, but that procedure takes too long. Is there a load utility that can do this?
b) What is the normal backup and restore procedure in DB2 UDB? Are there any examples, or places I can learn this from?

An early response will be appreciated.

Thanking you in advance!!!
hellomdp asked:
momi_sabag commented:
Hi

First of all, you can find all the documentation here:
http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp

The utility used to load data into tables from a CSV file is called LOAD. Unfortunately, you cannot ask it to implement the upsert (update-or-insert) logic you want. You can keep the data read-accessible during the load, but LOAD cannot update existing rows.
A possible workaround is to delete the rows that will be updated in advance and then run the LOAD command, but I think the best way to accomplish what you need is to write a program that does it.
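For reference, a minimal LOAD invocation with read access could look like this (database, schema, table, and file names here are all made up; check the Command Reference for your level):

```shell
db2 connect to SAMPLE
# ALLOW READ ACCESS lets queries see the pre-load data while LOAD runs;
# writers are still locked out for the duration of the load
db2 "LOAD FROM nightly.csv OF DEL INSERT INTO MYSCHEMA.MYTAB ALLOW READ ACCESS"
db2 connect reset
```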

momi
momi_sabag commented:
Come to think of it, maybe the best solution for you would be:
1) unload (export) the source table
2) sort both files (the unloaded table and the file to load)
3) run a merge program that produces a new file containing all the data: every row from the file to load, plus every row from the source table that does not appear in the file to load
4) define a new table with a temporary name
5) load that table
6) rename the source table to a different name
7) rename the new table to the original name
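As a sketch of steps 2 and 3, the sort-and-merge can be done with standard Unix tools, assuming a comma-delimited file whose first field is the key (all file names and data here are made up):

```shell
# old_data.csv : unload of the current table
# new_data.csv : tonight's delta file (wins on key collisions)
printf 'k1,old1\nk2,old2\nk3,old3\n' > old_data.csv
printf 'k2,new2\nk4,new4\n'          > new_data.csv

# step 2: sort both files on the key (join requires sorted input)
sort -t, -k1,1 old_data.csv > old_sorted.csv
sort -t, -k1,1 new_data.csv > new_sorted.csv

# step 3a: keep old rows whose key does NOT appear in the new file
# (join -v1 emits lines of file 1 that have no match in file 2)
join -t, -v1 old_sorted.csv new_sorted.csv > old_only.csv

# step 3b: merged result = every new row, plus the unmatched old rows
sort -t, -k1,1 new_sorted.csv old_only.csv > merged.csv
cat merged.csv   # k1,old1 / k2,new2 / k3,old3 / k4,new4 (one per line)
```

The merged file can then be fed to LOAD in step 5.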

The best way to accomplish the last steps is to use an alias (synonym): you maintain two tables, and each day you load one of them and then redirect the alias to the newly loaded table.
All you need to make sure of is to put the table into read-only access when you start the unload, so you won't lose any updates.
This is actually a method we use here for 5 tables in our mainframe environment.
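A rough CLP sketch of the swap in steps 4 through 7, using made-up database, schema, table, and file names (RENAME TABLE is one option; the alias redirect described above is another):

```shell
db2 connect to SAMPLE
db2 "CREATE TABLE MYSCHEMA.MYTAB_NEW LIKE MYSCHEMA.MYTAB"
db2 "LOAD FROM merged.csv OF DEL REPLACE INTO MYSCHEMA.MYTAB_NEW"
# swap: queries keep seeing the old table until the renames complete
db2 "RENAME TABLE MYSCHEMA.MYTAB TO MYTAB_OLD"
db2 "RENAME TABLE MYSCHEMA.MYTAB_NEW TO MYTAB"
db2 connect reset
```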

If you have further questions, I'll be glad to answer.
momi_sabag commented:
regarding the backup and restore procedure

You have two commands: BACKUP DATABASE and RESTORE DATABASE.
Backup and restore can be done at the tablespace or database level. You can find all about them in the first link I sent you.
Just like on the mainframe, you can take a SHRLEVEL CHANGE (online) backup, but the difference is that by default UDB uses circular logging. In order to take online backups, you have to switch the database to mainframe-style logging (with log archiving) and then take a one-time offline (SHRLEVEL NONE) backup.
You will find examples of using the command in the link I gave you.
I recommend using the COMPRESS parameter when you back up; I got some serious disk savings when I started using it.
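For illustration, on DB2 v8 the command sequence might look roughly like this (database name and paths are made up; check the Command Reference for your level):

```shell
# one-time switch from circular to archive (retained) logging;
# v8 uses LOGRETAIN, later releases use LOGARCHMETH1
db2 update db cfg for SAMPLE using LOGRETAIN RECOVERY
# the database is now in backup-pending state, so take one offline backup
db2 backup database SAMPLE to /backups compress
# from then on, online backups are possible while the tables stay available
db2 backup database SAMPLE online to /backups compress
# restore, then roll forward through the archived logs
db2 restore database SAMPLE from /backups
db2 rollforward database SAMPLE to end of logs and stop
```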

Do you have any specific questions regarding backup and restore?

momi

ghp7000 commented:
Your best bet is to do exactly what you are doing: read the data file from a C-type program line by line, compare each line to the database, and then apply your insert/update logic. Since you want concurrent access to the table, you may still run into lock contention on specific rows, but there isn't much you can do about that. If the table in question is read-only for all other users, you don't have to worry about lock contention. You will find that a well-written C program that 'chases a cursor' performs very well; for 9 million rows I would estimate about two to three hours to get it done.

Regarding backup and restore, there isn't really any such thing as a 'normal' backup and restore. Since you are 24x7, you have no choice but to use online backups, which means taking care of the archive logs and deciding on your backup schedule. The restore procedure for online backups is quite straightforward: you restore the last known good backup and then roll forward through all or some of the archive logs to bring the database to a consistent point in time.
The only gotcha with online restore is that you must be careful when rolling forward to a point in time that is not the end of the logs, because if you do so and then start using the database, the previous archive log files will be overwritten by new ones with the same archive log numbers. That makes restoring to the original end of logs impossible (if you later discover that the point in time you rolled forward to was the wrong one!).
Whatever you do, make sure you thoroughly test your plan and understand all its implications.
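For example, a point-in-time restore looks roughly like this (database name, path, and timestamps are made up):

```shell
# restore a specific backup image, identified by its timestamp
db2 restore database SAMPLE from /backups taken at 20070101120000
# roll forward only part of the way, then stop;
# note the overwritten-archive-log caveat described above
db2 "rollforward database SAMPLE to 2007-01-01-14.00.00.000000 using utc time and stop"
```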
ocgstyles commented:
Just a thought...  and probably a bad one ;)

You could use the IMPORT command with the INSERT_UPDATE option. If there is a primary key match, the record is updated; otherwise it is inserted. That of course depends on the primary key being how you decide whether to update or insert.

The only problem is performance: you get full logging, triggers may fire when you don't want them to, and index updates will be slow...
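A minimal sketch of that IMPORT (database, schema, table, and file names are made up; the target table needs a primary key):

```shell
db2 connect to SAMPLE
# COMMITCOUNT commits in batches so the log doesn't fill
# on a 9-million-row file
db2 "IMPORT FROM nightly.csv OF DEL COMMITCOUNT 10000 INSERT_UPDATE INTO MYSCHEMA.MYTAB"
db2 connect reset
```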

Lowfatspread commented:
I'd suggest loading the CSV file into a staging table; then you may be able to use upsert/MERGE logic to actually perform the desired updates on the main table.

Alternatively, you can update via stored procedures and cursors...

(still working from the staging table)
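A sketch of the staging-table approach (made-up names and columns; the SQL MERGE statement requires DB2 UDB v8.2 or later):

```shell
db2 connect to SAMPLE
# LOAD REPLACE wipes and refills the staging table each night
db2 "LOAD FROM nightly.csv OF DEL REPLACE INTO MYSCHEMA.STAGE"
# upsert from staging into the live table in one statement
db2 "MERGE INTO MYSCHEMA.MYTAB t
       USING MYSCHEMA.STAGE s
       ON t.ID = s.ID
     WHEN MATCHED THEN UPDATE SET t.VAL = s.VAL
     WHEN NOT MATCHED THEN INSERT (ID, VAL) VALUES (s.ID, s.VAL)"
db2 connect reset
```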