williamwlk asked:
Oracle Rownum Dump

Hello,

In Oracle, I would like to achieve the following:

- I have one million records:
select count(bookid) from books where location='W01' and backuplocation='B01';
1,000,000
- I would like to change any 150,000 bookids' location from 'W01' to 'W02', where backuplocation='B01'. [sql1]
- I commit the change.
- In case of problems, I would like to change those bookids' location from 'W02' back to 'W01', where backuplocation='B01'. In other words, I would like to roll back [sql1].

Help, Please.

W


OP_Zaharin (Malaysia):
- you can use ROWNUM to limit the update to 150,000 records:

UPDATE books SET location = 'W02'
WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM < 150000
williamwlk (ASKER):

Yes! OK.

Suppose I run your SQL and commit. After that, I decide I don't want that change and I want to roll it back.

I want to change back those bookids that were changed.
FYI: the table is changing all the time; many bookids are added every minute.

W
- sorry I'm late in posting this. Once you commit, you cannot roll back. To get the data back as it was before the commit, you can look into using Oracle Flashback. You can google for the docs or steps to implement it; I listed a few:

http://www.oracle.com/technetwork/database/features/availability/flashback-overview-082751.html
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm
http://www.orafaq.com/node/50
http://psoug.org/reference/flash_query.html
- and for 150,000 records the query should use "AND ROWNUM <= 150000"
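For reference, a flashback query against the books table might look like this: a minimal sketch, assuming your UNDO retention still covers the point in time you want to see.

```sql
-- Flashback Query sketch: read books as it was 15 minutes ago.
-- Assumes UNDO retention is large enough to cover that window.
SELECT bookid, location, backuplocation
FROM   books AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)
WHERE  location = 'W01'
AND    backuplocation = 'B01';
```

This only reads the historical state; restoring it would still require an UPDATE or MERGE against the live table.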
Let's assume I can't implement Oracle Flashback.

How about this?

dump bookids to a file WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM < 150000

then I change the records one by one

commit

if there is a problem,

change back / roll back one by one


Stupid, but it should work.


But from an experienced Oracle DBA's point of view, how does it look?

Or is there a better idea?

Dear,
In a situation where your table is being updated and changed, you cannot flashback the database or tablespace without risking the loss of your new data. You have to make your backup available in some offline schema, compare both tables for differences in location, then loop through them to restore the original location.
akramryk,

How about the same approach I described above: dump the bookids to a file, change them one by one, commit, and if there is a problem change them back one by one? Stupid, but it should work. From an experienced Oracle DBA's point of view, how does it look, or is there a better idea?
- rather than dump it to a file, you can create a temp table to store those 150,000 records:

CREATE TABLE books_tmp
AS SELECT * FROM books
WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM <= 150000;

- then work on the data that needs to be updated in the temp table instead, or use the temp table to match back the data that has been changed in the books table.
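Given such a books_tmp copy, the rollback could be sketched like this (assuming bookid is the primary key of books; not tested against the actual schema):

```sql
-- Restore the original location for every row captured in books_tmp.
UPDATE books b
SET    b.location = (SELECT t.location
                     FROM   books_tmp t
                     WHERE  t.bookid = b.bookid)
WHERE  EXISTS (SELECT 1
               FROM   books_tmp t
               WHERE  t.bookid = b.bookid);

COMMIT;
```

The EXISTS clause restricts the restore to exactly the rows that were copied, so rows inserted after the backup are left untouched.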
I need to check with my Oracle back-end team whether I am allowed to create a temp table in my schema.

Let's say I also can't create a temp table, what then?

W
slightwv (Netminder):
You likely will not be able to use ROWNUM to undo the changes you accidentally committed.

rownum is a virtual column assigned to the result set.  I assume there are new rows being added all the time.  Since there was no order to the update, there is no guarantee you will un-change the same 150,000 rows you initially changed.

If you cannot restore from a backup or flashback the change, you will need to manually verify ALL the rows, not just 150,000, and their appropriate values.
Yes, that is the point, slightwv.

So how can we do this:

back up to a table and change the values at the same time, for consistency (using ROWNUM or whatever),

then roll back according to the backup table?

Ha! slightwv!
We are slowly getting there...

<quote>
Below is a sample that shows how you can preserve original values using a backup table but it does not account for the legitimate updates that happen outside your 'test' updates.
</quote>

I have one good condition here. Even though records are written every second, once they are written, no one will change them except me. And I just want to undo/roll back something I changed and committed.

No other legitimate updates that happen outside my updates.

So, your sample fits the profile?

W
>>And I just want to undo/rollback something I changed and committed.

Please clarify:  You have ALREADY updated and committed the changes by accident or are you wanting to do this in the future and are looking on how to preserve the original values?

As long as you have a backup table with the original values and Primary Key values then the approach I posted 'should' work.  I cannot say with 100% accuracy that it will since I don't know the situation you are in.

- the steps explained by slightwv in ID: 37076049 are almost the same as what I explained earlier in ID: 37075415 and ID: 37075434 using a temp table. However, in ID: 37075446 you asked for an alternative to creating a temp table, didn't you?
<quote>
Please clarify:  You have ALREADY updated and committed the changes by accident or are you wanting to do this in the future and are looking on how to preserve the original values?
</quote>

I have not done anything yet, by accident or otherwise. I want to do it in the future. That's why I am consulting here.

I am planning to change some records (150,000 out of 1M++) in an ever-expanding table. In the future, after the update, in case of issues or problems, I need to change those 150,000 back to their original values.

Hope that helps.

W
OP Zaharin,

Yes. If possible, I want an alternative that does not create a temp table. You are right.

Still, I am learning the backup-table approach here.

My ultimate goal is to avoid creating a backup table, but if that is not possible, I might have to go with the backup-table option.

W
"I am planning to change some records (150,000 out of 1 Mil++) from ever expanding table. In the future, after the update, in case of issues or problems, I need to change those 150,000 back to original values."

- then I would suggest you create a temp table, as we have explained and shown examples of. You will then have a copy/backup of the actual table in case you need to roll back the data at any time.
- you can either create a temp table based on <= 150,000 records (in ID: 37075415) or create a temp table based on all the data available in books, as shown in slightwv's sample (in ID: 37076049)
>>My ultimate goal is without creating a backup table

Not feasible if you want to commit the changes.

>>in case of issues or problems, I need to change those 150,000 back to original values.

To be honest, this whole plan sounds a little fishy.  I can see preserving the initial state of an important table for a large rollout or major change just in case the new code has issues.

But I'm not following why you would want to change a small subset of a large table then 'look for problems'.

You normally iron out these problems in testing.

This plan is along the lines of: I want to change a small number of people's address country from the United States to Canada, leaving the states alone, then see what happens.

slightwv:

:) Let me explain.

I am a Linux Engineer. I am going to propose a technical purely commercial solution (not political or geopolitical or governmental or unlawful).

I have everything in place regarding Oracle. But the perspective I am looking at this problem from is not a DBA's but a generic engineering point of view.

While proposing my solution or my ideas, I have to follow the "Change Management Procedure". And my solution, aka my change, will not make it or will not be approved if I make a change without a 'rollback' plan.

My solution is simple. just a one-line SQL. But my rollback is a concern.

From Oracle Layer,
I have everything in place if something goes wrong to rollback from Oracle Level.

From my capacity,
I only look at my rollback SQL to undo things that I proposed to change.

I am consulting here as my first attempt at solving my problem. I have ideas but limited knowledge in various areas, and I consult here for straightforward answers.

I'd be more than happy to respond to any type of concerns or questions you may have further.


Meanwhile, I thank you all for your comments and suggestions.

W
Thanks for that.

I was just curious what changing 150,000 rows of a 1 million row table would provide.

The 150,001st row could blow things up?
Exactly.

Actually, I wanted to say 25% or 15% or some percentage like that of my current payload. But it was too generic, so I came up with figures such as 1 million and 150,000.

I am trying to reduce the current payload by 25% or 15% or so on every system. But if I don't have a rollback plan, Change Management and my managers will kill me and cannot cover me, especially if things don't work as expected.

So, just following a process strictly while brainstorming on an idea.



W

- why not schedule a daily backup of the tables using EXP or EXPDP? Then any time you need the old data back, import the table either into the same database (need to be careful with this), a new database schema, or a staging database (on your laptop/development database if available), then insert/update the old data back into the production table.
OP Z,

I am confined to my job scope, so I don't have to do anything beyond it.

My focus is purely _purely_ my solution and my rollback.

Nothing else matters.

W
- since you can't implement Flashback, what about the temp/backup table solution as proposed? I can't think of any other method to update and then 'roll back' at a later stage other than the suggested approach.
What it boils down to is a way to uniquely identify the 150,000 updated rows and know their original values.

Before the test update are there already locations with 'W02' and backuplocations of 'B01'?  Are all the backuplocation values the same or can they be for the 'test'?

If your update will create unique combinations with W01 and B01 then it would be easy to target those rows to update them back if ALL the backuplocations were the same values as well.

Is there possibly another insignificant (for what you are testing) column that you can update to uniquely identify these rows?

Maybe a single character flag column like gender or some Y/N column?  If so it might be possible to update that column as well.

The 150,000 test rows would be all 'Y' flagged rows.  Then you might be able to update them to a 'Z'?  Since the app will NEVER, I hope, put a 'Z' into a Y/N column you can easily target the rows you modified.

Again, this assumes all the updated columns were the exact same 'before' values.

If the old values of backuplocation are different then I think the temp table with a duplicate primary key as the main table is the way to go here since it is the only way to preserve the 'before' values.
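The flag-column idea above might be sketched like this. This is hypothetical: it assumes books has a Y/N column called flag, which is not in the original table description, and that all updated rows start as 'W01'/'Y'.

```sql
-- Forward change: tag the 150,000 test rows with an impossible flag value 'Z'.
UPDATE books
SET    location = 'W02',
       flag     = 'Z'
WHERE  location = 'W01'
AND    backuplocation = 'B01'
AND    flag = 'Y'
AND    ROWNUM <= 150000;
COMMIT;

-- Rollback: the 'Z' flag uniquely identifies exactly the rows we touched.
UPDATE books
SET    location = 'W01',
       flag     = 'Y'
WHERE  flag = 'Z';
COMMIT;
```

As noted above, this only works because every modified row had the same 'before' values; if the before-values differ per row, a backup table is still required.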
From a change management perspective, we always allowed these backup tables for changes. They would have a specific naming convention and would be exported and dropped after a period of time. The export was just in case someone came along a year later and complained (believe me, it happened).
slight,

I will provide a sample data.

johnsone:

You are right. If nothing happens, nothing happens. If something happens, then somebody is to be held accountable. I want to prove that I followed the process and that I have data as proof.

W
Really, the temporary table is the safest.  It gets backed up with the database and everything you need to restore the previous values is there.
>>I will provide a sample data.

Provide sample data to whom?
to you, slightwv, and to all, so that we are on the same page.
>>to you slightwv

I'm not sure how sample data will help me or others here.

The data by itself without a complete understanding of your system and applications won't really provide much insight.
Thank you for your "time" and experts opinions. Appreciate it.
I did not notice this before.  Your #2 solution is incorrect.  Your backup SQL and your update statement are not guaranteed to generate the same data set.  ROWNUM is assigned as rows are retrieved and is not guaranteed to be the same if you run the query twice.  Especially if new data has been inserted that would fit your criteria.

Most likely the data sets would be the same, but it is not guaranteed.  Your senior should have told you that.

If you want to generate the backup SQL, I would create a temporary table or put the processing into a PL/SQL block.
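The PL/SQL route might be sketched as follows: do the update and capture the affected keys in the same statement, so the 'backup' list cannot drift from the rows actually changed. Here books_rollback_log is a hypothetical log table with a single bookid column.

```sql
DECLARE
  TYPE t_ids IS TABLE OF books.bookid%TYPE;
  l_ids t_ids;
BEGIN
  -- Update and capture the changed keys atomically in one statement.
  UPDATE books
  SET    location = 'W02'
  WHERE  location = 'W01'
  AND    backuplocation = 'B01'
  AND    ROWNUM <= 150000
  RETURNING bookid BULK COLLECT INTO l_ids;

  -- Persist the keys before committing, so the rollback target is exact.
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO books_rollback_log (bookid) VALUES (l_ids(i));

  COMMIT;
END;
/
```

Because the keys come from the UPDATE's own RETURNING clause, concurrent inserts cannot cause the logged set and the changed set to diverge, which is exactly the weakness of running the SELECT and the UPDATE as two separate ROWNUM queries.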
>> ROWNUM is assigned as rows are retrieved and is not guaranteed to be the same if you run the query twice

Agreed.  re: http:#a37075745

>>update into books values

This is NOT Oracle syntax and will not run.