
Oracle Rownum Dump

williamwlk asked · Medium Priority · 447 Views · Last Modified: 2012-08-13
Hello,

In Oracle, I would like to achieve the following.

- I have one million records:
select count(bookid) from books where location='W01' and backuplocation='B01';
1,000,000
- I would like to change any 150,000 bookids' location from 'W01' to 'W02', where backuplocation='B01'. [sql1]
- I commit the change.
- In case of problems, I would like to change those bookids' location from 'W02' back to 'W01', where backuplocation='B01'. In other words, I would like to roll back [sql1].

Help, Please.

W



Top Expert 2011

Commented:
- You can use ROWNUM to limit the update to 150,000 records:

UPDATE books SET location = 'W02'
WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM < 150000

Author

Commented:
Yes, OK.

Say I run your SQL and commit. After that, I decide I don't want that change and I want to roll it back.

I want to change back exactly those bookids that were changed.
FYI: the table is changing all the time; many bookids are added every minute.

W
Top Expert 2011

Commented:
- Sorry I'm late in posting this. Once you commit, you cannot roll back. To get back the data as it was before the commit, you can look into Oracle Flashback. You can Google for the docs or implementation steps; I've listed a few:

http://www.oracle.com/technetwork/database/features/availability/flashback-overview-082751.html
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm
http://www.orafaq.com/node/50
http://psoug.org/reference/flash_query.html
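A minimal sketch of the flashback-query idea, assuming flashback query is enabled and undo retention still covers the window; it also assumes no row already had location 'W02' with backuplocation 'B01' before the change (table and column names are taken from this thread):

```sql
-- Hypothetical sketch: view the rows as they were before the update
-- (here, 15 minutes ago -- pick a timestamp just before your change).
SELECT bookid, location, backuplocation
FROM   books AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)
WHERE  backuplocation = 'B01';

-- Restore the old location values for the rows that were changed to 'W02'.
-- Assumes those rows can only have been changed by your update.
UPDATE books b
SET    b.location = (SELECT o.location
                     FROM   books AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE) o
                     WHERE  o.bookid = b.bookid)
WHERE  b.location = 'W02'
AND    b.backuplocation = 'B01';
```

This only works while the undo data for that timestamp is still retained, which is why the experts below keep steering toward an explicit backup table.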
Top Expert 2011

Commented:
- And to get exactly 150,000 rows, the query should use "AND ROWNUM <= 150000" (ROWNUM < 150000 returns only 149,999 rows).

Author

Commented:
Let's assume I can't implement Oracle Flashback.

How about this?

dump bookids to a file WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM < 150000

then I change the records one by one

commit

if there's a problem,

change back / roll back one by one


Stupid, but it should work.


But from an experienced Oracle DBA's point of view, how does it look?

Or is there a better idea?

akramrykProject Manager

Commented:
Dear,
In a situation where your table is being updated and changed constantly, you cannot flash back the database or tablespace without risking the loss of your new data. You have to make a backup available in some offline schema, compare both tables for differences in location, and then loop through the differences to restore the original locations.
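That compare-and-restore idea might be sketched as a single MERGE rather than a row-by-row loop, assuming a copy of the table exists in a backup schema (here called bkp, a hypothetical name) and bookid is unique:

```sql
-- Hypothetical: restore location from an offline backup copy where it differs.
-- bkp.books is assumed to hold the pre-change snapshot.
MERGE INTO books b
USING bkp.books o
ON (b.bookid = o.bookid)
WHEN MATCHED THEN
  UPDATE SET b.location = o.location
  WHERE b.location <> o.location;
```

A set-based MERGE like this is generally far faster than looping, and it leaves rows inserted after the snapshot untouched since they have no match in the backup copy.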

Author

Commented:
akramryk,

Same proposal as above: dump the bookids to a file (WHERE location = 'W01' AND backuplocation = 'B01' AND ROWNUM < 150000), change the records one by one, commit, and if there's a problem, change them back one by one.

Stupid, but it should work.
But from an experienced Oracle DBA's point of view, how does it look?
Or is there a better idea?
Top Expert 2011

Commented:
- Rather than dumping to a file, you can create a temp table to store those 150,000 records.

CREATE TABLE books_tmp
AS SELECT * FROM books
WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM <= 150000;

- Then work on the data that needs to be updated in the temp table instead, or use that temp table to match back the data that has been changed in the books table.
Top Expert 2011
Commented:
- Assuming you have bookid as the primary key in that table, and using the temp table I've mentioned above, you can update back/roll back with the following:

UPDATE books a SET location = 'W01'
WHERE EXISTS (SELECT * FROM books_tmp b WHERE b.bookid = a.bookid);

Author

Commented:
I need to check with my Oracle back-end team whether I am allowed to create a temp table in my schema.

What if I can't create a temp table either?

W
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
You will likely not be able to use ROWNUM to undo the changes you accidentally committed.

ROWNUM is a virtual column assigned to the result set. I assume new rows are being added all the time. Since there was no order to the update, there is no guarantee you will un-change the same 150,000 rows you initially changed.

If you cannot restore from a backup or flash back the change, you will need to manually verify ALL the rows, not just 150,000, and their appropriate values.

Author

Commented:
Yes, that is the point, slightwv.

So, how can we do this:

back up to a table and change the values at the same time, for consistency (using ROWNUM or whatever),

then roll back according to the backup table?

CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019
Commented:
>>so, how can we do this:

So this question is about how you want to do something, NOT about how to undo something you already did?

----

I would hope the table has a primary key (PK).

As long as you aren't changing PK values, you can create a backup copy of the table, do whatever you want to it, then change the values back from the backup table.

The problem you run into here is if legitimate updates happen between the time you make your backup table and the time you need to restore.

Below is a sample that shows how you can preserve original values using a backup table but it does not account for the legitimate updates that happen outside your 'test' updates.
drop table tab1 purge;

create table tab1 (
	col1	char(1),
	col2	char(1),
	col3	char(1),
	constraint tab1_pk primary key (col1)
)
/


insert into tab1 values('a','a','a');
insert into tab1 values('b','b','b');
insert into tab1 values('c','c','c');
commit;

--back it up
drop table tab1_bkup purge;
create table tab1_bkup as (select * from tab1);
alter table tab1_bkup add constraint tab1_bkup_pk primary key (col1);

--mess with tab1
update tab1 set col2='1' where col1='a';
update tab1 set col3='2' where col1='b';
update tab1 set col2='3', col3='3' where col1='c';
commit;

select * from tab1;

--put the values back
update ( select old_vals.col2 old_col2, old_vals.col3 old_col3, new_vals.col2 new_col2, new_vals.col3 new_col3
	from tab1_bkup old_vals, tab1 new_vals where old_vals.col1=new_vals.col1)
set new_col2=old_col2 , new_col3=old_col3
/

select * from tab1;


Author

Commented:
Ha! slightwv!
We are slowly getting there...

<quote>
Below is a sample that shows how you can preserve original values using a backup table but it does not account for the legitimate updates that happen outside your 'test' updates.
</quote>

I have one good condition here: even though records are written every second, once they are written, no one will change them except me. And I just want to undo/roll back something I changed and committed.

No other legitimate updates that happen outside my updates.

So, does your sample fit the profile?

W
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
>>And I just want to undo/rollback something I changed and committed.

Please clarify:  You have ALREADY updated and committed the changes by accident or are you wanting to do this in the future and are looking on how to preserve the original values?

As long as you have a backup table with the original values and primary key values, the approach I posted 'should' work. I cannot say with 100% certainty that it will, since I don't know the situation you are in.

Top Expert 2011

Commented:
- The steps explained by slightwv in ID: 37076049 are almost the same as what I explained earlier in ID: 37075415 and ID: 37075434 using a temp table. However, in ID: 37075446 you asked for an alternative to creating a temp table, didn't you?

Author

Commented:
<quote>
Please clarify:  You have ALREADY updated and committed the changes by accident or are you wanting to do this in the future and are looking on how to preserve the original values?
</quote>

I have not done anything yet, by accident or otherwise. I want to do it in the future; that's why I am consulting here.

I am planning to change some records (150,000 out of 1M++) in an ever-expanding table. In the future, after the update, in case of issues or problems, I need to change those 150,000 back to their original values.

Hope that helps.

W

Author

Commented:
OP Zaharin,

Yes. If possible, I want an alternative that doesn't create a temp table. You are right.

But I am learning the backup-table approach here.

My ultimate goal is to avoid creating a backup table, but if that is not possible, I may have to go with the backup-table option.

W
Top Expert 2011

Commented:
"I am planning to change some records (150,000 out of 1M++) in an ever-expanding table. In the future, after the update, in case of issues or problems, I need to change those 150,000 back to their original values."

- Then I would suggest you create a temp table, as we have explained and shown examples of. Then you will have a copy/backup of the actual table in case you need to roll back the data at any time.
Top Expert 2011

Commented:
- You can either create a temp table based on the <= 150,000 records (in ID: 37075415) or create a temp table based on all the data available in books, as shown in slightwv's sample (in ID: 37076049).
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
>>My ultimate goal is without creating a backup table

Not feasible if you want to commit the changes.

>>in case of issues or problems, I need to change those 150,000 back to original values.

To be honest, this whole plan sounds a little fishy.  I can see preserving the initial state of an important table for a large rollout or major change just in case the new code has issues.

But I'm not following why you would want to change a small subset of a large table then 'look for problems'.

You normally iron out these problems in testing.

This plan is along the lines of: I want to change a small number of people's address country from the United States to Canada, leaving the states alone, then see what happens.

Author

Commented:
slightwv:

:) Let me explain.

I am a Linux engineer. I am going to propose a purely commercial technical solution (not political, geopolitical, governmental, or unlawful).

I have everything in place regarding Oracle, but I am looking at this problem from a non-DBA, generic engineering point of view.

While proposing my solution and ideas, I have to follow a Change Management Procedure, and my solution, aka my change, will not be approved if I don't have a rollback plan.

My solution is simple: just a one-line SQL. But my rollback is a concern.

From the Oracle layer, I have everything in place to roll back if something goes wrong at the Oracle level.

In my capacity, I am only looking at my rollback SQL to undo the things I proposed to change.

I am consulting here as a first attempt at my problem. I have ideas but limited knowledge in various areas, and I'm looking for straightforward answers.

I'd be more than happy to respond to any further concerns or questions you may have.


Meanwhile, I thank you all for your comments and suggestions.

W
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
Thanks for that.

I was just curious what changing 150,000 rows of a 1 million row table would provide.

The 150,001st row could blow things up?

Author

Commented:
Exactly.

Actually, I wanted to say 25% or 15% or some percentage like that of my current payload, but that was too generic, so I came up with figures such as 1 million and 150,000.

I am trying to reduce the current payload by 25% or 15% or so on every system. But if I don't have a rollback plan, Change Management and my managers will kill me and can't cover me, especially if things don't work as expected.

So I'm just following the process strictly while brainstorming an idea.



W

Top Expert 2011

Commented:
- Why not schedule a daily backup of the tables using EXP or EXPDP? Then, at any time you need the old data back, import the table either into the same database (be careful with this), a new schema, or a staging database (on your laptop/development database if available), then insert/update the old data back into the production table.

Author

Commented:
OP Z,

I am confined to my job scope, so I can't do anything beyond it.

My focus is purely, _purely_, my solution and my rollback.

Nothing else matters.

W
Top Expert 2011

Commented:
- Since you can't implement Flashback, what about the temp/backup table solution as proposed? I can't think of any other method to update and then 'roll back' at a later stage.
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
What it boils down to is a way to uniquely identify the 150,000 updated rows and know their original values.

Before the test update are there already locations with 'W02' and backuplocations of 'B01'?  Are all the backuplocation values the same or can they be for the 'test'?

If your update will create unique combinations with W01 and B01 then it would be easy to target those rows to update them back if ALL the backuplocations were the same values as well.

Is there possibly another insignificant (for what you are testing) column that you can update to uniquely identify these rows?

Maybe a single character flag column like gender or some Y/N column?  If so it might be possible to update that column as well.

The 150,000 test rows would be all 'Y' flagged rows.  Then you might be able to update them to a 'Z'?  Since the app will NEVER, I hope, put a 'Z' into a Y/N column you can easily target the rows you modified.

Again, this assumes all the updated columns were the exact same 'before' values.

If the old values of backuplocation are different then I think the temp table with a duplicate primary key as the main table is the way to go here since it is the only way to preserve the 'before' values.
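The flag-column idea above might be sketched like this, assuming such a column exists (the name yn_flag and its values are hypothetical, and this assumes all updated rows shared the same before-values):

```sql
-- Hypothetical flag column 'yn_flag': mark the test rows with a value
-- the application never writes, so they can be targeted later.
UPDATE books
SET    location = 'W02', yn_flag = 'Z'
WHERE  location = 'W01'
AND    backuplocation = 'B01'
AND    yn_flag = 'Y'
AND    ROWNUM <= 150000;
COMMIT;

-- Rollback: only the 'Z'-flagged rows were touched by the test,
-- so the rollback is exact even though the table kept growing.
UPDATE books
SET    location = 'W01', yn_flag = 'Y'
WHERE  yn_flag = 'Z';
COMMIT;
```

The marker column effectively serves as the "backup" of row identity, though it cannot preserve differing before-values the way a backup table does.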
johnsoneSenior Oracle DBA
CERTIFIED EXPERT

Commented:
From a change management perspective, we always allowed these backup tables for changes. They would follow a specific naming convention and would be exported and dropped after a period of time. The export was just in case someone came along a year later and complained (believe me, it happened).
johnsoneSenior Oracle DBA
CERTIFIED EXPERT
Commented:
Also, we tended to use ROWIDs in our temporary tables. As long as you understand them and their limitations, they are faster than pure IDs.

CREATE TABLE books_tmp
AS SELECT rowid rid, books.* FROM books
WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM < 150000;

update books
set location = 'W02'
where rowid in (select rid from books_tmp);

Author

Commented:
slight,

I will provide a sample data.

johnsone:

You are right. If nothing happens, nothing happens. If something happens, somebody will be held accountable. I want to prove that I followed the process and have the data to prove it.

W
johnsoneSenior Oracle DBA
CERTIFIED EXPERT

Commented:
Really, the temporary table is the safest.  It gets backed up with the database and everything you need to restore the previous values is there.
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
>>I will provide a sample data.

Provide sample data to whom?

Author

Commented:
to you slightwv: and to all so that we are on the same page.
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
>>to you slightwv

I'm not sure how sample data will help me or others here.

The data by itself without a complete understanding of your system and applications won't really provide much insight.
Problem solved! Thank you, Experts. We have found two solutions.


1. Backup Table Solution

My Oracle team will create a backup table for me in case I go with the backup-table option.


2. Backup SQL Solution

My senior has shown me how to do a backup SQL.

- Create a backup SQL:
select 'update into books values ('''||bookid||''','''||location||''','''||backuplocation||''');' from books where location = 'W01' AND backuplocation = 'B01' AND ROWNUM <= 150000;
- Then do the following update.
UPDATE books SET location = 'W02'
WHERE location = 'W01'
AND backuplocation = 'B01'
AND ROWNUM <= 150000;


I will close this case.

Author

Commented:
Thank you for your "time" and experts opinions. Appreciate it.
johnsoneSenior Oracle DBA
CERTIFIED EXPERT

Commented:
I did not notice this before.  Your #2 solution is incorrect.  Your backup SQL and your update statement are not guaranteed to generate the same data set.  ROWNUM is assigned as rows are retrieved and is not guaranteed to be the same if you run the query twice.  Especially if new data has been inserted that would fit your criteria.

Most likely the data sets would be the same, but it is not guaranteed.  Your senior should have told you that.

If you want to generate the backup SQL, I would create a temporary table or put the processing into a PL/SQL block.
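A sketch of that PL/SQL idea, assuming a backup table (here called books_bkup, a hypothetical name, pre-created with the same columns as books plus a leading rid column): capture the ROWIDs once, then use the same list for both the backup insert and the update, so both operations are guaranteed to cover the same rows.

```sql
-- Hypothetical PL/SQL block: back up and update the same row set in one transaction.
DECLARE
  TYPE rid_tab IS TABLE OF ROWID;
  l_rids rid_tab;
BEGIN
  -- Capture the exact rows to change, once.
  SELECT rowid BULK COLLECT INTO l_rids
  FROM   books
  WHERE  location = 'W01'
  AND    backuplocation = 'B01'
  AND    ROWNUM <= 150000;

  -- Save the before-image into the (assumed pre-created) backup table.
  FORALL i IN 1 .. l_rids.COUNT
    INSERT INTO books_bkup
      SELECT rowid, b.* FROM books b WHERE rowid = l_rids(i);

  -- Update exactly those rows -- no second ROWNUM query, so no drift.
  FORALL i IN 1 .. l_rids.COUNT
    UPDATE books SET location = 'W02' WHERE rowid = l_rids(i);

  COMMIT;
END;
/
```

This avoids the flaw johnsone points out: running the ROWNUM query twice can select two different sets of rows, but a single captured ROWID list cannot.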
CERTIFIED EXPERT
Most Valuable Expert 2012
Distinguished Expert 2019

Commented:
>> ROWNUM is assigned as rows are retrieved and is not guaranteed to be the same if you run the query twice

Agreed.  re: http:#a37075745

>>update into books values

This is NOT Oracle syntax and will not run.
