  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 8551
  • Last Modified:

Insert into table, Sybase

Hello Experts,

I need to copy the data from Table1 to Table2

INSERT INTO <Table2>
SELECT * FROM <Table1>

What would be the best way to do this? Insert in small batches? Table1 contains a million records.

Thanks
Roop
DECLARE @count int
SELECT @count = count(*) FROM <Table1>

SET ROWCOUNT 10000

WHILE (@count > 0)

BEGIN

INSERT INTO <Table2>
SELECT * FROM <Table1>

SELECT @count = @count - 1

END

SET ROWCOUNT 0

rbhargaw Asked:
1 Solution
 
reb73Commented:
Is there an identity column or a numeric PK column in this table?

If yes, you could split it into batches of about 50K-100K rows and insert within a loop using ranges.
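A minimal sketch of the range-batching reb73 describes, assuming a numeric PK column named id (a hypothetical name; substitute your own) and placeholder table names:

```sql
-- Hypothetical sketch: copy in ranges of a numeric PK ("id"), 50K rows per batch
DECLARE @lo int, @hi int, @batch int
SELECT @batch = 50000
SELECT @lo = min(id), @hi = max(id) FROM Table1

WHILE (@lo <= @hi)
BEGIN
    INSERT INTO Table2
    SELECT * FROM Table1
     WHERE id >= @lo AND id < @lo + @batch

    SELECT @lo = @lo + @batch
END
```

Each iteration touches a distinct key range, so no row is copied twice and each INSERT is a smaller transaction.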
 
rbhargawAuthor Commented:
Table2 will be just the backup of Table1.
 
reb73Commented:
Even so, is there an identity column or a numeric PK column in table1?

 
rbhargawAuthor Commented:
No, it does not have an identity column or a PK column.
 
grant300Commented:
In that case, you can do it all in one statement, including creating the table itself, by using

SELECT *
   INTO BackupTable
  FROM SourceTable

SELECT/INTO is a minimally logged operation (only the metadata and space allocation are logged), so you don't have to worry about having enough log space. It is also very fast, since it uses the same mechanism internally as BCP.

Remember to drop the table you created before you run this again, as SELECT/INTO does not work with an existing table.
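Putting those two steps together, a sketch using the same placeholder table names (the sysobjects existence check is one common way to do this in ASE; adjust for your setup):

```sql
-- Drop the old backup if it exists, then recreate it in one minimally logged pass
IF EXISTS (SELECT 1 FROM sysobjects WHERE name = 'BackupTable' AND type = 'U')
    DROP TABLE BackupTable

SELECT *
  INTO BackupTable
  FROM SourceTable
```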

Regards,
Bill
 
rbhargawAuthor Commented:
What does an identity column or PK column have to do with it?
 
grant300Commented:
Well, a PK is not really a problem in either case; however, it will add overhead to the process of getting data into the new table.

An identity column is another issue altogether. The system assigns a value to an identity column as the row is inserted into the table. Normally you have no control over it and, in fact, if you attempt to insert or update an identity column you will get an error.

Depending on the version of Sybase you are running, there are various methods for dealing with identity columns. For instance, you can turn the identity_insert option on, which will allow you to insert values into an identity column. Check the ASE documentation on identity columns for the version you are using.
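A hedged sketch of the option Bill mentions; the table and column names are placeholders, and the exact behavior can vary by ASE version, so check your documentation:

```sql
-- Allow explicit values in Table2's identity column for the copy, then switch it back off
SET IDENTITY_INSERT Table2 ON

-- With identity_insert on, an explicit column list is required (names are placeholders)
INSERT INTO Table2 (id, col1, col2)
SELECT id, col1, col2 FROM Table1

SET IDENTITY_INSERT Table2 OFF
```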

Regards,
Bill
 
rbhargawAuthor Commented:
Thanks, Bill, for the explanation.

Can you also review the code? <Table1> contains 1,200 rows, but after running this query <Table2> contains 12,000.

I just want those 1,200 rows copied to <Table2>, but in chunks of 100.

Can you let me know where I am going wrong?
DECLARE @max int
 
SELECT @max = 0
SELECT @max = count(*) from <Table1>
 
SET ROWCOUNT 100
WHILE (@max > 0)
 
BEGIN 
    
    INSERT INTO <Table2> 
    SELECT * FROM <Table1>
    select @max = @max - 1 
  
END
 
SET ROWCOUNT 0

 
grant300Commented:
What you are doing will not work at all. In a nutshell: how does your code know which 100 rows to get/insert each time through the loop?

Chances are you are getting the same 100 rows over and over. Even if you were going to make this work, SELECT @max = @max - 1 should be @max - 100.

Unless there is something you haven't told us, drop Table2 and issue a SELECT/INTO and be done with it.

Regards,
Bill
 
rbhargawAuthor Commented:
Actually, we already have a DROP of Table2 and a SELECT/INTO in the existing code.

But Table1 contains CC#s, and Table2 is a backup table taken every month. Table2 just sits there containing CC#s; no one is using them. So I was asked to use TRUNCATE instead of dropping, and then to insert in small batches rather than one big transaction. That is why I am looking for a solution.
 
grant300Commented:
You have been sent on a wild goose chase. INSERT/SELECT with chunking of the source table will have substantially more impact than one SELECT/INTO. Add to that the fact that you will probably have to add an identity column to the source table to get the chunking to work at all. Unless they will let you change the table, and then find all the places in the application code that break, I would abandon that idea altogether.

DROP does a truncate as well as removing the metadata entries from the system tables. No big deal at all.

Doing the SELECT/INTO, even for the whole table, is much lower overhead than INSERT/SELECT, even in small batches. With INSERT/SELECT you have all the logging to contend with, as well as the inefficiency of trying to "chunk" the source table.

Also, since SELECT/INTO is minimally logged, there really is not much to the transaction. If you are holding a lock on the source table that is getting in the way, we can probably fix that by setting the transaction isolation level to 0. That, plus the much better speed, means it will actually lock things up for less time anyway.
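If readers blocking on the source table are the concern, a sketch of the isolation-level-0 variant Bill mentions (placeholder table names; level 1 is ASE's default, which this restores afterwards):

```sql
-- Isolation level 0 = dirty reads: the SELECT side takes no shared locks on SourceTable
SET TRANSACTION ISOLATION LEVEL 0

SELECT *
  INTO BackupTable
  FROM SourceTable

SET TRANSACTION ISOLATION LEVEL 1
```

Dirty reads mean the copy can see uncommitted changes, which is usually acceptable for a monthly backup snapshot but worth confirming for your case.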

Regards,
Bill
 
rbhargawAuthor Commented:
Thanks, Bill. We will go with the SELECT/INTO query.
 
erlendurhCommented:
Just so you know, if you don't want to drop the table you can use the EXISTING TABLE keywords.
You would then have to delete the entries from it first, since you are inserting all the data from the source table.

select *
into EXISTING TABLE [backup_database..]backup_table
from [source_database..]source_table
go
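Combined with the truncate requirement from earlier in the thread, a sketch (database and table names are placeholders; note that INTO EXISTING TABLE is most commonly documented for proxy tables in ASE, so verify your version supports it for local tables):

```sql
-- Empty the backup first, then refill it without dropping/recreating
TRUNCATE TABLE backup_database..backup_table
go

SELECT *
  INTO EXISTING TABLE backup_database..backup_table
  FROM source_database..source_table
go
```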

regards
Elli
