How can I take the 900 records from a file and pass them into the SELECT statement?

Hello Experts,
I have a file which contains serial numbers. The total number of serial numbers may be 10, 100, 10000, or more.
I am using these serial numbers to fetch data from SQL Server. There is a simple SELECT statement, and in its IN clause I pass the serial numbers. But there is a limitation on the IN clause: we can pass only 1000 values.

So I have to take the first 1000 records from the file, pass them into the SQL statement, and then pass the next 1000 serial numbers, and so on.

How can I do this in a Unix shell script?

Could you please help.
digs developerAsked:

digs developerAuthor Commented:
Please help experts ..
nemws1Database AdministratorCommented:
What scripting/programming languages do you have access to or are familiar with?  Obviously, we have to do something client-side.

Here's something in bash. It will generate SELECT statements, each with CHUNK_SIZE values in the IN() list, reading from "bigfile", which has each number on a separate line.



CHUNK_SIZE=1000
TMP1=/tmp/serials.$$      # working copy of the input list (it gets consumed)
TMP2=/tmp/serials2.$$
cp bigfile $TMP1

SQL='SELECT f1,f2,f3,f4
FROM sometable
WHERE serialnumber IN (INPUT_TEMPLATE);'

ISIZE=`wc -l $TMP1 | awk '{print $1}'`
echo $ISIZE
while [ $ISIZE -gt 0 ]
do
    # join the next CHUNK_SIZE lines into a comma-separated list
    SERNOS=$(echo `head -$CHUNK_SIZE $TMP1 | sed 's/$/,/'` | sed 's/,$//')
    NEW_SQL=`echo $SQL | sed "s/INPUT_TEMPLATE/$SERNOS/"`
    echo $NEW_SQL
    # drop the chunk just processed and loop on the remainder
    OFFSET=`expr $CHUNK_SIZE + 1`
    tail -n +$OFFSET $TMP1 > $TMP2
    cp $TMP2 $TMP1
    ISIZE=`wc -l $TMP1 | awk '{print $1}'`
done
rm -f $TMP1 $TMP2


digs developerAuthor Commented:
Thank you so much !!!!

I will try with the help of the above code.

Thank you!!

digs developerAuthor Commented:
File data is below


So from the above data I have to take the first 1000 records, run the query, then take the next 1000, and so on up to the last.
nemws1Database AdministratorCommented:
Yup. My script does that. I add the commas for you, though.

Change CHUNK_SIZE to 1000, although I'd suggest a smaller number; you could run into an SQL statement-size limit by selecting 1000 at a time.
nemws1Database AdministratorCommented:
If you already have commas in your file, change this line from this:
    SERNOS=$(echo `head -$CHUNK_SIZE $TMP1 | sed 's/$/,/'` | sed 's/,$//')


to this:
    SERNOS=$(echo `head -$CHUNK_SIZE $TMP1`)

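As a quick illustration (using a hypothetical three-line file in place of $TMP1), the two variants behave like this; note the joined list picks up a space after each comma, which SQL happily ignores:

```shell
#!/bin/sh
CHUNK_SIZE=3

# Sample file without commas, standing in for $TMP1
printf '111\n222\n333\n' > nums
# sed appends a comma to every line, then the trailing comma is stripped
SERNOS=$(echo `head -$CHUNK_SIZE nums | sed 's/$/,/'` | sed 's/,$//')
echo "$SERNOS"    # 111, 222, 333

# Sample file that already carries commas (no comma on the last line)
printf '111,\n222,\n333\n' > nums_csv
SERNOS=$(echo `head -$CHUNK_SIZE nums_csv`)
echo "$SERNOS"    # 111, 222, 333
```

Either way, the result is ready to drop into the IN (...) list.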

johnsoneSenior Oracle DBACommented:
What I would do is use an external table for your list of IDs. Then you can query it like a table and get your results with one query rather than having to run multiple queries. An in-list is limited to 1000 entries, but if the list is a subquery there is no limit (essentially it is a join at that point). You can eliminate the subquery and in-list entirely by writing the query as a join.

I don't have a test case right now, but if you want to explore this, let me know, I will see if I can come up with a quick test case.
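Not tested against a live instance, so treat this as a sketch: assuming Oracle, a DIRECTORY object (serial_dir here) pointing at the folder holding the serial-number file (serials.txt), and a numeric serial column, the DDL and the single-query join would look roughly like the following. The script only writes the SQL out for review; running it via sqlplus is left to you:

```shell
#!/bin/sh
# Sketch only: serial_dir, serials.txt, and the column names are placeholders.
cat <<'EOF' > external_table.sql
CREATE TABLE serial_list (
  serialnumber NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY serial_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE)
  LOCATION ('serials.txt')
);

-- One query instead of N chunked ones; a join has no 1000-entry limit
SELECT t.f1, t.f2, t.f3, t.f4
FROM sometable t
JOIN serial_list s ON s.serialnumber = t.serialnumber;
EOF
echo "wrote external_table.sql"
```

Review external_table.sql, adjust the names to your schema, then run it from sqlplus.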
digs developerAuthor Commented:
Thank you !!

Here we are using sqlldr only.
I am following your steps.
Thank you !
digs developerAuthor Commented:
Hello experts,

I am not getting it to work.

and my SQL statement is in a separate file, like:
SELECT f1,f2,f3,f4
FROM sometable
WHERE serialnumber IN (

I have to pass the serial numbers in the IN clause.

So my question is: I want to fetch the first 1000 records and place them in a separate file, then execute the SQL, then take the next 1000 records, and exit when all the records are finished.

Please help.
johnsoneSenior Oracle DBACommented:
If you don't want to use an external table, which is essentially a sqlldr load into an in-memory table, then I would suggest the split command.

Here is the man page for it ->

What you would do is a command like this:

split -l 1000 yourfile yourfile.

This will give you a set of files yourfile.aa, yourfile.ab, ...  Each file will have 1000 lines; any partial set of 1000 left over will be in the last file. Then just loop through all the files yourfile.* and that will give you one set at a time.
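The steps above can be sketched as a small loop. The demo input, file names, and the echo standing in for your SQL client call are all placeholders; swap in your real file and client (sqlplus, isql, ...):

```shell
#!/bin/sh
# Demo input: stands in for your real file of serial numbers, one per line
seq 1 2500 > yourfile

# Split into 1000-line pieces: yourfile.aa, yourfile.ab, yourfile.ac
split -l 1000 yourfile yourfile.

# Build one SELECT per piece; replace the echo with your SQL client invocation
for f in yourfile.??; do
    SERNOS=$(paste -sd, "$f")   # join the piece's lines with commas
    echo "SELECT f1,f2,f3,f4 FROM sometable WHERE serialnumber IN ($SERNOS);"
done > queries.sql

rm -f yourfile.??
```

Any leftover partial chunk (here, the last 500 numbers) simply produces a shorter IN list; nothing special is needed for it.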

Question has a verified solution.
