• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 638

How can I take 900 records from a file and pass them into the SELECT statement?

Hello Experts,
I have a file which contains serial numbers. The total count may be 10, 100, 10,000, or more.
I am using these serial numbers to fetch data from the SQL server. It is a simple SELECT statement, and I pass the serial numbers in its IN clause. But there is a limitation on the IN clause: we can pass only 1000 values.

So I have to take the first 1000 records from the file, pass them into the SQL statement, and then pass the next 1000 serial numbers, and so on.

How can I do this in a Unix shell script?

Could you please help?
Asked by: digs developer
1 Solution
 
digs developer (Author) Commented:
Please help, experts.
 
nemws1 Commented:
What scripting/programming languages do you have access to or are familiar with? Obviously, we have to do something client-side.

Here's something in bash. It generates SELECT statements, each with CHUNK_SIZE values in the IN() section, reading from "bigfile", which has one number per line.

#!/bin/bash

INPUT="bigfile"     # one serial number per line
CHUNK_SIZE=500      # how many values to put in each IN() list

# Template query; INPUT_TEMPLATE gets replaced with each chunk of values.
SQL='SELECT f1,f2,f3,f4
FROM sometable
WHERE serialnumber IN (INPUT_TEMPLATE);'

# Working copies of the input file; $$ is the shell's process ID,
# which makes the temp file names unique to this run.
TMP1=/tmp/dosplitsql1.$$
TMP2=/tmp/dosplitsql2.$$
cp "$INPUT" "$TMP1"

ISIZE=$(wc -l < "$TMP1")
echo "$ISIZE"
while [ "$ISIZE" -gt 0 ]
do
    # Take the next chunk and join its lines with commas.
    SERNOS=$(echo $(head -n "$CHUNK_SIZE" "$TMP1" | sed 's/$/,/') | sed 's/,$//')
    # Substitute the chunk into the template ($SQL is deliberately unquoted
    # so the multi-line statement collapses onto a single line for sed).
    NEW_SQL=$(echo $SQL | sed "s/INPUT_TEMPLATE/$SERNOS/")
    echo "$NEW_SQL"
    # Drop the chunk just processed and loop on the remainder.
    OFFSET=$(expr "$CHUNK_SIZE" + 1)
    tail -n +"$OFFSET" "$TMP1" > "$TMP2"
    cp "$TMP2" "$TMP1"
    ISIZE=$(wc -l < "$TMP1")
done

rm -f "$TMP1" "$TMP2"   # clean up the temporary files
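If it helps, the generated statements could then be captured to a file and fed to whatever command-line SQL client you use. A hypothetical example (the script name, sqlplus, and the login are placeholders, not part of the thread):

# Collect every generated SELECT into one file, then run it in one session.
./dosplitsql.sh > all_queries.sql
sqlplus -s user/password @all_queries.sql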

 
digs developer (Author) Commented:
Thank you so much!

I will try it with the help of the above code.

Thank you!
 
digs developer (Author) Commented:
The file data is below:

1001,
1110,
1111,
2121,
2335,
5000,
.......

From the above data I have to take the first 1000 records, run the query, and then take the next 1000, up to the last.
 
nemws1 Commented:
Yup, my script does that. I add the commas for you, though.

Change CHUNK_SIZE to 1000, although I'd suggest a smaller number; you could run into an SQL statement-size limit by selecting 1000 at a time.
 
nemws1 Commented:
If you already have commas in your file, change this line from this:
    SERNOS=$(echo `head -$CHUNK_SIZE $TMP1 | sed 's/$/,/'` | sed 's/,$//')

to this:
    SERNOS=$(echo `head -$CHUNK_SIZE $TMP1`)

 
johnsone (Senior Oracle DBA) Commented:
What I would do is use an external table for your list of IDs. Then you can query it like a table and get your results with one query rather than having to run multiple queries. An in-list is limited to 1000 entries, but if the list is a subquery there is no limit (essentially it is a join at that point). You can eliminate the subquery and the in-list entirely by writing the query as a join.

I don't have a test case right now, but if you want to explore this, let me know and I will see if I can come up with a quick one.
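For illustration only, here is a minimal sketch of that idea (not the commenter's actual test case), assuming an Oracle database, sqlplus, and hypothetical names throughout (data_dir, serial_list, sometable, and the login are all placeholders):

#!/bin/bash
# Sketch: read the serial-number file as an Oracle external table, then
# fetch everything with a single join instead of chunked IN() lists.
# All object names, paths, and the login below are placeholders.

sqlplus -s user/password <<'EOF'
-- Directory object pointing at the folder that holds the file
-- (creating it requires DBA privileges).
CREATE OR REPLACE DIRECTORY data_dir AS '/path/to/files';

-- External table: Oracle reads "bigfile" in place; no separate sqlldr run.
CREATE TABLE serial_list (
    serialnumber NUMBER
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('bigfile')
);

-- One query instead of many: join against the external table.
SELECT t.f1, t.f2, t.f3, t.f4
FROM sometable t
JOIN serial_list s ON t.serialnumber = s.serialnumber;
EOF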
 
digs developer (Author) Commented:
Thank you!

Here they are using sqlldr only.
I am following your steps.
Thank you!
 
digs developer (Author) Commented:
Hello experts,

I don't understand these lines:
TMP1=/tmp/dosplitsql1.$$
TMP2=/tmp/dosplitsql2.$$

Also, my SQL statement is in a separate file, like:
SELECT f1,f2,f3,f4
FROM sometable
WHERE serialnumber IN (

I have to pass the serial numbers in the IN clause.

So my question is: I want to fetch the first 1000 records and place them in a separate file, then execute the SQL, then take the next 1000 records, and exit when all the records are finished.

Please help.
 
johnsone (Senior Oracle DBA) Commented:
If you don't want to use an external table, which is essentially a sqlldr load into an in-memory table, then I would suggest the split command.

Here is the man page for it -> http://unixhelp.ed.ac.uk/CGI/man-cgi?split

What you would do is run a command like this:

split -l 1000 yourfile yourfile.

This will give you a set of files yourfile.aa, yourfile.ab, yourfile.ac, ... Each file will have 1000 lines, and any partial set of 1000 left over will be in the last file. Then just loop through the files yourfile.* and that will give you one set at a time, as in the sketch below.
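For illustration, a minimal sketch of that loop (not from the thread), assuming sqlplus as the client; the login, table, and column names are placeholders, and the input lines are assumed to end in commas as in the sample data above:

#!/bin/bash
# Sketch: chunk the file with split, then run one query per chunk file.

split -l 1000 yourfile yourfile.

for f in yourfile.??
do
    # Lines already end in commas (e.g. "1001,"), so join them into one
    # string and strip the trailing comma from the last value.
    SERNOS=$(tr -d ' \n' < "$f" | sed 's/,$//')

    sqlplus -s user/password <<EOF
SELECT f1,f2,f3,f4
FROM sometable
WHERE serialnumber IN ($SERNOS);
EOF
done

rm -f yourfile.??   # remove the chunk files when finished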
