YZlat
Need help modifying ksh script
I have the following script that reads database names from a file in a loop, logs in to sqlplus, runs a select statement against each database, and writes the results to a file.
grep "^[a-zA-Z]" $FILE1 | while read LINE
do
OUTFILE=/u01/test/logs/output_`date +"%Y%m%d%H%M"`.log
sqlplus -s "/ as sysdba" <<! >> $OUTFILE
COLUMN field1 format 999999.90
COLUMN field2 format a20
COLUMN field3 format 999999.90
set lines 132
SELECT field1, field2, field3 FROM TABLE1;
exit;
!
done
The output in my output file looks like this:
field1 field2 field3
1 test 34
2 test2 56
field1 field2 field3
5 test4 6778
6 test5 332
field1 field2 field3
8 test5 346
9 test6 123
Is there any way I could modify my script to have only one header line with the column names and the data from all the databases under it?
Something like:
field1 field2 field3
1 test 34
2 test2 56
5 test4 6778
6 test5 332
8 test5 346
9 test6 123
ASKER CERTIFIED SOLUTION
ASKER
Thanks! I will test it out
ASKER
woolmilkporc, in the example here I used field1, field2 as column titles, but in reality each one consists of several words, for example "this is first column", "this is second". How do I need to modify your code to accommodate this change?
You need to specify a string which starts in the first column and is of sufficient length to make it unique.
We can place the definition of this string outside of the awk statement to make it more readable/customizable.
The code for the example in your last post could look like this:
awk -v S="^This is first column" '!/^$/ {if ($0~S && NR==1) print; else if ($0~S && NR>1) next; else print}' $OUTFILE > ${OUTFILE}.clean
All you have to customize is the unique string value in the -v S="^unique string" part. If the string in question does not start in column 1, omit the caret "^":
... -v S="unique string" ...
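As a quick sanity check, that one-liner can be exercised on a mock file (file name and contents invented to mirror the multi-word-header example; note the NR==1 branch relies on the header being the very first line of the output file):

```shell
# Mock output file: two blocks, each repeating the multi-word header.
cat > /tmp/mock_out.log <<'EOF'
This is first column  field2  field3
1  test   34

This is first column  field2  field3
2  test2  56
EOF

# Keep the header only when it is the very first line (NR==1),
# skip later occurrences of it, drop blank lines, print everything else.
result=$(awk -v S="^This is first column" '!/^$/ {if ($0~S && NR==1) print; else if ($0~S && NR>1) next; else print}' /tmp/mock_out.log)
printf '%s\n' "$result"
```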
ASKER
I have changed my sql, so now I need to make sure the second column is unique; its column name is TITLE. I tried the code below but it didn't change anything in my file at all. What am I doing wrong?
awk '!/^$/ {if ( $1~"^TITLE" && NR==1) print $1; else if ($1~"^TITLE" && NR>1) next; else print}' $OUTFILE > ${OUTFILE}.clean
SOLUTION
ASKER
so $0 is the whole input line, then the first column is $1 and second column is $2?
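A throwaway one-liner (input invented) confirms how awk splits records into fields:

```shell
# $0 is the entire record; $1, $2, ... are the whitespace-separated fields.
line_demo=$(echo "1 test 34" | awk '{print $0 "|" $1 "|" $2}')
echo "$line_demo"   # 1 test 34|1|test
```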
I have tried:
awk '!/^$/ {if ( $0~"^TITLE" && NR==1) print $0; else if ($0~"^TITLE" && NR>1) next; else print}' $OUTFILE > ${OUTFILE}.clean
and that returned
field1 TITLE field3
------- -------- --------
1 test 34
field1 TITLE field3
------- -------- --------
2 test2 56
field1 TITLE field3
------- -------- --------
5 test4 6778
field1 TITLE field3
------- -------- --------
6 test5 332
field1 TITLE field3
------- -------- --------
8 test5 346
field1 TITLE field3
------- -------- --------
9 test6 123
Then I tried
awk '!/^$/ {if ( $2~"^TITLE" && NR==1) print $2; else if ($2~"^TITLE" && NR>1) next; else print}' $OUTFILE > ${OUTFILE}.clean
and that returned
---- -------- --------
1 test 34
---- -------- --------
2 test2 56
---- -------- --------
5 test4 6778
---- -------- --------
6 test5 332
---- -------- --------
8 test5 346
---- -------- --------
9 test6 123
Neither is the desired result.
I also tried
awk '!/^$/ {if ( $2~"^TITLE" && NR==1) print $0; else if ($2~"^TITLE" && NR>1) next; else print}' $OUTFILE > ${OUTFILE}.clean
But that returned the same as the second result set here
SOLUTION
ASKER
I guess I underestimated the dashes.
The last code you posted returns only the last line of data
I think we can't continue without a real life example of your output file.
ASKER
By the way, when I replaced $2 with $0 I got all the lines, but without the dashes in between and with column names over every line.
ASKER
Here are the output file contents:
SERVER TITLE FIRST VALUE SECOND VALUE BACKUP SECOND VALUE
-------------------- --------- -------------------- ------------------ --------------------
test TITLE1 11.165 9.212 1.952
SERVER TITLE FIRST VALUE SECOND VALUE BACKUP SECOND VALUE
-------------------- --------- -------------------- ------------------ --------------------
test2 TITLE2 964.578 884.578 79.997
SERVER TITLE FIRST VALUE SECOND VALUE BACKUP SECOND VALUE
-------------------- --------- -------------------- ------------------ --------------------
test3 TITLE3 171.753 168.253 3.499
ASKER
By the way, could you tell me what this piece does, immediately following the awk command?
!/^$/
SOLUTION
Personally, I would look at what I suggested in the first post:
Manually add the headers to the output file and turn them off in the sqlplus output.
Then all you need to worry about is blank lines and that is easy.
I realize your actual ksh script has changed from what you posted in the original question but I'll use it as an example:
OUTFILE=/u01/test/logs/output_`date +"%Y%m%d%H%M"`.log
echo "SERVER TITLE FIRST VALUE SECOND VALUE BACKUP SECOND VALUE" >>$OUTFILE
echo "-------------------- --------- -------------------- ------------------ --------------------" >>$OUTFILE
grep "^[a-zA-Z]" $FILE1 | while read LINE
do
sqlplus -s "/ as sysdba" <<! >> $OUTFILE
COLUMN field1 format 999999.90
COLUMN field2 format a20
COLUMN field3 format 999999.90
set lines 132
set pages 0
set feedback off
SELECT field1, field2, field3 FROM TABLE1;
exit;
!
done
# remove blank lines (write to a new file; grep cannot safely read from and append to the same file)
grep -v "^$" $OUTFILE > ${OUTFILE}.clean
mv ${OUTFILE}.clean $OUTFILE
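Since sqlplus is not runnable everywhere, here is a minimal mock of that approach (paths, loop values, and row data invented): write the header once before the loop, let each iteration append header-less rows, then strip blank lines into a separate file.

```shell
OUTFILE=/tmp/mock_report.log

# Write the header once, before the loop.
echo "field1 field2 field3" > $OUTFILE

# Stand-in for the per-database sqlplus calls run with "set pages 0":
# each call produces data rows only, possibly followed by a blank line.
for db in db1 db2; do
    printf '1 test 34\n\n' >> $OUTFILE
done

# Strip blank lines into a new file rather than appending to the input.
grep -v "^$" $OUTFILE > ${OUTFILE}.clean
```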
ASKER
No, the titles are not really title1, title2, and title3, I just don't want to post the actual titles. So using TITLE to uniquely identify the row would be fine
I can't continue this way. Do you really want me to guess what your data might look like?
>>No, the titles are not really title1, title2, and title3
To add to what woolmilkporc posted:
If you post sample data, you get sample scripts. With sample scripts, it is then up to you to learn what the script is doing then make the necessary changes on your own to get it working.
If you don't know how to make the necessary changes, we cannot provide actual scripts without actual data.
Why can't you modify your output file?
Whatever your header contains, it will be taken care of.
Try this:
grep "^[a-zA-Z]" $FILE1 | while read LINE
do
OUTFILE=/u01/test/logs/output_`date +"%Y%m%d%H%M"`.log
sqlplus -s "/ as sysdba" <<! >> $OUTFILE
COLUMN field1 format 999999.90
COLUMN field2 format a20
COLUMN field3 format 999999.90
set lines 132
SELECT field1, field2, field3 FROM TABLE1;
exit;
!
done
header=`head -1 $OUTFILE`
temp=/u01/test/logs/temp.log
# delete the dash rulers and every header occurrence, then put a single header back on top
# ($header is used as a sed pattern, so double quotes are needed for expansion)
sed -e '/--------/d' -e "/$header/d" $OUTFILE > $temp
{ echo "$header"; cat $temp; } > ${temp}.new
mv ${temp}.new $OUTFILE
BR,
Magento
ASKER
The data in the column TITLE could be anything; it changes. The only thing consistent is the column name, which is TITLE.
ASKER
Finally got it to work!
Here is what worked:
awk '!/^$/ {if ($2~"^TITLE" && NR>2 ) next; else print}' $OUTFILE > ${OUTFILE}.clean
Thanks to all those trying to help!
ASKER
This is what worked:
awk '!/^$/ {if ($2~"^TITLE" && NR>2 ) next; else print}' $OUTFILE > ${OUTFILE}.clean
This leaves the dashes in the report.
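For reference, the accepted one-liner behaves like this on mock data (server names and TITLE values invented; note it assumes the actual TITLE column values do not themselves start with the string TITLE, otherwise data rows would be dropped too):

```shell
cat > /tmp/mock_titles.log <<'EOF'
SERVER TITLE FIRST
-------------------- --------- -----
test Sales 11.165

SERVER TITLE FIRST
-------------------- --------- -----
test2 Marketing 964.578
EOF

# Drop blank lines; keep the first header and its dashes (NR<=2);
# skip every later line whose second field starts with TITLE.
cleaned=$(awk '!/^$/ {if ($2~"^TITLE" && NR>2 ) next; else print}' /tmp/mock_titles.log)
printf '%s\n' "$cleaned"
```

The dash rulers stay in the output (their second field is dashes, not TITLE), which matches the behavior described above.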
Thx for the points.
ASKER
Yes, but I have no issue with the dashes; all I wanted was to have the column names listed only once. I know how to remove the dashes now, but I prefer to keep them in.
Have you tried my comment - 39262012 ?
Is it not working as you expected?
The other way I can think of is to have the do loop generate a series of UNION ALL select statements, then call sqlplus once using the generated select.