sunshine737

asked on

How to remove header and footer from a file in Unix?

e.g. file f1.txt contains:

header|Test|Test
Record1
Record2
Record3
Footer|Test|Date


output of f1.txt should be:
Record1
Record2
Record3


Using a single statement at the Unix prompt,
what is the fastest method to get this output?
ASKER CERTIFIED SOLUTION
amit_g

sunshine737

ASKER

Amit,

Can you please give me more info about this statement:

sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt


I understand '1d' and '$d' mean: delete the first and last line.
Can you explain the other parts?

Thanks
sed is a stream editor. The -e option passes the command/script that sed executes. 1d deletes the first line and $d deletes the last line, which is what you want.

By default sed reads from the given file and produces output on screen (stdout), so you save that output to a temp file /tmp/f1_$$.txt and later move that file back over the original one.

Alternatively, if your sed supports in-place editing (i.e. the commands run on the same file and the same file is edited instead of output to screen), you could use that and not need > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt. I would try this one first:

sed -i.bak -e '1d;$d' f1.txt

and if that works, you have edited f1.txt in place and kept a backup of the original file as f1.txt.bak.
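For reference, here is a quick way to verify the accepted sed command on the sample file from the question (a sketch; it just rebuilds f1.txt from the example data in the original post):

```shell
# Build the sample file from the question
printf 'header|Test|Test\nRecord1\nRecord2\nRecord3\nFooter|Test|Date\n' > f1.txt

# 1d deletes the first line, $d deletes the last; output goes to stdout
sed -e '1d;$d' f1.txt
# Record1
# Record2
# Record3
```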
Due to some constraints, I can't use the "sed" command.

I have a file of 10 million rows and need to delete the header and footer.
I guess, performance-wise, it would be best if I used the sed command.

What do you say?

Am I right?

Thanks


Well, I have never used these tools on such a huge file myself, so I can't say how they would behave. I would suggest trying it on one and seeing how it works. Whatever tool you use, it is going to read the whole file and write it to another one, unless you want to write your own. Do you have to do this on a regular basis or just once?
perl -MTie::File -e 'tie @a, "Tie::File", "f1.txt" or die $!; pop @a; shift @a'

.. I guess that sed will crash on most systems for such huge files ..
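For completeness, a tail/head pipeline is another plain-Unix route for this (a sketch, assuming GNU coreutils: `head -n -1` is a GNU extension and is not available on every Unix; the temp-file-and-mv step mirrors the sed answer above):

```shell
# tail -n +2 prints from the second line onward (drops the header);
# head -n -1 prints all but the last line (drops the footer).
# head -n -1 requires GNU coreutils.
tail -n +2 f1.txt | head -n -1 > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt
```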
Thank you very much for your inputs.

 Tinton,
      the file consists of pipe symbols.
  ozo, ahoffmann,
       the constraint is, I need to use only Unix commands.
  Amit,
     I need to use this on a regular basis (every day).

rockiroads:
    I will try your command and let you know.

Thanks





What OS are you using?
Have you tried using sed?  Does it handle the amount of data?
We can keep on speculating, but what works for you is going to depend on what kind of machine you have and what kind of load it carries. All solutions given above would work, but since your file is huge, you need to try them and see which one works best for you.

I tested some on my own. Again, the real test is your own, as I don't know how wide your file is: if it is 10 bytes wide you have a 100 MB file, while if it is 100 bytes wide, you have a 1 GB file.

I took a 100-byte-wide file with 10M records, and it took about 6 minutes on my poor machine with sed. I am not sure if that kind of performance is acceptable to you, but at least it did not crash. The machine was slow during those 6 minutes, but not very bad.

The Perl solution, on the other hand, took over 12 minutes and caused the system to slow down dramatically because of huge memory usage. This is understandable, as the Perl solution seems to be reading the whole file into memory. Other solutions like sed and awk work line by line, and whatever time they take is mainly due to disk IO. Since the file is huge, whatever solution you use will need to do that much disk IO, so you would get similar performance.
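A rough way to reproduce this kind of measurement yourself (a sketch; big.txt and the record count are made-up test parameters, scaled down from 10M so it runs quickly, and timings will vary by machine and disk):

```shell
# Generate a test file of fixed-width records (99 digits + newline = 100 bytes/line).
# 100000 records here; scale up to 10000000 to match the discussion above.
awk 'BEGIN { for (i = 1; i <= 100000; i++) printf "%099d\n", i }' > big.txt

# Time the sed approach; redirect output so the terminal is not the bottleneck.
time sed -e '1d;$d' big.txt > big_trimmed.txt
```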
Pardon my ignorance about perl. My comment was based on the memory usage that I saw in my system and you are right, the last solution does take about the same time (~6 minutes). So whatever tool we use, we are going to be limited by disk IO.

Vihar, the time you see in my posts is only indicative: first, I don't have real data, and second, your machine could be a lot better (or a lot worse) than mine. So you have to do your own tests and use whatever works best for you. My guess is that all tools would give similar performance because disk IO is going to be the bottleneck.
> .. i need to use only Unix commands.
I don't see anything other than Unix commands here!

another try, just plain old awk (which should be installed on any Unix):
  awk '(NR==1){next} (NR==2){x=$0;next} {print x; x=$0}' your-file
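A quick sanity check of this buffered-line idea on the sample file from the question (a sketch using a compact variant: buffer each line and print the previous one, so the header is skipped and the footer is never emitted):

```shell
# Build the sample file from the question
printf 'header|Test|Test\nRecord1\nRecord2\nRecord3\nFooter|Test|Date\n' > f1.txt

# From line 3 onward, print the previously buffered line; the footer
# ends up in the buffer at EOF and is never printed.
awk 'NR > 2 { print prev } { prev = $0 }' f1.txt
# Record1
# Record2
# Record3
```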