
How to remove header and footer from a file in Unix?

vihar123 asked
Medium Priority
6,150 Views
Last Modified: 2013-12-26
How to remove header and footer from a file in Unix?

eg:file f1.txt has

header|Test|Test
Record1
Record2
Record3
Footer|Test|Date


output of f1.txt should be:
Record1
Record2
Record3


Using a single statement at the Unix prompt,
what is the fastest way to get this output?

CERTIFIED EXPERT
Top Expert 2006
Commented:
sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt

If your sed supports in-place editing, you can do

sed -i -e '1d;$d' f1.txt

If you also want to take a backup:

sed -i.bak -e '1d;$d' f1.txt
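
For reference, here is how the first form behaves on the sample data from the question (just a quick sketch; any POSIX shell should do):

# create the sample file from the question
printf '%s\n' 'header|Test|Test' Record1 Record2 Record3 'Footer|Test|Date' > f1.txt

# strip the first and last line; the result goes to the screen, f1.txt is untouched
sed -e '1d;$d' f1.txt
# expected output:
# Record1
# Record2
# Record3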


Author

Commented:
Amit,

Can you please give me more info about this statement:

sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt


I understand '1d' and '$d' mean: delete the first and last line.
Can you explain the rest of it?

Thanks
CERTIFIED EXPERT
Top Expert 2006

Commented:
sed is a stream editor. The -e option passes the command/script that sed executes: 1d deletes the first line and $d deletes the last line, which is what you want. By default sed reads from the given file and writes its output to the screen (stdout), so you save that output to a temp file, /tmp/f1_$$.txt, and then move that file back over the original one.

Alternatively, if your sed supports in-place editing (i.e. the same file is edited directly instead of the result being written to the screen), you don't need the > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt part. I would try this last form first:

sed -i.bak -e '1d;$d' f1.txt

If that works, f1.txt has been edited in place and a backup of the original is kept as f1.txt.bak.
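
Since you asked about the other pieces, here is the temp-file form again with each part labelled (the same command, just annotated):

# sed -e '1d;$d' f1.txt         delete line 1 (1d) and the last line ($d); the result goes to stdout
# > /tmp/f1_$$.txt              redirect that output to a temp file ($$ is the shell's PID, so the name is unique)
# && mv /tmp/f1_$$.txt f1.txt   only if sed succeeded, replace the original file with the temp file
sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt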

Author

Commented:
Due to some constraints, I can't use the "sed" command.

I have a file with 10 million rows and need to delete the header and footer.
I guess, performance-wise, it is best if I use the sed command.

What do you say?

Am I right?

Thanks


CERTIFIED EXPERT
Top Expert 2006

Commented:
Well, I have never used these tools on such a huge file myself, so I can't say how they would behave. I would suggest trying it on one file and seeing how it works. Whatever tool you use, it is going to read the whole file and write it out to another one, unless you want to write your own. Do you have to do this on a regular basis or just once?
ozo
CERTIFIED EXPERT
Most Valuable Expert 2014
Top Expert 2015

Commented:
perl -MTie::File -e 'tie @a, "Tie::File", "f1.txt" or die $!; pop @a; shift @a'

> .. i have a file of 10 million rows
you need to use perl or write your own filter in a language which compiles to native binary

if you have gawk you can try:
gawk 'NR>2{print prev} {prev=$0}' yourfile
(it buffers one line, so the first and last lines never make it to the output)
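
For what it's worth, here is the Tie::File idea written out as a small script (a sketch; Tie::File is bundled with Perl 5.8 and later):

perl -e '
    use Tie::File;
    tie my @lines, "Tie::File", "f1.txt" or die "cannot tie f1.txt: $!";
    shift @lines;   # drop the first line (header)
    pop   @lines;   # drop the last line (footer)
    untie @lines;   # flush the changes back to the file
'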
CERTIFIED EXPERT
Top Expert 2007
Commented:
To test the performance of the various solutions, use the time command, e.g.:

time sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt

Another solution (although you'd need to test the performance) is to use grep. Assuming your records don't contain pipe symbols, you can do

time grep -v '|' f1.txt >/tmp/$$ && mv /tmp/$$ f1.txt

.. I guess that sed will crash on most systems for such huge files ..
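
One way to compare the candidates without touching the original file is to time each one while sending the output to /dev/null (a sketch; the timings will depend entirely on your machine and disk):

time sed -e '1d;$d' f1.txt > /dev/null
time grep -v '|' f1.txt > /dev/null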
CERTIFIED EXPERT
Top Expert 2006
Commented:
An alternative to sed:


grep -v "`head -1 f1.txt`" f1.txt | grep -v "`tail -1 f1.txt`"

Author

Commented:
Thank you very much for your inputs.

 Tinton,
      the file does contain pipe symbols.
  ozo, ahoffmann,
      the constraint is that I need to use only Unix commands.
  Amit,
      I need to do this on a regular basis (every day).

rockiroads:
    I will try your command and let you know.

Thanks





CERTIFIED EXPERT
Top Expert 2007

Commented:
What OS are you using?
Have you tried using sed?  Does it handle the amount of data?
CERTIFIED EXPERT
Top Expert 2006

Commented:
We can keep on speculating, but what works for you will depend on what kind of machine you have and what kind of load it is under. All of the solutions given above will work, but since your file is huge, you need to try them and see which one works best for you.

I ran some tests of my own. Again, the real test is your own, since I don't know how wide your file is: with 10-byte records a 10M-row file is about 100MB, while with 100-byte records it is about 1GB.

I took a file with 10M records of about 100 bytes each, and sed took about 6 minutes on my modest machine. I am not sure whether that kind of performance is acceptable to you, but at least it did not crash, and the machine slowed down during those 6 minutes but not badly. The Perl solution, on the other hand, took over 12 minutes and caused the system to slow down dramatically because of huge memory usage; this is understandable, as that Perl solution seems to read the whole file into memory. Other tools like sed and awk work line by line, and whatever time they take is mainly disk I/O. Since the file is huge, whatever solution you use will need to do that much disk I/O, so you should get similar performance.
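
If you want to reproduce this kind of test, something like the following builds a comparable file (a sketch; the file name big.txt and the record layout are just examples):

# header + 10 million roughly 100-byte records + footer
awk 'BEGIN {
    print "header|Test|Test"
    for (i = 1; i <= 10000000; i++)
        printf "Record%d|%090d\n", i, i
    print "Footer|Test|Date"
}' > big.txt

time sed -e '1d;$d' big.txt > /dev/null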
ozo
CERTIFIED EXPERT
Most Valuable Expert 2014
Top Expert 2015
Commented:
Tie::File does not read the whole file into memory, although it does keep a configurable-sized memory cache if it decides it is more efficient to defer some writes so it can do several updates at once.
The memory you saw it using was probably its internal list of byte offsets for the records.
Since you are updating only two records, most of that is unnecessary, so
perl -i -ne 'print if (2..0) and !eof' f1.txt
should be faster, and just as bound by disk I/O to the file as the sed version.
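
A quick breakdown of that one-liner, for anyone reading along (the same command, just annotated):

# -i        edit f1.txt in place
# -n        wrap the code in a "while (<>) { ... }" read loop that does not print by default
# (2..0)    line-number flip-flop: false on line 1, turns true at line 2 and never turns off
#           (the right-hand side 0 never matches the current line number)
# !eof      false only on the last line of the file, so the footer is not printed
perl -i -ne 'print if (2..0) and !eof' f1.txt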
CERTIFIED EXPERT
Top Expert 2006

Commented:
Pardon my ignorance about perl. My comment was based on the memory usage I saw on my system, and you are right: the last solution takes about the same time (~6 minutes). So whatever tool we use, we are going to be limited by disk I/O.

Vihar, the times you see in my posts are only indicative: first, I don't have your real data, and second, your machine could be a lot better (or a lot worse) than mine. So you have to do your own tests and use whatever works best for you. My guess is that all the tools will give similar performance because disk I/O is going to be the bottleneck.
> .. i need to use only Unix commands.
I don't see anything other than Unix commands here!

another try, just plain old awk (which should be installed on any Unix):
  awk 'NR==1{next} NR==2{x=$0;next} {print x; x=$0}' your-file
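
The same one-line-buffer idea spelled out with comments (a sketch; it should work with any awk):

awk '
    NR == 1 { next }            # skip the header line
    NR == 2 { x = $0; next }    # remember the first data line; do not print it yet
    { print x; x = $0 }         # print the buffered line, then buffer the current one
                                # the footer ends up in x at end of file and is never printed
' your-file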