  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 4273

How to remove header and footer from a file in Unix?

e.g., file f1.txt has:

header|Test|Test
Record1
Record2
Record3
Footer|Test|Date


The output of f1.txt should be:
Record1
Record2
Record3


Using a single statement at the Unix prompt,
what is the fastest method to get the resulting output?
Asked by vihar123
5 Solutions
 
amit_gCommented:
sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt

If your sed supports in-place editing, you can do

sed -i -e '1d;$d' f1.txt

and if you also want to keep a backup of the original:

sed -i.bak -e '1d;$d' f1.txt
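
As a quick sanity check, here is the one-liner run against the sample data from the question (a sketch, assuming a POSIX shell):

printf 'header|Test|Test\nRecord1\nRecord2\nRecord3\nFooter|Test|Date\n' > f1.txt
sed -e '1d;$d' f1.txt
# Record1
# Record2
# Record3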
 
vihar123Author Commented:
Amit,

can you please give me more info about this statement:

sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt


I understand '1d' and '$d' delete the first and last line.
Can you explain the other parts?

Thanks
 
amit_gCommented:
sed is a stream editor. The -e option passes the command/script that sed executes: 1d deletes the first line and $d deletes the last line, which is exactly what you want. By default, sed reads from the given file and writes the result to the screen (stdout), so you save that output to a temp file /tmp/f1_$$.txt and then move that file back over the original ($$ is the shell's process ID, which makes the temp file name unique).

Alternatively, if your sed supports in-place editing (i.e. the commands edit the same file directly instead of writing the result to the screen), you don't need the > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt part. I would try this one first:

sed -i.bak -e '1d;$d' f1.txt

and if that works, you have edited file f1.txt and a backup of original file as f1.txt.bak

 
vihar123Author Commented:
due to some constraints, I can't use the "sed" command.

I have a file with 10 million rows and need to delete the header and footer.
I guess, performance-wise, that may be a problem if I use the sed command.

What do you say? Am I right?

Thanks


 
amit_gCommented:
Well, I have never used these tools on such a huge file myself, so I can't say how they would behave. I would suggest trying one and seeing how it works. Whatever tool you use, it is going to read the whole file and write it out to another one, unless you want to write your own. Do you have to do this on a regular basis, or just once?
 
ozoCommented:
perl -MTie::File -e 'tie @a, "Tie::File", "f1.txt" or die $!; pop @a; shift @a'
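
For readers unfamiliar with Tie::File (a core perl module since 5.8), here is the same one-liner annotated:

# Tie::File maps f1.txt onto the array @a, so array operations edit the file in place:
# pop @a drops the last line (the footer), shift @a drops the first (the header).
perl -MTie::File -e 'tie @a, "Tie::File", "f1.txt" or die $!; pop @a; shift @a'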

 
ahoffmannCommented:
> .. i have a file of 10 million rows
you need to use perl or write your own filter in a language which compiles to native binary

if you have gawk you can try:
gawk '(NR==1){next;}(x!=""){print x;}{x=$0;}' your-file
(but it also removes empty lines, because of the x!="" test)
 
TintinCommented:
To test the performance of the various solutions, use the time command, e.g.:

time sed -e '1d;$d' f1.txt > /tmp/f1_$$.txt && mv /tmp/f1_$$.txt f1.txt

Another solution (although you'd need to test the performance) is to use grep. Assuming your records don't contain pipe symbols:

time grep -v '|' f1.txt >/tmp/$$ && mv /tmp/$$ f1.txt
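
On the sample file this produces the desired output, but note that it drops every line containing a pipe anywhere in the file, not just the header and footer:

grep -v '|' f1.txt
# Record1
# Record2
# Record3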

 
ahoffmannCommented:
.. I guess that sed will crash on most systems for such huge files ..
 
rockiroadsCommented:
An alternative to sed:

grep -v "`head -1 f1.txt`" f1.txt | grep -v "`tail -1 f1.txt`"

(the quotes around the backticks keep a header or footer containing spaces as a single grep pattern)
 
vihar123Author Commented:
Thank you very much for your inputs.

 Tintin,
      the file contains pipe symbols.
  ozo, ahoffmann,
       the constraint is that I can use only Unix commands.
  Amit,
     I need to use this on a regular basis (every day).

rockiroads:
    I will try your command and let you know.

Thanks
 
TintinCommented:
What OS are you using?
Have you tried using sed?  Does it handle the amount of data?
 
amit_gCommented:
We can keep on speculating, but what works for you is going to depend on what kind of machine you have and what kind of load it carries. All of the solutions given above would work, but since your file is huge, you need to try them and see which one works best for you.

I tested some on my own. Again, the real test is your own, as I don't know how wide your file is: if records are 10 bytes wide you have a 100MB file, while at 100 bytes wide you have a 1GB file.

I took a file with 10M records of 100 bytes each, and sed took about 6 minutes on my modest machine. I am not sure that kind of performance is acceptable to you, but at least it did not crash, and the machine slowed during those 6 minutes but not badly. The perl solution, on the other hand, took over 12 minutes and dramatically slowed the system down because of huge memory usage; this is understandable, as the perl solution seems to be reading the whole file into memory. Other solutions like sed and awk work line by line, and whatever time they take is mainly disk IO. Since the file is huge, whatever solution you use will need to do that much disk IO, so you should get similar performance.
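
To build a comparable test file and time a candidate yourself (a sketch; seq is assumed to be available, and the file name big.txt is made up for illustration):

# ~10M records between a header and a footer
{ echo 'header|Test|Test'; seq 1 10000000; echo 'Footer|Test|Date'; } > big.txt
time sed -e '1d;$d' big.txt > /dev/null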
 
ozoCommented:
Tie::File does not read the whole file into memory, although it does keep a configurable-size memory cache if it decides it is more efficient to defer some writes and do several updates at once.
The memory you found it using was probably its internal list of byte offsets for the records.
Since you are updating only two records, most of this is unnecessary, so

perl -i -ne 'print if (2..0)and!eof' f1.txt

should be faster, and just as bound by disk IO to the file as the sed version.
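
For reference, (2..0) is perl's flip-flop range operator tested against the line counter $.: it turns on at line 2 and, since $. never equals 0, stays on until end of file, while !eof suppresses the final line. To keep a backup while trying it out, perl's standard -i.bak form works here too, just like sed's:

perl -i.bak -ne 'print if (2..0)and!eof' f1.txt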
 
amit_gCommented:
Pardon my ignorance about perl. My comment was based on the memory usage that I saw on my system, and you are right, the last solution takes about the same time (~6 minutes). So whatever tool we use, we are going to be limited by disk IO.

Vihar, the times you see in my posts are only indicative: first, I don't have your real data, and second, your machine could be a lot better (or a lot worse) than mine. So you have to do your own tests and use whatever works best for you. My guess is that all the tools will give similar performance, because disk IO is going to be the bottleneck.
 
ahoffmannCommented:
> .. i need to use only Unix commands.
I don't see anything other than Unix commands here!

another try, just plain old awk (which should be installed on any Unix):
  awk '(NR==1){next;}(NR==2){x=$0;next;}{print x;x=$0;}' your-file
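
A quick check of this one-liner against the sample f1.txt from the question:

awk '(NR==1){next;}(NR==2){x=$0;next;}{print x;x=$0;}' f1.txt
# Record1
# Record2
# Record3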
