Easy-peasy: Script to delete duplicate lines in a file
Posted on 2006-11-21
This should be really simple, I know, but I'm quite new to scripting and I just don't have any idea how to do it.
Imagine a process that returns a list of paths, possibly containing duplicates. What script will reduce that to a file that holds each path exactly once, and nothing else?
Currently I've tried this:
# FileList is the file that's being created.
: > FileList
<first process> |
while read c
do
    # Append the path only if it isn't already in FileList.
    if ! grep -qxF "$c" FileList
    then
        echo "$c" >> FileList
    fi
done
I've tried a few other variations, all based around this theme, but can't seem to get it to work. Any and all ideas are welcome, and wordy explanations are preferred. ;-)
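For what it's worth, here is a minimal sketch of a common alternative that skips the read loop entirely: a one-line awk filter that keeps only the first occurrence of each line, preserving the original order (unlike `sort -u`, which reorders the output). The file name `paths.txt` is a hypothetical stand-in for the output of the first process.

```shell
# paths.txt stands in for the output of the first process (assumption).
printf '/usr/bin\n/usr/local/bin\n/usr/bin\n' > paths.txt

# seen[$0]++ is 0 (false) the first time a line appears, so the line
# prints; on every repeat it is nonzero (true), so the line is skipped.
awk '!seen[$0]++' paths.txt > FileList

cat FileList
# → /usr/bin
# → /usr/local/bin
```

In a pipeline this would read `<first process> | awk '!seen[$0]++' > FileList`.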