Easy-peasy: Script to delete duplicate lines in a file

This should be really simple, I know, but I'm quite new to scripting and I just don't have any idea how to do it.

Imagine a process that returns a list of paths of the form:

aaa/bbb/ccc
aaa/bbb/ccc
aaa/bbb/ccc
ddd/eee/fff
ddd/eee/fff
ddd/eee/fff
bbb/ccc/ddd
bbb/ccc/ddd
.
.
.

What script will reduce that to a file that just has:

aaa/bbb/ccc
ddd/eee/fff
bbb/ccc/ddd

and nothing else?

Currently I've tried this:

# FileList is the file that's being created.

: > FileList
<first process> |
while read -r b
do
        c=$(dirname "$b")
        # Skip this path if it's already recorded in FileList
        # (-F fixed string, -x whole-line match, -q quiet).
        if grep -Fxq "$c" FileList
        then
                continue
        else
                echo "$c" >> FileList
        fi
done
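
As an aside, this whole read-and-check loop can be collapsed into a single awk filter that prints each line only the first time it appears, preserving input order. A minimal sketch, assuming standard awk and that the whole lines (as in the sample listing above) are what need deduplicating:

# Print each line only the first time it is seen;
# seen[$0] counts how often the line has occurred so far.
<first process> | awk '!seen[$0]++' > FileList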

I've tried a few other variations, all based around this theme, but I can't seem to get any of them to work.  Any and all ideas are welcome, and wordy explanations are preferred.  ;-)
kyle_in_taiwan asked:
ravenpl commented:
Try: uniq original > uniqued.txt

Note: uniq only deletes duplicates that follow one another, i.e. if your file has

aaa/bbb/aaa
aaa/ccc/aaa
aaa/bbb/aaa

uniq will do nothing.
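
A common workaround is to sort the file first, so identical lines end up next to each other before uniq sees them; note this loses the original line order. A minimal sketch, assuming a POSIX sort and uniq:

# Sorting groups identical lines so uniq can drop the repeats.
sort original | uniq > uniqued.txt

# Most sort implementations can also deduplicate in one step:
sort -u original > uniqued.txt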
kyle_in_taiwan (Author) commented:
Cool.  I've been able to wrangle a solution out of that one already.  Thanks.