Linux find command script

I have 200 users that place files in the /tmp directory. These users all have a username that starts with user1. So for example:

user101
user102
.....
user130
user131
.... etc

There are other users that also write files to this directory, so I only want this script to run on files owned by users whose username starts with user1.

The files the user1* users create all start with DBF. I need a script that will check whether any DBF* files exist for the same user in the /tmp directory, and delete all but the most recent one. So if user123 has three files: DBF1A modified at 12:55, DBF1B modified at 12:57, and DBF1C modified at 12:59, the script needs to delete all except the DBF1C file because it is the newest.

What I have so far is this (thanks to an EE expert "dnb"):

find /tmp -name 'DBF*' -printf '%u:%p\n' | grep '^user1' | cut -d: -f2- | xargs rm

The problem is that this script will delete ALL the DBF files that it finds. How can I make it keep only the most recent version?
Asked by bfilipek
1 Solution
 
ravenplCommented:
Is the script supposed to keep the newest DBF file for each user? I assume so.

grep "^user1" /etc/passwd | while read line; do # do it for each user from /etc/passwd
 UUID=$( echo -n "$line" | cut -d: -f3 )
 find /tmp -type f -uid $UUID -name 'DBF*' -printf '%T@:%p\n' | sort -n | head -n -1 | cut -d: -f2-
done

If the output looks right, add '| xargs rm -fv' at the end of the already long find line.
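That is, the complete script would look like this (run it without the final xargs stage first, to see which files it would remove):

grep "^user1" /etc/passwd | while read line; do
 UUID=$( echo -n "$line" | cut -d: -f3 )
 find /tmp -type f -uid $UUID -name 'DBF*' -printf '%T@:%p\n' | sort -n | head -n -1 | cut -d: -f2- | xargs rm -fv
done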
 
bfilipekAuthor Commented:
It comes back with a bunch of lines that say:

head: -1: invalid number of lines
head: -1: invalid number of lines
head: -1: invalid number of lines
head: -1: invalid number of lines
 
amit_gCommented:
Run

ls -l -t /tmp/DBF* | grep ' user1' | awk 'NR > 2 {print NR, $9}'

Does it give the correct result? If so, change it to

ls -l -t /tmp/DBF* | grep ' user1' | awk 'NR > 2 {print $9}' | xargs rm -f


 
ravenplCommented:
> head: -1: invalid number of lines
Which Linux are you using where head doesn't support '-n -1'?
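The negative count is a GNU coreutils extension. If your head doesn't have it, you can get the same effect by reversing the sort and dropping the first line instead, i.e. the find line inside the loop becomes:

find /tmp -type f -uid $UUID -name 'DBF*' -printf '%T@:%p\n' | sort -rn | tail -n +2 | cut -d: -f2-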
 
bfilipekAuthor Commented:
amit_g,
Your script worked great. I just had to change the NR > 2 to NR > 1 to return the correct results.
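
With that change, the full command is:

ls -l -t /tmp/DBF* | grep ' user1' | awk 'NR > 1 {print $9}' | xargs rm -f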

Now I have another similar question posted here: http://www.experts-exchange.com/Operating_Systems/Linux/Q_22070085.html 

 
bfilipekAuthor Commented:
amit_g,

One last thing on this script: I do not want it to delete a DBF* file if it is the only one for the user. Most users have multiple DBF files, but a few only have one, and right now the script will delete it. How can I prevent this?
 
amit_gCommented:
I think the one you are using now is not going to work. Try this one...

ls -l -t /tmp/DBF* | grep ' user1' | awk '{if (Store[$3] != 1) {Store[$3] = 1} else {print $9}}'

If that gives correct results, add | xargs rm -f to the command.
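To see how it works: ls -l -t lists the newest files first and $3 is the owner column, so spread out with comments the awk reads:

ls -l -t /tmp/DBF* | grep ' user1' | awk '{
 if (Store[$3] != 1) {
  Store[$3] = 1 # first (newest) file seen for this owner: remember the owner, keep the file
 } else {
  print $9 # any later (older) file for the same owner: print its name for deletion
 }
}'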
 
amit_gCommented:
You might want to test that first by running

ls -l -t /tmp/DBF* | grep ' user1' | awk '{if (Store[$3] != 1) {Store[$3] = 1} else {print}}'

So that you know what is happening.
 
bfilipekAuthor Commented:
Worked perfectly.

One LAST thing, I swear :). Instead of deleting the files, can I move them to a subdirectory at /tmp/move? So instead of:

| xargs rm -f

would I use something like

| xargs mv * /tmp/move/
 
ozoCommented:
xargs -J % mv % /tmp/move/
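Note that -J is BSD xargs syntax; with GNU xargs (the usual one on Linux) the equivalent is -I:

xargs -I % mv % /tmp/move/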
 
amit_gCommented:
Thanks ozo. BTW, just wondering, how did you get to this question when it was already closed? You can't possibly be reading each and every posted question, whether open or closed :)