Solved

Linux, deleting files in a directory in C

Posted on 2004-08-14
239 Views
Last Modified: 2010-04-22
I am trying to delete all files in a directory in C. I have code that looks like this:

      DIR *d;
      struct dirent *pdirent;
      char absPath[100] = "";
      char command[103] = "rm ./";

      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            strcpy(absPath, logdir);
            strcat(absPath, pdirent->d_name);

            strcat(command, absPath);
            printf("%s\n", command);
            system(command);            // syscall to remove file
            command[5] = '\0';          // reset command back to "rm ./"
      }


This code lists the files just fine, but when I add the delete step via the system() call, the function hangs and only the first file listed gets deleted. I'm basically just passing 'rm filename' to the syscall. What is the proper way to delete all files in a directory?


-ryan
Question by:dignified
11 Comments
 
LVL 23

Expert Comment

by:brettmjohnson
ID: 11802995
If you're going to use system(), why not just do
system("rm -rf logdir/*");

If you are going to implement it in C, use unlink() rather than system().
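
For illustration, a minimal sketch (not part of the original comment) of the asker's loop with the system() call replaced by unlink(); the helper name remove_logs is made up, and note the later comments in this thread about modifying a directory while readdir() is still walking it:

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: remove every entry in logdir (except "." and "..") with unlink(2). */
void remove_logs(const char *logdir)
{
     DIR *d = opendir(logdir);
     struct dirent *pdirent;
     char absPath[FILENAME_MAX];

     if (d == NULL) { perror("opendir"); return; }

     while ((pdirent = readdir(d)) != NULL) {
          if (!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, ".."))
               continue;
          snprintf(absPath, sizeof absPath, "%s/%s", logdir, pdirent->d_name);
          if (unlink(absPath) != 0)        /* no shell, no quoting problems */
               perror(absPath);
     }
     closedir(d);
}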

 

Author Comment

by:dignified
ID: 11803009
Can't do rm -rf, because I transfer the files using libcurl and I only delete upon a successful transfer. unlink()... I'll look into that call, thanks!
 

Author Comment

by:dignified
ID: 11803020
Same thing happens when I use unlink(): the first file gets transferred and deleted, but then it hangs...
 
LVL 23

Expert Comment

by:Mysidia
ID: 11803325
I wonder if the process of erasing causes the readdir() from the next position to stop short. Perhaps try rewinding:

     DIR *d;
     struct dirent *pdirent;
     char absPath[FILENAME_MAX] = "";

     d = opendir(logdir);
     while ( (pdirent = readdir(d)) != NULL ){
          if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
          sprintf(absPath, "%s/%s", logdir, pdirent->d_name);
          if (unlink(absPath) == 0) {
              rewinddir(d);
              continue;
          }
     }


Or queue up the contents of the directory and unlink() each one after you have already read the whole thing.
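
A minimal sketch of that queue-then-delete idea (added for illustration, not from the original comment), using scandir(3) to read the whole directory into an array before anything is removed; the helper names are made up:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Filter for scandir(): keep everything except "." and "..". */
static int skip_dots(const struct dirent *e)
{
     return strcmp(e->d_name, ".") && strcmp(e->d_name, "..");
}

/* Sketch: snapshot the directory first, then unlink each entry. */
void delete_all_queued(const char *logdir)
{
     struct dirent **names;
     char absPath[FILENAME_MAX];
     int n, i;

     n = scandir(logdir, &names, skip_dots, alphasort);
     if (n < 0) { perror("scandir"); return; }

     for (i = 0; i < n; i++) {
          snprintf(absPath, sizeof absPath, "%s/%s", logdir, names[i]->d_name);
          if (unlink(absPath) != 0)
               perror(absPath);
          free(names[i]);                 /* scandir() allocates each entry... */
     }
     free(names);                         /* ...and the array itself */
}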
 

Author Comment

by:dignified
ID: 11803355
It doesn't hang, but this doesn't do the trick either. I think I'll have to make a mini-queue, which is pretty lame.

 

Author Comment

by:dignified
ID: 11803377
Here is the code I'm actually using. I have a queue of length 1, so on the next iteration of the loop I delete the previously saved file that was just transferred.


      DIR *d;
      struct dirent *pdirent;
      char absPath[100] = "";
      char newPath[103] = "./";
      int retVal;
      char firstTime = 1;
      
      //now curl it to a remote server
      struct curl_httppost* post = NULL;
      struct curl_httppost* last = NULL;
      CURL *hCURL = curl_easy_init();
      
      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            if(!strcmp(filename, pdirent->d_name)) continue;
            //printf("%s\n", pdirent->d_name);
            strcpy(absPath, logdir);
            strcat(absPath, pdirent->d_name);

            curl_easy_setopt(hCURL, CURLOPT_URL, "http://stuff.com/test.php");
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "user", CURLFORM_COPYCONTENTS, login, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "pass", CURLFORM_COPYCONTENTS, pword, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "tracefile", CURLFORM_FILE, absPath, CURLFORM_END);
            curl_easy_setopt(hCURL, CURLOPT_HTTPPOST, post);
            retVal = curl_easy_perform(hCURL);
            
            if( retVal == 0 )
            {
                  if(!firstTime)
                  {
                        unlink(newPath);
                  }else{
                        firstTime = 0;
                  }
                  newPath[2] = '\0';
                  strcat(newPath, absPath);
            }
      }
      unlink(newPath);

      curl_easy_cleanup(hCURL);
 

Author Comment

by:dignified
ID: 11803420
Jeez, this doesn't seem to work either. It deletes two files and then moves on. I don't know if it is just deleting n-1 files or not. It doesn't hang, though.
 

Author Comment

by:dignified
ID: 11807471
Actually, it seems that libcurl is what is causing the problems. It doesn't like me uploading and then deleting. I have gotten my code to either transfer all files or delete all files, but not both.
 
LVL 22

Expert Comment

by:NovaDenizen
ID: 11810113
You can't rely on the directory entries sitting still when you are creating or deleting files.  readdir() is only guaranteed to work the way you expect when nobody is screwing with the directory.  The only reliable way to do what you want to do is to read all the filenames in advance of performing file creations or deletions in that directory.
 
LVL 23

Accepted Solution

by:
Mysidia earned 250 total points
ID: 11812186
Perhaps defer the deletions until after you've done the curl_easy_cleanup(hCURL);

(Sigh)

static char** list_files;
static int list_nfiles = 0;

int list_size() { return list_nfiles; }
char* list_top() {
   if (list_nfiles <= 0) { return NULL; }
   return list_files[list_nfiles - 1];
}

void list_add_file(char* name) {
     if (list_nfiles == 0)
         list_files = (char **)malloc(sizeof(char *) * 2);
     else {
         char **temp = (char **)realloc(list_files, sizeof(char *) * (list_nfiles + 1));

          if (temp == NULL) { abort(); } /* Unable to resize */
          list_files = temp;
     }
     if ( (list_files[list_nfiles++] = strdup(name)) == NULL ) { abort(); }
}

void list_pop() {
     char **temp;
     if (list_nfiles <= 0) { abort(); }

     free(list_files[--list_nfiles]);

     if (list_nfiles > 0) {
         temp = (char **)realloc(list_files, sizeof(char *) * (list_nfiles + 1));

         if (temp == NULL) { abort(); } /* Unable to resize */
         list_files = temp;
     } else {
        free(list_files);
        list_files = NULL;
     }
}


....
....
     DIR *d;
     char* filename;
     struct dirent *pdirent;
     char absPath[FILENAME_MAX] = "";

     d = opendir(logdir);
     while ( (pdirent = readdir(d)) != NULL ){
          if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
          sprintf(absPath, "%s/%s", logdir, pdirent->d_name);
          list_add_file(absPath);
     }
     closedir(d);

.... other stuff ...

... curl cleanup ...

    while((filename = list_top())) {
          unlink(filename);
          list_pop();
    }
 

Author Comment

by:dignified
ID: 11818626
I actually got things to work. It turns out that for libcurl you need to set post = last = NULL for EACH iteration of the loop. After I did this, everything worked; otherwise I would have had to queue things up, I suppose. Fortunately, with my implementation, I don't need to worry about the files being tampered with. And even if the files are open when I delete them, unlink() should still remove the directory entries, and the files are freed for good once all open file handles are closed.

Thanks a lot for the code, Mysidia.
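
For reference, a minimal sketch of that fix (an illustration added here, not the asker's final code): the form handles start from NULL on every pass and are freed with curl_formfree() after each transfer, and the file is unlinked only when curl_easy_perform() reports CURLE_OK. The variables logdir, login and pword and the upload URL are carried over from the code posted above; the function name is made up.

#include <curl/curl.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: upload each file in logdir, deleting it only after a successful transfer. */
void upload_and_delete(const char *logdir, const char *login, const char *pword)
{
     DIR *d = opendir(logdir);
     struct dirent *pdirent;
     char absPath[FILENAME_MAX];
     CURL *hCURL = curl_easy_init();

     if (d == NULL || hCURL == NULL) {
          if (d) closedir(d);
          if (hCURL) curl_easy_cleanup(hCURL);
          return;
     }

     while ((pdirent = readdir(d)) != NULL) {
          struct curl_httppost *post = NULL;   /* rebuilt from NULL on EACH iteration */
          struct curl_httppost *last = NULL;

          if (!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, ".."))
               continue;
          snprintf(absPath, sizeof absPath, "%s/%s", logdir, pdirent->d_name);

          curl_easy_setopt(hCURL, CURLOPT_URL, "http://stuff.com/test.php");
          curl_formadd(&post, &last, CURLFORM_COPYNAME, "user",
                       CURLFORM_COPYCONTENTS, login, CURLFORM_END);
          curl_formadd(&post, &last, CURLFORM_COPYNAME, "pass",
                       CURLFORM_COPYCONTENTS, pword, CURLFORM_END);
          curl_formadd(&post, &last, CURLFORM_COPYNAME, "tracefile",
                       CURLFORM_FILE, absPath, CURLFORM_END);
          curl_easy_setopt(hCURL, CURLOPT_HTTPPOST, post);

          if (curl_easy_perform(hCURL) == CURLE_OK)
               unlink(absPath);                /* delete only after a successful upload */

          curl_formfree(post);                 /* release this iteration's form */
     }

     closedir(d);
     curl_easy_cleanup(hCURL);
}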
