Solved

Linux, deleting files in a directory in C

Posted on 2004-08-14
11
242 Views
Last Modified: 2010-04-22
I am trying to delete all files in a directory in C. I have code that looks like this:

      DIR *d;
      struct dirent *pdirent;
      char absPath[100] = "";
      char command[103] = "rm ./";

      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            strcpy(absPath, logdir);
            strcat(absPath, pdirent->d_name);

            strcat(command, absPath);
            printf("%s\n", command);
            system(command);          // run rm via system() to remove the file
            command[5] = '\0';        // reset command back to "rm ./"
      }


This code will list files just fine, but when I add the delete via the system() call, the function hangs and only the first file gets deleted. I'm basically just passing 'rm filename' to system(). What is the proper way to delete all files in a directory?


-ryan
0
Comment
Question by:dignified
11 Comments
 
LVL 23

Expert Comment

by:brettmjohnson
ID: 11802995
If you're going to use system(), why not just do
system("rm -rf logdir/*");

If you are going to implement it in C, use unlink() rather than system().
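
For the one-liner, since logdir is a variable you would build the command string first, something like this (a rough sketch, untested; it assumes logdir holds the directory path and contains nothing the shell would mangle):

      char cmd[FILENAME_MAX + 16];
      snprintf(cmd, sizeof(cmd), "rm -rf %s/*", logdir);   /* let the shell expand the * */
      system(cmd);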

0
 

Author Comment

by:dignified
ID: 11803009
Can't do rm -rf because I transfer the files using libcurl and I only delete upon successful transfer. unlink... I'll look into that call, thanks!
0
 

Author Comment

by:dignified
ID: 11803020
Same thing happens when I use unlink(). The first file gets transferred and deleted, but then it hangs...
0
 
LVL 23

Expert Comment

by:Mysidia
ID: 11803325
I wonder if deleting the file causes the readdir() from the next position to stop short. Perhaps try rewinding:

     DIR *d;
     struct dirent *pdirent;
     char absPath[FILENAME_MAX] = "";

     d = opendir(logdir);
     while ( (pdirent = readdir(d)) != NULL ){
          if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
          sprintf(absPath, "%s/%s", logdir, pdirent->d_name);
          if (unlink(absPath) == 0) {
              rewinddir(d);
              continue;
          }
     }


Or queuing the contents of the directory and unlink() each one after you have
already read the whole thing
0
 

Author Comment

by:dignified
ID: 11803355
it doesn't hang but this doesn't do the trick either. Think I'll have to make a mini-queue, pretty lame.
0
 

Author Comment

by:dignified
ID: 11803377
Here is the code I'm actually using. I have a queue of length 1, so on the next iteration of the loop I delete the file that was saved on the previous one.


      DIR *d;
      struct dirent *pdirent;
      char absPath[100] = "";
      char newPath[103] = "./";
      int retVal;
      char firstTime = 1;
      
      //now curl it to a remote server
      struct curl_httppost* post = NULL;
      struct curl_httppost* last = NULL;
      CURL *hCURL = curl_easy_init();
      
      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            if(!strcmp(filename, pdirent->d_name)) continue;
            //printf("%s\n", pdirent->d_name);
            strcpy(absPath, logdir);
            strcat(absPath, pdirent->d_name);

            curl_easy_setopt(hCURL, CURLOPT_URL, "http://stuff.com/test.php");
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "user", CURLFORM_COPYCONTENTS, login, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "pass", CURLFORM_COPYCONTENTS, pword, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "tracefile", CURLFORM_FILE, absPath, CURLFORM_END);
            curl_easy_setopt(hCURL, CURLOPT_HTTPPOST, post);
            retVal = curl_easy_perform(hCURL);
            
            if( retVal == 0 )
            {
                  if(!firstTime)
                  {
                        unlink(newPath);
                  }else{
                        firstTime = 0;
                  }
                  newPath[2] = '\0';
                  strcat(newPath, absPath);
            }
      }
      unlink(newPath);

      curl_easy_cleanup(hCURL);
0
 

Author Comment

by:dignified
ID: 11803420
Jeez, this doesn't seem to work either. It deletes 2 files and then moves on. I don't know if it is just deleting n-1 files or not. It doesn't hang, though.
0
 

Author Comment

by:dignified
ID: 11807471
Actually, it seems that libcurl is what is causing the problems. It doesn't like me uploading and then deleting. I have gotten my code to either transfer all files or delete all files, but not both.
0
 
LVL 22

Expert Comment

by:NovaDenizen
ID: 11810113
You can't rely on the directory entries sitting still when you are creating or deleting files.  readdir() is only guaranteed to work the way you expect when nobody is screwing with the directory.  The only reliable way to do what you want to do is to read all the filenames in advance of performing file creations or deletions in that directory.
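
If it helps, here is a rough sketch of that approach using scandir(), which copies the whole directory listing into an array before anything is deleted (delete_all and skip_dots are just illustrative names, not code from this thread):

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Filter out "." and ".." so scandir() only returns real entries. */
static int skip_dots(const struct dirent *e)
{
    return strcmp(e->d_name, ".") != 0 && strcmp(e->d_name, "..") != 0;
}

/* Snapshot the directory first, then delete, so readdir() never runs
   against a directory that is being modified underneath it. */
int delete_all(const char *logdir)
{
    struct dirent **names;
    char path[FILENAME_MAX];
    int i, n;

    n = scandir(logdir, &names, skip_dots, alphasort);
    if (n < 0)
        return -1;                     /* could not read the directory */

    for (i = 0; i < n; i++) {
        snprintf(path, sizeof(path), "%s/%s", logdir, names[i]->d_name);
        unlink(path);                  /* snapshot is complete, safe to delete */
        free(names[i]);                /* scandir() allocated each entry */
    }
    free(names);
    return 0;
}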
0
 
LVL 23

Accepted Solution

by:
Mysidia earned 250 total points
ID: 11812186
Perhaps defer the deletions until after you've done the curl_easy_cleanup(hCURL);

(Sigh)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>

static char** list_files;
static int list_nfiles = 0;

int list_size() { return list_nfiles; }
char* list_top() {
   if (list_nfiles <= 0) { return NULL; }
   return list_files[list_nfiles - 1];
}

/* Append a copy of name to the list, growing the array one slot at a time. */
void list_add_file(char* name) {
     if (list_nfiles == 0) {
         list_files = (char **)malloc(sizeof(char *) * 2);
         if (list_files == NULL) { abort(); } /* Out of memory */
     } else {
         char **temp = (char **)realloc(list_files, sizeof(char *) * (list_nfiles + 1));

         if (temp == NULL) { abort(); } /* Unable to resize */
         list_files = temp;
     }
     if ( (list_files[list_nfiles++] = strdup(name)) == NULL ) { abort(); }
}

void list_pop() {
     char **temp;
     if (list_nfiles <= 0) { abort(); }

     free(list_files[--list_nfiles]);

     if (list_nfiles > 0) {
         temp = (char **)realloc(list_files, sizeof(char *) * (list_nfiles + 1));

         if (temp == NULL) { abort(); } /* Unable to resize */
         list_files = temp;
     } else {
         free(list_files);
         list_files = NULL;
     }
}


....
....
     DIR *d;
     char* filename;
     struct dirent *pdirent;
     char absPath[FILENAME_MAX] = "";

     d = opendir(logdir);
     while ( (pdirent = readdir(d)) != NULL ){
          if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
          sprintf(absPath, "%s/%s", logdir, pdirent->d_name);
          list_add_file(absPath);
     }
     closedir(d);

.... other stuff ...

... curl cleanup ...

    while((filename = list_top())) {
          unlink(filename);
          list_pop();
    }
0
 

Author Comment

by:dignified
ID: 11818626
I actually got things to work. It turns out that for libcurl you need to set post = last = NULL for EACH iteration of the loop. After I did this, everything worked. Otherwise I would have had to queue things up, I suppose. Fortunately, with my implementation, I don't need to worry about the files being tampered with. And even if the files are open when I delete them, unlink() should still unlink them and remove them for good once all file handles are closed.
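
For the record, the working loop ends up looking roughly like this (a reconstruction using the same variables as my earlier snippet, not the exact code):

      DIR *d;
      struct dirent *pdirent;
      char absPath[FILENAME_MAX];
      int retVal;
      struct curl_httppost *post;
      struct curl_httppost *last;
      CURL *hCURL = curl_easy_init();

      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            if(!strcmp(filename, pdirent->d_name)) continue;
            snprintf(absPath, sizeof(absPath), "%s%s", logdir, pdirent->d_name);

            post = NULL;            // reset the form handles for EACH iteration
            last = NULL;
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "user", CURLFORM_COPYCONTENTS, login, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "pass", CURLFORM_COPYCONTENTS, pword, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "tracefile", CURLFORM_FILE, absPath, CURLFORM_END);
            curl_easy_setopt(hCURL, CURLOPT_URL, "http://stuff.com/test.php");
            curl_easy_setopt(hCURL, CURLOPT_HTTPPOST, post);
            retVal = curl_easy_perform(hCURL);
            curl_formfree(post);    // free this iteration's form

            if( retVal == 0 )
                  unlink(absPath);  // delete only after a successful upload
      }
      closedir(d);
      curl_easy_cleanup(hCURL);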

Thanks a lot for the Code Mysidia.
0
