  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 251

Linux, deleting files in a directory in C

I am trying to delete all files in a directory in C. I have code that looks like this:

      DIR *d;
      struct dirent *pdirent;
      char absPath[100] = "";
      char command[103] = "rm ./";

      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            strcpy(absPath, logdir);
            strcat(absPath, pdirent->d_name);

            strcat(command, absPath);
            printf("%s\n", command);
            system(command);      // shell out to remove the file
            command[5] = '\0';    // reset command back to "rm ./"
      }


This code lists files just fine, but when I add the delete step via the system() call, the function hangs and only the first file gets deleted. I'm basically just passing 'rm filename' to system(). What is the proper way to delete all files in a directory?


-ryan
Asked by dignified (1 solution)
brettmjohnson commented:
If you're going to use system(), why not just do
system("rm -rf logdir/*");

If you are going to implement it in C, use unlink() rather than system().

 
dignified (author) commented:
Can't do rm -rf because I transfer the files using libcurl and I only delete upon successful transfer. unlink... I'll look into that call, thanks!
 
dignified (author) commented:
Same thing happens when I use unlink. The first file gets transferred and deleted, but then it hangs...
 
Mysidia commented:
I wonder if the process of erasing causes the readdir() from the next position to stop short. Perhaps try rewinding:

     DIR *d;
     struct dirent *pdirent;
     char absPath[FILENAME_MAX] = "";

     d = opendir(logdir);
     while ( (pdirent = readdir(d)) != NULL ){
          if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
          sprintf(absPath, "%s/%s", logdir, pdirent->d_name);
          if (unlink(absPath) == 0) {
              rewinddir(d);
              continue;
          }
     }


Or queue the contents of the directory and unlink() each one after you have already read the whole thing.
 
dignified (author) commented:
It doesn't hang, but this doesn't do the trick either. I think I'll have to make a mini-queue, pretty lame.
 
dignified (author) commented:
Here is the code I'm actually using. I have a queue of length 1, so on the next iteration of the loop I delete the old saved file we just read.


      DIR *d;
      struct dirent *pdirent;
      char absPath[100] = "";
      char newPath[103] = "./";
      int retVal;
      char firstTime = 1;

      // now curl it to a remote server
      struct curl_httppost* post = NULL;
      struct curl_httppost* last = NULL;
      CURL *hCURL = curl_easy_init();

      d = opendir(logdir);
      while ( (pdirent = readdir(d)) != NULL ){
            if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
            if(!strcmp(filename, pdirent->d_name)) continue;
            //printf("%s\n", pdirent->d_name);
            strcpy(absPath, logdir);
            strcat(absPath, pdirent->d_name);

            curl_easy_setopt(hCURL, CURLOPT_URL, "http://stuff.com/test.php");
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "user", CURLFORM_COPYCONTENTS, login, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "pass", CURLFORM_COPYCONTENTS, pword, CURLFORM_END);
            curl_formadd(&post, &last, CURLFORM_COPYNAME, "tracefile", CURLFORM_FILE, absPath, CURLFORM_END);
            curl_easy_setopt(hCURL, CURLOPT_HTTPPOST, post);
            retVal = curl_easy_perform(hCURL);

            if( retVal == 0 )
            {
                  if(!firstTime)
                  {
                        unlink(newPath);
                  }else{
                        firstTime = 0;
                  }
                  newPath[2] = '\0';        // reset newPath back to "./"
                  strcat(newPath, absPath); // remember this file so it can be deleted next pass
            }
      }
      unlink(newPath); // delete the last transferred file

      curl_easy_cleanup(hCURL);
 
dignified (author) commented:
Jeez, this doesn't seem to work either. It deletes 2 files and then moves on. I don't know if it is just deleting n-1 files or not. It doesn't hang, though.
 
dignified (author) commented:
Actually, it seems that libcurl is what is causing the problems. It doesn't like me uploading and then deleting. I have gotten my code to either transfer all files or delete all files, but not both.
 
NovaDenizen commented:
You can't rely on the directory entries sitting still when you are creating or deleting files.  readdir() is only guaranteed to work the way you expect when nobody is screwing with the directory.  The only reliable way to do what you want to do is to read all the filenames in advance of performing file creations or deletions in that directory.
 
Mysidia commented:
Perhaps defer the deletions until after you've done the curl_easy_cleanup(hCURL);

(Sigh)

static char** list_files;
static int list_nfiles = 0;

int list_size() { return list_nfiles; }

char* list_top() {
    if (list_nfiles <= 0) { return NULL; }
    return list_files[list_nfiles - 1];
}

void list_add_file(char* name) {
    if (list_nfiles == 0)
        list_files = (char **)malloc(sizeof(char *) * 2);
    else {
        char **temp = (char **)realloc(list_files, sizeof(char *) * (list_nfiles + 1));

        if (temp == NULL) { abort(); } /* Unable to resize */
        list_files = temp;
    }
    if ( (list_files[list_nfiles++] = strdup(name)) == NULL ) { abort(); }
}

void list_pop() {
    char **temp;
    if (list_nfiles <= 0) { abort(); }

    free(list_files[--list_nfiles]);

    if (list_nfiles > 0) {
        temp = (char **)realloc(list_files, sizeof(char *) * (list_nfiles + 1));

        if (temp == NULL) { abort(); } /* Unable to resize */
        list_files = temp;
    } else {
        free(list_files);
        list_files = NULL;
    }
}


....
....
    DIR *d;
    char* filename;
    struct dirent *pdirent;
    char absPath[FILENAME_MAX] = "";

    d = opendir(logdir);
    while ( (pdirent = readdir(d)) != NULL ){
        if(!strcmp(pdirent->d_name, ".") || !strcmp(pdirent->d_name, "..")) continue;
        sprintf(absPath, "%s/%s", logdir, pdirent->d_name);
        list_add_file(absPath);
    }

.... other stuff ...

... curl cleanup ...

    while((filename = list_top())) {
        unlink(filename);
        list_pop();
    }
 
dignified (author) commented:
I actually got things to work. Turns out that for libcurl you need to set post = last = NULL for EACH iteration of the loop. After I did this, everything worked. Otherwise I would have had to queue things up, I suppose. Fortunately, with my implementation, I don't need to worry about files being tampered with. But even if the files are open when I delete them, unlink should still unlink them and then delete them for good once all file handles are closed.

Thanks a lot for the code, Mysidia.