Solved

Restore deleted file which is still open.

Posted on 2009-03-31
4
709 Views
Last Modified: 2013-12-06
We deleted (rm) a file that was (is) open by a 24/7 process.
We have a backup of this file and want to copy it over to the original location.

Scenario:
1) Process P has file F open.
2) cp F /tmp/F
3) rm F

We cannot stop process "P", but would like to restore the "F" file to its original location.
What would be the consequences for the "P" process? Would it fail? Would it carry on as if nothing had happened?

0
Comment
Question by:MikeOM_DBA
4 Comments
 
LVL 40

Expert Comment

by:omarfarid
ID: 24032871
If process P has the file open, it holds a file descriptor that points to the now-unlinked inode. Its writes through that descriptor keep succeeding, but they go to the orphaned inode, not to any file you restore under the old name. Whether P ever picks up the restored file depends on its code, i.e. whether it closes and reopens the file.

You can avoid accidental deletion of a file by giving it a second hard link with the ln command.
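A minimal sketch of the ln suggestion (the paths here are illustrative):

```shell
# Give the file a second hard link; both names point to the same inode.
tmpdir=$(mktemp -d)
echo "important data" > "$tmpdir/F"
ln "$tmpdir/F" "$tmpdir/F.safety"   # hard link, not a copy

# rm removes only one name; the data survives under the other name.
rm "$tmpdir/F"
cat "$tmpdir/F.safety"              # prints: important data
rm -r "$tmpdir"
```

Note this only works within one file system; a hard link cannot cross file system boundaries.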
0
 
LVL 29

Author Comment

by:MikeOM_DBA
ID: 24039203

Thanks for your reply.
I have to admit I have limited knowledge of Unix.

As I understand it, when a process is executing and has a file open, if the file is removed (rm), Unix will not actually release the space, but will allow the process to continue using the (now deleted) file.

That is the situation here: the file was removed and the process still believes it is available. Now we need to place the backup in the original location so the process can find it if it attempts to close/reopen this file.
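That understanding can be checked from a shell, which can hold a file descriptor open just as process P does (a minimal sketch; the real holder is your 24/7 process):

```shell
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/F"

exec 3< "$tmpdir/F"   # hold fd 3 open on F, as process P holds its fd
rm "$tmpdir/F"        # the name is gone...

read -r line <&3      # ...but the open descriptor still reads the old data
echo "$line"          # prints: hello

exec 3<&-             # only when the last fd closes is the space released
rm -r "$tmpdir"
```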

Would this work?


0
 
LVL 40

Accepted Solution

by:
omarfarid earned 500 total points
ID: 24042746
From my understanding, the file space will not be released until that process is killed/stopped (or closes the descriptor).

When a process opens a file it gets a file descriptor, which points to a vnode (virtual node) in memory, which in turn points to the inode on disk.

A dir is nothing but a special file containing a table of entries; each entry is a file name and an inode number. The inode is a structure that holds the information about the file on disk. A file is deleted from a dir by removing its entry from that table, and its data is removed from the file system when the number of references to the inode drops to 0 (and no process still has it open).

When you delete a file from a dir you are simply decrementing its reference (link) count. The file could have other references in the same dir or some other dir within the same file system; such a reference is called a hard link.
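That reference count is visible as the file's link count (shown here with GNU `stat`; `ls -l` also prints it in the second column):

```shell
tmpdir=$(mktemp -d)
echo "data" > "$tmpdir/F"
stat -c %h "$tmpdir/F"        # 1: one directory entry references the inode

ln "$tmpdir/F" "$tmpdir/G"    # add a hard link
stat -c %h "$tmpdir/F"        # 2: two names, one inode

rm "$tmpdir/F"                # rm just decrements the count
stat -c %h "$tmpdir/G"        # 1: the data is still on disk, reachable via G
rm -r "$tmpdir"
```

(`stat -c %h` is the GNU coreutils form; on BSD/macOS the equivalent is `stat -f %l`.)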

Now, when you copy / restore / recreate a file with the same name in the same dir, it gets a different inode number and hence has nothing to do with the previous file of that name.

Placing the file in the same location with the same name will, in your case, help only if the process closes and reopens the file. Any data written between the removal and the reopen goes to the old orphaned inode, not to the restored file.
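You can see that the restored file is a different file with `ls -i`, holding the old inode open the way process P does (a sketch with illustrative paths):

```shell
tmpdir=$(mktemp -d)
echo "old contents" > "$tmpdir/F"
exec 3< "$tmpdir/F"                         # stands in for process P's open fd
old_inode=$(ls -i "$tmpdir/F" | awk '{print $1}')

rm "$tmpdir/F"
echo "restored from backup" > "$tmpdir/F"   # the restore step
new_inode=$(ls -i "$tmpdir/F" | awk '{print $1}')

# Different inode numbers: the restored file is unrelated to the one P
# still has open, and P only sees it after a close/reopen.
[ "$old_inode" != "$new_inode" ] && echo "different inodes"

read -r line <&3
echo "$line"        # prints: old contents -- the old fd still reads the old file
exec 3<&-
rm -r "$tmpdir"
```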

I hope that clarified things rather than confusing you further :)
0
 
LVL 29

Author Comment

by:MikeOM_DBA
ID: 24049864
It's clear, and it is what I suspected.
It seems the only way to fix it is to have the process close and reopen the file.
Thanks.
 
0
