Adam Morton

asked on

How to repair a corrupt FPT file

I have a corrupt FPT file that I cannot repair. The DBF associated with it has 52 records. I've pinpointed where the corrupted fields are: I have a memo field named mplacement, and there are exactly 4 so-called corrupted records in it. The corruption starts at record 18; records 18–21 seem to be the corrupted ones, while records 1–17 display correctly. When I export the information from this DBF into an XLS, it only goes to record 17 and then quits, leaving the rest of the records out. When I open the FPT with a hex editor, the data displayed ends with record 17, as if there is no more data.

Here's the tricky part. When I open the application, go into 4 records (other than the 4 corrupted ones), and enter data into that mplacement memo field, it magically fixes the FPT and I no longer get "Error 41, Memo file is missing or invalid." When I open the FPT back up in a hex editor, it shows me records 1–17 plus my newly added data for those 4 entries at the bottom. I still can't seem to recover the memo-field data for records 18–52. I've got backups (from the time of corruption) of the DBF and FPT, so I can try anything. Any help is much appreciated.
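Before writing records 18–52 off, it may be worth walking the FPT's block structure directly rather than eyeballing the hex dump. The sketch below is in Python rather than VFP and assumes the documented FoxPro memo-file layout (a 512-byte header with the next-free-block number in bytes 0–3 and the block size in bytes 6–7, then blocks that each begin with a 4-byte type, where 1 means text, and a 4-byte length, all big-endian); any memo whose 8-byte block header survived can be pulled out even if the DBF no longer reaches it.

```python
import struct

def scan_fpt(path):
    """Walk the blocks of a FoxPro memo (.fpt) file and collect every
    block that still looks like a valid text memo.  Returns a list of
    (block_number, raw_bytes) pairs."""
    memos = []
    with open(path, "rb") as f:
        header = f.read(512)
        next_free = struct.unpack(">I", header[0:4])[0]   # first unused block
        block_size = struct.unpack(">H", header[6:8])[0] or 64
        block = 512 // block_size                         # first block after the header
        while block < next_free:
            f.seek(block * block_size)
            head = f.read(8)
            if len(head) < 8:
                break         # file is shorter than the header claims: truncated here
            btype, length = struct.unpack(">II", head)
            if btype == 1 and 0 < length < 10_000_000:    # plausible text block
                memos.append((block, f.read(length)))
                used = 8 + length
            else:
                used = block_size                         # garbage: skip one block
            # advance to the next block boundary
            block += (used + block_size - 1) // block_size
    return memos
```

If the scan stops far short of the header's next-free-block value, the file really is truncated there; if it keeps finding type-1 blocks past record 17, the data may merely be unreachable from the DBF's memo pointers.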
Adam Morton

ASKER

I'm taking records 18–52 as a loss. Since they do not show up in a hex editor, they're gone. I have another question for you guys.

Is there a way I can write the data of the FPT to another format (TXT, XLS, CSV, etc.) programmatically? Maybe include a trigger somehow, so that it writes the data out to another format every time the FPT is updated? That way, if the FPT gets corrupted, I can restore the information from the most recent backed-up ghost file, no data will be lost, and a restore from backup won't be necessary.
Olaf Doschke
In a DBC you can define Insert/Update/Delete triggers that run beforehand, where you can already access the new values, so this could be a solution. But it's a bit of overkill to store memo texts twice out of fear of file corruption; you might instead migrate the data to a more robust server database. I'm actually working on a kind of replication via triggers that would store everything twice, but then every write takes twice as long. I prefer creating a log table without indexes and having a secondary process read from the log into replication DBFs, but that's going too far here. This sample code writes a memo field to a text file and can be put into a table's Insert and Update triggers as logmemo("tablename","memofieldname",tablename.ID).

tablename.ID should be the primary key value; it is used in the file name to identify which record the memo .txt backup file belongs to. You might change the Strtofile() call to append to a single file instead of generating a new one for every insert/update.
#Define ccBackupPath "\\server\data\backup\"

* Back up the current contents of one memo field to a uniquely
* named text file. Call it from a table's Insert/Update trigger,
* e.g. logmemo("tablename","memofieldname",tablename.ID).
Procedure logmemo()
   Lparameters tcTable, tcMemo, tuID

   Local lcBackupFilename
   * File name combines primary key, table, field, date and a
   * unique Sys(2015) suffix, so each call creates a new file.
   lcBackupFilename = ccBackupPath+Transform(tuID)+" "+tcTable+;
       " "+tcMemo+DTOS(Datetime())+Sys(2015)+".txt"

   * Write the memo text out; .f. = create/overwrite, not append.
   Strtofile(Eval(tcTable+"."+tcMemo), lcBackupFilename, .f.)
Endproc


To repair files you can check out CMRepair or FoxFix (http://www.xitech-europe.co.uk/foxfix.php).

You can create triggers on a VFP database: use MODIFY PROCEDURE to create the stored procedure, and in MODIFY STRUCTURE you can set the triggers.
Olaf,

Thank you for the code snippet. We have tested this code and it appears to only work for ONE memo field in ONE record as is. Is there a way to dump the entire contents of the memo file to another format (text, csv, xml, etc...)?

Thanks again!
ASKER CERTIFIED SOLUTION
Olaf Doschke
I provided code to replicate a memo field. It can be applied to more than one field by calling it several times, once for each memo whose content you want saved separately.

And to answer the final question: "Is there a way to dump the entire contents of the memo file to another format (text, csv, xml, etc...)?" Sure, you can scan through all records and save the data to any other file. But what good would that be if the FPT file gets very large, say hundreds of megabytes, and you do that with every insert or update?
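If a one-time export is acceptable (as opposed to a per-update trigger), the whole memo file can be dumped in a single pass. As a hypothetical illustration, again in Python rather than VFP and assuming the same documented big-endian FPT layout, this writes every readable text block to one CSV row; the block number in the first column is the value a FoxPro DBF stores in its memo-field pointer, which helps match rows back to records.

```python
import csv
import struct

def dump_fpt_memos(fpt_path, csv_path):
    """One-shot export: walk every block of a .fpt file and write each
    readable text memo to a CSV row of (block number, text)."""
    with open(fpt_path, "rb") as f, open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["block", "memo"])
        header = f.read(512)
        next_free = struct.unpack(">I", header[0:4])[0]   # first unused block
        block_size = struct.unpack(">H", header[6:8])[0] or 64
        block = 512 // block_size                         # first block after the header
        while block < next_free:
            f.seek(block * block_size)
            head = f.read(8)
            if len(head) < 8:
                break                                     # truncated file
            btype, length = struct.unpack(">II", head)
            if btype == 1 and 0 < length < 10_000_000:    # plausible text block
                text = f.read(length).decode("cp1252", errors="replace")
                writer.writerow([block, text])
                block += (8 + length + block_size - 1) // block_size
            else:
                block += 1                                # garbage: skip one block
    return csv_path
```

The cp1252 decoding is an assumption about the table's code page; swap in whatever your data actually uses.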

The questions were all answered as far as they could be. I don't think HBugProject has a sufficient reason for a refund of points. The general problem of file corruption in file-based databases is not only a VFP problem. If he often experiences corrupt FPT files and lost data, then he's quite alone with that; such reports are rather rare. In my own 10 years of VFP experience I have never lost FPT contents. I have had corrupted files, FPTs included, but they didn't lose records, even in a hex editor. There's of course nothing you can do about lost data other than having a backup.

Bye, Olaf.
Olaf's proposed solution drastically slows our software down, since it makes a call for EACH memo field in the Insert and Update triggers, and we have a lot of memo fields on various forms.
I am aware that network performance contributes a lot to whether data gets corrupted. Our software is distributed statewide to hundreds of different locations, so it is impossible for me to know which sites have slower networks. Since we can't keep the data from getting corrupted on a poor network, we have introduced automatic backups of the database to help reduce data loss.
I hope the Admin of this forum considers this a valid response.
Hello HbugProject,

okay, accepted. You could have said so earlier.

This is just a demo, not optimized for performance; you could easily modify it to save many memos at once. Of course, as I already said, storing data twice does cost performance, quite drastically. There would be many ways to improve it, such as storing the backup locally and letting a parallel process copy it to the server.

You already made your choice. Nevertheless, you could spend these 500 points. You're expecting a bit much if you would only accept a full-blown solution; this is just a few forum posts, not a job.

Bye, Olaf.
The provided answer will slow the program down drastically if you have a lot of memo fields. For just a couple of memo fields, this would work great.