fmoore0001
asked on
FoxPro 2 GB DBF limit
Guys, is there a way to overcome the 2 GB limitation on FoxPro DBF files? I have a client adding 100 MB a month to one file and could reach the file size limit in about a year. One way is to simply open another database, but he really likes having all the records in one place.
Frank
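The "about a year" estimate can be sanity-checked with a quick projection. This is an illustrative sketch in Python, not VFP code; the current file size is an assumed figure:

```python
# Rough projection: months until a growing DBF hits FoxPro's 2 GB file-size ceiling.
LIMIT_BYTES = 2 * 1024**3          # 2 GB hard limit per DBF/FPT/CDX file
GROWTH_PER_MONTH = 100 * 1024**2   # ~100 MB of new data per month

def months_until_limit(current_size_bytes: int) -> int:
    """Whole months of headroom left before the file crosses 2 GB."""
    remaining = LIMIT_BYTES - current_size_bytes
    return max(remaining // GROWTH_PER_MONTH, 0)

# A table already around 0.85 GB has roughly a year left at this growth rate.
print(months_until_limit(int(0.85 * 1024**3)))   # -> 11
```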
ASKER CERTIFIED SOLUTION
Keep in mind that the 2GB limitation relates to the size of the DBF, FPT & CDX files themselves, not the record count.
Admittedly I have not taken the time to try to understand the code above, but the text references all seem to be about 'records', not file size.
"I have a client adding 100 MB a month to one file and could reach the file size limit in about a year."
With that in mind, you can either do as I suggested above and put the data into non-VFP database table(s) (the preferred solution for best longevity), or you can distribute the data across multiple related VFP tables, each of which would be smaller in size.
Good Luck
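The second option above, distributing data across multiple related VFP tables, can be sketched as simple routing logic. This is a Python stand-in (the routing idea is language-neutral, not VFP syntax), and table names like SALES2023 are hypothetical:

```python
# Horizontal partitioning sketch: one table per year, so no single DBF
# ever grows past the 2 GB ceiling.
from collections import defaultdict
from datetime import date

def table_for(record_date: date, base: str = "SALES") -> str:
    """Pick the partition a record belongs to, keyed by year."""
    return f"{base}{record_date.year}"

# Simulated insert routing: each partition stays small on its own.
partitions = defaultdict(list)
for d, amount in [(date(2023, 3, 1), 100), (date(2024, 7, 9), 250)]:
    partitions[table_for(d)].append(amount)

print(sorted(partitions))   # -> ['SALES2023', 'SALES2024']
```

Queries that span partitions would then UNION the per-year tables, which is the trade-off against keeping everything in one file.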
"One way is to simply open another database, but he really likes having all the records in one place."
As long as you are talking about VFP, the 'database' is very seldom the issue.
It is the 'data tables' (the DBF, CDX, & FPT files) that most often come up against the 2GB file limitation issue.
Yes, if your VFP DBC, DCX, DCT files (the 'database' files) were to exceed 2GB you would indeed have problems, but that is seldom the issue.
If "he" wants to be the one to design the data architecture, then, by all means, let him do it and let him experience the consequences. But if 'he' wants things to work, then get him to be open to advice from others.
NOTE - putting the data into one or more SQL Server data tables would not only keep "all the records in one place", it would also add security to the data via SQL Server security.
Good Luck
wOOdy's DBF resizer addresses the problem of a DBF reaching the 2GB limit by trimming it back so all records fall below that limit; it does not extend DBFs to be capable of storing more than 2GB.
"takes care about the absolute maximum of possible records" means it cuts off everything in the dbf above that limit, it does not extend the maximum. It truncates all records above 2GB. And it's also usable to fix a header corruption in regard to reccount.
You commented out that line: *nMaxRecNew = MIN(nMaxRec, nMaxRecNew) && just to be save from idiots
Now the program is no longer safe from idiots. Sorry, no offense intended, but if you extend a DBF to contain more records than the nMaxRecNew computed a few lines before that, you're making the DBF file larger than 2GB and also inaccessible and no longer usable by FoxPro.
Bye, Olaf.
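The cap Olaf describes follows directly from the DBF layout: file size is header plus record count times record length, so there is a hard maximum record count for any given structure. A minimal sketch of that arithmetic (an assumed formula mirroring what a resizer must compute, not wOOdy's actual code; the header and record sizes are example values):

```python
# Largest record count that keeps header + reccount * recsize under the
# 2 GB file limit FoxPro can address.
LIMIT = 2**31 - 1   # a DBF at or beyond 2 GB becomes inaccessible to FoxPro

def max_records(header_size: int, record_size: int) -> int:
    """Most records that still fit below the limit; writing more breaks the file."""
    return (LIMIT - header_size) // record_size

# e.g. a 296-byte header and 128-byte records:
print(max_records(296, 128))   # -> 16777213
```

Capping the new record count with MIN(nMaxRec, nMaxRecNew), as in the commented-out line, is exactly what keeps the rewritten file on the safe side of this bound.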
"takes care about the absolute maximum of possible records" means it cuts off everything in the dbf above that limit, it does not extend the maximum. It truncates all records above 2GB. And it's also usable to fix a header corruption in regard to reccount.
You commented that line: *nMaxRecNew = MIN(nMaxRec, nMaxRecNew) && just to be save from idiots
Now the program is not safe from idiots. Sorry, not to offend you, but if you extend a dbf to contain more records than nMaxRecNew computed a few lines before that, you're making the dbf file larger than 2GB but also inaccessable and not usable to foxpro anymore.
Bye, Olaf.
SOLUTION
All the records in one place slow performance down. You have to convince your client what "one place" means for a database stored on a file server: one folder is the entity we are talking about...
Horizontal or vertical split is the only VFP solution today. (Waiting for the next version is still not worth doing.)
BTW, does your client really need the data all together in one file? We are obviously working with (or reporting on) a small subset. If they need e.g. monthly cumulative data, then such data should be precalculated into separate tables for closed months, and the only live-data query should read the current month, which is much faster.
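That precalculation idea can be sketched as follows. This is a Python stand-in for the VFP queries, with made-up sample figures, just to show the shape of the approach:

```python
# Keep precalculated totals for closed months; only scan the small live
# table holding the current month's rows.
from datetime import date

closed_month_totals = {"2024-01": 5000, "2024-02": 6200}        # summary table
live_rows = [(date(2024, 3, 2), 300), (date(2024, 3, 15), 450)]  # current month only

def cumulative_total(current_month: str) -> int:
    """Closed-month summaries plus a scan of just the live table."""
    live = sum(amount for d, amount in live_rows
               if d.strftime("%Y-%m") == current_month)
    return sum(closed_month_totals.values()) + live

print(cumulative_total("2024-03"))   # -> 11950
```

The query never touches historical detail rows, so the live table (and the work per report) stays small no matter how much history accumulates.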
ASKER
Thanks for the help guys. I think we need to consider some sort of an SQL back end.