

Foxpro 2 gig dbf limit

Posted on 2011-02-15
Medium Priority
Last Modified: 2012-06-27
Guys, is there a way to overcome the 2 gig limitation on FoxPro DBF files? I have a client adding 100 meg. a month to one file and could reach the file size limit in about a year. One way is to simply open another database, but he really likes having all the records in one place.

Question by:fmoore0001
LVL 12

Accepted Solution

jrbbldr earned 1000 total points
ID: 34902661
You can use another non-VFP backend for the data tables.

If you use something like SQL Server for your tables, they will support more than 2GB size.

The only issue would be that your queries would need to return record sets less than 2GB so that VFP could work on them.

Good Luck

Author Comment

ID: 34902698
Has anyone had any experience with wOOdy's DBF Resizer?
* ResizeDbf
* Copyright 2007 by wOOdy, ProLib Software GmbH, Germany
* This routine allows you to correct the filesize of a DBF
* depending on the desired Recordcount
* It automatically takes care about the absolute maximum of
* possible records


cFileName = GETFILE("DBF")
IF NOT FILE(cFileName)
	RETURN		&& no file picked, or file not found
ENDIF

? "Filename", cFileName
ADIR(aFileInfo, cFileName)
nFileSize = aFileInfo(1,2)
? "Length of file ", nFileSize

nFile = FOPEN(cFileName, 12)		&& open read/write, unbuffered
IF nFile < 1
	MESSAGEBOX("Error at FileOpen")
	RETURN
ENDIF

* Read the file characteristics from the header area
cHeader   = FREAD(nFile, 32)
nRecCount = CTOBIN(SUBSTR(cHeader, 5, 4), "4SR")	&& bytes 5-8:   record count
nHeader   = CTOBIN(SUBSTR(cHeader, 9, 2), "2SR")	&& bytes 9-10:  header length
nRecSize  = CTOBIN(SUBSTR(cHeader, 11, 2), "2SR")	&& bytes 11-12: record length

? "RecordCount", nRecCount
? "HeaderLength", nHeader
? "RecordLength", nRecSize

* Formula for the max number of possible records:
nMaxRec = INT((2^31 - nHeader - 2) / nRecSize)
? "Max.Records   ", nMaxRec

nMaxRecNew = VAL(INPUTBOX("New Reccount", "Input", TRANSFORM(nMaxRec), 0, TRANSFORM(nMaxRec), "0"))
*nMaxRecNew = MIN(nMaxRec, nMaxRecNew)	 && just to be save from idiots
IF nMaxRecNew > 0
	IF MESSAGEBOX(TEXTMERGE("Set file <<cFileName>> to <<nMaxRecNew>> records?"), 4+32+256) = 6
		nNewSize = nHeader + (nRecSize * nMaxRecNew)
		FCHSIZE(nFile, nNewSize)		&& change the filesize
		FSEEK(nFile, 4, 0)			&& reposition to the record count (4 bytes from byte 1 = position 5)
		FWRITE(nFile, BINTOC(nMaxRecNew, "4SR"))	&& poke new reccount in reversed (Intel) byte order
	ENDIF
ENDIF
FCLOSE(nFile)

* Verify the repaired table opens again
TRY
	USE (cFileName)
	? "Reccount after fix:", RECCOUNT()
	USE
CATCH
	? "Error on opening"
ENDTRY

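The header offsets that routine reads are part of the published DBF table file format (record count at bytes 5-8 as a little-endian 32-bit integer, header length at 9-10, record length at 11-12), so they are easy to sanity-check outside VFP. A minimal Python sketch, using invented field values and a synthetic 32-byte header, parsing the same three fields:

```python
import struct

# Bytes 4-7: record count (uint32, little-endian); 8-9: header length;
# 10-11: record length. The values packed below are made up for illustration.
header = bytes([0x30, 111, 2, 15]) + struct.pack("<IHH", 1000, 296, 512) + bytes(20)

rec_count, header_len, rec_size = struct.unpack_from("<IHH", header, 4)
print(rec_count, header_len, rec_size)   # 1000 296 512
```

This mirrors the FREAD/CTOBIN calls in the VFP code: the "SR" flags tell CTOBIN to read the same little-endian byte order that `struct`'s `<` prefix selects here.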

LVL 12

Expert Comment

ID: 34902894
Keep in mind that the 2GB limitation relates to the size of the DBF, FPT & CDX files themselves, not the record count.
Admittedly I have not taken the time to try to understand the code above, but the text references all seem to be about 'records', not file size.

"I have a client adding 100 meg. a month to one file and could reach the file size limit in about a year."

With that in mind, you can either do as I suggested above and put the data into non-VFP database table(s) (the preferred solution for best longevity), or you can distribute the data across multiple related VFP tables, each of which would be smaller in size.

Good Luck

LVL 12

Expert Comment

ID: 34902960
"One way is to simply open another database, but he really likes having all the records in one place."

As long as you are talking about VFP, the 'database' is very seldom the issue.  
It is the 'data tables' (the DBF, CDX, & FPT files) that most often come up against the 2GB file limitation issue.  

Yes, if your VFP DBC, DCT, and DCX files (the 'database' container files) were to exceed 2GB you would indeed have problems, but that is seldom the issue.

If "he" wants to be the one to design the data architecture, then, by all means, let him do it and let him experience the consequences.   But if 'he' wants things to work, then get him to be open to advice from others.

NOTE - putting the data into one or more SQL Server data tables would not only keep "all the records in one place", it would also add security to the data via SQL Server security.

Good Luck
LVL 30

Expert Comment

by:Olaf Doschke
ID: 34904317
wOOdy's DBF resizer addresses the problem of a DBF that has reached the 2GB limit: it cuts the file back so all remaining records fall below that limit. It does not extend DBFs to store more than 2GB.

"Takes care about the absolute maximum of possible records" means it truncates everything in the DBF above that limit; it does not raise the maximum. It is also usable to fix a header corruption with regard to the reccount.
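The cutoff follows directly from the formula in the listing. A quick Python check, with assumed header and record lengths, shows the largest reccount that keeps the file under the 2^31-byte boundary:

```python
N_HEADER = 296    # header length in bytes (assumed value for illustration)
N_REC_SIZE = 512  # record length in bytes (assumed)

# Same formula as the VFP code: records that fit below the 2 GB limit.
n_max_rec = (2**31 - N_HEADER - 2) // N_REC_SIZE
truncated_size = N_HEADER + N_REC_SIZE * n_max_rec

print(n_max_rec, truncated_size)
assert truncated_size + N_REC_SIZE > 2**31 - 2   # one more record would cross the limit
```

Any reccount above `n_max_rec` implies a file larger than 2GB, which is exactly the state the resizer truncates away.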

You commented out that line: *nMaxRecNew = MIN(nMaxRec, nMaxRecNew)       && just to be save from idiots

Now the program is not safe from idiots. Sorry, no offense intended, but if you extend a DBF to contain more records than the nMaxRecNew computed a few lines before that, you make the DBF file larger than 2GB and thereby inaccessible and no longer usable from FoxPro.

Bye, Olaf.
LVL 30

Assisted Solution

by:Olaf Doschke
Olaf Doschke earned 1000 total points
ID: 34904384
Correction: nMaxRec is the maximum possible number; nMaxRecNew is the number the user can enter via the INPUTBOX.

Anyway, FCHSIZE(nFile, nNewSize) will already fail if nNewSize is above 2GB, since the low-level file-handle functions (FOPEN, FSEEK, FREAD, FPUTS, FCLOSE, and FCHSIZE) are themselves limited to 2GB.

It is possible to create and read files larger than 2GB from FoxPro using the Windows API, or via Scripting.FileSystemObject and its .OpenTextFile(), .ReadLine(), etc. methods.

But that does not help with extending DBFs (or FPTs and CDXs) either.

If the problem is with the FPT, consider storing the contents of the memo/blob fields as separate files and pointing to each file from the DBF. If the problem is within the DBF, check whether you are wasting space with too-large character fields. If the problem is within the CDX, try reindexing and/or moving some indexes into secondary CDX or IDX files.

Another possibility is a 1:1 vertical split of the file, e.g. fields 1 to N in one DBF and fields N+1 to M in another. This is of course limited to the point where each DBF holds only one field of the record, and it does not help much when a single FPT field (memo/blob) is the source of the file growth.
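The vertical split is engine-independent, so here is a sketch in Python/SQLite (table and column names are invented): the wide record lives in two tables sharing a key and is reassembled with a 1:1 join.

```python
import sqlite3

# Two narrow tables stand in for two DBFs sharing the same primary key.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE part1 (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE part2 (id INTEGER PRIMARY KEY, notes TEXT)")
con.execute("INSERT INTO part1 VALUES (1, 'Smith')")
con.execute("INSERT INTO part2 VALUES (1, 'a large memo...')")

# Reassemble the full record with a 1:1 join on the shared key.
row = con.execute(
    "SELECT p1.id, p1.name, p2.notes FROM part1 p1 JOIN part2 p2 ON p1.id = p2.id"
).fetchone()
print(row)   # (1, 'Smith', 'a large memo...')
```

In VFP the same idea would be two DBFs with a common key field and a SET RELATION or a join in the query; each file then grows toward 2GB separately.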

Then you can of course add tables (table1, table2, table3, etc.) to keep the record structure, but that makes queries complicated, as you need to UNION the query results from each table and make sure the union doesn't grow larger than 2GB.
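The horizontal variant, again sketched in Python/SQLite with invented names: partitioned tables are stitched back together with UNION ALL, and it is this combined result that must stay under the ceiling when VFP materializes it.

```python
import sqlite3

# Two identically structured tables stand in for table1, table2, ...
con = sqlite3.connect(":memory:")
for t in ("orders1", "orders2"):
    con.execute(f"CREATE TABLE {t} (id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders1 VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
con.executemany("INSERT INTO orders2 VALUES (?, ?)", [(3, 30.0)])

# UNION ALL stitches the partitions back together for a query.
rows = con.execute(
    "SELECT id, amount FROM orders1 UNION ALL "
    "SELECT id, amount FROM orders2 ORDER BY id"
).fetchall()
print(rows)   # [(1, 10.0), (2, 20.0), (3, 30.0)]
```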

What can also save a few bytes is SET BLOCKSIZE TO 0 for the FPT. But it's no miracle either; it won't shrink the FPT below the data you actually store in it.

The argument for keeping everything in one (or a few) files is weak, as you can keep all the separate files in a subfolder; to the parent directory that still looks like a single folder, so it does not bloat the file list. Of course there are many files inside, but what does that matter? It will not use much more disk space, and copying the data to another place will not take noticeably longer. Store the file names relative and the data stays portable to other folders, just like a DBC and its DBFs, which also store their file positions relative to the DBC. In the case of images stored in General fields you even save disk space by externalizing them, despite storing more files and each file-system entry taking a bit of space of its own.

Bye, Olaf.
LVL 43

Expert Comment

ID: 34905255
Keeping all the records in one place slows performance down. You have to convince your client what "one place" means for a database stored on a file server: one folder is the entity we are talking about...

A horizontal or vertical split is the only VFP solution today. (Waiting for the next version is still not worth it.)

BTW, does your client really need all the data together in one file? We are obviously working with (or reporting on) a small subset. If they need e.g. monthly cumulative data, then such data should be precalculated into separate tables for closed months, and the only live-data query should read the current month, which is much faster.
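The precalculation idea can be sketched engine-independently (Python/SQLite here, with invented table names): closed months are rolled up into a small summary table once, and live queries then touch only the current month's rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (month TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("2011-01", 100.0), ("2011-01", 50.0), ("2011-02", 75.0)])

# Roll closed months up into a small summary table once...
con.execute("CREATE TABLE monthly_totals AS "
            "SELECT month, SUM(amount) AS total FROM sales "
            "WHERE month < '2011-02' GROUP BY month")

# ...then live queries only read the current month's rows.
closed = con.execute(
    "SELECT total FROM monthly_totals WHERE month = '2011-01'").fetchone()[0]
current = con.execute(
    "SELECT SUM(amount) FROM sales WHERE month = '2011-02'").fetchone()[0]
print(closed, current)   # 150.0 75.0
```

The closed-month archive tables can also live in their own DBFs, which keeps the live table small regardless of how much history accumulates.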

Author Closing Comment

ID: 35196481
Thanks for the help, guys. I think we need to consider some sort of SQL back end.
