• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 3694

Foxpro 2 gig dbf limit

Guys, is there a way to overcome the 2 GB limitation on FoxPro DBF files? I have a client adding 100 MB a month to one file, who could reach the file size limit in about a year. One way is to simply open another database, but he really likes having all the records in one place.

Frank
Asked by fmoore0001
2 Solutions
 
jrbbldr commented:
You can use another non-VFP backend for the data tables.

If you use something like SQL Server for your tables, they will support more than 2GB size.

The only issue is that your queries would need to return result sets smaller than 2GB so that VFP can work with them.
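
For illustration, a minimal sketch of that approach using SQL pass-through; the connection string, table, and field names are placeholders, not anything from the actual application:

* Sketch: query a SQL Server backend via SQL pass-through.
* Connection string, table, and field names are placeholders.
LOCAL lnHandle
lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=MYSERVER;Database=MyDb;Trusted_Connection=Yes")
IF lnHandle > 0
	* Pull only the subset you need, so the result cursor stays well below 2GB.
	SQLEXEC(lnHandle, "SELECT * FROM orders WHERE orderdate >= '20120101'", "crsOrders")
	SQLDISCONNECT(lnHandle)
ENDIF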

Good Luck
 
fmoore0001 (Author) commented:
Has anyone had experience with this, wOOdy's DBF Resizer?
************************************************************
* ResizeDbf 
************************************************************
* Copyright 2007 by wOOdy, ProLib Software GmbH, Germany
************************************************************
* This routine allows you to correct the filesize of a DBF
* depending on the desired Recordcount 
* It automatically takes care about the absolute maximum of 
* possible records
************************************************************

SET TALK OFF
CLEAR
USE

cFileName = GETFILE("DBF")
IF NOT FILE(cFileName)
	RETURN .f.
ENDIF

? "Filename", cFilename
ADIR(aFileInfo, cFileName)
nFileSize = aFileInfo(1,2)
? "Length of file ", nFileSize


nFile = FOPEN(cFileName,12)
IF nFile < 1
	MESSAGEBOX("Error at FileOpen")
	RETURN .F.
ENDIF

* Read the File characteristics from the header area
cHeader   = FREAD(nFile, 32)
nRecCount = CTOBIN(SUBSTR(cHeader, 5, 4), "4SR")
nHeader   = CTOBIN(SUBSTR(cHeader, 9, 2), "2SR")
nRecSize  = CTOBIN(SUBSTR(cHeader,11, 2), "2SR")   	


? "RecordCount", nRecCount
? "HeaderLength", nHeader
? "RecordLength", nRecSize

* Formula for the max number of possible records:
nMaxRec = Int((2^31-nHeader-2)/(nRecsize))
? "Max.Records   ", nMaxRec

nMaxrecNew = VAL(INPUTBOX("New Reccount","Input",TRANSFORM(nMaxRec),0,TRANSFORM(nMaxRec),"0"))
*nMaxRecNew = MIN(nMaxRec, nMaxRecNew)	 && just to be safe from idiots
IF nMaxRecNew >0 
	IF MESSAGEBOX(TEXTMERGE("Set file <<cFileName>> to <<nMaxRecNew>> records?"),4+32+256) = 6 
		nNewSize = nHeader + (nRecSize * nMaxRecNew) 
 		FCHSIZE(nFile, nNewSize)	&& change the filesize
		FSEEK(nFile, 4,0)			&& reposition to the header (4 bytes move from Byte 1 = position 5)
		FWRITE(nFile, BINTOC(nMaxRecNew, "4SR"))	&& poke the new reccount in reverse Intel format
	ENDIF
ENDIF

FCLOSE(nFile)

TRY 
	USE (cFileName)
	GO BOTTOM
	BROWSE NOWAIT
CATCH
	? "Error on opening"
ENDTRY


 
jrbbldr commented:
Keep in mind that the 2GB limitation relates to the size of the DBF, FPT & CDX files themselves, not the record count.
Admittedly I have not taken the time to try to understand the code above, but the text references all seem to be about 'records', not file size.

I have a client adding 100 MB a month to one file, who could reach the file size limit in about a year.

With that in mind, you can either do as I suggested above and put the data into non-VFP database table(s) (the preferred solution for best longevity), or you can distribute the data across multiple related VFP tables, each of which would be smaller in size.

Good Luck
 
jrbbldr commented:
"One way is to simply open another database, but he really likes having all the records in one place."

As long as you are talking about VFP, the 'database' is very seldom the issue.
It is the 'data tables' (the DBF, CDX, & FPT files) that most often come up against the 2GB file limitation.

Yes, if your VFP DBC, DCT, and DCX files (the 'database' files) were to exceed 2GB you would indeed have problems, but that is seldom the issue.

If "he" wants to be the one to design the data architecture, then, by all means, let him do it and let him experience the consequences.   But if 'he' wants things to work, then get him to be open to advice from others.

NOTE - putting the data into one or more SQL Server data tables would not only keep "all the records in one place", it would also add security to the data via SQL Server security.

Good Luck
 
Olaf Doschke (Software Developer) commented:
wOOdy's DBF resizer addresses the problem of a DBF that has reached the 2GB limit, fixing it back so that all records stay below that limit; it does not extend DBFs to be capable of storing more than 2GB.

"Takes care about the absolute maximum of possible records" means it cuts off everything in the DBF above that limit; it does not extend the maximum. It truncates all records above 2GB. It is also usable to fix header corruption with regard to the reccount.

You commented out that line: *nMaxRecNew = MIN(nMaxRec, nMaxRecNew)  && just to be safe from idiots

Now the program is not safe from idiots. Sorry, no offense intended, but if you extend a DBF to contain more records than nMaxRecNew computed a few lines before that, you make the DBF file larger than 2GB, and also inaccessible and no longer usable by FoxPro.

Bye, Olaf.
 
Olaf Doschke (Software Developer) commented:
Correction: nMaxRec is the maximum possible number; nMaxRecNew is the number the user can enter via INPUTBOX.

Anyway, FCHSIZE(nFile, nNewSize) will already fail if nNewSize is above 2GB, since the low-level file handle functions FOPEN, FSEEK, FREAD, FPUTS, FCLOSE, and FCHSIZE are limited to 2GB too.

It is possible to create and read files larger than 2GB from FoxPro using the Windows API, or using Scripting.FileSystemObject with its .OpenTextFile() method, .ReadLine(), etc.
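
For example, a minimal sketch of reading a large text file line by line with the FileSystemObject; the file path is only a placeholder:

* Sketch: read a file larger than 2GB line by line via the Scripting runtime.
* The path is a placeholder; assumes the Windows Scripting runtime is available.
LOCAL loFSO, loFile, lcLine
loFSO = CREATEOBJECT("Scripting.FileSystemObject")
loFile = loFSO.OpenTextFile("c:\data\big.log", 1)	&& 1 = ForReading
DO WHILE NOT loFile.AtEndOfStream
	lcLine = loFile.ReadLine()
	* ... process lcLine ...
ENDDO
loFile.Close()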

But that also does not help with extending DBFs (or FPT, CDX files).

If the problem is with the FPT, consider storing the contents of the memo/blob fields as separate files and pointing to those files from the DBF. If the problem is within the DBF, try to determine whether you are wasting space with overly large char fields; if the problem is within the CDX, try to reindex and/or put some indexes into secondary CDX or IDX files.
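
A rough sketch of that first idea; the table, field, and folder names are placeholders, not from the actual application:

* Sketch: move memo content out of the FPT into external files.
* mydbf, notes (memo), notes_file (char) and the notes\ folder are placeholders;
* the folder is assumed to exist already.
LOCAL lcFile
USE mydbf
SCAN
	lcFile = "notes\" + TRANSFORM(RECNO()) + ".txt"
	STRTOFILE(notes, lcFile)	&& write the memo content to its own file
	REPLACE notes_file WITH lcFile, notes WITH ""	&& keep only the pointer in the DBF
ENDSCAN
* PACK MEMO afterwards (exclusive use required) to actually shrink the FPT.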

Another possibility is a 1:1 vertical split of the file, e.g. putting fields 1 to n in one DBF and fields n+1 to m in another DBF. This is of course limited to the point where each DBF holds only one field of the record, and it does not help much when a single FPT field (memo/blob) is the source of the file growth.

Then you can of course add tables (table1, table2, table3, etc.) that keep the same record structure, but that makes queries complicated, as you need to UNION the query results from each table and make sure that union does not grow larger than 2GB.
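
A minimal sketch of such a query across horizontally split tables; the table names, field name, and date are only placeholders:

* Sketch: UNION query results from tables with the same structure.
SELECT * FROM table1 WHERE orderdate >= {^2012-01-01} ;
 UNION ALL ;
SELECT * FROM table2 WHERE orderdate >= {^2012-01-01} ;
 INTO CURSOR crsResult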

What can also save a few bytes is SET BLOCKSIZE TO 0 for the FPT. But that is no miracle either; it cannot shrink the FPT below the size of the data you actually store in it.
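
The block size only applies when the memo file is created, so for an existing table you would copy it out after changing the setting. A sketch with placeholder table names:

* Sketch: apply a smaller memo block size to an existing table.
* mydbf and mydbf_small are placeholder names.
SET BLOCKSIZE TO 0
USE mydbf
COPY TO mydbf_small WITH PRODUCTION	&& the new FPT is written with the new block size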

The argument for keeping everything in one (or a few) files is not convincing, as you can keep all the separate files in a subfolder; seen from the parent directory that does not bloat the file list, it is just one folder in there. Of course there are many files inside, but what does that matter? It will not use much more disk space, and copying the data to another place will also not take more time. Store the file names relative and the data is also portable to other folders, just like a DBC and its DBFs, which also store their file positions relative to the DBC. Compared to storing images in General fields you even save disk space, despite the fact that you are storing more files and each entry in the file system also takes a bit of disk space.

Bye, Olaf.
 
pcelba commented:
Keeping all the records in one place slows performance down. You have to explain to your client what "one place" means for a database stored on a file server: one folder is the entity we are talking about...

A horizontal or vertical split is the only VFP solution today. (Waiting for the next version is still not worth doing.)

BTW, does your client really need all the data together in one file? We usually work with (or report on) a small subset. If they need e.g. monthly cumulative data, then such data should be precalculated into separate tables for the closed months, and the only live-data query should read the current month, which is much faster.
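
A minimal sketch of that precalculation; the table and field names (sales, amount, saledate, monthtotals) are placeholders:

* Sketch: precalculate totals for closed months into a summary table.
* The WHERE clause keeps only dates before the first day of the current month.
SELECT YEAR(saledate) AS yr, MONTH(saledate) AS mth, SUM(amount) AS total ;
 FROM sales ;
 WHERE saledate < DATE(YEAR(DATE()), MONTH(DATE()), 1) ;
 GROUP BY 1, 2 ;
 INTO TABLE monthtotals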
 
fmoore0001 (Author) commented:
Thanks for the help guys.  I think we need to consider some sort of an SQL back end.