parlando

asked on

Memcpy > 64K

In a 16-bit Windows app, I have allocated a global memory block of 90000 bytes (an array of 300 elements of 300 bytes each).
The memory is allocated OK, but when I try to memcpy something into element 218, the application crashes with a GP fault.
Probably something to do with the 64K limit. How can I work my way around this?
I use the Borland C++ 4.52 compiler and compile in the large memory model.
rbr

Two suggestions, since I don't know this compiler:
1.) Use the huge memory model.
2.) Many DOS C compilers have an _fmemcpy (the f stands for far). Check your manuals (help files) for this function.
parlando

ASKER

1) Using the huge memory model involves too many changes to my application(s).
2) _fmemcpy didn't help.
I've worked my way around it by chopping the block into smaller pieces, which was a possibility in this case.
The urgency for a solution has gone, but I'm still interested in solutions for future use.
Try creating two huge pointers, then
1) copy the first 64kB
2) add 64kB to both pointers
3) copy the rest

I don't know if it works. The problem is that you need to change the descriptor (the first 16 bits of the pointer); if the add does that, it should be OK.
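The two-step copy described above generalizes to a loop. Huge pointers don't exist in modern compilers, so here is a portable C sketch of just the chunking pattern: never hand memcpy more than 64K - 1 bytes in one call (on the 16-bit target, size_t is a 16-bit unsigned int, and dst/src would be declared huge so the += fixes up the segment part too). The names are my own, not from any Borland header.

```c
#include <stddef.h>
#include <string.h>

/* Largest count a 16-bit size_t memcpy could take in one call. */
#define CHUNK_MAX 65535UL

/* Copy n bytes in pieces of at most CHUNK_MAX bytes each.
   On a 16-bit target, dst and src would be huge pointers so the
   += below adjusts the segment/selector part, not just the offset. */
static void copy_chunked(unsigned char *dst, const unsigned char *src,
                         unsigned long n)
{
    while (n > 0) {
        size_t chunk = (n > CHUNK_MAX) ? (size_t)CHUNK_MAX : (size_t)n;
        memcpy(dst, src, chunk);
        dst += chunk;
        src += chunk;
        n -= chunk;
    }
}
```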
It's not that I need to copy large blocks of data; the allocation of memory was successful (it's a Windows application).
It's just that I can't copy a block of 300 bytes (a record) past the 64K limit.
It is still the same problem.
In 16 bit, a far pointer has 2 parts.
The 1st 16 bits selects a (max.) 64kB block of memory. It is called a segment in real mode, and a selector in protected mode.
The 2nd 16 bits are an offset.

When you need to access data beyond 64kB, you can't just add something to your pointer, because you'll either get a wrong offset (if you only add to the 2nd 16 bits), or the 1st 16 bits of your pointer will become invalid.

In BC++ 3.1 real mode, you used huge pointers to solve this problem. (In real mode, you just need to do your calculation a bit differently. When you declare a pointer as huge, the compiler uses this method, which you only use when needed, because it is a lot slower).

In protected mode (which I think you use), you can't tell what value you should put into the 1st 16 bits, because it depends on which OS you use. If BC++ knows how, the huge pointer approach should work. If not, there must be some Windows function which can do it for you.
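To make the real-mode arithmetic above concrete: the linear address is segment * 16 + offset, so "huge" arithmetic keeps a pointer valid by folding any offset overflow back into the segment part after each operation. A portable sketch of that normalization, with plain C integers standing in for the two 16-bit halves (the function names are mine):

```c
#include <stdint.h>

/* Real-mode linear address of a segment:offset pair. */
static uint32_t linear(uint16_t seg, uint16_t off)
{
    return (uint32_t)seg * 16u + off;
}

/* Huge-style pointer addition: advance by delta bytes, then
   renormalize so the offset stays in 0..15 and the carry goes
   into the segment. Plain far-pointer arithmetic would add only
   to the 16-bit offset, which wraps at 64K. */
static void huge_add(uint16_t *seg, uint16_t *off, uint32_t delta)
{
    uint32_t lin = linear(*seg, *off) + delta;
    *seg = (uint16_t)(lin >> 4);
    *off = (uint16_t)(lin & 0xFu);
}
```

In protected mode this arithmetic is invalid, because selector values are assigned by the OS rather than computed, which is exactly the point made above.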
Assuming your 300-byte array item is a struct called myStruct:

// Allocate an array of 300 pointers (the array itself is small).
myStruct **myStructArray = (myStruct **) malloc(300 * sizeof(myStruct *));

// Use farmalloc to allocate each individual entry on the far heap.
int x;
for (x = 0; x < 300; x++)
   myStructArray[x] = (myStruct *) farmalloc(sizeof(myStruct));

// Free the 300-byte structures and the list before quitting the program.
for (x = 0; x < 300; x++)
  farfree(myStructArray[x]);
free(myStructArray);


Note: be careful when accessing your structures (which now live on the far heap, outside the 64K data segment). Things like strcpy will not work as expected; you need to use the far versions.
Since you specifically said memcpy, the far version (if you decide to use my suggested approach) is _fmemcpy().
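A sanity check on why it is element 218 that crashes: 218 * 300 = 65400, and 65400 + 300 = 65700 > 65536, so that record is the first one to straddle the 64K boundary; with a plain far pointer the 16-bit offset wraps mid-record. A small portable check (my own helper, just illustrating the arithmetic):

```c
#include <stdbool.h>

#define REC_SIZE 300UL

/* True if record i (REC_SIZE bytes, records packed back to back)
   crosses a 64K boundary, i.e. 16-bit offset arithmetic would
   wrap somewhere inside the record. */
static bool straddles_64k(unsigned long i)
{
    unsigned long start = i * REC_SIZE;
    unsigned long end = start + REC_SIZE - 1;
    return (start >> 16) != (end >> 16);
}
```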
It's just things like strcpy (or memcpy in my case) that cause the problems. The memory allocation itself works fine.
The size parameter of the memcpy() function has type size_t. I bet a size_t on
your machine is defined as a 16 bit unsigned int. BTW, the parameter for malloc()
happens to be a size_t too; are you definitely, positively, absolutely, totally
sure that your malloc succeeded? IOW can your malloc allocate more than 64K bytes?

kind regards,

Jos aka jos@and.nl
Absolutely positively sure! It's a Windows application, and I use GlobalAlloc().
ASKER CERTIFIED SOLUTION
SaGS
Difficult to tell what's wrong without seeing the code, but you'll definitely need huge pointers to do what you intended first.

See this :

h = GlobalAlloc (.., 200000L);
char huge *hp;
char far *fp;
hp = (char huge *) GlobalLock (h);
fp = (char far *) hp;
hp += 64*1024L; // offsets hp by 64K => succeeds (1)
fp += 64*1024L; // offsets fp by 64K => fails (2)

In (1), the compiler generates all the code necessary to move a huge pointer across the segment boundary. Line (2) will fail because the compiler treats fp as a 'normal' far pointer and simply increments its offset.

Also, if I remember well, there is no huge version of the std C lib in Borland. Hence, all C library calls (strcpy, etc.) will see a huge pointer as a far, yielding unexpected (and generally disastrous) results.
wpd is right about the 'huge' memory model libraries, and my statement "No (or almost no) C run-time function accepts this except in 'huge' mem model" is not exact.

With 'huge' model, the 'large' model libraries are used. However, the docs that came with my compiler (this info is implementation-specific) state that the following functions within the large model library support huge pointers: bsearch, fread, fwrite, _fmemccpy, _fmemchr, _fmemcmp, _fmemcpy, _fmemicmp, _fmemmove, _fmemset, _halloc, _hfree, _lfind, _lsearch, _memccpy, memchr, _memicmp, memmove, qsort, memcmp, memcpy, memset. The last 3 of these also have intrinsic versions (the compiler generates inline code instead of calling a function), but the inline versions do not support huge pointers.



Thanks guys,
With this information I'll be able to solve the problem. I've learned a few new things about (huge) memory models and their restrictions.