Neither function initializes lngResult explicitly, so we'll presume the intent was to start with 0.
The first one sums the entries in the array, each multiplied by 2 raised to a decreasing multiple of seven.
The exponents are 3*7, 2*7, 1*7, and 0*7, or 21, 14, 7, and 0.
This also has the effect of shifting the entries in the array left by 21, 14, 7, and 0 bits each.
So the final result is array(0)<<21 OR array(1)<<14 OR array(2)<<7 OR array(3).
In other words, it packs the entries of the array into seven-bit fields. Given that your example has arr(3) requiring 8 bits, I'd be much happier if the exponents were multiples of 8 instead of 7.
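The shift-and-add described above can be sketched in Python (illustrative only, not the original VB; the function name is made up):

```python
def pack_7bit(arr):
    """Pack four bytes into one value in seven-bit fields:
    arr[0]*2**21 + arr[1]*2**14 + arr[2]*2**7 + arr[3]."""
    result = 0
    for b in arr:
        # Shifting left by 7 is the same as multiplying by 2**7.
        # Note: if any entry needs 8 bits (>= 128), the fields overlap,
        # which is exactly the concern about arr(3) raised above.
        result = (result << 7) + b
    return result

print(pack_7bit([1, 2, 3, 4]))  # 2130308, i.e. 1<<21 | 2<<14 | 3<<7 | 4
```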
In any case, the second one has the effect of making a 32-bit item with the array entries stored left to right, 0 to 3, the result being a Long whose value is:
65536*(256*arr(3) + arr(2)) + (256*arr(1) + arr(0)) (recall that on Intel CPUs, bytes in integers are stored least significant byte first).
Note that this can also be considered reversing the entries of the array and putting the result into 32 bits.
Given what the second one does, if the multiplier in the first one had been 8, its result would have been to put the entries of the array into a 32-bit item in the same order as the second.
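That formula can be sketched in Python (illustrative only; arr(0) ends up as the least significant byte):

```python
def bytes_to_long_le(arr):
    """Compose a 32-bit value from four bytes stored least
    significant first, mirroring the formula
    65536*(256*arr[3] + arr[2]) + (256*arr[1] + arr[0])."""
    return 65536 * (256 * arr[3] + arr[2]) + (256 * arr[1] + arr[0])

print(hex(bytes_to_long_le([0x35, 0x01, 0x51, 0xFF])))  # 0xff510135
```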
Sebastian_MaresAuthor Commented:
OK, I understand that the second function will convert a Big Endian value to Long. Assuming we have the following values: 0xFF, 0x51, 0x01, 0x35, the function will convert the array to the value 0xFF510135.
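Interpreting the bytes big-endian, as this comment describes, can be sketched as (a Python sketch, not the original VB; the function name is made up):

```python
def bytes_to_long_be(arr):
    """Build a 32-bit value with arr[0] as the most significant byte."""
    result = 0
    for b in arr:
        result = (result << 8) | b  # shift in each byte, big-endian order
    return result

print(hex(bytes_to_long_be([0xFF, 0x51, 0x01, 0x35])))  # 0xff510135
```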
The main problem is with the first function. I have no idea why the powers of two are multiples of 7.
Sebastian_MaresAuthor Commented:
OK, I think I got it... The first function is used for calculating the size of an ID3v2 tag. The size descriptor is stored inside the ID3v2 tag header. Here is what ID3.org says about the format of the size descriptor:
"The ID3v2 tag size is encoded with four bytes where the most
significant bit (bit 7) is set to zero in every byte, making a total
of 28 bits. The zeroed bits are ignored, so a 257 bytes long tag is
represented as $00 00 02 01."
This should explain why the exponents of 2 are multiples of 7. :-)
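The decoding can be sketched in Python and checked against the ID3.org example (illustrative only; the function name is made up):

```python
def decode_synchsafe(arr):
    """Decode an ID3v2 size descriptor: four bytes, 7 significant
    bits each, most significant byte first (28 bits total)."""
    result = 0
    for b in arr:
        # Bit 7 of every byte is zero per the spec; mask it anyway.
        result = (result << 7) | (b & 0x7F)
    return result

print(decode_synchsafe([0x00, 0x00, 0x02, 0x01]))  # 257
```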
Sebastian_MaresAuthor Commented:
Last thing... The first function could also be:
Public Function Convert1(abytByteArray() As Byte) As Long
    Dim lngCounter As Long
    Dim lngResult As Long

    On Error GoTo ErrorHandler

    ' ShiftLeft is assumed to be a helper that shifts a Long
    ' left by the given number of bits
    For lngCounter = 0 To 3
        lngResult = ShiftLeft(lngResult, 7) + abytByteArray(lngCounter)
    Next lngCounter

    Convert1 = lngResult
    Exit Function

ErrorHandler:
    Convert1 = 0
End Function