defa0009
asked on
Allocation size for a byte[] - best practice
I am trying to figure out the best approach for reading a file into a byte array. I'm not really after code, but more after understanding the pros and cons of doing it one way or another. A guy at work hard-codes the size of his byte[] buffers. Below is his code, then my suggestion to him, then his reply...
--------------------------
A guy at work uses this code to zip up some files:
--------------------------
File zipFile = new File( "outfile.zip" );
ZipOutputStream zipFileOS = new ZipOutputStream(
        new BufferedOutputStream(
                new FileOutputStream( zipFile ) ) );
byte[] data = new byte[8192]; // <--- hard-coded
FileInputStream fileToZipIS = null;
BufferedInputStream origin = null;
ZipEntry entry = null;
int iCount = 0;
String[] asFilesToZip = { "fileToZip-01.pdf", "fileToZip-02.pdf", "fileToZip-03.pdf" };
for ( int ii = 0; ii < asFilesToZip.length; ii++ )
{
    // add new file to the zip file first:
    File fileToZip = new File( asFilesToZip[ii] );
    fileToZipIS = new FileInputStream( fileToZip );
    origin = new BufferedInputStream( fileToZipIS, data.length );
    entry = new ZipEntry( fileToZip.getName() );
    zipFileOS.putNextEntry( entry );
    // write the file contents into the zip file:
    while( ( iCount = origin.read( data, 0, data.length ) ) != -1 )
    {
        zipFileOS.write( data, 0, iCount );
    }
    origin.close();
}
zipFileOS.close();
--------------------------
And my suggestion to him was:
--------------------------
File zipFile = new File( "outfile.zip" );
ZipOutputStream zipFileOS = new ZipOutputStream(
        new BufferedOutputStream(
                new FileOutputStream( zipFile ) ) );
byte[] data = null;
FileInputStream fileToZipIS = null;
BufferedInputStream origin = null;
ZipEntry entry = null;
int iCount = 0;
File fileToZip = null;
String[] asFilesToZip = { "fileToZip-01.pdf", "fileToZip-02.pdf", "fileToZip-03.pdf" };
for ( int ii = 0; ii < asFilesToZip.length; ii++ )
{
    // add new file to the zip file first:
    fileToZip = new File( asFilesToZip[ii] );
    data = new byte[(int) fileToZip.length()]; // <--- not hard-coded
    fileToZipIS = new FileInputStream( fileToZip );
    origin = new BufferedInputStream( fileToZipIS, data.length );
    entry = new ZipEntry( fileToZip.getName() );
    zipFileOS.putNextEntry( entry );
    // write the file contents into the zip file:
    while( ( iCount = origin.read( data, 0, data.length ) ) != -1 )
    {
        zipFileOS.write( data, 0, iCount );
    }
    origin.close();
}
zipFileOS.close();
--------------------------
His reply to me about my suggestion was:
--------------------------
Memory allocation is expensive. There is no need to do the allocation in the loop.
Why would you do it that way? If you offer a convincing reason then I might agree. Without a reason it seems like it's just your preference and a less performant one at that.
--------------------------
So then I said to him:
--------------------------
What happens if your file goes over 8192 bytes?
The size of the byte[] should be the size of the file you're reading in.
So am I missing something here? He seems pretty convinced that my suggestion is invalid, as you can tell by his response. What would happen if one of his files went over the 8192 bytes he allocated for it?
ASKER CERTIFIED SOLUTION
I'd say you're right. Flexibility, not to mention the file size issue > 8192, is important.
Also.. BufferedInputStream default buffer size itself is 8192..
And it is not required to send the size in the BufferedInputStream constructor..
No - it's best to do it in chunks (got you and your mate mixed up) ;)
ASKER
Thanks guys...
Yes, that's right..
>> What happens if you file goes over 8192 bytes?
Nothing goes wrong: the while loop simply runs multiple times, reading up to 8192 bytes per pass until read() returns -1..
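To illustrate the point above, here is a minimal, self-contained sketch (class and method names are my own, not from the code in the question): a fixed-size buffer copies input of any length, because read() is simply called again on each loop pass until it signals end-of-stream with -1.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class ChunkedCopy {
    // Copy a stream using a fixed 8192-byte buffer, like the coworker's code.
    // Works for input of any size: read() fills at most data.length bytes per
    // call, and the loop repeats until read() returns -1 (end of stream).
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] data = new byte[8192];
        int iCount;
        while ((iCount = in.read(data, 0, data.length)) != -1) {
            out.write(data, 0, iCount); // write only the bytes actually read
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] original = new byte[20000]; // deliberately larger than 8192
        for (int i = 0; i < original.length; i++) {
            original[i] = (byte) i;
        }
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(original), sink);
        System.out.println(Arrays.equals(original, sink.toByteArray())); // prints true
    }
}
```

Note that this is also why the file-length allocation in the suggestion isn't necessary for correctness; its real downside is that a very large file would mean a very large (or, past Integer.MAX_VALUE, impossible) allocation, while the fixed chunk keeps memory use constant.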