Why is GDI+ so slow when extracting part of the photo?

I'm using GDI+ to load photos.
If I load the full area of a 5-megapixel JPEG photo (2592×1944) into a 1024×1024 HDC, it takes 250 ms.
But if I load only the center square (1944×1944) and scale it into the 1024×1024 HDC, it takes 500 ms, twice as long!
Shouldn't it be easier to decode just part of the JPEG?
Besides, in this case I have to use the RectF structure, which uses floating-point values to define the rectangle; why does it need to be float?
My CPU: Athlon XP 2500+, OS: Windows XP Pro SP2, compiler: VC++ 2005.
My code is:

      #define HDC_W 1024
      #define HDC_H 1024

      Image img( m_photoPath );
      int oriWidth = img.GetWidth();
      int oriHeight = img.GetHeight();
      
      Status result = Ok;

      if (m_cropNum == 0)
      {
            // no cropping
            int zoomX, zoomY, zoomWidth, zoomHeight;

            if (oriWidth >= oriHeight)
            {
                  zoomWidth = HDC_W;
                  zoomHeight = oriHeight * HDC_H / oriWidth;

                  zoomX = 0;
                  zoomY = (HDC_H - zoomHeight)/2;

            }
            else
            {
                  zoomHeight = HDC_H;
                  zoomWidth = oriWidth * HDC_W / oriHeight;

                  zoomX = (HDC_W - zoomWidth) / 2;
                  zoomY = 0;

            }

            RECT rc;
            ::SetRect(&rc, 0, 0, HDC_W, HDC_H);
            ::FillRect(m_hdc, &rc, (HBRUSH)::GetStockObject(BLACK_BRUSH));

            result = m_pGraphics->DrawImage(&img, zoomX, zoomY, zoomWidth, zoomHeight);
      }
      else
      {
            // crop the center square out and show that only


            int srcX, srcY, srcWidth, srcHeight;

            if (oriWidth >= oriHeight)
            {
                  srcX = (oriWidth - oriHeight) / 2;
                  srcY = 0;
                  srcWidth = srcHeight = oriHeight;
            }
            else
            {
                  srcX = 0;
                  srcY = (oriHeight - oriWidth) / 2;
                  srcWidth = srcHeight = oriWidth;
            }


            RectF dst(0.0f, 0.0f, (float)HDC_W, (float)HDC_H);

            result = m_pGraphics->DrawImage(&img, dst, srcX, srcY, srcWidth, srcHeight, UnitPixel, NULL);
      }
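The letterbox math in the no-crop branch can be checked in isolation. This is a minimal sketch; the `aspectFit` helper name and the standalone setup are mine, not part of the original code:

```cpp
#include <cassert>

#define HDC_W 1024
#define HDC_H 1024

// Integer aspect-fit, mirroring the no-crop branch above: scale the image
// to fit HDC_W x HDC_H and center it along the shorter destination side.
void aspectFit(int oriWidth, int oriHeight,
               int& zoomX, int& zoomY, int& zoomWidth, int& zoomHeight)
{
    if (oriWidth >= oriHeight)
    {
        zoomWidth  = HDC_W;
        zoomHeight = oriHeight * HDC_H / oriWidth;   // integer division truncates
        zoomX = 0;
        zoomY = (HDC_H - zoomHeight) / 2;            // top/bottom letterbox bars
    }
    else
    {
        zoomHeight = HDC_H;
        zoomWidth  = oriWidth * HDC_W / oriHeight;
        zoomX = (HDC_W - zoomWidth) / 2;             // left/right letterbox bars
        zoomY = 0;
    }
}
```

For the 2592×1944 photo this yields a 1024×768 destination rectangle offset 128 pixels from the top, which matches the black bars the FillRect call paints around the image.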
softimage asked:
DanRollins commented (accepted answer):
All work with an image is done internally on bitmaps.  A 5-megapixel image like yours is about 20 MB of data once decoded (four bytes per pixel), which is a significant amount of memory to allocate and fill in one pass.

JPEG encoding is not linear... that is, there is no one-to-one correspondence between a scanline of the displayable bitmap and a particular part of the JPG file.  The decoder cannot just jump to, say, offset 10 MB and start decoding... it pretty much needs to decode the whole stream.

When you decode the whole 2592*1944 image into a 1024*1024 hdc, the system can scale each part of the JPG as it goes.  It does not need to have the entire image in memory at once.

But when accessing a subset, the entire image must first be decoded into RAM (2592*1944*4 = ~20 MB), and only then is the central part scaled and blitted into the destination hdc (1024*1024*4 = 4 MB).  Allocating roughly 24 MB might force the system to use virtual memory and swap out other programs or data to make room.  That operation is time-consuming.
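The memory figures above can be reproduced with a little arithmetic. A minimal sketch (the constant and function names are mine, for illustration only):

```cpp
#include <cassert>

const int BYTES_PER_PIXEL = 4;  // 32-bpp ARGB, as GDI+ typically uses internally

// Bytes needed for a fully decoded bitmap of the given dimensions.
long long bitmapBytes(int width, int height)
{
    return (long long)width * height * BYTES_PER_PIXEL;
}
```

2592*1944*4 comes to 20,155,392 bytes (about 19 MB) for the fully decoded source, and the 1024*1024 destination adds another 4 MB, so the crop path touches roughly 24 MB of pixel data, whereas the scale-as-you-decode path never has to materialize the full-size bitmap.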
Question has a verified solution.