Solved

CPU Question...

Posted on 2006-11-14
280 Views
Last Modified: 2010-04-25
I have been rendering 3D scenes with Maya, and I have noticed a sudden drop in rendering performance. I have two dual-core Xeon 5070 CPUs in the system in question. Before, when rendering, all 4 cores would render, using between 90-100% of processing power; I would check the Task Manager just to verify this. But lately I've been working on a different scene, with a fraction of the geometry and lighting, and it seems to be utilizing only half of the power it should. It stays near 50% usage, using half of each core's power. I checked the Affinity of Maya, and it is set to use all 4 cores (labeled as CPU 0, 1, 2, and 3). I set its Priority to High, but that doesn't help either. Any help would be greatly appreciated!
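For anyone who wants to double-check what Task Manager is showing from a script, here is a minimal sketch using the third-party psutil package. It is illustrative only; the "maya" name filter is an assumption, so substitute whatever the render process is actually called on your machine.

```python
# Sketch: per-core load plus the affinity and priority of any Maya-related
# process. Requires the third-party psutil package (pip install psutil).
import psutil

# Per-core utilization sampled over one second (roughly what Task Manager graphs).
print("per-core %:", psutil.cpu_percent(interval=1, percpu=True))

# The "maya" substring match is just an assumption about the process name.
for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if "maya" in name.lower():
        print(name,
              "affinity:", proc.cpu_affinity(),  # which logical CPUs it may run on
              "priority:", proc.nice())          # Windows priority class / Unix niceness
```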
Question by:MandEMfg
7 Comments
 
LVL 69

Expert Comment

by:Callandor
ID: 17939262
It is the job of the OS and program to decide how many cores to use.  I think 4 cores running at 90-100% is an indication of a tough computation task, whereas 50% utilization means it has processing power to spare.  You mention that the scene has a fraction of the geometry and lighting, so I would expect it to work less to render it.  What's the problem?  Isn't the behavior that you see what you would expect?
 

Author Comment

by:MandEMfg
ID: 17941084
Well, that would be normal behavior for, let's say, running an app; I would expect a less taxing app to be easier for the processors to handle. But in rendering, the processor should be working at 100%, trying to compute and assemble all of the aspects of the render. Basically, because the cores are running at half power, the render is taking longer for the smaller scene than it does for the larger scene. The lighting is set up in the same fashion, with the same exact settings. With less geometry and lighting to calculate, you would think the processors would finish much quicker, rather than take even longer. Does that make sense? It seems to me like the render should be taking much less time, instead of more time, given the differences between the scenes.
 
LVL 69

Assisted Solution

by:Callandor
Callandor earned 250 total points
ID: 17943947
How much use a program makes of multiple cores depends on how the program was written.  You don't have any real control over when and how it spawns threads to take advantage of parallel processing.  Perhaps the code looks for heavy CPU tasks and spreads them across cores at those times, but chooses not to do so for less intensive tasks.  There may be a tradeoff in going with multiple cores when the job is not too difficult.
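A minimal sketch of the kind of heuristic that could produce this (illustrative only, and definitely not Maya's actual scheduling code): a worker pool that never starts more processes than it has pieces of work, so a small job simply leaves cores idle.

```python
# Hypothetical renderer front-end: the worker count is derived from the amount
# of work, so a "small" job never occupies all cores. The names here
# (render_tile, render) are made up for illustration.
import os
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile_id: int) -> int:
    """Stand-in for rendering one tile of the image (pure CPU busy-work)."""
    return sum(i * i for i in range(200_000)) + tile_id

def render(tiles: list) -> list:
    # Heuristic: no more workers than tiles. With 2 tiles on a 4-core box,
    # total CPU tops out around 50% -- the same symptom described above.
    workers = min(len(tiles), os.cpu_count() or 1)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_tile, tiles))

if __name__ == "__main__":
    print(len(render(list(range(2)))))   # small job -> only 2 of 4 cores busy
```

On a 4-core machine, passing 8 or more tiles would keep every core busy; the point is only that the parallelism decision lives in the application, not in the OS.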
 
LVL 70

Accepted Solution

by:garycase
garycase earned 250 total points
ID: 17948344
Just throw away that obsolete 5000 series Xeon machine [I'll be glad to give you my address for disposal :-) ] and get a nice Core-architecture 5100 series Xeon :-)    (actually, from your earlier questions, I thought you were getting a 5100 series machine)

... but in case that's not an option ==> what is the relationship of the SIZE of the scenes you are comparing?  [the older one that used 90+% and the newer one that's only using 50%]   If the new scene can't be processed entirely in RAM, virtual memory paging could account for the lower CPU utilization (paging is a very low-CPU task, so it would significantly reduce the overall CPU utilization).

Another possibility is that the distribution of the scene in RAM is such that you're getting a very low memory cache hit rate ... this would result in a lot of wasted cycles and would also explain why the CPU is not running at a higher utilization.   As an example, a reference to memory that "hits" the L1 cache takes 1 clock cycle; if it "hits" the L2 cache it typically takes 4 to 8 cycles; but a "miss" takes 25 to 100 clock cycles.   So if your scene is laid out in a way that defeats the cache's prefetching and prediction, the CPU could be waiting on a lot of memory access cycles => and thus not processing at maximum efficiency.

This is, of course, just speculation as to why you're seeing the lower utilization % => and, unfortunately, if that IS the reason, I'm not aware of anything you could do to improve it.  [Note, by the way, that the Core-based Xeons have both a larger cache and an improved prediction algorithm ... so if this is the issue, it should be much less of one with a 5100 series Xeon]
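To get a rough feel for the cache effect described above, here is a small, hedged demonstration that the same amount of arithmetic runs much slower when the memory access pattern defeats the cache; the NumPy array is just a stand-in for scene data, and the exact ratio is entirely machine-dependent.

```python
# Illustration of memory-access locality, not a measurement of Maya itself.
# Requires NumPy. The array is sized to be much larger than the L2 cache.
import time
import numpy as np

N = 20_000_000                      # ~160 MB of float64
data = np.random.rand(N)

# Sequential access: cache lines and the hardware prefetcher work in our favor.
t0 = time.perf_counter()
seq_total = data.sum()
t_seq = time.perf_counter() - t0

# Random-order access: the gather touches memory in a cache-hostile order,
# so most references miss the caches.
perm = np.random.permutation(N)
t0 = time.perf_counter()
rand_total = data[perm].sum()
t_rand = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s   random order: {t_rand:.3f}s  "
      f"(~{t_rand / t_seq:.1f}x slower)")
```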
 

Author Comment

by:MandEMfg
ID: 17948784
Just forward me that address, and I'll get this heap of junk out to you asap :p

I'm stuck waiting for my Manager to purchase the new machine. After all of the questions I had answered, and all the work spec'ing out the machine to get, I've sadly hit a standstill. But while I'm waiting, I have no choice but to struggle with this dinosaur for the time being. :p

The older file is 9.78MB, and the new version is 7.40MB. That's strange, considering that the smaller, newer file takes more than double the time to render. In the Task Manager, though, the older file utilizes the RAM more, while the newer one uses about half of what the older one does.
 
LVL 70

Expert Comment

by:garycase
ID: 17949260
Strange ... both of those are small enough that it's certainly not a paging issue.  But it COULD be a cache-hit issue, depending on how the smaller file's data is being accessed.  Unless there are some settings in Maya that can change that behavior, you're probably getting all the performance you can.
 

Author Comment

by:MandEMfg
ID: 17950988
Ouch. That kinda sucks. Oh well, I guess I'll have to deal with it. Thanks for the help!!