CPU Question...

MandEMfg asked:
I have been rendering 3D scenes with Maya, and I've noticed a sudden drop in productivity. I have two dual-core Xeon 5070 CPUs in the system in question. Before, when rendering, all 4 cores would render, using between 90-100% of processing power; I would check the Task Manager just to verify this. But lately I've been working on a different scene, with a fraction of the geometry and lighting, and it seems to be utilizing only half of the power it should: usage stays near 50%, with each core at about half load. I checked the Affinity of Maya, and it is set to use all 4 cores (labeled as CPU 0, 1, 2, and 3). I set its Priority to High, but it still doesn't help. Any help would be greatly appreciated!
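As an aside, per-core load can be logged over time rather than eyeballed in Task Manager. Here is a minimal Python sketch of that idea, assuming the third-party psutil package is installed (psutil has nothing to do with Maya; it is just a generic way to sample what Task Manager shows):

    import psutil  # third-party: pip install psutil

    # Sample per-core utilization once a second for ~30 seconds while the
    # render runs; each cpu_percent() call blocks for the given interval.
    for _ in range(30):
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        avg = sum(per_core) / len(per_core)
        print(" ".join(f"{p:5.1f}%" for p in per_core), f"| avg {avg:5.1f}%")

If every core sits near 50%, rather than two cores pegged at 100% and two idle, that points at threads stalling (for example on memory) rather than at an affinity problem.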
 
Gary Case (Retired) commented:
Just throw away that obsolete 5000 series Xeon machine [I'll be glad to give you my address for disposal :-) ] and get a nice Core-architecture 5100 series Xeon :-)    (actually, from your earlier questions, I thought you were getting a 5100 series machine)

... but in case that's not an option ==> what is the relationship of the SIZE of the scenes you are comparing? [the older one that used 90+% and the newer one that's only using 50%]

If the new scene can't be processed entirely in RAM, virtual memory paging could account for the lower CPU utilization (paging is a very low-CPU-utilization task, so it would significantly reduce the overall figure).

Another possibility is that the distribution of the scene in RAM is such that you're getting a very low cache hit rate. This would result in a lot of wasted cycles and would also explain why the CPU is not running at higher utilization. As an example, a memory reference that "hits" the L1 cache takes 1 clock cycle; if it "hits" the L2 cache it typically takes 4 to 8 cycles; but a "miss" takes 25 to 100 clock cycles. So if your scene is laid out in a way that defeats the cache's prediction algorithm, the CPU could be spending a lot of time waiting on memory accesses => and thus not processing at maximum efficiency.

This is, of course, just speculation as to why you're seeing the lower utilization % => and, unfortunately, if that IS the reason, I'm not aware of anything you could do to improve it. [Note, by the way, that the Core-based Xeons have both a larger cache and an improved prediction algorithm ... so if this is the issue, it should be much less of one with a 5100 series Xeon]
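To put rough numbers on that cache argument, here is a back-of-the-envelope sketch in Python using the cycle counts quoted above (the hit rates are made-up illustrations, not measurements from this machine):

    # Average cost per memory access, given hit rates and ballpark cycle
    # counts (L1 hit = 1 cycle, L2 hit = 6, miss = 60, per the comment above).
    def avg_cycles(l1_rate, l2_rate, l1=1, l2=6, miss=60):
        miss_rate = 1.0 - l1_rate - l2_rate
        return l1_rate * l1 + l2_rate * l2 + miss_rate * miss

    print(avg_cycles(0.95, 0.04))   # good locality:  ~1.8 cycles/access
    print(avg_cycles(0.60, 0.20))   # poor locality: ~13.8 cycles/access

That's roughly an 8x difference per access from locality alone, which is the kind of gap that can leave a core looking half-idle while it waits on memory.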
 
Callandor commented:
It is the job of the OS and program to decide how many cores to use.  I think 4 cores running at 90-100% is an indication of a tough computation task, whereas 50% utilization means it has processing power to spare.  You mention that the scene has a fraction of the geometry and lighting, so I would expect it to work less to render it.  What's the problem?  Isn't the behavior that you see what you would expect?
 
MandEMfg (Author) commented:
Well, that would be normal behavior for, let's say, running an app; I would expect a less taxing app to be easier for the processors to handle. But in rendering, the processor should be working at 100%, trying to compute and assemble all of the aspects of the render. Because the cores are running at half power, the render is actually taking longer for the smaller scene than for the larger one. The lighting is set up in the same fashion, with the exact same settings. With less geometry and lighting to calculate, you would think the processors would finish much quicker, rather than take even longer. Does that make sense? It seems to me the render should be taking much less time, not more, given the differences between the scenes.
 
Callandor commented:
How much use a program makes of multiple cores depends on how the program was written. You don't have any real control over when and how it spawns threads to take advantage of parallel processing. Perhaps the code spawns extra threads when it detects heavy CPU work, but chooses not to for less intensive tasks. There can be a tradeoff in going parallel when the job is not big enough to repay the overhead; the sketch below illustrates the idea.
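As a toy illustration of that tradeoff (a hypothetical workload, not Maya's actual scheduling logic), here is a Python sketch where the cost of spinning up parallel workers can outweigh a small job:

    import time
    from multiprocessing import Pool

    def work(n):
        # Small CPU-bound task standing in for a cheap render bucket.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [10_000] * 4

        t0 = time.perf_counter()
        serial = [work(n) for n in jobs]
        t1 = time.perf_counter()

        with Pool(processes=4) as pool:   # worker startup + IPC overhead
            parallel = pool.map(work, jobs)
        t2 = time.perf_counter()

        print(f"serial:   {t1 - t0:.4f}s")
        print(f"parallel: {t2 - t1:.4f}s")  # often slower for jobs this small

Scale the job size up a few orders of magnitude and the parallel version wins; at this size, the overhead dominates.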
 
MandEMfg (Author) commented:
Just forward me that address, and I'll get this heap of junk out to you asap :p

I'm stuck waiting for my manager to purchase the new machine. After getting all my questions answered and speccing out the machine to buy, I've sadly hit a standstill. But while I'm waiting, I have no choice but to struggle with this dinosaur for the time being. :p

The older file is 9.78MB, and the new version is 7.40MB. That's strange, considering that the smaller, newer file takes more than double the time to render. In the Task Manager, though, the older file utilizes the RAM more, while the newer one uses about half of what the older one does.
 
Gary Case (Retired) commented:
Strange ... both of those are small enough that it's certainly not a paging issue. But it COULD be a cache-hit issue, depending on how the smaller file's data is being accessed (see the sketch below). Unless there are some settings in Maya that can change the behavior, you're probably getting all the performance you can.
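For what it's worth, access pattern alone can swing memory throughput dramatically. Here is a minimal demo of the effect, assuming NumPy is installed (this says nothing about Maya's internals, just the general principle):

    import time
    import numpy as np

    a = np.ones(16_000_000, dtype=np.float64)   # ~128 MB of doubles

    t0 = time.perf_counter()
    a[: len(a) // 16].sum()   # 1M contiguous elements: prefetcher-friendly
    t1 = time.perf_counter()
    # Same 1M elements, but 128 bytes apart: nearly every access lands on
    # a fresh cache line, so the hardware prefetcher helps far less.
    a[::16].sum()
    t2 = time.perf_counter()

    print(f"contiguous: {t1 - t0:.4f}s   strided: {t2 - t1:.4f}s")

Same element count, very different wall time; if Maya happens to walk the smaller scene's data in an unfriendly order, the CPUs can stall the same way.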
 
MandEMfg (Author) commented:
Ouch. That kinda sucks. Oh well, I guess I'll have to deal with it. Thanks for the help!!