
I thought I had a decent understanding of graphics pipelines, but apparently I don't. I found some code that renders particles fairly efficiently, and I've convinced it to do a lot of things it didn't originally do, but I'm missing something. (Also, if anyone has ideas of better zones to post this in, please let me know.)

First, here's the basic code. Since my problem is in the concepts rather than the programming, I'll leave it as pseudocode:

variables:

- fov (field of view)
- cx, cy (x and y values of the center pixel)
- pan (amount to pan the particles)
- tilt (amount to tilt the particles)
- cameraX, cameraY, cameraZ (x, y, and z values of the camera)
- screenWidth (the width of the screen to display to)

```
focalLength = screenWidth / 2 * (cos( fov / 2 ) / sin( fov / 2 ));

matrix.identity();
matrix.appendRotation( pan, Y_AXIS );
matrix.appendRotation( tilt, X_AXIS );
matrix.appendTranslation( cameraX, cameraY, cameraZ );

p00 = matrix.rawData[ 0 ];
p01 = matrix.rawData[ 1 ];
p02 = matrix.rawData[ 2 ];
p10 = matrix.rawData[ 4 ];
p11 = matrix.rawData[ 5 ];
p12 = matrix.rawData[ 6 ];
p20 = matrix.rawData[ 8 ];
p21 = matrix.rawData[ 9 ];
p22 = matrix.rawData[ 10 ];
p30 = matrix.rawData[ 12 ];
p31 = matrix.rawData[ 13 ];
p32 = matrix.rawData[ 14 ];
```

```
for each point {x, y, z}:
{
    d = focalLength + p32 + x * p02 + y * p12 + z * p22;
    if (d < 0)
    {
        d = -d;
    }
    w = focalLength / d;
    xi = (int)( w * ( x * p00 + y * p10 + z * p20 ) + cx + p30 );
    if (xi >= screenWidth || xi < 0)
    {
        continue;
    }
    yi = (int)( w * ( x * p01 + y * p11 + z * p21 ) + cy + p31 );
    buffer_index = xi + (yi * screenWidth);
}
```
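For concreteness, here is the loop body as a self-contained Python sketch. The helper name `project_point` and the dict `p` are my own inventions for illustration; the sketch just mirrors the pseudocode line for line:

```python
def project_point(point, p, focal_length, cx, cy, screen_width):
    """Mirror of the pseudocode loop body (hypothetical helper).
    `p` is a dict of the coefficients p00..p32 pulled out of the matrix.
    Returns a buffer index, or None when the point is culled on x."""
    x, y, z = point
    # Depth term: camera-space z plus the focal length.
    d = focal_length + p["p32"] + x * p["p02"] + y * p["p12"] + z * p["p22"]
    if d < 0:
        d = -d                      # fold negative depths positive, as the original does
    w = focal_length / d            # perspective scale factor
    xi = int(w * (x * p["p00"] + y * p["p10"] + z * p["p20"]) + cx + p["p30"])
    if xi >= screen_width or xi < 0:
        return None                 # off the left/right edge of the screen
    yi = int(w * (x * p["p01"] + y * p["p11"] + z * p["p21"]) + cy + p["p31"])
    return xi + yi * screen_width   # index into the pixel buffer
```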

This algorithm works perfectly, and looks fantastic. The matrix work calls a library, of course. The library's help says that appending a transformation does exactly what it should: it multiplies by putting the new transformation on the left of the existing matrix. I can double-check that in the debugger if necessary.
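To be concrete about my mental model of "append" (a tiny pure-Python sketch, column-vector convention assumed; this is my reading of the docs, not the library's actual code), putting the new transform on the left means appended transforms are applied last:

```python
import math

def mat_mul(a, b):
    """4x4 matrix product (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, point):
    """Apply m to a point (column-vector convention, w = 1)."""
    v = [point[0], point[1], point[2], 1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_y(deg):
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def append(matrix, new):
    """Per the library docs: the NEW transform goes on the LEFT, so with
    p' = M p the appended transform is applied LAST (in world space)."""
    return mat_mul(new, matrix)

m = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
m = append(m, rotation_y(90))
m = append(m, translation(5, 0, 0))

# Rotation first, then translation: (1,0,0) -> (0,0,-1) -> (5,0,-1)
result = apply(m, (1.0, 0.0, 0.0))
```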

The problem comes about when I try adding in rotation around an arbitrary point. From what I understand, that should be relatively easy - simply change the matrix setup from the above to the following:

```
matrix.identity();
matrix.appendTranslation( -centerOfRotationX, -centerOfRotationY, -centerOfRotationZ );
matrix.appendRotation( pan, Y_AXIS );
matrix.appendRotation( tilt, X_AXIS );
matrix.appendTranslation( centerOfRotationX, centerOfRotationY, centerOfRotationZ );
matrix.appendTranslation( cameraX, cameraY, cameraZ );
```
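As a sanity check on the recipe itself (a quick pure-Python test with my own helper names, rotation about Y only), the translate-rotate-translate-back conjugation really does leave the pivot fixed in model space:

```python
import math

def rotate_y(p, degrees):
    """Rotate point p = (x, y, z) about the Y axis."""
    a = math.radians(degrees)
    x, y, z = p
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def rotate_about(p, pivot, degrees):
    """Translate pivot to origin, rotate, translate back (the textbook recipe)."""
    moved = tuple(a - b for a, b in zip(p, pivot))
    rotated = rotate_y(moved, degrees)
    return tuple(a + b for a, b in zip(rotated, pivot))

pivot = (3.0, -1.0, 7.0)
# The pivot must be a fixed point of the combined transform at every angle:
for angle in (0, 33, 90, 180):
    result = rotate_about(pivot, pivot, angle)
    assert all(abs(a - b) < 1e-9 for a, b in zip(result, pivot))
```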

However, this causes problems. As long as I leave the centerOfRotation at the origin, everything is good, as it had been. However, when I set any other center of rotation, my values are off. It seems to be rotating around some other position. The most obvious way that this problem presents itself is that I draw the center of rotation a different color from all the other particles, and as I pan or tilt the particles, the center of rotation moves on the screen. (How much it moves depends on how "zoomed in" to the center I am.) Obviously, the center of rotation should not ever move.

I've tried everything I can think of. I removed the camera translations and added them only into the xi, yi calculations (with an added scale to the matrix manipulations to account for zoom capabilities), and it seemed to be a little bit better, but the problem was still there. I’ve completely removed the camera translations, and the problem still presents itself.

The only thing that seems to help is adding a "magic" number to the focalLength. I set up my program so I could tweak the number and see the results, and I can get the movement of the center of rotation down to a few pixels in both the x and y directions, but I'm sure that's not right (and I haven't figured out where the "magic" number comes from, other than trial and error). If it helps, one data point: when I set the center of rotation to {0, -1, 0}, the only wobble is along the y-axis. When I set it to {-1, 0, 0} or {0, 0, -1}, there is wobble along both the x- and y-axes. The wobble looks like the projection of a circle, or occasionally of a figure 8. But maybe none of that matters - the "magic" number effectively alters the zoom, so maybe I'm just zooming out enough to mitigate the issue. (Probably not, though; if that were it, I could set the "magic" number to something huge and the problem would stop showing itself, and I haven't seen that.)

I've been working on this for days and making no progress. Can anyone give me any ideas?

Thanks!


I think the answer is in when you apply the rotation about a point. I guess what you're after is the rotation and then a translation to the point, so the code would be something like this -

```
matrix.identity();
matrix.appendRotation( pan, Y_AXIS );
matrix.appendRotation( tilt, X_AXIS );
matrix.appendTranslation( centerOfRotationX, centerOfRotationY, centerOfRotationZ );
matrix.appendTranslation( cameraX, cameraY, cameraZ );
```

The matrix sequence you show would rotate the particles around the point, in the sense of a planet orbiting around the sun. I guess you're trying to rotate them at that position. Though I'm not clear about the use of camera X, Y and Z in this context.

You've called rotation around the Y axis 'pan'. It's normally called 'yaw', as in yaw, pitch, and roll; pan is translation along the view plane. That confused me for a bit. I'm equally unsure why the code uses the screen width and ignores the screen height.

The code is a rendering engine for points. No lines, polygons, or anything else - just millions of points/particles. The source data comes from laser scanning, which can be used on large scales to bring distance measurements of entire buildings into a dataset, or on much smaller scales to accurately measure all the points comprising a household-size object. My use of the terms pan/tilt (or pan/sweep) comes from that industry, where for each point measurement you know the pan and tilt of the scanner and the distance it measures.

I'm sure it'll make life much easier if I switch the terms. From now on, I'll use "pan" from the user's perspective - translation of the object, and I'll use yaw and pitch for the rotation.

The points don't have a mesh behind them; instead, they are treated en masse as a single object. The intent is to make a 3D engine for these "point clouds" that has all the features of regular viewing engines. The user can place the camera inside the cloud, and see the points rotate around them, or the user can position the camera far enough out of the point cloud that they see the whole thing as a single object that they can rotate. All of that works fine, and I've added a lot of other features to what this engine can do.

The algorithm is optimized to run the points through most of the graphics pipeline - the rotations set the model's frame. The camera translations position the camera in relation to the model, giving (or emulating?) the ability to set the pan (as you're used to using the term) and the zoom level (through camera translation along the z-axis). You are correct that the focal length setting is used for a perspective projection. I don't know if I would use the word "shaping" the particles, though by using the perspective projection we are fitting the particles into a view frustum.

You're correct to question why I didn't show the use of screenHeight. Sorry, that came from overzealous use of my scalpel on the code I pasted up. The next few lines check the buffer_index and make sure the value is between 0 and the maximum buffer size (which is set based on screenHeight). Effectively, the screenWidth check and the buffer_index check combine to make sure we only display points visible within the view frustum. (And I have additional code to handle z-depth sorting.)

Without the translations of the centerOfRotation, it rotates around the origin. This is fine and works acceptably, but in a typical "point cloud" engine, you can select any single point and treat it as your center of rotation. Normally, rotation around an arbitrary point is easy: translate that point to the origin by subtracting it out, perform your rotation, then translate back to the center of rotation. It seems perfectly sensible in my head, but something is off: when I do this, the center of rotation itself rotates. My guess is that this code does too much at once - model-space transformations are pretty clear-cut, but the camera translation is mixed directly in with the perspective projection, and somewhere in there my center of rotation isn't where it should be. I also think the issue may be that I need to change the center of projection as well as the center of rotation, and that's what is causing my problems. I'm out of my league for making this code change the center of projection, though, at least for the moment.

Did that clarify things, or muddy the waters up further? I can give you more information as you need it.

I did try your suggestion of only doing the translation to the center of Rotation once, after the actual rotation, but the visual center of rotation swings much further than it had been.

I've been studying the algorithm and am confused; certain parts strike me as wrong, although you describe it as working correctly. I have some questions that might improve my understanding.

Is the camera position negative? Say you have a set of points (a scan of a room) that range from [0, 0, 0] to [100, 100, 100]. To place the camera in the center of it, are the camera coordinates [-50, -50, -50]?

Does the algorithm remove points which are behind the camera? I can't see it doing that anywhere.

The algorithm appears to treat the translation values as if they are not in the rotated space. Or to put it another way, the points rotate around their own center, you move the camera in and out, left or right, up or down, in world space. In which case something calculates the center of the points and translates them before rendering starts. Am I right?

Regarding removing points behind the camera, I had to look back at the original code. Again, I apologize for being overzealous with my scalpel. I was trying to post only the relevant code, and when I was cutting and pasting, this part seemed unnecessary:

```
d = focalLength + p32 + x * p02 + y * p12 + z * p22;
pz = d;
if (d < 0)
{
    d = -d;
}
if (minZ < pz)
{
```

instead of the assignment to d we had above. Now I see/remember why that was in there. By keeping the pz value and making sure it's above the minimum (which is set to 0), we make sure we don't render the part of the view frustum that's behind the camera; checking that the point stays within the frustum enforces the other constraints. The addition of the focalLength in calculating d and w means that the camera won't be at the center, though making sure d stays positive means we won't ever "be looking through the back of the camera." (Where I mentioned that we could position the user in the center of a large scan, it's one where the scanned distance is much larger than the focalLength, so the center of the building is accessible.)
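In other words (a Python sketch with a hypothetical helper name, mirroring the snippet), the test keeps the signed depth around for culling while the divisor is made positive:

```python
def depth_test(point, p02, p12, p22, p32, focal_length, min_z=0.0):
    """Mirror of the restored snippet (hypothetical helper): compute the
    depth term, keep its sign for the cull test, and return the (positive)
    divisor d if the point is drawable, else None."""
    x, y, z = point
    d = focal_length + p32 + x * p02 + y * p12 + z * p22
    pz = d                  # keep the signed depth for the behind-camera cull
    if d < 0:
        d = -d              # the divisor itself must be positive
    if min_z < pz:          # pz <= minZ means at/behind the camera plane
        return d
    return None
```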

Even when I take out the camera position code, though, I still see that the rotation point itself rotates. Somehow it gets messed up in the projection, but beyond that, I don't know where.

Getting to your final paragraph, yes, effectively the points do rotate around their own center, and we do move the camera in/out, left/right, and up/down in world space. And you're very much on the right track about the centering - to some extent. It's not part of our rendering process at all, but when we create the points file, we take the source data from the laser scan, find the center, and subtract the center from each point so that the whole thing is centered. Are you saying that when I set a different center of rotation I should re-find the center? I thought I took care of that by subtracting the centerOfRotation.

Thanks for all the time and brainwork you've put into this.

I'll try your idea first thing in the morning and let you know.

Thanks for your interest and help!

That was my initial approach toward the solution: calculate the center point after pan/tilt and then position the camera relative to it. I think the matrix operations I give above do the same thing; they put the center vector into the translation part of the matrix, pan it, tilt it, and then add the camera offset.

You've got me to the point where I can work with it now. The transforms I'm using are:

```
matrix.identity();
matrix.appendTranslation( -centerOfRotationX, -centerOfRotationY, -centerOfRotationZ );
matrix.appendRotation( pan, Y_AXIS );
matrix.appendRotation( tilt, X_AXIS );
matrix.appendTranslation( cameraX, cameraY, cameraZ ); // kinda, see below
```

I use negatives to move the center to the origin, and I don't translate it back - doing so was the major mistake I had made. One of my biggest issues is that I had been testing in my pseudo-orthographic mode, which doesn't work with an arbitrary center of rotation; that may have been masking the fact that perspective mode does work. (If I can't get pseudo-orthographic mode working on my own, I'll post it as a separate question - you've done more than enough on this one as it is. If I do post another question, I'll make sure to post a note here so you become aware of it.)

It turns out that the code I provided still doesn't work, much as I'd like it to, but I have alternative code that does. Through my conversations with you, I've realized that the camera movements aren't in world space - they're in the view volume, so cameraX and cameraY are in pixels rather than world units. One of the first things I did was remove the camera transform from the matrix and simply add the camera position when calculating xi and yi; I account for cameraZ using scaling. Using those techniques, and the transformations listed above, I get the center point to stay centered! (Actually, there's a tiny bit of jitter, but I can blame that on float-to-integer rounding - it never moves more than 1 pixel in any direction.)
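As a model of why this works (pure Python, column-vector convention assumed; a simplification of the real pipeline, with a hypothetical helper name): building the transform as rotate-after-translate-to-origin, without translating back, maps the pivot to the camera-space origin, so it projects to the screen center at every angle:

```python
import math

def project_pivot(center, pan_degrees, focal_length, cx, cy):
    """Model of the fixed pipeline: translate the pivot to the origin,
    rotate about Y, do NOT translate back, then run the perspective
    divide from the loop. Returns the pivot's screen position."""
    x, y, z = [c - c for c in center]   # pivot minus itself: always the origin
    a = math.radians(pan_degrees)
    xr = x * math.cos(a) + z * math.sin(a)   # rotation fixes the origin
    zr = -x * math.sin(a) + z * math.cos(a)
    d = focal_length + zr                    # divisor as in the pseudocode
    w = focal_length / d
    return (w * xr + cx, w * y + cy)

# The pivot stays glued to the screen center for every pan angle:
for pan in (0, 15, 47, 90, 180):
    xi, yi = project_pivot((12.0, -3.0, 40.0), pan, 500.0, 320.0, 240.0)
    assert abs(xi - 320.0) < 1e-9 and abs(yi - 240.0) < 1e-9
```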

So, I think I've got it working for now. Thanks for all of your work, I know it's a royal pain to deal with badly butchered pseudocode!

I thought the centerOfRotation part might have to be negated; somehow the negative camera vector made me unsure. I had realised the translation was in pixel values. It seemed like a reasonable compromise, given that the algorithm is fast; I thought of it as pixels on the projection plane.

I like the idea of rotating the model and moving the camera in a separate space. It simplifies the code in many ways. A little thing I noticed (may not be relevant now), this code -

```
d = focalLength + p32 + x * p02 + y * p12 + z * p22;
pz = d;
if (d < 0)
{
    d = -d;
}
if (minZ < pz)
{
```

Could be reduced to this -

```
d = p32 + x * p02 + y * p12 + z * p22;
if (minZ < d)
{
```

The pseudocode only uses p32 once, so you could add focalLength to it once, before you start the loop. Assuming minZ will always be positive, there's no need to handle d being negative; those points will never be drawn.
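Sketching that reduction in Python (hypothetical names), with focalLength folded into the constant term once, outside the per-point loop:

```python
def make_divisor(p02, p12, p22, p32, focal_length, min_z=0.0):
    """Return a per-point divisor function with focalLength folded into the
    constant term once. Assumes min_z >= 0, so any divisor that survives the
    cull is already positive and no abs() is needed."""
    p32_plus_focal = p32 + focal_length      # hoisted out of the loop

    def divisor(x, y, z):
        d = p32_plus_focal + x * p02 + y * p12 + z * p22
        return d if d > min_z else None      # cull instead of negating
    return divisor
```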

It occurred to me that this would be an easy system to split between multiple processors, if that's an option.


This assumes that 'centerOfRotation' is a vector in model space and that 'camera' is the vector from the central point to the viewpoint. The direction of 'camera' is important. If the center changes, you will need to add the difference between the old center and the new center to 'camera' so that the camera stays in the same place.

I'd go into the workings of the math, except I might still be wrong. I'd like to see if the above code works, and then talk over other ideas (if you're interested).
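The compensation described above, as a tiny Python sketch (hypothetical names): shift the camera by (old center - new center) whenever the pivot changes:

```python
def recenter(camera, old_center, new_center):
    """When the pivot changes, add the difference between the old and new
    centers to the camera vector so the viewpoint stays put in world space."""
    return tuple(c + (o - n) for c, o, n in zip(camera, old_center, new_center))
```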