I have read through the source code for a number of the manipulator classes, including FlightManipulator, FirstPersonManipulator, StandardManipulator, and CameraManipulator.
I can't seem to track down the code that actually manipulates the camera. I see a bunch of instance variables being altered and GUIEventAdapters getting passed off to other methods, but I just can't seem to get down far enough in the code to see where the manipulation is occurring.
I think I noticed a frame() method in one of the lower-level classes - is this where all the data is used to compute the new camera position and orientation? If so, how does that frame() method know to use the velocity, acceleration, pitch, yaw, etc. that are built into subclasses like FlightManipulator?
Any clarification about how the camera manipulator class structure works would be appreciated.
Additionally, I am having difficulty understanding the use of quaternions in relation to the OSG coordinate system. If someone could point me towards some descriptive reference material on working in the OSG coordinate system, I would appreciate it.
Thanks.
Matt
If I'm not mistaken, you just need to define your own versions of the
pure virtuals in CameraManipulator, particularly getInverseMatrix(). And
remember, if you use a matrix created by Matrix::makeLookAt, that is
ALREADY inverted. :) (Cedric helped me with that...)
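
If it helps, here is a minimal sketch, assuming the post-rename
osgGA::CameraManipulator header (on 2.8.x the base class is
osgGA::MatrixManipulator); it stores the camera pose as a single matrix:

#include <osgGA/CameraManipulator>

class LookAtManipulator : public osgGA::CameraManipulator
{
public:
    LookAtManipulator(const osg::Vec3d& eye, const osg::Vec3d& center,
                      const osg::Vec3d& up)
    {
        // makeLookAt builds a *view* matrix, i.e. the already-inverted
        // camera matrix, so store its inverse as the camera pose.
        osg::Matrixd view;
        view.makeLookAt(eye, center, up);
        _matrix = osg::Matrixd::inverse(view);
    }

    virtual void setByMatrix(const osg::Matrixd& matrix) { _matrix = matrix; }
    virtual void setByInverseMatrix(const osg::Matrixd& matrix) { _matrix = osg::Matrixd::inverse(matrix); }
    virtual osg::Matrixd getMatrix() const { return _matrix; }

    // This is the one the view(er) actually calls each frame.
    virtual osg::Matrixd getInverseMatrix() const { return osg::Matrixd::inverse(_matrix); }

protected:
    osg::Matrixd _matrix;
};

Then viewer.setCameraManipulator(new LookAtManipulator(eye, center, up))
and the view will pull getInverseMatrix() from it every frame.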
What is being manipulated is the matrix that is returned from the manipulator. All of the parameters you've found are ultimately used in computing that matrix. Have a look at FirstPersonManipulator::getMatrix(), for example - matrices are made from the _trans and _rotate parameters and multiplied to get the overall matrix of the camera. Variables like velocity are used in conjunction with a time step to figure out what the new _trans should be, so that next time getMatrix() is called, the updated matrix reflects the changes that were made.
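
To make that concrete, here is an illustrative sketch of the pattern
(not the actual OSG source; the member names just follow the
description above):

#include <osg/Matrixd>
#include <osg/Quat>
#include <osg/Vec3d>

struct FlightState
{
    FlightState() : _velocity(0.0) {}

    osg::Vec3d _trans;     // camera position
    osg::Quat  _rotate;    // camera orientation
    double     _velocity;  // forward speed, altered by input events

    // The matrix handed back to the view is rebuilt from the parameters.
    osg::Matrixd getMatrix() const
    {
        return osg::Matrixd::rotate(_rotate) * osg::Matrixd::translate(_trans);
    }

    // Called with the frame's time step: integrate the velocity so the
    // next getMatrix()/getInverseMatrix() call reflects the new position.
    void advance(double dt)
    {
        _trans += _rotate * osg::Vec3d(0.0, 0.0, -_velocity * dt);
    }
};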
Cheers,
Tom
> [...] so that next time getMatrix() is called, the updated matrix reflects the changes that were made.
That's almost right - getInverseMatrix() is the method that's called by
the view(er) each frame. ;-)
J-S
> I think the function the original poster is looking for is, e.g.: bool
> DriveManipulator::calcMovement().
Well, that method and its equivalents in other manipulators do calculate
some parts of the final camera view matrix, but the final calculations
are done directly inside the get{Inverse}Matrix() methods. And I think
the OP was specifically asking how the manipulator affects the camera,
since no part of the camera manipulator code ever actually touches any
part of the view or camera...
So the answer is that the view calls getInverseMatrix() at each frame to
get an updated view matrix which it will then give to the camera.
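
In code terms, that per-frame handoff amounts to something like this
(a simplified sketch; the real logic lives inside the viewer's update
step):

#include <osg/Camera>
#include <osgGA/CameraManipulator>

// Simplified sketch of what the view effectively does once per frame:
// pull the fresh view matrix from the manipulator, give it to the camera.
void applyManipulator(osg::Camera* camera,
                      osgGA::CameraManipulator* manipulator)
{
    camera->setViewMatrix(manipulator->getInverseMatrix());
}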
That also has another implication. If you're making a camera manipulator
that needs to modify the projection matrix, then you need some other
mechanism in your own code to get the projection matrix that the
manipulator calculates over to the camera. The only thing that happens
automatically when using a camera manipulator is that getInverseMatrix()
is called to get an updated view matrix each frame; nothing is done
about the projection matrix at all. I've always found that weird... A
more general camera manipulator should have a mechanism to alter any
relevant property of the camera, not just the view matrix... It could
even have a pointer to the camera, which IMHO would have been a more
straightforward and clear design, at the cost of higher coupling.
One example of this would be implementing a camera manipulator for
ortho2D views which would implement zooming by altering the left, right,
top, bottom values of the frustum (or the same principle for perspective
views, zooming by altering the FOV). This alters the projection matrix,
and you'll have to get that projection matrix from your manipulator to
the camera each frame yourself.
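
For example, one way to do that per-frame transfer yourself is an update
callback on the camera. This is only a sketch - OrthoZoomManipulator and
getZoomedExtents() are hypothetical names:

#include <osg/Camera>
#include <osg/NodeCallback>
#include <osgGA/CameraManipulator>

// Hypothetical manipulator interface exposing the frustum it computes.
class OrthoZoomManipulator : public osgGA::CameraManipulator
{
public:
    virtual void getZoomedExtents(double& left, double& right,
                                  double& bottom, double& top) const = 0;
};

// Update callback installed on the camera: each frame, pull the
// projection the manipulator computed and push it onto the camera,
// since the viewer only transfers the view matrix automatically.
class ProjectionSync : public osg::NodeCallback
{
public:
    ProjectionSync(OrthoZoomManipulator* manip) : _manip(manip) {}

    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        osg::Camera* camera = static_cast<osg::Camera*>(node);
        double left, right, bottom, top;
        _manip->getZoomedExtents(left, right, bottom, top);
        camera->setProjectionMatrixAsOrtho2D(left, right, bottom, top);
        traverse(node, nv);
    }

private:
    osg::ref_ptr<OrthoZoomManipulator> _manip;
};

// Installed with: camera->setUpdateCallback(new ProjectionSync(manip));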
Hope this helps,
J-S
> [...] A more general camera manipulator should have a mechanism to alter
> any relevant property of the camera, not just the view matrix... It could
> even have a pointer to the camera [...]
The class that manipulates the view matrix is a MatrixManipulator... Maybe it would be an elegant solution to just use another MatrixManipulator to manipulate the projection matrix...?
> Oops, it's actually called CameraManipulator... but I've been thinking of
> it as a matrix manipulator. Anyway the suggestion is still valid.
It was called MatrixManipulator until recently. That may be why you were
thinking of that.
But the point is that the thing that makes it all work is that the
osgViewer::View calls the CameraManipulator's getInverseMatrix() once
per frame to get the new view matrix and gives that to the camera. In
order for what you suggest to work, the View would have to call another
manipulator's getMatrix() once per frame and give that to the camera as
the projection matrix. Why not just change CameraManipulator to have both
get{Inverse}ViewMatrix() and get{Inverse}ProjectionMatrix()? It seems to
me that you would want the same entity (the CameraManipulator) to be
calculating both matrices anyway.
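
In other words, something like this sketch of the proposed extension
(not existing OSG API):

#include <osgGA/CameraManipulator>

class DualMatrixManipulator : public osgGA::CameraManipulator
{
public:
    // Already part of the contract: the view pulls this each frame.
    virtual osg::Matrixd getInverseMatrix() const = 0;

    // Proposed addition: the view would pull this each frame too and
    // give it to the camera as its projection matrix.
    virtual osg::Matrixd getProjectionMatrix() const = 0;
};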
I tend to agree with you - it feels like a CameraManipulator should be able to change (manipulate!) the Camera in any way. Also, thanks for catching the getMatrix vs getInverseMatrix... you're definitely correct, although I'd hope that anyone writing a CameraManipulator would implement both appropriately, otherwise an interface method is messed up! :)
Cameras can have callbacks just like any other node (I'd imagine) - I use UpdateCallbacks rather than CameraManipulators in my project and they work just fine, and they give you direct access to the Camera node (well, through the public interface of Camera)... but they aren't treated as an event handler, which can be limiting. Am I correct in thinking you are advocating an approach that blends these two concepts?
Thank you!
Cheers,
Tom
> although I'd hope that anyone writing a CameraManipulator would implement both appropriately, otherwise an interface method is messed up! :)
Well yes, obviously. But generally getMatrix() can just be written as
return osg::Matrix::inverse(getInverseMatrix()); and you're done. Since
it's not called by "the system", it's not that important if it isn't
optimal...
And actually the base class CameraManipulator::getMatrix() could do this
by default, and then subclasses would only need to implement it if they
knew they had to do better. But we're talking details here.
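
That default would be a one-liner (sketch):

#include <osgGA/CameraManipulator>

class DefaultingManipulator : public osgGA::CameraManipulator
{
public:
    // The proposed base-class default: derive getMatrix() from
    // getInverseMatrix(); subclasses override it only if they can
    // compute the non-inverted matrix more cheaply.
    virtual osg::Matrixd getMatrix() const
    {
        return osg::Matrixd::inverse(getInverseMatrix());
    }
    // getInverseMatrix(), setByMatrix(), setByInverseMatrix() still pure.
};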
> Am I correct in thinking you are advocating an approach that blends these two concepts?
Yes, well, advocating might be a strong word. I'm saying that
conceptually, it would make sense that a CameraManipulator would be able
to manipulate a camera in all ways necessary. But I'm not saying I want
to do anything about it! :-)
In our software, we do something like what you describe, blending camera
manipulators and an update callback on the camera. We have a data class
that contains all the settings we would want to be able to change on a
camera. Our software can store multiple instances of this, representing
the different cameras we can switch to, but only one is active for a
given view at a time. Both the update callback and the manipulator have
a pointer to the active camera data object. They apply the settings in
this class whenever we switch cameras or change the data inside this
object at runtime, and they make sure the data inside the object stays
up to date when the user alters the view using the camera manipulator.
The whole thing involves more classes than I think should be necessary,
and it's a bit invasive, but it works well enough.
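
For what it's worth, the shape of it is roughly this (a sketch with
made-up names; the real data class carries many more settings):

#include <osg/Camera>
#include <osg/NodeCallback>
#include <osg/Referenced>
#include <osg/ref_ptr>

// The shared data object: everything we want to be able to set on a camera.
struct CameraData : public osg::Referenced
{
    osg::Matrixd viewMatrix;  // written back by the manipulator
    double fovy, aspect, zNear, zFar;
};

// The update callback applies the active data object to the camera each
// frame; the manipulator holds the same pointer and keeps viewMatrix
// up to date when the user moves the view.
class ApplyCameraData : public osg::NodeCallback
{
public:
    ApplyCameraData(CameraData* data) : _data(data) {}

    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        osg::Camera* camera = static_cast<osg::Camera*>(node);
        camera->setProjectionMatrixAsPerspective(
            _data->fovy, _data->aspect, _data->zNear, _data->zFar);
        traverse(node, nv);
    }

private:
    osg::ref_ptr<CameraData> _data;
};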
J-S
I'm currently working on a manipulator to mate up with a joystick I have - the first time around I'm just kind of hacking it together to find out how everything works, but I want to come back and really polish it up. So I have an academic question for all you very experienced OSG'ers:
Looking through the manipulators, it is brutally apparent that camera "navigation" is inextricably linked to specific input signals. Wouldn't it make infinitely more sense to define business-logic camera manipulators and then simply hook an IO device up to that business logic?
For example, the FlightManipulator does pretty much exactly what I'm looking for in business logic for my joystick, but FlightManipulator is filled with references to mouse interaction, even though a mouse really has nothing to do with the idea of pitching, rolling, throttling, etc.
So why are the manipulators currently set up the way they are? And would the community benefit from a new idiom and class structure in this particular area? If you guys agree with my point, I may try to develop such a structure.
Thanks.
Matt
> So why are the manipulators currently set up the way they are? And would the community benefit from a new idiom and class structure in this particular area? If you guys agree with my point, I may try to develop such a structure.
I totally agree with the principle. The reason is probably just the
classic "historical reasons" and "we've never needed that before" ones
that are part of open source, incremental, evolving development.
Since the camera manipulator API is already different between OSG 2.8.x
and SVN, now might be the right time to do what you suggest.
What would be the idea specifically? Would the camera manipulators
expose a series of input connections, say "pitch up/down", "throttle
up/down", and then have an InputDevice class that would expose outputs
that you could connect to the camera manipulator's inputs?
I would suggest that axes be mappable to either an analog axis (say
mouse up/down or joystick y axis) or a pair of buttons (one for
incrementing and one for decrementing the value).
J-S
Your view of how the system would work is probably very similar to what I had in mind. The idea is that camera manipulators really define business-logic behavior and should be used as such. For example, here are some possible API sketches (not exhaustive):
FlightManipulator:
// These do not set pitch or roll angles; they accept magnitudes in,
// let's say, [-1,1], and then the implementation decides how strongly
// that "pushes" the pitch angle or roll angle.
pitch(force)
roll(force)
throttle(force)
etc.
FirstPersonManipulator:
lookHor(speed)
lookVert(speed)
walk(speed)
etc...
ModelManipulator (my new name for a TrackballManipulator):
zoom(amount)
rotate(angle)
etc...
Then there would be separate GUIEventHandlers, and these handlers map their specific input to calls on the manipulator. Hence, when the joystick handler receives a joystick event, the handler figures out what was pressed or pushed, then requests a business-logic action from the FlightManipulator, or the FirstPersonManipulator, or the ModelManipulator... depending on which manipulator is currently in use. Does that sound good?
I think the fundamental issue is that someone decided to combine the concept of a GUI event handler and the concept of a camera manipulator. This is a problem because, as stated before, a camera manipulator is all business logic - it deals only with 3D concepts. The GUI event handler, on the other hand, sits in the architecture layer where it knows things about IO (and therefore hardware). Hence we've crossed abstraction layers, and now it's limiting extensions of the system.
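
To make the split concrete, here is a minimal sketch - every class and
method name below is made up for illustration:

#include <osg/Referenced>
#include <osg/ref_ptr>
#include <osgGA/GUIEventHandler>

// Pure business logic: knows about pitching and rolling, not about devices.
class FlightLogic : public osg::Referenced
{
public:
    FlightLogic() : _pitchForce(0.0), _rollForce(0.0) {}

    // Magnitudes in [-1,1]; the implementation decides how strongly they
    // "push" the corresponding angles when the matrix is recomputed.
    void pitch(double force) { _pitchForce = force; }
    void roll(double force)  { _rollForce  = force; }

private:
    double _pitchForce, _rollForce;
};

// The only piece that knows about the device: it translates raw events
// into business-logic calls on whichever logic object is active.
class JoystickHandler : public osgGA::GUIEventHandler
{
public:
    JoystickHandler(FlightLogic* logic) : _logic(logic) {}

    virtual bool handle(const osgGA::GUIEventAdapter& ea,
                        osgGA::GUIActionAdapter&)
    {
        // Hypothetical mapping: the normalized Y position drives pitch.
        if (ea.getEventType() == osgGA::GUIEventAdapter::MOVE)
        {
            _logic->pitch(ea.getYnormalized());
            return true;
        }
        return false;
    }

private:
    osg::ref_ptr<FlightLogic> _logic;
};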
What do you think?
Also, J-S, I didn't really understand the point you were trying to make in your final paragraph:
>
> I would suggest that axes be mappable to either an analog axis (say
> mouse up/down or joystick y axis) or a pair of buttons (one for
> incrementing and one for decrementing the value).
>
Thanks.
Matt
> Your view of how the system would work is probably very similar to what I had in mind.
Thanks for the explanation, and yes it's very similar to what I was
thinking too.
How would you map the input (received by the GUIEventHandler) to calls
to the manipulator's functions in a general way? You wouldn't want one
handler per manipulator (that would be almost the same as what we have
now, just more code and more spread out), or to have if (flightmanip)
flightmanip->pitch(val) else if (trackballmanip)
trackballmanip->rotateY(val) else ... in the event handler (even worse
than what we have now).
I was talking about connections before; that kind of thing is hard to
design so that it's minimally intrusive and as general as possible at
the same time... The interfaces I've seen that attempted to implement
connections in a very general way had extremely verbose APIs, and you
would go goggle-eyed looking at the setup code and might miss typos and
small bugs...
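
For the record, the kind of connection mechanism I mean could look
roughly like this (a sketch; names are made up, and it uses
std::function for brevity):

#include <functional>
#include <map>
#include <string>

// Manipulators register named inputs; event handlers drive them by name,
// so neither side needs to know the other's concrete type.
class InputConnections
{
public:
    void expose(const std::string& name, std::function<void(double)> input)
    {
        _inputs[name] = input;
    }

    void drive(const std::string& name, double value) const
    {
        auto it = _inputs.find(name);
        if (it != _inputs.end()) it->second(value);
    }

private:
    std::map<std::string, std::function<void(double)>> _inputs;
};

// Setup:   connections.expose("pitch", [manip](double v) { manip->pitch(v); });
// Handler: connections.drive("pitch", joystickYValue);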
I'm just asking out of curiosity... You could just code up your solution
and send it, and stop wasting time replying to my questions :-) . I have
no authority over what gets committed to OSG; I just wanted to know what
you have in mind.
> Also, J-S, I didn't really understand the point you were trying to make in your final paragraph:
>
>> I would suggest that axes be mappable to either an analog axis (say
>> mouse up/down or joystick y axis) or a pair of buttons (one for
>> incrementing and one for decrementing the value).
Well, sometimes you run out of axes but have buttons to spare, and you
could map two buttons to do (digitally) what an analog axis would
normally do. I know I could just subclass and do what I want, but I
think it would be a cool feature.
For example, you said pitch(force) would accept magnitudes [-1,1]. Then
you'd map an axis on an analog stick on your joystick to send, say,
pitch(X) where X would map [-1,1] to the [down,up] range. But say you
had two buttons and wanted to do the same thing, you could have button A
send pitch(0.5) and button B send pitch(-0.5) for example.
I'm just saying that if you can connect some value to an analog input
that takes a range, you should also be able to connect that value to two
digital inputs that go up and down as well. That's classic in games,
where say the left/right steering of a car can either map to a joystick
axis, or to the left/right arrow keys on your keyboard.
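
In code, the idea is just that one logical axis can be fed from either
kind of source (a sketch with made-up names):

// One logical axis, fed either by an analog device or by a button pair.
class LogicalAxis
{
public:
    LogicalAxis() : _value(0.0) {}

    // Analog path: the joystick/mouse writes the normalized value directly.
    void setAnalog(double v) { _value = clamp(v); }

    // Digital path: two buttons emulate the axis, e.g. +/-0.5 while held.
    void incrementHeld(bool down) { _value = down ?  0.5 : 0.0; }
    void decrementHeld(bool down) { _value = down ? -0.5 : 0.0; }

    double value() const { return _value; }  // always in [-1,1]

private:
    static double clamp(double v)
    {
        return v < -1.0 ? -1.0 : (v > 1.0 ? 1.0 : v);
    }

    double _value;
};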
Geez, a long explanation for what's basically a detail that could have
been added later... That's me. Sorry about that.
J-S
I just want to chime in here by saying that I have implemented something like this in some proprietary code I wrote. In my case, I am using Qt's signal/slot mechanism to perform the mapping between an input device (mouse, joystick[s], SpaceNavigator, etc.) and an effect (camera manipulation, enabling/disabling featureX, ...). I created what I call a slotManager, which is persistent and whose sole purpose is to direct input devices to some desired action. Since it uses Qt's signal/slot mechanism, it doesn't require knowledge of the implementation particulars; it just needs to be smart enough to fire off execution of the event. It works similarly to an ORB, where you can publish and subscribe to events of your choosing. I have a front-end GUI that allows device mapping and configuration of axes and buttons on a per-device basis.
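
A rough sketch of the shape of it (all names made up; in the real code
the slotManager sits in between, making connections like this from the
user's mapping configuration):

#include <QObject>

// The device side only emits signals; it knows nothing about cameras.
class JoystickDevice : public QObject
{
    Q_OBJECT
signals:
    void yAxisMoved(double value);  // normalized [-1,1]
};

// The effect side only has slots; it knows nothing about devices.
class ManipulatorAdapter : public QObject
{
    Q_OBJECT
public slots:
    void pitch(double force) { /* forward to the camera manipulator */ }
};

// All the manager really does is make connections such as:
void wire(JoystickDevice* device, ManipulatorAdapter* adapter)
{
    QObject::connect(device, SIGNAL(yAxisMoved(double)),
                     adapter, SLOT(pitch(double)));
}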
It works for me, but it can be seen as overkill for the majority of users. I suspect for the general user, the default classes would be close enough that only minor tweaking would be necessary to achieve the desired result.
Coming up with something that does the same thing, but doesn't require Qt is also a good amount of work to make general purpose. I am just about to look into the latest (>2.9.7) changes to camera manipulators to see if a thin Qt layer on top of the new classes would still satisfy my purposes. If so, then I might push some code out to the OSG community to utilize.
So no direct help for you right away, but maybe something in the not-so-distant future.
Regards,
Chuck Seberino