In MoveIt we generally think of the contents of a JointTrajectoryPoint
as corresponding roughly to a "robot state", which is always expressed
in terms of joint values.
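For example, a single waypoint for a 6-axis arm is just six joint
angles plus a timestamp (a minimal sketch; the angles and timing below
are placeholders):

    #include <ros/duration.h>
    #include <trajectory_msgs/JointTrajectoryPoint.h>

    trajectory_msgs::JointTrajectoryPoint point;
    // One angle per joint, in the same order as JointTrajectory::joint_names.
    point.positions = { 0.0, -0.5, 1.2, 0.0, 0.8, 0.0 };  // radians, placeholders
    point.time_from_start = ros::Duration(1.0);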
The "MultiDOFJointTrajectoryPoint" kind of tricks you because it looks
like what you want, but it is not. The purpose of that is for parts of
robots where the control happens directly in cartesian space. In the
past we've used that for the mobile base of the PR2, or I could imagine
it being useful for a quadcopter. It is rare for a robot to directly be
controlled in cartesian space at the lowest level, but sometimes the
software interface to them makes that easier. In any case, for a 6-axis
articulated arm like the Epson C3, we wouldn't typically use it.
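To make the contrast concrete, a MultiDOF point carries full 6-DOF
transforms rather than joint angles, e.g. one waypoint for a mobile
base (the numbers are placeholders):

    #include <geometry_msgs/Transform.h>
    #include <trajectory_msgs/MultiDOFJointTrajectoryPoint.h>

    trajectory_msgs::MultiDOFJointTrajectoryPoint mdof_point;
    geometry_msgs::Transform t;
    t.translation.x = 1.0;   // 1m forward, 0.5m left (placeholders)
    t.translation.y = 0.5;
    t.rotation.w = 1.0;      // identity rotation
    mdof_point.transforms.push_back(t);
    mdof_point.time_from_start = ros::Duration(2.0);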
So what we generally do is compute our desired poses in Cartesian
space, then use moveit::core::RobotState::setFromIK() to solve the
inverse kinematics and get a set of joint values. Then we can use path
planning to figure out how to get from the current state to that
destination state.
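Here is a minimal sketch of that IK step. The group name "manipulator"
and the target pose are assumptions (check your SRDF for the real
group name), and depending on your MoveIt version the pose type is
Eigen::Affine3d or Eigen::Isometry3d:

    #include <Eigen/Geometry>
    #include <moveit/robot_model_loader/robot_model_loader.h>
    #include <moveit/robot_state/robot_state.h>
    #include <ros/ros.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "ik_demo");  // RobotModelLoader reads the param server

      // Load the robot model from the robot_description parameter.
      robot_model_loader::RobotModelLoader loader("robot_description");
      moveit::core::RobotModelPtr model = loader.getModel();

      // "manipulator" is a guess; use the group name from your SRDF.
      const moveit::core::JointModelGroup* group =
          model->getJointModelGroup("manipulator");

      moveit::core::RobotState state(model);
      state.setToDefaultValues();

      // Desired end-effector pose in the model frame (placeholder numbers).
      Eigen::Affine3d target = Eigen::Translation3d(0.4, 0.0, 0.3) *
                               Eigen::Quaterniond(0.0, 1.0, 0.0, 0.0);

      if (state.setFromIK(group, target))  // returns false if no solution found
      {
        std::vector<double> joint_values;
        state.copyJointGroupPositions(group, joint_values);
        // joint_values holds one angle per joint; this state is your goal.
      }
      return 0;
    }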
Or if you want the end-effector to move in a straight line in
Cartesian space, use RobotState::computeCartesianPath(), which
essentially does a sequence of IK calls all along a line.
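A sketch of that Cartesian variant, continuing from the code above
("tool0" is a guessed end-effector link name, and the exact signature
varies a bit across MoveIt versions):

    // Interpolate a straight-line motion of the end-effector link.
    const moveit::core::LinkModel* tip = model->getLinkModel("tool0");

    Eigen::Affine3d line_target = Eigen::Translation3d(0.4, 0.2, 0.3) *
                                  Eigen::Quaterniond(0.0, 1.0, 0.0, 0.0);

    std::vector<moveit::core::RobotStatePtr> waypoints;
    // 0.01 = max Cartesian distance between IK samples (meters);
    // 0.0 disables the joint-space jump check.
    double fraction = state.computeCartesianPath(group, waypoints, tip,
                                                 line_target,
                                                 true,  // target is in the global frame
                                                 0.01, 0.0);
    // fraction reports how much of the requested line was achieved (0.0 to 1.0).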
In the end you'll get a sequence of RobotStates corresponding
(hopefully) to the motion you want.
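If you then need to hand that sequence to a controller as a
JointTrajectory, one way is to pack it into a RobotTrajectory (a
sketch; the fixed 0.1s spacing is a placeholder, normally you'd run
time parameterization instead):

    #include <moveit/robot_trajectory/robot_trajectory.h>
    #include <moveit_msgs/RobotTrajectory.h>

    robot_trajectory::RobotTrajectory rt(model, "manipulator");
    for (const moveit::core::RobotStatePtr& wp : waypoints)
      rt.addSuffixWayPoint(*wp, 0.1);  // 0.1s per step is a placeholder

    moveit_msgs::RobotTrajectory traj_msg;
    rt.getRobotTrajectoryMsg(traj_msg);
    // traj_msg.joint_trajectory.points are exactly the JointTrajectoryPoints
    // discussed at the top: joint values plus timestamps.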
In the reverse direction, if you are receiving joint values and you
want to know where the end-effector is in Cartesian space, you need to
put the joint values into a RobotState instance, then call
RobotState::update() and RobotState::getGlobalLinkTransform().
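A sketch of that forward-kinematics direction, again assuming the
names from the earlier snippets:

    // Forward kinematics: joint values in, end-effector pose out.
    std::vector<double> joints = { 0.0, -0.5, 1.2, 0.0, 0.8, 0.0 };  // placeholders
    state.setJointGroupPositions(group, joints);
    state.update();  // recompute the link transforms

    // "tool0" is a guessed link name; use your end-effector link from the URDF.
    const Eigen::Affine3d& ee_pose = state.getGlobalLinkTransform("tool0");
    // ee_pose.translation() / ee_pose.rotation() give the position and
    // orientation in the model's root frame.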
Hope this helps,
Dave