One of the users of Observation Manager, Igor Dulevich, pointed me to
some inaccuracies in our current <OAL> model. (Btw: Igor joined the
OAL group recently... welcome, Igor!)
Zoom eyepieces:
Besides the ability to change the focal length, zoom eyepieces may
also change the AFOV (e.g. the Synta SkyWatcher Zoom 8-24 with an
AFOV of 40°-60°).
This is currently not reflected in our schema.
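To illustrate what such a change might look like (a purely hypothetical sketch, not a schema proposal -- the maxApparentFOV element does not exist in OAL today, and the element names are written from memory):

```xml
<!-- Hypothetical: a zoom eyepiece with an AFOV range. "maxApparentFOV" -->
<!-- is an invented element, mirroring how maxFocalLength already       -->
<!-- complements focalLength for zoom eyepieces.                        -->
<eyepiece id="EP_zoom1">
  <model>Synta SkyWatcher Zoom 8-24</model>
  <focalLength>8</focalLength>
  <maxFocalLength>24</maxFocalLength>
  <apparentFOV unit="deg">40</apparentFOV>       <!-- narrow end of the range -->
  <maxApparentFOV unit="deg">60</maxApparentFOV> <!-- invented element -->
</eyepiece>
```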
Multiple filter usage:
Our observation element is currently limited to the usage of one
filter. For lunar observations one might use two polarizing filters
in combination.
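For illustration, a relaxed schema might simply let the existing filter reference repeat (a sketch only -- this is not valid against today's oal.xsd, and the id values are invented):

```xml
<!-- Hypothetical: two filter references in one observation,  -->
<!-- e.g. two stacked polarizers for a lunar observation.     -->
<observation>
  ...
  <filter>FLT_pol1</filter>
  <filter>FLT_pol2</filter>  <!-- second reference: not allowed today -->
  ...
</observation>
```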
Multiple lens usage:
Our observation element is currently limited to the usage of one lens
element. Some observers stack Barlow lenses, e.g. for webcam
photography.
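The same pattern would apply to lenses (again only a hypothetical sketch with invented ids, not valid against the current schema):

```xml
<!-- Hypothetical: two stacked Barlow lenses referenced from one observation. -->
<observation>
  ...
  <lens>LNS_barlow2x</lens>
  <lens>LNS_barlow3x</lens>  <!-- second reference: not allowed today -->
  ...
</observation>
```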
I think those inaccuracies are not fatal, but they should be
considered in a next version/patch or whatever might come next.
I also think that we should wait a little to collect more feedback on
<OAL> 2.0. The more application developers adopt the <OAL> format,
the more feedback we'll receive. Once we have collected enough
feedback, we should start thinking about how/when we release an
update/patch/version of <OAL>.
Best regards + clear skies (it's been cloudy here for weeks! *sigh*)
Dirk
I guess some changes were inevitable (big smile), and welcome to OAL,
Igor.
Regarding zoom eyepieces - I haven't used these myself. Is the focal
length (and AFOV) a distinct setting that the user can discern?
Regarding stacked filters - imagers also do this. I have a user who
created one equipment item to represent the combination of filters.
Obviously we have a workaround, but supporting multiple filters is
preferable.
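That workaround can be expressed within the current schema roughly like this (names invented for illustration; a single filter entry stands in for the stacked pair):

```xml
<!-- Workaround within today's schema: model the stacked pair as one  -->
<!-- filter entry and reference only that entry from observations.    -->
<filter id="FLT_polstack">
  <model>2x polarizing filters, stacked (variable density)</model>
  <type>other</type>  <!-- or whichever enumeration value fits best -->
</filter>
```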
Stacked lenses - I've done this myself in truly steady skies. The
same workaround described above applies, but supporting multiple
lenses would be better.
I heartily agree that more time should pass before attempting an
update.
Poor weather here too,
- Phyllis
I understand that our current schema does not allow us to express the
situations described above. As already explained, there are
workarounds. But before we seriously enter a discussion on possible
improvements of the schema, I'd like to share a couple of thoughts
with you.
There are two different usage scenarios for OAL:
1) Accurate documentation of an observation, trying to be as
realistic as possible. The information collected is read, understood
and used by humans. There is no (or no particularly sophisticated)
processing that yields added value.
2) Accurate documentation of observations, allowing for machine-based
simulations, calculations or some basic sort of research. This
requires that all information collected can be "used" by program
code.
With Eye&Telescope, I try to address both scenarios. You can not only
collect information on observations but also have the program
reconstruct the circumstances of an observation: alt/az coordinates
and air mass, the effect of extinction, contrast above threshold and
a simulated eyepiece view. With our fstOffset we are even able to
take into account individual observers' differences in perceptibility
performance.
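For readers who wonder what such a reconstruction involves, the basic relations are textbook material (a sketch only; E&T's actual formulas may well be more refined):

```latex
% Plane-parallel air mass for zenith distance z (valid away from the horizon):
X \approx \sec z
% Atmospheric extinction: an object dims by k magnitudes per unit air mass,
% where k is the site's extinction coefficient (roughly 0.2-0.3 mag/airmass
% at a typical visual observing site):
\Delta m = k \cdot X
```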
Let me just hook into the fstOffset topic: this group's archive
documents that it was not too easy to find an acceptable solution.
The reason (at least in my opinion) is that there is no clear,
natural, proven way to model this. A model for individual differences
in human vision can be more or less accurate, meaning more or less
effort to implement. No model you can imagine is 100% accurate; there
are always assumptions and simplifications. And the "better" a model
is made, the more parameters it has and the harder it becomes to
understand which input deltas drive the output in what manner.
Complicated models with a lot of parameters tend to become hard to
understand. What's even worse: they tend to become very hard to
verify, because the nonlinearities and interdependencies of state
variables "drive their own dynamics".
This statement can be found in good textbooks on simulation
modelling, but it also comes from my 7 years of experience as a
simulation engineer, dealing with continuous models (= differential
equations) as well as discrete models (= time steps from one state to
the next), covering applications in various industries. As a former
simulation expert, please believe me that good models are not
necessarily the most complicated ones. If a model becomes so complex
that it cannot be understood in its full range of dynamics, you can
never be sure whether a pattern in the output is a reliable
prediction of true system dynamics, or just an artifact of the model,
or simply a bug in implementing the equations or state machines used.
So I focus on covering the most important and commonly understood
effects instead of making the model too complex to be handy.
Think of a high-end (top notch) audio system: most such devices do
not have too many controls. It should sound excellent without a lot
of fiddling and playing with buttons. Got my idea?
After this digression, back to the suggestions:
* A zoom FOV model requires the FOV = f(focal length) correlation. I
would not assume a linear function, and of course it is different for
every vendor or model of zoom eyepiece. How do we solve this in a
clear, simple way? And what for? Is it *really* so important to have
an accurate representation of the eyepiece view when we are not
observing the margin of the FOV but the object centered? If needed,
you can always document the true circumstances with a set of fake
eyepieces representing the various actual focal lengths.
Can you imagine a user prompted to determine and enter the FOV =
f(focal length) function in an OAL 2.x application? I cannot. Quite a
lot of E&T users wonder what "light grasp" means. Perhaps even this
simple concept is beyond what most people like to think about.
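If one did want to capture FOV = f(focal length), the least-bad option might be a table of sample points with interpolation in between -- purely hypothetical, and exactly the kind of extra machinery I'd rather avoid:

```xml
<!-- Hypothetical: sampled AFOV values for a zoom eyepiece; an app      -->
<!-- would interpolate between the points. Not part of OAL, and the     -->
<!-- intermediate value is made up.                                     -->
<afovTable>
  <point focalLength="8"  afov="60"/>
  <point focalLength="16" afov="48"/>
  <point focalLength="24" afov="40"/>
</afovTable>
```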
* Stacked filters / two lenses: simulate or document? Even if you
restrict yourself to "document" only, multiple filters or lenses
immediately raise the demands on searching/querying/filtering of
observations: today we can select the ONE filter that was used in the
observations to be found. With more than one element, the filtering
criteria would have to be extended. This *bloats the user interface*.
Please think of the restrictions on screen real estate imposed by
netbooks or smartphones (I dream of an iPhone / Win Mobile logging
app compatible with OAL); that's definitely undesirable.
The proposed "improvements" would mean a lot of work for app
developers (has anyone thought of www.deepskylog.org, too?), would
not provide true added value (because there are practical
workarounds), and the implementation would bloat the interfaces and
make apps harder to understand for users.
Last point: the suggested improvements mean bigger effort and
complexity for all new developers who think about supporting and
using our standard. If a standard is perceived as clumsy, exaggerated
or overly complex, chances are good that it will simply not be used.
Please think of this! For my own part, I'm not willing to extend E&T
for the proposed features. I hope it's clear why!
No possible (or desirable) improvements, though?
I'm thinking of a different animal from time to time: an OAL-derived
XML schema for an observing plan. I want E&T to produce user-defined
(= via an XSLT) observing plan exports, tailored for an iPhone or for
use on a netbook/notebook for visually impaired people who need
bigger screen fonts. In a few years I will probably have to use
reading glasses to look at a screen in night-vision mode, and will
benefit from this approach myself. If you'd like to share ideas on
OAL-based observing plans, we should discuss them in a separate
thread. Right now, I'm not ready to tackle this topic; priorities are
different.
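To make the XSLT idea concrete (a minimal skeleton with invented element names -- plan and target are not OAL elements -- assuming an export tailored for night vision and big fonts):

```xml
<!-- Hypothetical user-supplied stylesheet turning an OAL-derived      -->
<!-- observing plan into a large-font, red-on-black HTML page.         -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/plan">
    <html>
      <body style="font-size:200%; background:black; color:#b00000">
        <xsl:for-each select="target">
          <p><xsl:value-of select="@name"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```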
Tom