PyFITS schema format preview


Erik Bray

Mar 20, 2014, 1:27:16 PM
to ssb
I've been working off and on on a pet project to develop a schema format for
FITS headers--something FITS has been sadly missing for its entire existence,
as far as I can tell.

This was motivated by more issues than I can count in which PyFITS itself
did not fully check conformance of some header keywords to the FITS Standard
or to any of the conventions it supports.

Even more importantly, this is motivated by many issues that have come up in
which users of FITS files have used PyFITS to update files while assuming that
it would ensure validity of headers it has no knowledge of. There are limits to
what can be done about that, but even some basic validation has potential for
enormous improvements.

Finally, having a schema format for FITS should, if it is adopted by enough
developers, reduce enormous quantities of repetitive code intended for checking
FITS headers. Instead this enables an up-front, reusable declaration of what
should and should not be in the headers for specific FITS products.

There are many other reasons I could give for why FITS needs a schema format.
Developing a schema for FITS that is *useful* is also a challenge, but I believe
this draft covers enough major use cases to make an impact. It's certainly
better than having nothing at all.

One quirk of this format, which is also one of its strengths, is that it is
*not* a "plain text" format. Rather, the schemas themselves are declared in
Python code. This provides enormous flexibility and simplicity to the format:
Rather than adding dozens (or even hundreds) of complex rules to the format, one
can write arbitrarily complex Python code to validate the value of a particular
keyword.
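
To give a rough sense of what "schemas as Python code" buys you, here is a
simplified, self-contained sketch (not the actual draft API--the `validate`
helper and rule names here are illustrative only):

```python
# Minimal sketch of schema rules as plain Python data plus callables.
# Not the real PyFITS schema API; purely illustrative.

def validate(header, rules):
    """Check a header dict against per-keyword rules; return error messages."""
    errors = []
    for keyword, rule in rules.items():
        if keyword not in header:
            if rule.get('mandatory'):
                errors.append('%s: missing mandatory keyword' % keyword)
            continue
        value = header[keyword]
        check = rule.get('value')
        if isinstance(check, type):
            # A bare type means "value must be an instance of this type"
            if not isinstance(value, check):
                errors.append('%s: expected %s' % (keyword, check.__name__))
        elif callable(check):
            # Arbitrarily complex rules are just Python callables
            if not check(value):
                errors.append('%s: invalid value %r' % (keyword, value))
    return errors

rules = {
    'SIMPLE': {'mandatory': True, 'value': lambda v: v is True},
    'BITPIX': {'mandatory': True,
               'value': lambda v: v in (8, 16, 32, 64, -32, -64)},
    'OBJECT': {'value': str},
}

print(validate({'SIMPLE': True, 'BITPIX': 12}, rules))
# ['BITPIX: invalid value 12']
```

The point is that the "rule language" is just Python itself, so no new syntax
has to be invented for the hard cases.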

Despite the schemas being written in a Pythonic format, their use is not limited
to within PyFITS or a pure Python pipeline. One could just as easily write a
Python script that checks a given file against a schema and produces a
report--the fact that it's written in Python is irrelevant to the user of such a
script.

But that's getting ahead--for now I would really appreciate any feedback on this
draft and thoughts on whether or not it seems useful. Some version of this will
go into PyFITS regardless because it is useful for PyFITS itself to check
conformance with the FITS Standard. But it can certainly be extended to work
with any FITS convention.

The draft documentation for the format is here:

http://embray.github.io/PyFITS/schema/users_guide/users_schema.html

There is also "API documentation" but it's not very useful yet. To see a full
example schema, take a look in the code. This schema defines the bare-minimum
keywords defined by the FITS Standard for either primary or extension HDUs:

https://github.com/embray/PyFITS/blob/standard-keywords/lib/pyfits/hdu/base.py#L71

You can try out the branch in which it is implemented by running:

$ git clone --branch standard-keywords https://github.com/embray/PyFITS.git

Finally I should again emphasize that this is very much a rough draft, and there
is hardly anything about its design that I'm particularly married to. So please
let me know what can be improved.

Thanks,

Erik

Eric Jeschke

Mar 20, 2014, 4:29:52 PM
to astro...@googlegroups.com, ssb, emb...@stsci.edu
Hi Erik,

I haven't yet looked in detail at your docs, but I just want to note that something like this would be a HUGE benefit to pyfits/astropy.io.fits users. At our observatory we have all manner of custom code for checking FITS headers, and being able to consolidate a lot of that with pyfits/astropy (which we are using to build the FITS files anyway) seems like a big win.

Thumbs up.

~Eric Jeschke
Subaru Telescope, NAOJ

Demitri Muna

Mar 20, 2014, 4:40:05 PM
to astro...@googlegroups.com
Hi Erik,

I really like the idea of a FITS validator, particularly for all of the various flavors of FITS. This would go a long way toward documenting these things in, as you point out, a machine-readable way.

Is your aim here to provide a means to validate FITS files for the community in a standardized way, or rather to validate FITS files only when using Python? I think there's an argument to be made that there is greater value in the former compared to the latter (though both are certainly valuable). What about programs that aren't written in Python? I think if you're going to put the time into this, it could be made into something more general that could be used much more widely.

Of course, schemes for validating data do already exist - JSON, XML, etc. I would strongly suggest considering implementing such a scheme using existing tools rather than inventing something new. This way there is a world of existing software that can be leveraged in any language. I know you wanted something "arbitrarily complex", but that would make it nearly impossible to implement outside of Python. I'd argue that if some validation is extremely complicated, it might be out of the scope of a validator and better as a custom tool.

As for the code itself, I'd prefer that the keywords not be top-level properties of the schema class, e.g.
>>> class ExampleSchema(Schema):
...     FOO = {'mandatory': True}
...     keywords = {
...         'DATE-OBS': {'value': str}
...     }
...

You highlight exactly the problem - the limitations of Python are adding complexity where there shouldn't be any. I'd expect *all* the keywords to be accessible through a "keyword" property - otherwise how would you know what they are? What value is there for them to be top-level properties? This is the same issue I ran into with the column fields in astropy.io.fits.Column.

Anyway, a few thoughts.

Cheers,
Demitri


_________________________________________
Demitri Muna

Department of Astronomy
The Ohio State University

http://scicoder.org/



Eric Jeschke

Mar 20, 2014, 4:45:55 PM
to astro...@googlegroups.com
There are already tools to validate in a more general environment, e.g. fitsverify.

By sticking to the Python "ecosystem", the tool will be much more useful and flexible for the main users, who are working in Python.

.02c,
~Eric

Perry Greenfield

Mar 20, 2014, 4:58:12 PM
to astro...@googlegroups.com
In an ideal world, yes, that would be nice. But the choice here is really more like a tool like this, or no tool, given the time and resources available. We actually have a fair amount of experience here with JSON schema and have been using it for the JWST pipelines. But it's not the easiest thing for people to use; it's my impression that this is far easier for a Python developer to use than JSON schema. As a tool for actual applications in Python, this stands a far greater chance of being useful than a more generic tool.

Perry
> --
> You received this message because you are subscribed to the Google Groups "astropy-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to astropy-dev...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Erik Bray

Mar 20, 2014, 5:25:25 PM
to astro...@googlegroups.com
Thanks for your comments Demitri--I have a few inline replies below:

On Thu, Mar 20, 2014 at 4:40 PM, Demitri Muna <demitr...@gmail.com> wrote:
> Hi Erik,
>
> I really like the idea of a FITS validator, particularly for all of the
> various flavors of FITS. This would go a long way to documenting these
> things in, as you point out, a machine readable way.
>
> Is your aim here to provide a means to validate FITS files for the community
> in a standardized way, or rather to validate FITS files only when using
> Python. I think there's an argument to be made that there is greater value
> in the former compared to the latter (though both are certainly valuable).
> What about programs that aren't written in Python? I think if you're going
> to put the time into this, it could made into something more general that
> could be used much more widely.

So this is a really good point. I made some mention of this in my
original post, but it deserves some expansion. My aim here is
somewhere in-between developing a "standard" and working with FITS in
Python.

This is just a guess, but I feel like perhaps one of the main reasons
there has *not* been a FITS schema-like format yet is that it's
difficult to come up with one that provides especially useful
validation. The temptation, which I admit is difficult to resist
(especially for standards bodies :), is to invent an entirely new
syntax that is independent of any specific programming environment or
language. But this runs one quickly into a problem: It becomes
necessary to define syntax in that new language for validating the
different types of rules one might need to define on the values of
different keywords. To take JSON Schema, for example (since you
mentioned it in the next paragraph), the following special keywords are
defined for checking the value of some scalar (be it a
string/integer/etc.):

minLength
maxLength
pattern
format
multipleOf
minimum
maximum
exclusiveMinimum
exclusiveMaximum
enum
minProperties
maxProperties
...and so on...

That list is not exhaustive, but it should give you an idea. And even
with all that language syntax it still only allows fairly trivial
checks--maybe good enough for most things, but good luck implementing
a fully correct validator for the FITS datetime format in JSON Schema
:) And that's still a simple example.
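
To make the datetime example concrete, here is a sketch of the kind of check
that is a few lines of Python but painful to express in JSON Schema (this
helper is illustrative, not part of PyFITS, and ignores legacy FITS date
formats like 'DD/MM/YY' for brevity):

```python
import re
from datetime import datetime

# Sketch: accept 'YYYY-MM-DD', optionally followed by 'Thh:mm:ss[.sss]',
# and actually verify the calendar date/time is real--something a regex
# or JSON Schema `pattern` alone cannot do.
_FITS_DATE = re.compile(
    r'^(\d{4})-(\d{2})-(\d{2})(?:T(\d{2}):(\d{2}):(\d{2})(\.\d+)?)?$')

def valid_fits_datetime(value):
    match = _FITS_DATE.match(value)
    if not match:
        return False
    try:
        # datetime() rejects impossible values like 2014-02-30 or 25:00:00
        datetime(*(int(g) for g in match.groups()[:6] if g is not None))
        return True
    except ValueError:
        return False

print(valid_fits_datetime('2014-03-20T13:27:16'))  # True
print(valid_fits_datetime('2014-02-30'))           # False: no Feb 30
```

A `pattern` keyword could match the shape of the string, but validating that
the date actually exists requires real code.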

The point is that in the process of defining new syntax to properly
validate something as complex as some FITS headers, the closer one
gets to designing something that meets all the use cases, the more
arbitrarily complex that syntax becomes--possibly to the point that
you're writing a Turing-complete language. It seemed easier to skip
that process and start straight away with an existing programming
language that allows arbitrarily complex rules. As such, the idea
behind this Pythonic schema format is simply to provide a structure
for organizing those rules for a set of keywords.

As for programs that aren't written in Python, it does not matter.
One can write a validator program that uses this schema format
internally, but is otherwise a black box to the user (unless the user
also needs to define a new schema). This would be true of *any*
format. A library is needed to check a file against the format, and
though that library may be available in many implementations it might
not be available for *all* languages. For use in a pipeline one would
just write a wrapper script and execute it--I envision the same being
done here. One could write a tool like fitsverify, but that accepts
and checks custom schemas (via a plugin interface that I have yet to
implement) in addition to simply checking against the FITS Standard.

All that said, by requiring Python to run the schema validator it does
limit this format's use, for example, in a web browser (on the client
side). If that's something we care about maybe JavaScript would be a
better language for this, though all the above points apply equally to
JavaScript.

> Of course, schemes for validating data do already exist - JSON, XML, etc. I
> would strongly suggest considering implementing such a scheme using existing
> tools rather than inventing something new. This way there is a world of
> software that exists that can be leveraged in any language. I know you
> wanted something "arbitrarily complex", but that would make it approaching
> impossible to implement outside of Python. I'd argue that if some validation
> is extremely complicated, it might be out of the scope of a validator and
> better as a custom tool.

I think my point above addresses the question about using an existing
format like JSON or XML. I actually did try coming up with a way to
convert a FITS file to JSON but it's not as obvious as one might
think, and you wind up having to define keyword-specific rules for how
to convert FITS to JSON. You actually need a FITS->JSON conversion
language. And then once it's in JSON, well, you're stuck with
whatever JSON Schema can do, which is actually quite good *if* it's
extended (which can't be done in a language-independent way) but is
not so great when starting from FITS. Same goes for XML--I believe
this was tried with FITSML but I don't think that project went very
far (though Brian Thomas would be able to say more about that than I
could).

Using a format like this even allows "custom tools" that implement
validation for more complex conventions to still organize those rules
in a "standard" way that will be compatible with other FITS validation
tools using the same format.

> As for the code itself, I'd prefer that the keywords not be top-level
> properties of the schema class, e.g.
>
>>>> class ExampleSchema(Schema):
> ...     FOO = {'mandatory': True}
> ...     keywords = {
> ...         'DATE-OBS': {'value': str}
> ...     }
> ...
>
>
> You highlight exactly the problem - the limitations of Python are adding
> complexity where there shouldn't be any. I'd expect *all* the keywords to be
> accessible through a "keyword" property - otherwise how would you know what
> they are? What value is there for them to be top-level properties? This is
> the same issue I ran into with the column fields in astropy.io.fits.Column.

This particular "syntax" of defining rules for keywords as class-level
attributes I did expect to be somewhat controversial. I personally
really *like* it because most FITS keywords will work fine for this,
and it cuts down on the amount of punctuation needed to define rules
for dozens of keywords. I find it easy on the eyes and quick to type
out myself.

Also, it says this in the docs, but all the keywords *are* accessible
through the Schema.keywords attribute. Even in the example above, if
one later enters:

>>> ExampleSchema.keywords

It will return:

{'FOO': {'mandatory': True}, 'DATE-OBS': {'value': str}}

In other words, the Schema.keywords dictionary is automatically filled
in when the schema is defined. The fact that it has to be used directly
for something like DATE-OBS (whose name is not a valid Python
identifier) is an unfortunate workaround. I would definitely be open
to ideas here.
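
For anyone curious how that auto-collection can work, here is a sketch using a
metaclass (the actual PyFITS implementation may differ in details; `SchemaMeta`
and the uppercase-attribute heuristic are illustrative):

```python
# Sketch: collect class-level keyword rules into `keywords` at
# class-definition time via a metaclass. Illustrative, not the real code.

class SchemaMeta(type):
    def __new__(mcls, name, bases, members):
        keywords = dict(members.get('keywords', {}))
        for attr, value in list(members.items()):
            # Treat uppercase, dict-valued class attributes as keyword rules
            if attr.isupper() and isinstance(value, dict):
                keywords[attr] = value
        members['keywords'] = keywords
        return super().__new__(mcls, name, bases, members)

class Schema(metaclass=SchemaMeta):
    pass

class ExampleSchema(Schema):
    FOO = {'mandatory': True}
    keywords = {
        'DATE-OBS': {'value': str},
    }

print(sorted(ExampleSchema.keywords))  # ['DATE-OBS', 'FOO']
```

Both the class-attribute rules and the explicit `keywords` entries end up in
the same dictionary, which is what makes the DATE-OBS workaround tolerable.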

Thanks,

Erik

Erik Bray

Mar 20, 2014, 5:35:21 PM
to astro...@googlegroups.com
On Thu, Mar 20, 2014 at 4:58 PM, Perry Greenfield <stsci...@gmail.com> wrote:
> In an ideal world, yes that would be nice. But the choice here is really more like a tool like this, or no tool given the time and resources available. We actually have a fair amount of experience here with JSON schema and have been using it for the JWST pipelines. But it's not the easiest thing for people to use; it's my impression that this is far easier for a Python developer to use than JSON schema. As a tool for actual applications in Python, this stands a far greater chance of being useful than a more generic tool.

As a followup, since Perry mentioned using JSON Schema for JWST: I
happen to actually really like JSON Schema, and don't regret our
decision to work with it (at least not yet). In particular the v4
draft of the standard is very powerful because it's designed around
extensibility and geared toward allowing custom meta-schemas that are
supersets of the base meta-schema. It also really helps to use it
with data that starts out in JSON-like data structures, as opposed to
the awkward custom data structure that is a FITS header.

I found that in order to do everything we want to do it becomes
necessary to define these sorts of custom meta-schemas. The problem
is that in order to check something against a new meta-schema one
usually has to write new code that knows how to process the new rules.
The jsonschema library for Python is very nicely designed for this
and made it very easy to implement new meta-schemas, I found. I
haven't tried the JavaScript library but I suspect the same is true
there. But you can see where I'm going with this--if one is going to
define their own specialized meta-schema they still have to write code
that processes it for every language one wants to support.

I say just write it in one language and wrap the thing in an
executable if it needs to be used in a general context :)

Erik Bray

Mar 20, 2014, 5:53:44 PM
to astro...@googlegroups.com
On Thu, Mar 20, 2014 at 5:25 PM, Erik Bray <erik....@gmail.com> wrote:
>> Is your aim here to provide a means to validate FITS files for the community
>> in a standardized way, or rather to validate FITS files only when using
>> Python. I think there's an argument to be made that there is greater value
>> in the former compared to the latter (though both are certainly valuable).
>> What about programs that aren't written in Python? I think if you're going
>> to put the time into this, it could made into something more general that
>> could be used much more widely.
>
> So this is a really good point. I made some mention of this in my
> original post, but it deserves some expansion. My aim here is
> somewhere in-between developing a "standard" and working with FITS in
> Python.

One more followup point on this, since I just thought of the words:

The goal here is not to provide a new specialized syntax for creating
general purpose schemas for FITS headers. Rather, it's to provide one
way of organizing the validation rules that are already being written
(in some cases over and over again by different Python tools) in a
standard, reusable, and interoperable way. This doesn't do a perfect
job of that yet, but I think it's a step in the right direction.

Demitri Muna

Mar 20, 2014, 8:16:32 PM
to astro...@googlegroups.com
Hi Erik,

On 20 Mar 2014, at 5:25 PM, Erik Bray <erik....@gmail.com> wrote:

> My aim here is somewhere in-between developing a "standard" and working with FITS in Python.

I say let's do both! I wasn't clear enough in my original email. I'm not suggesting using other schemes like JSON or XML to perform the actual validation - absolutely, make a Python tool because that's what people will use and that makes sense. What I'm saying is make validation description *independent* of Python. Any tool you write will read/parse this description using Python and can provide the same interface you're proposing. Don't have a validation rule that says "field x must be equal to the result of this Python function", because at that point the scheme is locked into Python specifically.

As an example, imagine a set of [XML, JSON, even custom text] files that describe the validation. You write a Python tool that reads them, parses them, and performs a validation on a file. Make it as easy to use for the user as possible. Then someone comes along, let's just randomly call him "Amit", and wants to create a validator in JavaScript. He takes those files, writes his own JS parser for those files, and then can perform the validation. YOU don't have to write any other tool but Python, but allow others to do the same. You can easily imagine if well written and adopted by the community, someone will write a C library to perform the same validation based on those files. And that is a Good Thing. But it can't happen if one of your rules is a Python function. A proper, well-written FITS validator has the potential to outlive Python.

Yes, I have my own motivation here - I'm writing a FITS browser in C (well, Objective-C). I want to be able to use this. We're not doing much with JavaScript, but that's going to change over the next several years. Allow other languages to use these files, and it will go a long way to becoming a standard.

> This is just a guess, but I feel like perhaps one of the main reasons
> there has *not* been a FITS schema-like format yet is that it's
> difficult to come up with one that provides especially useful
> validation.

Ha, you're far too kind to astronomers. I suspect it's because it occurred to less than ten astronomers, and then they got distracted by a deadline before finishing the thought. :)

> The temptation, which I admit is difficult to resist
> (especially for standards bodies :), is to invent an entirely new
> syntax that is independent of any specific programming environment or
> language.

Absolutely design the standard/syntax while developing it in Python. Just don't *depend* on Python. I'd even be happy to work on the C side of it (just don't attach it to a deadline).

Cheers,
Demitri

_________________________________________
Demitri Muna

Department of Astronomy

Michael Droettboom

Mar 20, 2014, 9:24:20 PM
to astropy-dev, ssb

Erik,

I think this is really cool. I think I share some of Demitri’s concerns about the Python-specificness of this, but I also agree with you that any language sufficiently useful would also approach Turing completeness, and therein lies madness. So, just in terms of getting something up and running without a gargantuan effort, I think this makes a lot of sense. I do notice from the docs, however, that most of the rules are ultimately expressed in simple lambda expressions that really just use the basic expression syntax. I wonder how far you could get building the description on something like a JSON tree supplemented with functions that only include the basic binary operations (i.e. not looping constructs or anything more sophisticated). If that, plus a few convenience functions like the WCSAxes trick gets you most of what you need, you’d have a pretty simple language-agnostic format on your hands. In any case, practicality may trump that at least in the short term.

To change the subject a bit — this directly relates to a couple of tools I’ve worked on that I’d love to punt off to a more robust and powerful tool such as this.

fitsblender is something that, given a set of “combining rules” and a set of N fits headers, knows how to combine them into a single header. (It’s used by astrodrizzle). The combining rules include things like “use the maximum of this keyword across all headers”, or “include this keyword multiple times, once from each header”. It seems that you could include this information about how keywords should be combined in this schema (at least as an extension to it). It should just be a matter of having another field on the keyword descriptor (for example 'combine': 'max'). Even if this functionality is a little obscure for including directly in PyFITS’ validator, there’s nothing to preclude doing that, right?
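
Mike's `'combine': 'max'` idea could be sketched like so (entirely
hypothetical: the `COMBINE_RULES` table and `blend_headers` function are not
part of fitsblender or the schema draft, just an illustration of hanging
combining rules off keyword descriptors):

```python
# Hypothetical sketch of fitsblender-style combining rules attached to
# schema keyword descriptors via a 'combine' field.

COMBINE_RULES = {
    'max': max,            # "use the maximum of this keyword across headers"
    'min': min,
    'first': lambda values: values[0],
}

def blend_headers(headers, schema_keywords):
    """Combine N header dicts into one, per keyword, per the schema."""
    combined = {}
    for keyword, rule in schema_keywords.items():
        values = [h[keyword] for h in headers if keyword in h]
        if values:
            rule_name = rule.get('combine', 'first')
            combined[keyword] = COMBINE_RULES[rule_name](values)
    return combined

schema = {
    'EXPTIME': {'combine': 'max'},   # take the max across all inputs
    'TELESCOP': {},                  # default: first occurrence wins
}
headers = [{'EXPTIME': 30.0, 'TELESCOP': 'HST'},
           {'EXPTIME': 45.0, 'TELESCOP': 'HST'}]

print(blend_headers(headers, schema))
# {'EXPTIME': 45.0, 'TELESCOP': 'HST'}
```

Since the rule is just one more field on the keyword descriptor, a validator
that doesn't understand `'combine'` can simply ignore it.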

fits_generator is a tool that, again, given a set of rules, knows how to transform from what is effectively one FITS schema to another. (In this case, we use it to transform JWST Level 1 products to Level 1b products etc.)

Anyway, as you say, there’s a number of tools out there really dancing around this problem without having a central place to hang solutions off of, so I’m really excited to see this.

I may have some more specific comments on documentation etc. — I assume you’d prefer I just file issues for those?

Mike





--
Michael Droettboom


Jonathan Eisenhamer

Mar 20, 2014, 10:11:38 PM
to astropy-dev, ssb
Though I have only had experience using them, and I don't know to what extent Python supports them, one may want to keep an eye on the possibility of developing a domain-specific language. In the Ruby world, the definition of DSLs, the implementation of parsers for them, and the generalized specification of those parsers for use by other languages have all made significant progress.

just throwing it out there,
jde

From: Michael Droettboom [mdb...@gmail.com]
Sent: Thursday, March 20, 2014 21:24
To: astropy-dev
Cc: ssb
Subject: Re: [astropy-dev] PyFITS schema format preview

Erik Bray

Mar 21, 2014, 11:41:46 AM
to astro...@googlegroups.com
On Thu, Mar 20, 2014 at 8:16 PM, Demitri Muna <demitr...@gmail.com> wrote:
> Hi Erik,
>
> On 20 Mar 2014, at 5:25 PM, Erik Bray <erik....@gmail.com> wrote:
>
>> My aim here is somewhere in-between developing a "standard" and working with FITS in Python.
>
> I say let's do both! I wasn't clear enough in my original email. I'm not suggesting using other schemes like JSON or XML to perform the actual validation - absolutely, make a Python tool because that's what people will use and that makes sense. What I'm saying is make validation description *independent* of Python. Any tool you write will read/parse this description using Python and can provide the same interface you're proposing. Don't have a validation rule that says "field x must be equal to the result of this Python function", because at that point the scheme is locked into Python specifically.

Yes, obviously. See my previous comment that "The temptation, which I
admit is difficult to resist (especially for standards bodies :), is
to invent an entirely new syntax that is independent of any specific
programming environment or language."

In principle you would be correct and I would also have similar
preferences. However, this still entirely ignores the point that
defining such a format immediately becomes an enormous constraint on
what you can actually do with it. The more power and flexibility it
gains the more one spends time and effort implementing what ends up
amounting to a new programming language. Believe me--I've put a lot
of thought into this. If someone wants to spend the effort developing
a "language independent" format (which, as I've stated, will really be
a new language unto itself) I won't discourage them. I think it could
still be better than tying it specifically to Python. But that's not
an effort I'm willing to spend my time on.

Also, I don't think I'm being too kind to astronomers. I'm sure there
are some out there who would scoff at the need for such a thing. But
I'm hardly the first one to think of this, not even in the context of
FITS. That's partly what XDF and FITSML were about. I suspect those
failed, in part, because they were constrained by what one could
describe in pure XML.


On Thu, Mar 20, 2014 at 9:24 PM, Michael Droettboom <mdb...@gmail.com> wrote:
> Erik,
>
> I think this is really cool. I think I share some of Demitri’s concerns
> about the Python-specificness of this, but I also agree with you that any
> language sufficiently useful would also approach Turing completeness, and
> therein lies madness. So, just in terms of getting something up and running
> without a gargantuan effort, I think this makes a lot of sense. I do notice
> from the docs, however, that most of the rules are ultimately expressed in
> simple lambda expressions that really just use the basic expression syntax.
> I wonder how far you could get building the description on something like a
> JSON tree supplemented with functions that only include the basic binary
> operations (i.e. not looping constructs or anything more sophisticated). If

I'm not sure that really gains much. The only schemas I've
implemented so far implement the basic FITS Standard (plus the
checksum convention) and for the *most* part that does, thankfully,
only involve a few simple expressions in the lambdas. However, if you
scroll through you'll note a few commented out rules that are marked
TODO. I will get to those, but they involve a bit more complexity,
such as parsing table formats. Or units (something that I think I'll
only bother with if/when this is merged into Astropy, although I might
just backport the FITS unit *parser*).

And the rules for the FITS Standard itself are still comparatively
simple. I've started on a prototype schema for FITS WCS, and the
kinds of rules one has to define grow considerably more complex,
requiring flow control constructs like branching and looping. It all
still fits nicely into the schema organization, but the value checkers
are non-trivial. One could argue, "Well, if this format is
FITS-specific anyways just make a few special rules for the FITS
Standard and FITS WCS and call it a day." But that would obviate the
need for a special schema format in the first place, and just becomes
a basic FITS checker. I envision this being useful for conventions
beyond just the FITS Standard--it should also be useful for
instrument-specific conventions, which can themselves be fairly
complex.

There are two points I could make that might point toward a compromise:

1. Most of the schema organization is *not* Python specific. Yes, I
use Python classes to represent schemas, but that doesn't have to be
the case. Most of the structure could fit just as easily in a
JavaScript object, for example. The inheritance rules are distinctly
Python-influenced, but there's no reason they couldn't be implemented
in another language. That just leaves the validation functions
themselves. Again, I believe there should be no *arbitrary*
restrictions on these that make certain things impossible. But that
doesn't mean there could be *no* restrictions whatsoever. Although I
don't see much value in devising some kind of expression microformat,
one could, say, limit the allowed Python language constructs to ones
that can be easily translated to other languages, such as to
JavaScript through the Pyjs translator
<http://pyjs.org/Translator.html>.

As a related alternative, the validation functions could be provided
as strings, but again rather than inventing a new language they might
as well use an existing language. I chose Python because PyFITS is
written in Python, but it could just as easily be JavaScript or
something else. As long as it's possible to fire up an interpreter
for that language to pass that string through it can be useful. This
calls to mind how MongoDB uses JavaScript as its query language.
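
A toy sketch of that strings-as-validators alternative (illustrative only; a
real tool would need a proper sandbox rather than bare `eval`, and the rule
layout here is invented for the example):

```python
# Sketch: the schema stores a language-tagged expression string, and a
# host tool for that language evaluates it. NOT safe for untrusted
# schemas--eval() is used here only to illustrate the idea.

rule = {'keyword': 'BITPIX',
        'value': ('python', 'value in (8, 16, 32, 64, -32, -64)')}

def check(header, rule):
    lang, expr = rule['value']
    assert lang == 'python'  # this particular host only speaks Python
    value = header.get(rule['keyword'])
    # Evaluate the expression with only `value` in scope
    return bool(eval(expr, {'__builtins__': {}}, {'value': value}))

print(check({'BITPIX': 16}, rule))   # True
print(check({'BITPIX': 12}, rule))   # False
```

A host in another language would either embed a Python (or JavaScript, etc.)
interpreter or reject rules tagged with languages it doesn't speak.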

2. On the Python end of things, one thing I still intend to do is
simplify how the validation functions or lambdas are written.
Currently every function has to accept that **ctx dict of arbitrary
keyword arguments, but there's a lot I can do with function signature
introspection to limit the number of arguments a function needs to
accept when executing the schema.
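
That introspection idea can be sketched like this (`run_validator` and the ctx
keys are illustrative, not the PyFITS API):

```python
import inspect

# Sketch: rather than forcing every validator to accept **ctx, inspect
# each function's parameter names and pass only the context values it
# actually asks for.

def run_validator(func, ctx):
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return func(**ctx)  # old style: function takes **ctx wholesale
    # New style: pass only the named arguments the function declares
    return func(**{name: ctx[name] for name in params})

ctx = {'value': 16, 'keyword': 'BITPIX', 'header': {'BITPIX': 16}}

# This validator only names `value`; no **ctx boilerplate needed:
print(run_validator(lambda value: value in (8, 16, 32, 64, -32, -64), ctx))
# True
```

Old-style `**ctx` validators keep working, so the simplification can be rolled
out without breaking existing schemas.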

> that, plus a few convenience functions like the WCSAxes trick gets you most
> of what you need, you’d have a pretty simple language-agnostic format on
> your hands. In any case, practicality may trump that at least in the short
> term.

Unfortunately "a few convenience functions" can still only get you so
far. It prevents implementation of validators for custom conventions,
of which there are countless in FITS. Part of my goal in all this is
to try to introduce some sanity to the universe of FITS formats.

> To change the subject a bit — this directly relates to a couple of tools
> I’ve worked on that I’d love to punt off to a more robust and powerful tool
> such as this.
<...snip...>

Regarding tools like fitsblender and fits_generator, I have definitely
been thinking along the lines of integration with these kinds of
tools. On the fitsblender end of things, PyFITS currently has very
loosey rules for combining FITS headers. Currently, when using an
existing header in a new HDU it does things like strip out all the
keywords particular to the FITS data format, such as the BITPIX and
NAXIS keywords. There's just a list of these (along with the
umpteenth block of code to loop over the NAXISn keywords). But now I
have a very explicit way of pointing to which keywords are part of
some "standard" and should be kept, and which are custom keywords, or
keywords specific to another convention. That still does not fully
answer how they should be combined. But I'm thinking along those
lines.

Likewise, on the fits_generator end of things, I am definitely
thinking about how this can be used to translate between different
FITS conventions, or even to/from FITS and a different data format
entirely (e.g. FINF). Not sure about that yet, but just implementing
a standard ruleset for FITS itself was only the first step.