
Microsoft HD


Jeff

Apr 14, 2008, 12:14:18 PM
Hi

Does anyone know what the underlying technology of the Microsoft HD Photo
format is?
So far the most I've been able to find out is that it's not based on
wavelets.

Also, does anyone have a view they'd like to share about how HD Photo
compares with JPEG2000 in terms of technology and likely future uptake?

Thanks
Jeff


Steve Eddins

Apr 14, 2008, 12:32:32 PM

These links might be of interest:

http://jeffmatherphotography.com/dispatches/2008/01/microsoft-hd-photo/

http://jeffmatherphotography.com/dispatches/2008/01/the-jpeg-family-circus/

They were written by a MathWorks software developer who works on image
and scientific data format support for MATLAB.

---
Steve Eddins
http://blogs.mathworks.com/steve/

Jeff

Apr 14, 2008, 2:22:30 PM

"Steve Eddins" <Steve....@mathworks.com> wrote in message
news:fu00v0$lgp$1...@fred.mathworks.com...


Many thanks Steve - very helpful.
Looks like HD uses a type of Principal Component Transform as the basis for
both lossy and lossless compression.

Jeff


Thomas Richter

Apr 15, 2008, 2:52:05 AM
Jeff wrote:

>
> Many thanks Steve - very helpful.
> Looks like HD uses a type of Principal Component Transform as the basis for
> both lossy and lossless compression.

No, not really. The transform is an overlapped 4x4 block transform that
is related to a traditional DCT scheme, or at least approximates it
closely. The encoding is a simple adaptive Huffman coder with a move-to-front
list defining the scanning order, and an inter-block prediction for the
DC and the lowest-frequency AC coefficients of the transform.
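
[For a picture of what a "DCT-approximating 4x4 integer block transform" looks
like in code, here is a minimal sketch. The matrix below is the H.264-style
core transform, used purely as an illustration; it is NOT the actual HD Photo
Photo Core Transform (whose lifting steps and overlap filter differ), and the
entropy-coding side (adaptive Huffman, move-to-front scan order) is not shown.]

import numpy as np

# Illustrative 4x4 DCT-approximating integer transform (H.264-style matrix).
# NOTE: not the HDPhoto / JPEG XR Photo Core Transform; it only shows what
# such a small integer block transform looks like.
T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def forward_4x4(block):
    # Separable 2-D transform: coefficients = T * block * T^T.
    return T @ block @ T.T

block = np.arange(16, dtype=np.int64).reshape(4, 4)  # toy 4x4 pixel block
coeffs = forward_4x4(block)
print(coeffs)   # coeffs[0, 0] is the DC term; the rest are the AC terms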

Some parts are really close to H.264 I-frame compression, e.g. the idea
of using a pyramidal transformation scheme and transforming the low-passes again
(here with the same transform, in H.264 with a simpler one).
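
[To illustrate the pyramidal idea in the simplest possible terms: collect a
low-pass value from every 4x4 block (here just the block mean, standing in for
the DC coefficient) and then treat the resulting smaller plane as an image to
be transformed again. This is only a toy sketch of the structure, not
HDPhoto's actual second-stage transform.]

import numpy as np

def block_lowpass(image, n=4):
    # Stand-in for the first-stage low-pass: the mean of each n x n block
    # (in HDPhoto the DC coefficient of the block transform plays this role).
    h, w = image.shape
    return image.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16)).astype(float)

stage1 = block_lowpass(image)     # 4x4 plane of block low-passes
stage2 = block_lowpass(stage1)    # the low-passes get processed again, pyramid-style
print(stage1.shape, stage2.shape)  # (4, 4) (1, 1)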

The good part is that lossy and lossless use the same transformation.
The bad part is that the quantizer is the same for all frequencies,
meaning there is no CSF (contrast sensitivity function) adaptation, and the
entropy coder back-end is not state of the art.
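
[What "the same quantizer for all frequencies" means can be shown in a few
lines: a single step size divides every coefficient, as opposed to a
frequency-weighted table in the spirit of classic JPEG's CSF-driven approach.
All numbers below are invented purely for illustration.]

import numpy as np

coeffs = np.array([[240, 16, -8,  4],
                   [ 12,  6, -3,  2],
                   [ -5,  3,  2,  1],
                   [  2, -1,  1,  0]])        # toy 4x4 transform coefficients

# Flat quantizer (one step size for every frequency, as described above):
flat = np.round(coeffs / 8).astype(int)

# Frequency-weighted quantizer (coarser steps for higher frequencies);
# the table is made up here, not taken from any real standard:
steps = np.array([[ 4,  6,  8, 12],
                  [ 6,  8, 12, 16],
                  [ 8, 12, 16, 24],
                  [12, 16, 24, 32]])
weighted = np.round(coeffs / steps).astype(int)

print(flat)
print(weighted)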

So long,
Thomas

Jeff

Apr 15, 2008, 6:24:40 AM

"Thomas Richter" <th...@math.tu-berlin.de> wrote in message
news:fu1jb4$uit$1...@infosun2.rus.uni-stuttgart.de...

Thanks a lot for that clarification.

The link provided by the previous respondent stated "JPEG-XR uses a
principal components transform (PCT)" - but it looks like he misinterpreted
the acronym PCT, which actually stands for Photo Core Transform.

I installed the Photoshop plugins for HD and JPEG2000. I don't see a
significant difference in terms of image quality vs file size, but I do see a
noticeable difference in the encode/decode (file save/open) times, with HD
being significantly faster than JPEG2000. I wonder if this difference is
inherent to the formats or comes down to a more efficient implementation?
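
[A small timing harness along these lines can help separate format from
implementation, at least for a given pair of codecs. The encode_hdphoto,
decode_hdphoto, encode_jp2 and decode_jp2 names are placeholders for whatever
codec bindings you have available, not real library calls.]

import time

def benchmark(label, encode, decode, pixels, repeats=5):
    # Average encode and decode wall-clock time over a few runs.
    enc = dec = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        data = encode(pixels)          # placeholder: raw pixels -> compressed bytes
        enc += time.perf_counter() - t0

        t0 = time.perf_counter()
        decode(data)                   # placeholder: compressed bytes -> raw pixels
        dec += time.perf_counter() - t0
    print(f"{label}: encode {enc / repeats:.3f}s, decode {dec / repeats:.3f}s")

# Usage (with your own codec wrappers):
# benchmark("HD Photo", encode_hdphoto, decode_hdphoto, pixels)
# benchmark("JPEG2000", encode_jp2,     decode_jp2,     pixels)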

Licensing issues with both formats seem to be unclear at the moment, with
various parties potentially claiming ownership of parts of JPEG2000 and
Microsoft still not formally placing the source code for their format under the
Open Specification Promise.

Thanks again
Jeff


Thomas Richter

Apr 15, 2008, 11:49:02 AM
Jeff wrote:

> The link provided by the previous respondent stated "JPEG-XR uses a
> principal components transform (PCT)" - but it looks like he misinterpreted
> the acronym PCT, which actually stands for Photo Core Transform.
>
> I installed the Photoshop plugins for HD and JPEG2000. I don't see a
> significant difference in terms of image quality vs file size, but I do see a
> noticeable difference in the encode/decode (file save/open) times, with HD
> being significantly faster than JPEG2000. I wonder if this difference is
> inherent to the formats or comes down to a more efficient implementation?

These are actually two questions. a) Image quality: that depends, for JPEG2000
at least, heavily on the codec you use. I'm not sure which one is in Photoshop;
probably Adobe implemented their own. Unfortunately, it is very easy to get
very bad results. /-: In measurements done at the JPEG committee, JPEG2000
outperformed HDPhoto in terms of quality; with the codecs we had initially,
HDPhoto was even worse than traditional JPEG. We have been able to improve
things considerably, but it's still not in the region of a good JPEG2000
encoder. The JPEG2000 code we used there was a pretty good one of a high
standard (the Pegasus code), and the HDPhoto code we used was the MS DPK code
and, for the visual measurements, another implementation with visual
improvements, also from Pegasus.

(Visual quality here: "Let people look at the pictures" following ITU guidelines.
We also measured with "objective metrics", giving approximately the same results)

b) Running time: this is less a question of the codec than of the inherent
complexity of the format, and your observation is correct and coincides with
that of the committee and myself. You *can* definitely make JPEG2000 a bit
faster (compared to the open source codecs even a lot faster; I'm talking about
factors of four or higher), but not as fast as HDPhoto.

> Licensing issues with both formats seem to be unclear at the moment, with
> various parties potentially claiming ownership of parts of JPEG2000 and
> Microsoft still not formally placing the source code for their format under the
> Open Specification Promise.

Actually, that's not correct. JPEG2000 is an ISO WG1 standard, and as such you
can get a royalty-free licence from all participating companies for the baseline
technology. Meaning, you have to pay zero for a licence. The same holds for HDPhoto
once standardization is complete. It's a golden rule of the JPEG to license the
baseline technology under RANDZ terms, i.e. non-discriminatory and royalty free.

You are probably confusing the licence on the technology with the licence of a software
implementation. The latter depends on the vendor, of course. For JPEG2000, there
are (pretty bad, though) open source implementations. For HDPhoto, there is the
Microsoft DPK code (also not exactly recommendable, but you asked for source, so there
is one you can get). There will be reference software for it, sponsored by MS, and it's
up to the committee to decide on its (software) licence terms. The decision hasn't been made yet.
The reference software looks to be in better shape, IMHO, but it's not yet released, as stated.

Thus, please don't mix these things up.

So long,
Thomas

Arash Partow

Apr 15, 2008, 5:01:13 PM
Hi Thomas,

>> (Visual quality here: "Let people look at the pictures" following ITU guidelines.
>> We also measured with "objective metrics", giving approximately the same results)

what were the objective metrics used? I can think of PSNR, any others?

Arash Partow
__________________________________________________
Be one who knows what they don't know,
Instead of being one who knows not what they don't know,
Thinking they know everything about all things.
http://www.partow.net

Thomas Richter

Apr 16, 2008, 3:06:47 AM
Arash Partow wrote:

> Hi Thomas,
>
>>> (Visual quality here: "Let people look at the pictures" following ITU guidelines.
>>> We also measured with "objective metrics", giving approximately the same results)
>
> what were the objective metrics used? I can think of PSNR, any others?

PSNR is a lousy metric for modelling or predicting subjective tests. We also
used VDP (Daly's visible differences predictor), multi-scale mean-SSIM and a
DCT-based metric (PSNR-HVS) with some modifications. Some details
differ, but the general trend of all the metrics matches that of the
subjective tests outlined above.
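
[For reference, PSNR itself is trivial to compute; a minimal sketch for 8-bit
images follows. The other metrics mentioned (VDP, MS-SSIM, PSNR-HVS) are
considerably more involved and are not sketched here.]

import numpy as np

def psnr(reference, distorted, peak=255.0):
    # Peak signal-to-noise ratio in dB between two same-sized 8-bit images.
    reference = reference.astype(np.float64)
    distorted = distorted.astype(np.float64)
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0.0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak * peak / mse)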

Greetings,
Thomas
