
1080i & 720p HDTV Resolution


DAB sounds worse than FM

Feb 8, 2005, 8:59:31 AM
All quotes taken from the chapter titled "Resolution" in "Digital Video
and HDTV: Algorithms and Interfaces", by Charles Poynton.

Peter N. Glaskowsky, Editor in Chief, Microprocessor Report in the first
review of this book on here: http://tinyurl.com/3p52a, says that
"Charles Poynton is one of the world's leading experts on TV and video
technology, and he's a great writer too."

Quotes from Resolution chapter (apologies for any blatant typos, I'm
typing while reading the book and not looking at the screen):

"In this chapter, I will explain resolution, which is closely related to
sharpness."

From sub-section on Kell effect:

"Television systems in teh 1930s failed to deliver the maximum
resolution that was to be expected from Nyquist's work. In 1934, Kell
published a paper quantifying the fraction of the maximum theoretical
resolution achieved by RCA's experimental television system. He called
this fraction k; later, it became known as the Kell factor."

"Kell's 1934 paper concerned only progressive scanning. With the
emergence of interlaced systems, it becamse clear that twitter resulted
from excessive vertical detail. To reduce twitter to tolerable levels,
it was necessary to reduce vertical resolution to substantially below
that of a well-designed progressive system have the same spot size --
for a progressive system with a given k, and interlaced system having
the same spot size had to have lower k. Many people lumped this
consideration into "Kell factor," but researchers such as Mitsuhashi
indentify this reduction separately as an interlace factor or interlace
coefficient."

From sub-section titled "Resolution":

"In computing, unfortunately, the term resolution has come to refer
simply to the count of vertical and horizontal pixels in the pixel
array, without regard for any overlap at capture, or overlap at display,
that mya have reduce the amount of detail in the image. A system may be
described as having "resolution" of 1152x864 -- this system ahs a totl
of about 1 million pixels. Interpreted this way, "resolution" doesn't
depend upon whether individual pixels can be discerned ("resolved") on
the face of the display.

Resolution in a digital image system is bounded by the count of pixels
across teh image width and height. However, as picture detail increases
in frequency, electronic and optical effects cause response to diminish
even with the bounds imposed by sampling. In video, we are concerned
with resolution that is delivered to the viewer; we are also interested
in limitations of bandwidth in capture, recording, processing, and
display. In video, resolution concerns the maximum number of line pairs
(or cycles) that can be resolved on the display screen. This is a
subjective criterion! Resolution is related to perceived sharpness."

From "Interlace Revisited" sub-section:

"As I mentioned on page 56, at practical vertical scan rates, it is
possible to flash alternate image rows in alternate vertical scans
without causing flicker. This is interlace. The scheme is possible owing
to the fact that temporal sensitivity of the visual system decreases at
high spatial frequencies.

Twitter is introduced, however, by vertical detail whose scale
approaches the scan-line pitch. Twitter can be reduced to tolerable
levels by reducing the vertical detail somewhat, to perhaps 0.7 times. On
its own, this reduction in vertical detail would push the viewing
distance back to 1.4 times that of progressive scanning.

However, to maintain the same sharpness as a progressive system at a
given data capacity, all else being equal, in interlaced scanning only
half the picture data needs to be transmitted in each vertical scan
period (field). For a given frame rate, this reduction in data per
scan enables pixel count per frame to be doubled."

"Interlaced scannign was chosen over progressive in the early days of
television, half a century ago. All other things being equal -- such as
data rate, frame rate, spot size, and viewing distance -- various
advantages have been claimed for interlace scanning.

* If you neglect the introduction of twitter, and consider just the
static pixel array, interlace offers twice the static resolution for a
given bandwidth and fram rate.

* If you cnosider an interlaced image of the same size as a progressive
image and viewed at the same distance -- that is preserving the picture
angle -- then there is a decrease in scan-line visibility."

From the page margin:

"Twitter and scan-line visibility are inversely proportional to the
count of image rows, a one-dimensional quantity. However, sharpness is
proportional to pixel count, a two-dimensional (areal) quantity. to
overcome twitter aat the same picture angle, 1.4 times as many image
rows are required; however, 1.2 times as many rows and 1.2 times as many
columns are still available to improve picture angle."


I've copied all that into a new thread so that we can resolve (no pun
intended) the issue of resolution for the different formats, and the
above quotes can be used so that we're all singing from the same
hymn-sheet.

I accept that authors have their own agendas on controversial subjects,
but I don't think there's anything debatable in the above quotes.

Here's my understanding and conclusions:

Resolution
-----------

* the Kell factor is not the correct term for the sub-optimal vertical
resolution of interlaced scanning, because the Kell factor applies to
both interlaced and progressive. However, an interlace coefficient is an
appropriate parameter to include in resolution calculations to take
twitter mitigation into consideration.

* With regards to twitter he says: "Twitter can be reduced to tolerable
levels by reducing the vertical detail somewhat, to perhaps 0.7 times."
But he also says: "Twitter and scan-line visibility are inversely
proportional to the count of image rows." Taken together, they imply
that HDTV with 1080 lines will inherently suffer less from twitter than
SDTV, so the 0.7 factor can be increased, thus making the subsequent
calculations even more in favour of 1080i.

* Overall, the horizontal resolution of 1080i is 50% higher than that of
720p (1920/1280 = 1.5), and the vertical resolution of 1080i is 5%
higher than that of 720p ((1080 x 0.7) / 720 = 1.05). But because
twitter is inherently reduced -- being inversely proportional to the
number of lines -- 1080i is likely to have a significantly higher
vertical resolution than 720p.
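
To make that arithmetic explicit, here is a small Python sketch of the
two ratios (the 0.7 interlace coefficient is the book's illustrative
figure, not a measured value):

    # Resolution ratios of 1080i vs 720p using a 0.7 interlace coefficient.
    interlace_coeff = 0.7                     # twitter-mitigation factor
    h_ratio = 1920 / 1280                     # 1.5  -> 50% more horizontal
    v_ratio = (1080 * interlace_coeff) / 720  # 1.05 -> 5% more vertical
    print(f"horizontal: {h_ratio:.2f}x, vertical: {v_ratio:.2f}x")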

Sharpness
-----------

* he says "sharpness is proportional to pixel count, a two-dimensional
(areal) quantity", and I would expect that picture quality sharpness on
video is analogical to picture quality sharpness of images taken by
digital cameras, where the number of Mpixels is the standard
differentiator.


So, there may be arguments about the quality of i/p conversion for
1080i, but as far as resolution and picture sharpness are concerned then
1080i is the clear winner, and by some margin.


--
Steve - www.digitalradiotech.co.uk - Digital Radio News & Info

Find the cheapest Freeview, DAB & MP3 Player Prices:
http://www.digitalradiotech.co.uk/freeview_receivers.htm
http://www.digitalradiotech.co.uk/dab_digital_radios.htm
http://www.digitalradiotech.co.uk/mp3_players_1GB-5GB.htm
http://www.digitalradiotech.co.uk/mp3_players_large_capacity.htm


Dave Farrance

Feb 8, 2005, 9:27:02 AM
"DAB sounds worse than FM" <dab...@low.quality> wrote:
>So, there may be arguments about the quality of i/p conversion for
>1080i, but as far as resolution and picture sharpness are concerned then
>1080i is the clear winner, and by some margin.

Except that all the stuff you quoted predated advanced codecs that can
compress as well as interlacing, only with better quality. Advanced
codecs can't handle pre-interlaced video very efficiently. A better
comparison would be between 1080i and 1080p, when compressed into the
same bandwidth with an advanced codec.

--
Dave Farrance

DAB sounds worse than FM

Feb 8, 2005, 10:35:40 AM


You know who I think are the worst people on technical-related
newsgroups? It's people like you that just pass off hearsay as fact.
People like you are even relatively dangerous, because it's saying
things like you've just said that leads to myths and half-truths being
propagated.

You don't even have a clue about these advanced codecs, but you've just
read what the EBU have said and read what others on here have said (who
have also fallen for the EBU spin) and taken it as read that what the EBU
say is fact.

I've read the majority of this book about H.264 and MPEG-4 video coding:

http://tinyurl.com/6na24

and I have not seen one reference to these new codecs performing better
on progressive streams than on interlaced streams. Moreover, H.264
(which is the video codec that will be used in future) works identically
on interlaced and progressive images in that it works identically on the
macroblocks and sub-macroblocks presented to it.

If you're going to pass yourself off as being knowledgeable about this
subject, then provide some references that back up your claim, or shut
TF up.

Aztech

Feb 8, 2005, 11:30:16 AM
"DAB sounds worse than FM" <dab...@low.quality> wrote in message
news:g%4Od.121$bc1...@newsfe3-win.ntli.net...

> Dave Farrance wrote:
>> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>>> So, there may be arguments about the quality of i/p conversion for
>>> 1080i, but as far as resolution and picture sharpness are concerned
>>> then 1080i is the clear winner, and by some margin.
>>
>> Except that all the stuff you quoted predated advanced codecs that can
>> compress as well as interlacing, only with better quality. Advanced
>> codecs can't handle pre-interlaced video very efficiently. A better
>> comparison would be between 1080i and 1080p, when compressed into the
>> same bandwidth with an advanced codec.
>
>
> You know who I think are the worst people on technical-related newsgroups?
> It's people like you that just pass off hearsay as fact. People like you are
> even relatively dangerous, because it's saying things like you've just said
> that leads to myths and half-truths being propagated.

That's why we had to alert you to the very existence of the Kell factor last
week?

Codecs perform better with progressive material, that's a fact, not hearsay.
It's part of the reason 720p requires less bandwidth for a given perceived
quality.


Az.


davidr...@postmaster.co.uk

Feb 8, 2005, 11:43:54 AM
Did you see my latest "summary" reply in the last thread we discussed
this in?

DAB sounds worse than FM wrote:

[snip]

> * With regards to twitter he says: "Twitter can be reduced to tolerable
> levels by reducing the vertical detail somewhat, to perhaps 0.7 times."
> But he also says: "Twitter and scan-line visibility are inversely
> proportional to the count of image rows." Taken together, they imply
> that HDTV with 1080 lines will inherently suffer less from twitter than
> SDTV, so the 0.7 factor can be increased, thus making the subsequent
> calculations even more in favour of 1080i.

You should be careful to understand what he's saying here. I don't have
the book, but you can check; he may mean that "Twitter and scan-line
visibility are inversely proportional to the count of image rows"
_because_ of the 0.7 factor (which represents a higher spatial
frequency cut-off for 1080 than 576!). If you start using a different
factor for higher line counts, you may well remove the inverse
proportionality.

As I've said already, whatever the line count, you can't represent a
horizontal dark/light pattern (horizontal lines, if you like) with a
spacing equal to half the line count, because that's just a flashing
screen!

Cheers,
David.

davidr...@postmaster.co.uk

Feb 8, 2005, 11:53:57 AM
DAB sounds worse than FM wrote:

> You know who I think are the worst people on technical-related
> newsgroups? It's people like you that just pass off hearsay as fact.
> People like you are even relatively dangerous, because it's saying
> things like you've just said that leads to myths and half-truths
> being propagated.

There's a bit of common sense to be applied too though!

> I've read the majority of this book about H.264 and MPEG-4 video
> coding:
>
> http://tinyurl.com/6na24
>
> and I have not seen one reference to these new codecs performing
> better on progressive streams than on interlaced streams. Moreover,
> H.264 (which is the video codec that will be used in future) works
> identically on interlaced and progressive images in that it works
> identically on the macroblocks and sub-macroblocks presented to it.

Do you mean it treats the frames as if they were progressive (so it
just encodes two fields as one picture), or it treats subsequent fields
as if they were subsequent progressive frames, possibly accounting for
the vertical displacement (so it encodes two fields as two pictures).
Or a combination of both. (DV uses a combination of both, IIRC,
depending on the content, but it's hardly an intelligent codec).

> If you're going to pass yourself off as being knowledgeable about this
> subject, then provide some references that back up your claim, or
> shut TF up.

To be fair though, there is nothing out there comparing 720p50 with
720i25, or 1080i25 with 1080p50. You can (quite correctly IMO) infer
that performance with very high bitrate MPEG-2 favours interlacing,
since 1080i25 beats 720p50 much of the time in the SVT test, but beyond
that, I haven't seen any evidence either way.

You haven't pointed to a test which proves "advanced" codecs benefit
from interlacing, just as I haven't pointed to one which proves that
they don't.

Has that test ever been done?

Cheers,
David.

davidr...@postmaster.co.uk

Feb 8, 2005, 11:58:24 AM
This is my summary from the previous thread...


The formats under discussion are

576i25 = standard 720x576, 25 frames per second interlaced (i.e.
50 fields per second) a/k/a SD "PAL"

720p50 = 1280x720, 50 frames per second progressive

1080i25 = 1920x1080, 25 frames per second interlaced (i.e. 50
fields per second)

1080p50 = 1920x1080, 50 frames per second progressive

We have also touched on
1080p24 and 1080p25 = 1920x1080, 24 or 25 frames per second progressive
- ideal for film material.

1080i compromised = 1440x1080i - often used in place of the usual
1920x1080i in current broadcasts in the USA
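
For reference, here is a quick Python sketch of the raw pixel rates
these formats imply (the raw bit rates assume 8-bit 4:2:0 sampling,
i.e. 12 bits per pixel on average -- an illustrative assumption, not a
figure from this thread):

    # Raw pixel rates and uncompressed data rates for the formats above.
    formats = {
        "576i25":  (720,  576,  25),   # 25 interlaced frames/s of pixels
        "720p50":  (1280, 720,  50),
        "1080i25": (1920, 1080, 25),
        "1080p50": (1920, 1080, 50),
    }
    for name, (w, h, frames) in formats.items():
        mpix = w * h * frames / 1e6
        mbit = mpix * 12               # 12 bits/pixel for 8-bit 4:2:0
        print(f"{name}: {mpix:.2f} Mpixel/s, ~{mbit:.0f} Mbit/s raw")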


Some facts are not up for dispute:

1. All HD formats are better than 576i25

2. Raw 1080i25, optimally deinterlaced and displayed, usually looks
better than 720p50, optimally displayed.

3. Raw 1080p50 is higher quality than the other formats. Material
originated in 1080p50 and then converted to any other format would not
look worse than material originated directly in that other format.

4. Interlacing is a type of lossy coding. Thus
a) On its own, interlacing allows a higher quality image to be
transmitted in a given channel capacity than would otherwise be
possible
b) Some of the information removed by interlacing may be correctly
re-created later due to redundancies in the moving image
c) Some of the information removed by interlacing cannot be
correctly re-created, where the information was uniquely contained
within the disposed-of image data.

5. Broadcast signals are spectrum limited. We will not get raw HD video
data delivered to our door/aerial/dish - it will be compressed.

6. The EBU are proposing 1280x720p50 as the HD terrestrial delivery
format of choice.

7. The EBU are making no suggestions as to the _production_ format of
choice.

8. The EBU point to 1080p50 as the ultimate HD format (at least within
current HD).

9. Other countries currently deploying HD are using a combination of
720p, 1080i and 1080p24, 1080i being most common.

10. Terrestrial HD broadcasts launched in the near future are unlikely
to use the same MPEG-2 codec employed by these existing HD broadcasts.

11. Camera technology has been "progressive" for a decade or more.

12. Display technology is predicted to become mainly progressive in the
near future.

Is there anything to disagree with there?


So the question, as posed, is really simple: What format should be used
on a future HD terrestrial system? The EBU propose 720p now, 1080p
later. You propose 1080i.

You believe if we have 720p50, we'll never see 1080p50. I'm dead
certain if we have 1080i25 we'll never see 1080p50!

The argument we've been having is about something quite simple: whether
a codec can ever achieve similar compression gains to "interlacing",
without actually having to interlace.

That's it.


Unfortunately, you've already lost the argument: Compare interlaced
MPEG-2 HD with progressive MPEG-4 HD - already a codec has arrived that
manages to beat the previous codec and dump interlacing.

However, using interlacing with MPEG-4 may be able to give further
gains. The question is whether we can have a codec that is better than
what has come before, and which cannot "benefit" from interlacing.

Again, the answer is yes: even MPEG-2, at lower bitrates, does better
with a (lower resolution) progressive source than with an interlaced
one.

So again, that's happened.


So now all we're looking for is a codec that gives a better, and
substantially artefact-free, picture at a lower bitrate without
interlacing than with.

I've given a few silly examples of how a codec could easily achieve
similar gains to interlacing by using some simple encoding strategy.
You've taken these as concrete proposals and attacked them.

Providing quotes from a book that (correctly) justifies interlacing
(mathematically and practically) in the absence of other data reduction
techniques is not helping the debate at all. I'm asking you to "think
outside the box" - think about what _could_ be done - think about what
_might_ or even _should_ be possible.

It may well be that, even within the lifetime of the proposed system,
interlacing will be of benefit rather than a detriment to the system.
However, good old common sense suggests that this cannot be true
forever: interlacing dumps half the information in a totally
non-intelligent manner. Codecs dump much more information, supposedly
using some intelligence.

The question is: when will the intelligence in the codec catch up?
When, not if, surely?

You could apply a similar argument to colour sub-sampling. You would
probably come to the conclusion that it's so efficient, and so simple,
that even though it has a slightly perceptible effect sometimes, it'll
be with us for a long time. You might conclude the same for
interlacing, but at its worst, the effects of interlacing are far less
benign, and far more difficult to undo. What's more, it confuses two
domains - two dimensions - vertical movement and time. There must come
a point where this is more hindrance than help to an advanced video
codec.

Don't you agree?


So, the death of interlacing: I think it's "when" not "if" - what do
you think?

Cheers,
David.

Message has been deleted

DAB sounds worse than FM

Feb 8, 2005, 1:17:32 PM
Aztech wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote in message
> news:g%4Od.121$bc1...@newsfe3-win.ntli.net...
>> Dave Farrance wrote:
>>> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>>>> So, there may be arguments about the quality of i/p conversion for
>>>> 1080i, but as far as resolution and picture sharpness are concerned
>>>> then 1080i is the clear winner, and by some margin.
>>>
>>> Except that all the stuff you quoted predated advanced codecs that
>>> can compress as well as interlacing, only with better quality.
>>> Advanced codecs can't handle pre-interlaced video very efficiently.
>>> A better comparison would be between 1080i and 1080p, when
>>> compressed into the same bandwidth with an advanced codec.
>>
>>
>> You know who I think are the worst people on technical-related
>> newsgroups? It's people like you that just pass off hearsay as fact.
>> People like you are even relatively dangerous, because it's saying
>> things like you've just said that leads to myths and half-truths
>> being propagated.
>
> That's why we had to alert you to the very existence of the Kell
> factor last week?


Scuse me? If you're referring to me asking you for clarification of what
you meant, I was taking the piss, because you know jack-shit once you
scratch the surface.


> Codecs perform better with progressive material, that's a fact,


Reference. Where the fk are your references? The only reason why codecs
perform slightly better with progressive material was provided by ME.


> not
> hearsay. It's part of the reason 720p requires less bandwidth for a
> given perceived quality.


No, no it's not, you haven't been fking paying attention, have you? One
reason why 720p will require less fking bandwidth is because it's
encoding less fking pixels per second, you fool.

1920 x 1080 x 25 = 51.84 Mpixels/s

1280 x 720 x 50 = 46.08 Mpixels/s

51.84 / 46.08 = 1.125

In other words, 1080i needs to encode 12.5% more pixels per second, and
hence 12.5% more macroblocks per second.
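
As a quick Python check of those figures (pixel rates only; exact
macroblock counts would differ slightly, since 1080 is not a multiple
of 16 and real encoders pad it):

    # Pixels per second carried by each transmission format.
    rate_1080i = 1920 * 1080 * 25   # 51,840,000 = 51.84 Mpixels/s
    rate_720p  = 1280 * 720 * 50    # 46,080,000 = 46.08 Mpixels/s
    print(rate_1080i / rate_720p)   # 1.125 -> 12.5% more pixels/s for 1080i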

DAB sounds worse than FM

Feb 8, 2005, 2:11:54 PM
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>
>> You know who I think are the worst people on technical-related
>> newsgroups? It's people like you that just pass off hearsay as fact.
>> People like you are even relatively dangerous, because it's saying
>> things like you've just said that leads to myths and half-truths
>> being propagated.
>
> There's a bit of common sense to be applied too though!


Excuse me? You've not provided a single reason why progressive-scanned
streams should require a lower bit rate other than the number of pixels
encoded per second is lower. That isn't an efficiency gain at all, and
it is an efficiency gain that you need to find to prove this assertion.

I've come up with one possibility why there may be a *small* efficiency
improvement, and that is because interlaced macroblocks are twice the
size (in terms of area of the image) of progressive macroblocks, so the
correlation between pixels should be slightly higher for progressive
scanning.

That is the ONLY reason ANYBODY has EVER proposed on these threads as to
why there might be an efficiency gain for interlaced vs progressive. All
the rest has just been hearsay taken from the biased EBU people!!!!


>> I've read the majority of this book about H.264 and MPEG-4 video
>> coding:
>>
>> http://tinyurl.com/6na24
>>
>> and I have not seen one reference to these new codecs performing
>> better on progressive streams than on interlaced streams. Moreover,
>> H.264 (which is the video codec that will be used in future) works
>> identically on interlaced and progressive images in that it works
>> identically on the macroblocks and sub-macroblocks presented to it.
>
> Do you mean it treats the frames as if they were progressive (so it
> just encodes two fields as one picture), or it treats subsequent
> fields as if they were subsequent progressive frames, possibly
> accounting for the vertical displacement (so it encodes two fields as
> two pictures). Or a combination of both. (DV uses a combination of
> both, IIRC, depending on the content, but it's hardly an intelligent
> codec).


I think it can do both.

What I meant, though, is that the encoder takes 16 x 16 pixel
macroblocks (or sub-macroblocks) and encodes them, whether they're
interlaced or progressive, and it doesn't do anything different on
either format, because once it's being encoded it doesn't know that
there's any difference.


>> If you're going to pass yourself off as being knowledgeable about this
>> subject, then provide some references that back up your claim, or
>> shut TF up.
>
> To be fair though, there is nothing out there comparing 720p50 with
> 720i25, or 1080i25 with 1080p50. You can (quite correctly IMO) infer
> that performance with very high bitrate MPEG-2 favours interlacing,


It's not just at very high bit rate. Taking figures from the SVT WideXGA
picture quality test results:

New Mobile - 720p is better at the lowest tested bit rate of 10 Mbps,
1080i is better at 12Mbps and above

Stockholm Plan - 1080i is better at all bit rates

Knightsfields - 720p is better at the lowest tested bit rate of 10 Mbps,
1080i is better at about 12Mbps and above

Park Run - 720p is better at all bit rates

And remember that I propose that 720p be used for sport.


> since 1080i25 beats 720p50 much of the time in the SVT test, but
> beyond that, I haven't seen any evidence either way.
>
> You haven't pointed to a test which proves "advanced" codecs benefit
> from interlacing, just as I haven't pointed to one which proves that
> they don't.


No, I haven't pointed to any test that shows that interlacing is better
with the new codecs, because I haven't looked, because my understanding
is that the encoding "engine" works identically for both interlaced and
progressive pictures. Moreover, I don't trust any forthcoming EBU test
comparing the 2 formats, because it's trivially easy to rig such a test
by choosing the right source material, just as I don't trust Ofcom's and
the DAB industry's blind listening test for a new MPEG encoder.


> Has that test ever been done?


I don't know.

DAB sounds worse than FM

Feb 8, 2005, 2:39:17 PM
davidr...@postmaster.co.uk wrote:
> Did you see my latest "summary" reply in the last thread we discussed
> this in?


I read it, but I felt that the term 'resolution' had been poorly defined
and badly understood, so I thought it was a good idea to come to an
agreement specifically about what resolution is, and which of the
formats has the higher resolution. I've seen you say that 1080i doesn't
have a higher resolution, and I think I've seen Stephen Neal admit that
1080i has a slightly higher resolution, and I don't think either are
correct.


> DAB sounds worse than FM wrote:
>
> [snip]
>
>> * With regards to twitter he says: "Twitter can be reduced to
>> tolerable levels by reducing the vertical detail somewhat, to
>> perhaps 0.7 times." But he also says: "Twitter and scan-line
>> visibility are inversely proportional to the count of image rows."
>> Taken together, they imply that HDTV with 1080 lines will inherently
>> suffer less from twitter than SDTV, so the 0.7 factor can be
>> increased, thus making the subsequent calculations even more in
>> favour of 1080i.
>
> You should be careful to understand what he's saying here. I don't
> have the book, but you can check; he may mean that "Twitter and
> scan-line visibility are inversely proportional to the count of image
> rows" _because_ of the 0.7 factor (which represents a higher special
> frequency cut-off for 1080 than 576!). If you start using a different
> factor for higher line counts, you may well remove the inverse
> proportionality.


No, and this is why I've asked you repeatedly whether your understanding
of 'twitter' is different to mine. See below.


> As I've said already, whatever the line count, you can't represent a
> horizontal dark/light pattern (horizontal lines, if you like) with a
> spacing equal to half the line count, because that's just a flashing
> screen!


That effect ***IS*** twitter. And the reason why twitter is reduced as
the number of lines increases is because the size of the black and white
horizontal lines would have to be minute for HDTV for them to flash.

David, I cannot keep copying paragraphs from my book, because it takes
ages. *Please* do some research into what twitter is. I respect your
views far more than virtually everybody else on these groups, but I
don't think you have the right definition of what twitter is, or you
wouldn't be making some of the comments that you are making.

I've added an image of a nearly-horizontal black line to the bottom of
this page:

http://www.digitalradiotech.co.uk/hd_screen_sizes.htm

The line spans 8 scan lines at the resolution I can see, and with
interlaced scanning you'd see lines 1, 3, 5 and 7 in one field and lines
2, 4, 6 and 8 in the next.

If the number of horizontal lines increases then the amount missing in
each field is reduced. For example, for simplicity, assume the number of
lines increases by a factor of 2: then the line would span 16 scan
lines, and in field 1 you'd see lines 1, 3, 5, 7, 9, 11, 13 and 15, and
in field 2 you'd see lines 2, 4, 6, 8, 10, 12, 14 and 16. So you'd still
only see half of the line at any one time, but the gap at each point is
half as large, and being half as large it won't be as readily
perceptible.
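
A tiny Python sketch of that field split (hypothetical 8-row and 16-row
cases; it just divides the row indices between the two fields):

    # Which rows of a near-horizontal line land in each interlaced field.
    def fields(rows):
        field1 = [r for r in range(1, rows + 1) if r % 2 == 1]  # odd rows
        field2 = [r for r in range(1, rows + 1) if r % 2 == 0]  # even rows
        return field1, field2

    for rows in (8, 16):   # the 8-line case above vs a doubled line count
        f1, f2 = fields(rows)
        print(rows, "rows -> field 1:", f1, "/ field 2:", f2)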

Anyway, back to watching TV on my computer monitor with its inherent
interlaced-to-progressive conversion. ;)

Dave Farrance

Feb 8, 2005, 4:37:09 PM
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>You don't even have a clue about these advanced codecs, but you've just
>read what the EBU have said and read what others on here have said (who
>have also fallen for the EBU spin) and taken it as read that what the EBU
>say is fact.

Maybe it'd help if I gave the first fundamental building block of the
argument against interlacing which is:

It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing.

(Progressive displays being a starting assumption here, because the
majority of HDTV-ready TVs being sold in Europe are already progressive,
and that proportion is projected to increase.)

Here's the proof...

An interlacing system with compression looks like this:

Source -> Interlace -> Compress -> Transport -> Decompress ->
De-interlace -> Display

Now group the interlacing and compression blocks together, and the
decompression and de-interlacing blocks together:

Source -> Interlace&Compress -> Transport -> Decompress&De-interlace ->
Display.

Now we can see that the Interlacing & Compression block is itself a
compression codec by definition. i.e. it takes the source video and
compresses it with some loss to a reduced bitrate. The same applies to
the decompression and de-interlacing block. So it reduces conceptually
to a progressive system with the same transported bitrate.

Source -> Compress -> Transport -> Decompress -> Display

QED.
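
In code terms, the grouping is just function composition -- a toy
Python sketch with placeholder functions, not a real codec API:

    # Interlace followed by compress is, taken together, one lossy encoder.
    def interlace(frames): ...   # placeholder: drop alternate rows per field
    def compress(video): ...     # placeholder: any advanced codec's encoder

    def interlaced_encoder(frames):
        # The grouped block: source video in, reduced bitrate out --
        # by definition a compression codec in its own right.
        return compress(interlace(frames))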

With me so far?

--
Dave Farrance


DAB sounds worse than FM

Feb 8, 2005, 5:31:54 PM
Dave Farrance wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> You don't even have a clue about these advanced codecs, but you've
>> just read what the EBU have said and read what others on here have
>> said (who have also fallen for the EBU spin) and taken it as read that
>> what the EBU say is fact.
>
> Maybe it'd help if I gave the first fundamental building block of the
> argument against interlacing which is:
>
> It is trivial to prove that in the world of advanced codecs


Here you go again. Face the facts, you don't understand how advanced
video codecs work, so you are obviously not going to be able to prove
anything.


> and progressive displays that there is no bitrate reduction gained
> from
> interlacing.


No bit rate reduction gained from interlacing? Wow, you really don't
understand codecs at all!!


> (Progressive displays being a starting assumption here, because the
> majority of HDTV-ready TVs being sold in Europe are already
> progressive, and that proportion is projected to increase.)


Absolutely.


> Here's the proof...


Proof? Here's a link to definitions of the word 'proof':

http://www.onelook.com/?w=proof&ls=a

because you don't seem to understand the meaning of the word.


> An interlacing system with compression looks like this:
>
> Source -> Interlace -> Compress -> Transport -> Decompress ->
> De-interlace -> Display
>
> Now group the interlacing and compression blocks together, and the
> decompression and de-interlacing blocks together:
>
> Source -> Interlace&Compress -> Transport -> Decompress&De-interlace
> -> Display.
>
> Now we can see that the Interlacing & Compression block is itself a
> compression codec by definition. i.e. it takes the source video and
> compresses it with some loss to a reduced bitrate. The same applies to
> the decompression and de-interlacing block. So it reduces conceptually
> to a progressive system with the same transported bitrate.


What, so interlacing and compressing in the same block is different to
interlacing followed by compressing? Well, bugger me, patent your
fantastic idea immediatement!

Or, perhaps, you don't actually seem to know what you're talking about?


> Source -> Compress -> Transport -> Decompress -> Display
>
> QED.


Eh? QED follows after something has been proved, so what are you putting
it in this post for?


> With me so far?


No, I'm not with you at all. You see you started with this statement:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

and attempted to prove that that statement is correct. Unfortunately,
you waffled a bit, described an interlacing system, then an interlacing
system with a slightly different block diagram which is still an
interlacing system; provided no explanation whatsoever of why or how
there would be any bit rate reduction for progressive compared to
interlaced.

Sorry pal, but you don't seem to know the first thing about video
encoding, so any attempts at proving something are obviously going to be
a little bit hindered, i.e. impossible.

Dave Farrance

Feb 9, 2005, 4:04:04 AM

"DAB sounds worse than FM" <dab...@low.quality> wrote:

>Dave Farrance wrote:
>> ...


>> Now we can see that the Interlacing & Compression block is itself a
>> compression codec by definition. i.e. it takes the source video and
>> compresses it with some loss to a reduced bitrate. The same applies to
>> the decompression and de-interlacing block. So it reduces conceptually
>> to a progressive system with the same transported bitrate.

Some of the other posters that had tried to explain to you the factors
influencing the EBU decision had taken the principle that I outlined
here as a starting point, considering it too fundamental to be worth
explaining. I suspected that you'd missed this, hence my explanation. So
I'll ignore your comments where they are purely abusive, and see if I
can straighten out the other points.

>What, so interlacing and compressing in the same block is different to
>interlacing followed by compressing?

No, on the contrary, in that proof they are the same, just grouped
differently.

>No, I'm not with you at all. You see you started with this statement:
>"It is trivial to prove that in the world of advanced codecs and
>progressive displays that there is no bitrate reduction gained from
>interlacing."
>and attempted to prove that that statement is correct. Unfortunately,
>you waffled a bit, described an interlacing system, then an interlacing
>system with a slightly different block diagram which is still an
>interlacing system; provided no explanation whatsoever of why or how
>there would be any bit rate reduction for progressive compared to
>interlaced.

No it didn't, and this is the fundamental point here. It didn't show
that there would be a bitrate reduction for progressive compared to
interlaced, because I hadn't got that far. I was leaving that for a
later argument.

It merely proved what I said, which was there was no bitrate reduction
for interlaced compared to progressive. There is a difference. In other
words it proved that progressive could have the same bitrate as
interlaced.

Let me put the argument another way:

If you added interlacing to the system, and found that it resulted in a
bitrate reduction without loss of quality, that would merely demonstrate
that the codec was not state-of-the-art, because the interlacing could
be logically grouped with the codec. In other words, with a
state-of-the-art codec, you can't get a bitrate reduction without loss
of quality through interlacing.

Does that help?

--
Dave Farrance

DAB sounds worse than FM

Feb 9, 2005, 7:55:34 AM
Dave Farrance wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Dave Farrance wrote:
>>> ...
>>> Now we can see that the Interlacing & Compression block is itself a
>>> compression codec by definition. i.e. it takes the source video and
>>> compresses it with some loss to a reduced bitrate. The same applies
>>> to the decompression and de-interlacing block. So it reduces
>>> conceptually to a progressive system with the same transported
>>> bitrate.
>
> Some of the other posters that had tried to explain to you the factors
> influencing the EBU decision had taken


Erm, I have read all the EBU's articles on this, so I don't need their
position explained to me.


> the principle that I outlined
> here as a starting point, considering it too fundamental to be worth
> explaining. I suspected that you'd missed this, hence my explanation.
> So I'll ignore your comments where they are purely abusive, and see
> if I can straighten out the other points.


Here is your claim:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

So, all I need to do is to provide a single counter-example to *prove*
that your claim is false. So here it is:

You specifically claim that there is "no bitrate reduction gained from
interlacing". So, comparing 1080i25 with 1080p50:

Because the H.264 codec consumes the vast majority of its bit rate
encoding motion vectors and macroblock prediction residuals, and because
any moving object that covers a given number of pixels will require
twice as many macroblock prediction residuals and motion vectors for
1080p as for 1080i (simply because an object's height in pixels in any
given interlaced field is half what it is in a progressive frame), 1080i
saves 50% of the bit rate of 1080p, all else being equal.

If you read this introductory article on the new H.264/AVC video codec
(this is the advanced codec you're referring to, BTW, even though you
probably don't even know its name):

http://www.ebu.ch/trev_293-schaefer.pdf (543 KB)

it says this on page 1:

"The VCL design - as in any prior ITU-T and ISO/IEC JTC1 standard since
H.261 [2] - follows the so-called block-based hybrid video-coding
approach. The basic source-coding algorithm is a hybrid of inter-picture
prediction, to exploit the temporal statistical dependencies, and
transform coding of the prediction residual to exploit the spatial
statistical dependencies."

And on here:

http://www.vcodex.com/h264_overview.pdf

it says on page 1:

"The basic functional elements (prediction, transform, quantization,
entropy encoding) are little different from previous standards (MPEG1,
MPEG2, MPEG4, H.261, H.263); the important changes in H.264 occur in the
details of each functional element."

So, basically, the new advanced video codec that you're referring to
works in an identical *manner* to the MPEG-2 codec, but its efficiency
improvements are due to:

"There is no single coding element in the VCL that provides the majority
of the dramatic improvement in compression efficiency, in relation to
prior video coding standards. Rather, it is the plurality of smaller
improvements that add up to the significant gain."

And if there was a significant difference in the handling of interlaced
and progressive images, then you really would expect that article about
H.264 in the EBU Tech Review to mention it. Searching that document for
the terms: "interlace", "interlaced" or "interlacing" produces 2 "hits"
for the word "interlaced":

"Technical overview of H.264/AVC

The H.264/AVC design [2] supports the coding of video (in 4:2:0 chroma
format) that contains either progressive or interlaced frames, which may
be mixed together in the same sequence. Generally, a frame of video
contains two interleaved fields, the top and the bottom field. The two
fields of an interlaced frame, which are separated in time by a field
period (half the time of a frame period), may be coded separately as two
field pictures or together as a frame picture. A progressive frame
should always be coded as a single frame picture; however, it is still
considered to consist of two fields at the same instant in time."

Where's the mention of this big improvement in coding efficiency?

Again, quoting from:

http://www.vcodex.com/h264_overview.pdf

"the important changes in H.264 occur in the details of each functional
element"

But these functional elements work IDENTICALLY on macroblocks whether
the pixels are from an interlaced field or a progressive frame. To these
functional elements, a macroblock is simply a matrix of numbers. Nothing
more, nothing less.

An example of the improvement in efficiency is the use of arithmetic
coding instead of Huffman coding for the lossless entropy coding.
Arithmetic coding is more efficient than Huffman coding. And that's it.
The arithmetic coding isn't more efficient for progressive than for
interlaced, because it's just a routine for encoding data efficiently.
It does not and can not differentiate between whether the numbers it is
encoding are from a progressive or interlaced source.
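
A toy Python illustration of that last point, using a hypothetical
skewed symbol distribution (arithmetic coding can approach the entropy
bound, while Huffman must assign a whole number of bits per symbol):

    import heapq, math

    def huffman_lengths(probs):
        # Build Huffman code lengths for a list of symbol probabilities.
        heap = [(p, i, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        tie = len(probs)
        while len(heap) > 1:
            p1, _, s1 = heapq.heappop(heap)
            p2, _, s2 = heapq.heappop(heap)
            for s in s1 + s2:
                lengths[s] += 1  # each merge adds one bit to these symbols
            tie += 1
            heapq.heappush(heap, (p1 + p2, tie, s1 + s2))
        return lengths

    probs = [0.7, 0.15, 0.1, 0.05]                   # hypothetical source
    entropy = -sum(p * math.log2(p) for p in probs)  # ~1.32 bits/symbol
    huffman = sum(p * n for p, n in zip(probs, huffman_lengths(probs)))
    print(f"entropy {entropy:.2f} vs Huffman {huffman:.2f} bits/symbol")

Neither figure depends on where the numbers came from: interlaced or
progressive, the entropy coder just sees symbols.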


>> What, so interlacing and compressing in the same block is different
>> to interlacing followed by compressing?
>
> No, on the contrary, in that proof they are the same, just grouped
> differently.


You provided absolutely no proof whatsoever.


>> No, I'm not with you at all. You see you started with this statement:
>> "It is trivial to prove that in the world of advanced codecs and
>> progressive displays that there is no bitrate reduction gained from
>> interlacing."
>> and attempted to prove that that statement is correct. Unfortunately,
>> you waffled a bit, described an interlacing system, then an
>> interlacing system with a slightly different block diagram which is
>> still an interlacing system; provided no explanation whatsoever of
>> why or how there would be any bit rate reduction for progressive
>> compared to interlaced.
>
> No it didn't, and this is the fundamental point here. It didn't show
> that there would be a bitrate reduction for progressive compared to
> interlaced, because I hadn't got that far. I was leaving that for a
> later argument.


Excuse me? This was your assertion:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

and at the end of your post you wrote:

"QED."

which is what you write once you have proven something. But you proved
absolutely nothing.


> It merely proved what I said, which was there was no bitrate reduction
> for interlaced compared to progressive. There is a difference. In
> other words it proved that progressive could have the same bitrate as
> interlaced.


I've just had a look at your post, and you do not mention the image
dimensions anywhere, i.e. you don't say whether you're comparing 1080i
with 1080p or 1080i with 720p.

If you compare 1080i with 1080p, then, all else being equal (same
advanced encoder set up identically), the bit rate saving for
interlaced will be approximately a factor of 2.

If you compare 1080i with 720p then due to the differences in image size
there is a bit rate saving for 720p due to less pixels having to be
encoded:

1920 x 1080 x 25 = 51.84 Mpixels/s

1280 x 720 x 50 = 46.08 Mpixels/s

51.84 / 46.08 = 1.125

i.e. 1080i requires approximately a 12.5% increase in bit rate.

But you're comparing apples with oranges, because 1080i has 2.25 times
as many pixels on the screen as 720p -- that's where you get the
12.5%, because 2.25 / 2 = 1.125.


> Let me put the argument another way:
>
> If you added interlacing to the system, and found that it resulted in
> a bitrate reduction without loss of quality, that would merely
> demonstrate that the codec was not state-of-the-art,


Sorry, this is preposterous. I honestly do not think you know what you
are saying, because if you did then you would know how utterly
ridiculous that comment is.


> because the
> interlacing could be logically grouped with the codec. In other
> words, with a state-of-the-art codec, you can't get a bitrate
> reduction without loss of quality through interlacing.
>
> Does that help?


You're living in cloud cuckoo land, pal. Sorry, but you really do not
have the faintest idea what you're going on about.

davidr...@postmaster.co.uk

Feb 9, 2005, 9:35:46 AM
DAB sounds worse than FM wrote:
> davidr...@postmaster.co.uk wrote:
> > Did you see my latest "summary" reply in the last thread we
> > discussed this in?
>
> I read it, but I felt that the term 'resolution' had been poorly
> defined and badly understood, so I thought it was a good idea to come
> to an agreement specifically about what resolution is, and which of
> the formats has the higher resolution. I've seen you say that 1080i
> doesn't have a higher resolution, and I think I've seen Stephen Neal
> admit that 1080i has a slightly higher resolution, and I don't think
> either are correct.

I read your replies very carefully most of the time, but it doesn't
seem that you do the same with other people's replies. Either that, or
I'm very bad at writing! I was honestly amazed when you replied to my
paragraph starting "Stephen has said..." with "don't spell my name like
that" - it was obvious (to me at least!) that I was talking to _you_
about what someone _else_ (called Stephen - with a ph!) had said.

There are other places where I'm talking about one specific thing to
address one specific point - and without quoting _everything_ I've
tried my best to make it clear exactly what I'm addressing. But you
argue against the specific thing in a very general sense as if I don't
understand the bigger picture. I must have written pages about the
evils of interlacing, simply to answer _one_ point of yours: that
deinterlacers will improve - imagine what a super computer and twenty
PhDs could do - deinterlacing=sorted. To argue against this, I've tried
to show situations where intelligence and computing power are
irrelevant - interlacing has totally destroyed specific information in
specific pictures.

However, the bigger picture (and I've mentioned this in my summary) is
that interlacing _does_ offer gains. Of course 1920 pixels gives you
better horizontal resolution than 1280 pixels - no one has ever doubted
or even bothered to debate it! Therefore of course 1080i gives higher
resolution than 720p. The perceived vertical resolution may be better
one way or the other, depending on various factors, but on much
material 1080i can again gain some real vertical resolution advantage
too (the specific disadvantages of interlacing aside).

So we don't need to clarify resolution. And all those quotes from your
book were pointless anyway - he's talking about the whole system from
end to end, but all we're talking about is the transmission format - in
that sense, you can talk about it in terms of the number of pixels - or
about half the number of pixels if you're talking about "lines of
resolution". If it's further limited in the camera or monitor, that's
kind of irrelevant - and (unlike deinterlacing) it will improve beyond
the bounds of the system in time.

[snip flashing screen due to horizontal lines]

> That effect ***IS*** twitter. And the reason why twitter is reduced as
> the number of lines increases is because the size of the black and
> white horizontal lines would have to be minute for HDTV for them to
> flash.

Steve, in one of your quotes from your book, he differentiated twitter
and large area flicker. Both happen because of excess high vertical
frequency components in the interlaced signal, but the perceived effect
is slightly different. I've never bothered to differentiate between the
two _until_ you posted that quote.

Read this very carefully: the limit for any interlaced system is half
the line count. That's the spacing (thus, frequency) at which a static
image can make the entire screen flash at frame rate due to
interlacing. Yes, that spacing is smaller for more lines, because it's
proportional to the line spacing!!! However, if you need to limit the
resolution by factor X relative to the line count to avoid twitter in
SD, then you have to limit the resolution by exactly the same factor X
relative to the line count to avoid twitter in HD. You keep trying to
suggest there's some further gain - there isn't! It's exactly the same
proportionality - the only gain is that the lines are smaller in HD,
thus the frequency is higher. Good! But that doesn't mean that a
factor of 0.7 (say) with SD will become 0.9 with HD (as you have tried
to suggest) - it won't. It'll still be 0.7 (if that is the compromise
you choose; 0.7 still leaves visible twitter!).
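
A short Python sketch of that proportionality (0.7 is the book's
example compromise, not a measured figure):

    # Usable vertical detail scales with line count at a fixed factor;
    # HD raises the absolute cut-off, not the factor itself.
    factor = 0.7
    for lines in (576, 1080):
        print(lines, "lines ->", factor * lines, "usable lines of detail")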

> David, I cannot keep copying paragraphs from my book, because it
> takes ages. *Please* do some research into what twitter is. I respect
> your views far more than virtually everybody else on these groups

Yes, I know, which probably is the only reason we can have this
discussion without you telling me to fk off! Which is quite worrying
really!

> I've added an image of a nearly-horizontal black line to the bottom
> of this page:
>
> http://www.digitalradiotech.co.uk/hd_screen_sizes.htm
>
> The line spans 8 scan lines at the resolution I can see, and with
> interlaced scanning you'd see lines 1, 3, 5 and 7 in one field and
> lines 2, 4, 6 and 8 in the next.
>
> If the number of horizontal lines increases then the amount missing
> in each field is reduced. For example, for simplicity, assume the
> number of lines increases by a factor of 2: then the line would span
> 16 scan lines, and in field 1 you'd see lines 1, 3, 5, 7, 9, 11, 13
> and 15, and in field 2 you'd see lines 2, 4, 6, 8, 10, 12, 14 and 16.
> So you'd still only see half of the line at any one time, but the gap
> at each point is half as large, and being half as large it won't be
> as readily perceptible.

Yes, of course, because the limit scales with the line count. There are
paragraphs where you've tried to scale it even further, which is what I
was disagreeing with.

> Anyway, back to watching TV on my computer monitor with its inherent
> interlaced-to-progressive conversion. ;)

Funny - I was just watching the interlaced-to-progressive conversion of
a Samsung PAL progressive DVD player yesterday, and it was truly awful!
It's great with film sources, but give it some interlaced video and it
really sucks! Supply the interlaced signal to the progressive CRT or
plasma I was testing and the results are much better, because the
deinterlacing in these devices is better.

That doesn't prove either point - it just amazed me how badly
deinterlacing was carried out in that particular device.

Cheers,
David.

davidr...@postmaster.co.uk

Feb 9, 2005, 9:51:45 AM
DAB sounds worse than FM wrote:
> davidr...@postmaster.co.uk wrote:
> > DAB sounds worse than FM wrote:
> >
> >> You know who I think are the worst people on technical-related
> >> newsgroups? It's people like you that just pass off hearsay as
> >> fact. People like you are even relatively dangerous, because it's
> >> saying things like you've just said that leads to myths and
> >> half-truths being propagated.
> >
> > There's a bit of common sense to be applied too though!
>
> Excuse me? You've not provided a single reason why progressive-scanned
> streams should require a lower bit rate other than the number of
> pixels encoded per second is lower. That isn't an efficiency gain at
> all, and it is an efficiency gain that you need to find to prove this
> assertion.

I vaguely remember going through four different possible types of video
material (still picture, constantly changing picture, some movement,
something else) and suggesting very simply how you could include the
gains due to interlacing within the codec itself, avoiding interlacing.

You replied that H.264 didn't work like that. Seeing how you've just
replied to Dave Farrance's succinct re-stating of this issue, I don't
think it will help either of us if I go through it again.

> >> I've read the majority of this book about H.264 and MPEG-4 video
> >> coding:
> >>
> >> http://tinyurl.com/6na24
> >>
> >> and I have not seen one reference to these new codecs performing
> >> better on progressive streams than on interlaced streams. Moreover,
> >> H.264 (which is the video codec that will be used in future) works
> >> identically on interlaced and progressive images in that it works
> >> identically on the macroblocks and sub-macroblocks presented to it.
> >
> > Do you mean it treats the frames as if they were progressive (so it
> > just encodes two fields as one picture), or it treats subsequent
> > fields as if they were subsequent progressive frames, possibly
> > accounting for the vertical displacement (so it encodes two fields
> > as two pictures). Or a combination of both. (DV uses a combination
> > of both, IIRC, depending on the content, but it's hardly an
> > intelligent codec).
>
> I think it can do both.

I love that "think"! We're all idiots who shouldn't dare to comment on
this, but you've read a book about video coding, and you even _think_
you know how existing codecs handle interlacing - but heaven forbid
anyone else should suggest that future codecs might not benefit from
interlacing!

> What I meant, though, is that the encoder takes 16 x 16 pixel
> macroblocks (or sub-macroblocks) and encodes them, whether they're
> interlaced or progressive, and it doesn't do anything different on
> either format, because once it's being encoded it doesn't know that
> there's any difference.

That's how I read some of the MPEG-2 and Dirac stuff, though I vaguely
thought MPEG-2 could switch to field based if it wanted to. Dirac can't
yet. DV can.

If you think about what this means is happening inside the codec -
especially what information it will be throwing away, and think about
what your deinterlacer is going to have left to work with, then you
realise that my suggestion that a codec should be able to do better
isn't that far-fetched.


Stop reading books and actually think through what will happen! Think
about what an interlaced frame (or two subsequent interlaced fields)
looks like to a lossy coder, and then you should realise why it seems so
incredible to everyone else that it's better to interlace and then give
the codec this rubbish, than to give the codec the full progressive
picture and let it choose its own compromises. If it can't do at least
as well as interlacing+codec then it's just not that bright!

Or more likely, it's refusing to trash the image in the way that
interlacing does, because it's not quite bright enough to realise the
ways in which fast motion can have reduced resolution, while other fast
changes can't.

> It's not just at very high bit rate. Taking figures from the SVT
> WideXGA picture quality test results:
>
> New Mobile - 720p is better at the lowest tested bit rate of 10 Mbps,
> 1080i is better at 12Mbps and above
>
> Stockholm Plan - 1080i is better at all bit rates
>
> Knightsfields - 720p is better at the lowest tested bit rate of 10
> Mbps, 1080i is better at about 12Mbps and above
>
> Park Run - 720p is better at all bit rates
>
> And remember that I propose that 720p be used for sport.

Would you like me to upload crappy MPEGs of the material somewhere so
you can at least see what it contains? There's almost nothing happening
in three of the clips! If you think Park run is sport, then it's
obvious you haven't watched it! ;-)

> No, I haven't pointed to any test that shows that interlacing is
> better with the new codecs, because I haven't looked, because my
> understanding is that the encoding "engine" works identically for
> both interlaced and progressive pictures. Moreover, I don't trust any
> forthcoming EBU test comparing the 2 formats, because it's trivially
> easy to rig such a test by choosing the right source material, just
> as I don't trust Ofcom's and the DAB industry's blind listening test
> for a new MPEG encoder.

Paranoia is showing here! I have to tell you, looking at the SVT
material that the 720p material is worse (format aside) than the other
stuff - maybe it was filmed slightly later in the day? Maybe the camera
man wasn't quite sure how to set the camera aperture correction for
that format. I don't know. I can tell you though: that SVT test was
certainly not rigged to favour 720p!!!

Once 1080p50 production equipment is available, then a really fair test
can be carried out.

Cheers,
David.

davidr...@postmaster.co.uk

unread,
Feb 9, 2005, 10:05:17 AM2/9/05
to
Dave Farrance wrote:

> If you added interlacing to the system, and found that it resulted in
> a bitrate reduction without loss of quality, that would merely
> demonstrate that the codec was not state-of-the-art, because the
> interlacing could be logically grouped with the codec. In other words,
> with a state-of-the-art codec, you can't get a bitrate reduction
> without loss of quality through interlacing.
>
> Does that help?

It helps me to realise that I'm sane! ;-)

It may well be that we don't have such state-of-the-art codecs for
years (though, I guess, by definition, the best we have at any given
time is always "state of the art", but sometimes what we have is still
rubbish!). However, the principle you show is so blindingly simple that
I can't see any logical or even practical argument against it.


I thought of another example last night: PAL is great. You can fit both
luma _and_ chroma into the same 6MHz of bandwidth. Sure, it has some
shortcomings, but they're not too obvious with good post-processing
(e.g. BBC transform PAL decoder). So, what I propose to do is not to
store colour in MPEG-2. Oh no - just store the Y information. But that
Y information should actually be a PAL signal (undecoded) - so what you
have is the luma information which includes, modulated around 4.43MHz,
the colour difference signals. It is basically what you see on
a B+W TV without a PAL decoder; that is what I want to feed to the
MPEG encoder as Y only. Do not store U and V. Now I realise the MPEG
encoder doesn't really know what to do with this PAL on Y signal, and
if anything survives it'll be more luck than judgement, and I know any
sensible encoder could compress colour information much better if it
actually _had_ discrete colour information rather than the PAL mess I'm
giving it, but look - I've done a 2:1 compression before we've even
started - surely that'll help the encoder to get the bitrate down even
lower?
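
If it helps, here's the daft scheme as a few lines of Python - a
deliberately crude sketch of my own (plain quadrature modulation,
ignoring the PAL V-axis switch, the burst and everything else; all
values made up):

import numpy as np

fsc = 4.43361875e6                 # PAL colour subcarrier, Hz
fs = 13.5e6                        # sample rate, Hz
t = np.arange(720) / fs            # one line's worth of samples

Y = np.linspace(0.2, 0.8, t.size)  # made-up luma ramp
U = np.full(t.size, 0.10)          # made-up colour difference values
V = np.full(t.size, -0.05)

composite = (Y + U * np.sin(2 * np.pi * fsc * t)
               + V * np.cos(2 * np.pi * fsc * t))

# Hand 'composite' to the encoder as if it were plain luma: the chroma
# is now disguised as high-frequency luma detail - the first thing a
# lossy coder likes to throw away.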

Well, no, of course not. Apart from the problems with synchronising the
regenerated colour burst properly, it's plain stupid to give a lossy
codec something (that was originally _separate_ information) in a
single domain. It's worse still because the extra information is
expressed as high frequency components and, normally, it would rather
like to dump high frequency components! It's even dafter because all
the tools the lossy coder would normally use to compress the signal
would work a lot better if this primitive bit of PAL lossy coding
hadn't been put first in the signal chain - what's more, if the
advanced lossy coder can't handle colour information more efficiently
than the PAL lossy coder, then it's not very good.

So putting PAL into an MPEG encoder as coded luma-only is just silly.
It might work, but _if_ it does, and _if_ it gives efficiency/quality
gains, then that just _proves_ that the MPEG encoder is handling colour
compression _very_ inefficiently.

Draw your own comparison with interlacing! ;-)

Cheers,
David.

Googolplex

unread,
Feb 9, 2005, 11:34:36 AM2/9/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:

>>David, I cannot keep copying paragraphs from my book, because it
>>takes ages. *Please* do some research into what twitter is. I
>>respect your views far more than virtually everybody else on
>>these groups
>
>
> Yes, I know, which probably is the only reason we can have this
> discussion without you telling me to fk off! Which is quite worrying
> really!
>

You just know that if we were still in the pre-colour era, with PAL
about to be introduced, that DSWTFM would be campaigning that PAL is
crap because it gives artifacts and reduces the luma bandwidth
available. Not only that but it affects viewers who wish to remain
watching in black and white! He'd probably be campaigning for all new
colour TV receivers to be equipped with two tuners so as to allow the
chroma information to be broadcast on a different UHF channel to the
luma, halving the number of available UHF channels at a stroke. And
no-one would be disagreeing that PAL has some downsides, but they would
be tirelessly pointing out to him that it is a compromise.

DAB sounds worse than FM

unread,
Feb 9, 2005, 1:03:43 PM2/9/05
to
Googolplex wrote:
> davidr...@postmaster.co.uk wrote:
>> DAB sounds worse than FM wrote:
>
>>> David, I cannot keep copying paragraphs from my book, because it
>>> takes ages. *Please* do some research into what twitter is. I
>>> respect your views far more than virtually everybody else on
>>> these groups
>>
>>
>> Yes, I know, which probably is the only reason we can have this
>> discussion without you telling me to fk off! Which is quite worrying
>> really!
>>
>
> You just know that if we were still in the pre-colour era, with PAL
> about to be introduced, that DSWTFM would be campaigning


Firstly, are you actually aware that David Robinson completely agrees
with me about the state of the audio quality on DAB? It seems not.


> that PAL is
> crap because it gives artifacts and reduces the luma bandwidth
> available. Not only that but it affects viewers who wish to remain
> watching in black and white! He'd probably be campaigning for all new
> colour TV receivers to be equipped with two tuners so as to allow the
> chroma information to be broadcast on a different UHF channel to the
> luma, halving the number of available UHF channels at a stroke. And
> no-one would be disagreeing that PAL has some downsides, but they
> would be tirelessly pointing out to him that it is a compromise.


Don't even attempt to patronise me.

You paint me as being someone who is unreasonable. If you look at this
page:

http://www.digitalradiotech.co.uk/wasted_dab_multiplex_capacity.htm

read the following sentence (below the local DAB mux table):

"So, out of a total of 183 stations that could be using 160kbps or above
(including 3 BBC stations and the Digital One stations), we currently
have only 6 stations using 160kbps or above!!"

Compromise? Where is the compromise when THE SPACE IS NOT USED?

DAB sounds worse than FM

unread,
Feb 9, 2005, 1:23:31 PM2/9/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>> davidr...@postmaster.co.uk wrote:
>>> Did you see my latest "summary" reply in the last thread we
>>> discussed this in?
>>
>> I read it, but I felt that the term 'resolution' had been poorly
>> defined and badly understood, so I thought it was a good idea to
>> come to an agreement specifically about what resolution is, and
>> which of the formats has the higher resolution. I've seen you say
>> that 1080i doesn't have a higher resolution, and I think I've seen
>> Stephen Neal admit that 1080i has a slightly higher resolution, and
>> I don't think either are correct.
>
> I read your replies very carefully most of the time, but it doesn't
> seem that you do the same with other people's replies. Either that, or
> I'm very bad at writing! I was honestly amazed when you replied to my
> paragraph starting "Stephen has said..." with "don't spell my name
> like that" - it was obvious (to me at least!) that I was talking to
> _you_ about what someone _else_ (called Stephen - with a ph!) had
> said.


It was a misunderstanding, simple as that. We all do this from time to
time. I've seen you reply to people (digital radio-related) where I know
you've misunderstood their point, but it's not my place to butt in.


> There are other places where I'm talking about one specific thing to
> address one specific point - and without quoting _everything_ I've
> tried my best to make it clear exactly what I'm addressing. But you
> argue against the specific thing in a very general sense as if I don't
> understand the bigger picture.


I've said on numerous occasions that I don't think you're using the
"right" definition of twitter, or if you are using the right definition
then my book has got it wrong.


> I must have written pages about the
> evils of interlacing, simply to answer _one_ point of yours: that
> deinterlacers will improve - imagine what a super computer and twenty
> PhDs could do - deinterlacing=sorted. To argue against this, I've
> tried to show situations where intelligence and computing power are
> irrelevant - interlacing has totally destroyed specific information in
> specific pictures.


One problem is that I think I've come up with a good idea of how to
"solve" this, but I don't want to say what the idea is in case I ever
want to try and develop the idea in the future. So, basically, I cannot
argue my case on this without explaining my idea. But I also don't think
my idea is all that revolutionary, and if I can think of it, then so can
many other people.


> However, the bigger picture (and I've mentioned this in my summary) is
> that interlacing _does_ offer gains.


Well, you've probably read Dave Farrance's posts claiming that
interlacing saves no bit rate whatsoever, and he won't listen to me, so
I wish you'd tell him that he's wrong.


> Of course 1920 pixels gives you
> better horizontal resolution than 1280 pixels - no one has ever
> doubted or even bothered to debate it! Therefore of course 1080i
> gives higher resolution than 720p. The perceived vertical resolution
> may be better one way or the other,


No, 1080i's vertical resolution is always greater. I think it depends
only on the twitter filter; the value suggested in that book, and the
value I've seen Stephen Neal use, is 0.7, and 0.7 x 1080 = 756 lines.
And twitter is inversely proportional to the number of lines, so twitter
should be less of a problem for HDTV than for SD, so I'd imagine that
that 0.7 value can be increased.


>> That effect ***IS*** twitter. And the reason why twitter is reduced
>> as the number of lines increases is because the size of the black
>> and white horizontal lines would have to be minute for HDTV for them
>> to flash.
>
> Steve, in one of your quotes from your book, he differentiated twitter
> and large area flicker. Both happen because of excess high vertical
> frequency components in the interlaced signal, but the perceived
> effect is slightly different. I've never bothered to differentiate
> between the two _until_ you posted that quote.
>
> Read this very carefully: the limit for any interlaced system is half
> the line count. That's the spacing (thus, frequency) at which a static
> image can make the entire screen flash at frame rate due to
> interlacing. Yes, that spacing is smaller for more lines, because it's
> proportional to the line spacing!!! However, if you need to limit the
> resolution by factor X relative to the line count to avoid twitter in
> SD, then you have to limit the resolution by exactly the same factor X
> relative to the line count to avoid twitter in HD.


I disagree. Look at how he words this:

"Twitter is introduced, however, by vertical detail whose scale

approaches the scan-line pitch. Twitter can be reduced to tolerable


levels by reducing the vertical detail somewhat, to perhaps 0.7 times."

Because he's using the terms "to tolerable levels" and "perhaps 0.7
times", this is clearly a subjective issue and the 0.7 parameter
value can be varied depending on how perceptible the twitter is; and
because, as has been stated many times, the twitter is inversely
proportional to the number of lines, this value can be increased.

Basically, this tiny-scale twitter is still there, but it's just either
not perceptible or only just perceptible.
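
To show what such a filter actually does, here's a crude numpy sketch
of my own (not from the book), with a [1 2 1]/4 vertical kernel standing
in for a properly designed ~0.7 filter:

import numpy as np

def twitter_filter(frame):
    # Soften vertical detail before interlacing (crude [1 2 1]/4 kernel)
    up = np.roll(frame, 1, axis=0)
    down = np.roll(frame, -1, axis=0)
    return (up + 2.0 * frame + down) / 4.0

frame = np.zeros((1080, 8))
frame[0::2] = 1.0                       # worst case: 1-line on/off detail
print(twitter_filter(frame)[:4, 0])     # [0.5 0.5 0.5 0.5]

This kernel happens to have a null at the line Nyquist, so the
alternation that causes twitter vanishes completely; a gentler filter
would merely attenuate it, which is the "tolerable level" compromise.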

> You keep trying to
> suggest there's some further gain - there isn't!


Says who?


> It's exactly the same
> proportionality - the only gain is that the lines are smaller in HD,
> thus the frequency is higher. Good! But that doesn't mean that a
> factor or 0.7 (say) with SD will become 0.9 with HD (as you have tried
> to suggest) - it won't.


It will.


> It'll still be 0.7 (if that is the compromise
> you choose; 0.7 still leaves visible twitter!).


Wrong.


>> I've added an image of a nearly-horizontal black line to the bottom
>> of this page:
>>
>> http://www.digitalradiotech.co.uk/hd_screen_sizes.htm
>>
>> The line spans 8 vertical lines for the resolution I can see, and
>> with interlaced scanning you'd see lines 1, 3, 5 and 7 in one field
>> and lines 2, 4, 6 and 8 in the next.
>>
>> If the number of horizontal lines increases then the amount missing
>> in each field is reduced. For example, for simplicity, assume the
>> number of lines increases by a factor of 2, then the line would span
>> 16 vertical lines, and in field 1 you'd see lines 1, 3, 5, 7, 9, 11,
>> 13 and 15, and in field 2 you'd see lines 2, 4, 6, 8, 10, 12, 14 and
>> 16. So you'd still only see half of the line at any one time, but
>> the amount missing in each place is twice as small, and as it's
>> twice as small then it won't be as readily perceptible.
>
> Yes, of course, because the limit scales with the line count. There
> are paragraphs where you've tried to scale it even further, which is
> what I was disagreeing with.


But when things get smaller they become less perceivable/noticeable.


>> Anyway, back to watching TV on my computer monitor with its inherent
>> interlaced-to-progressive conversion. ;)
>
> Funny - I was just watching the interlaced-to-progressive conversion
> of a Samsung PAL progressive DVD player yesterday, and it was truly
> awful! It's great with film sources, but give it some interlaced
> video and it really sucks!


VHS? Who cares about VHS?


> Supply the interlaced signal to the
> progressive CRT or plasma I was testing and the results are much
> better, because the deinterlacing in these devices is better.
>
> That doesn't prove either point - it just amazed me how badly
> deinterlacing was carried out in that particular device.


It does show that it's far from impossible, and I would say that over
time low-end equipment will even get decent i/p converters.
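
For what it's worth, the two basic tools an i/p converter has are easy
to sketch (toy numpy version, not anyone's actual product); the
cleverness is all in deciding, per pixel, which one to trust:

import numpy as np

def weave(field_a, field_b):
    # Full vertical detail, but combs wherever there's motion
    frame = np.empty((2 * field_a.shape[0], field_a.shape[1]))
    frame[0::2] = field_a
    frame[1::2] = field_b
    return frame

def bob(field):
    # No combing, but only half the vertical resolution
    return np.repeat(field, 2, axis=0)

A motion-adaptive deinterlacer weaves where the picture is still and
bobs (or interpolates more cleverly) where it detects movement.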

Just look at A/D converters. I'm sure that they used to be responsible
for a hell of a lot of nasty, nasty audio, but nowadays oversampling
DACs with the equivalent of 24-bit accuracy are cheap as chips (pardon
the pun) and they'll be standard in micro systems. It's just the way
technology goes. Another example is the DAB designers' decision not to
put MP3 in DAB, which was based on computational complexity. Look at
that decision now: it was a terrible one.

DAB sounds worse than FM

unread,
Feb 9, 2005, 1:25:23 PM2/9/05
to
DAB sounds worse than FM wrote:

<snip>

Whoops! I hit send and OE said "You cannot delete this" or sumfink. Hit
send again, same thing. Microsoft, eh...

DAB sounds worse than FM

unread,
Feb 9, 2005, 1:36:00 PM2/9/05
to
davidr...@postmaster.co.uk wrote:
> Dave Farrance wrote:
>
>> If you added interlacing to the system, and found that it resulted
>> in a bitrate reduction without loss of quality, that would merely
>> demonstrate that the codec was not state-of-the-art, because the
>> interlacing could be logically grouped with the codec. In other
>> words, with a state-of-the-art codec, you can't get a bitrate
>> reduction without loss of quality through interlacing.
>>
>> Does that help?
>
> It helps me to realise that I'm sane! ;-)


I'm gob-smacked that you're entertaining this buffoon!


> though, I guess, by definition, the best we have at any given
> time is always "state of the art",


Exactly, and I'm afraid he thought this was possible *now*, which makes
his suggestion ridiculous.

DAB sounds worse than FM

unread,
Feb 9, 2005, 2:47:56 PM2/9/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>> davidr...@postmaster.co.uk wrote:

>>> There's a bit of common sense to be applied too though!
>>
>> Excuse me? You've not provided a single reason why
>> progressive-scanned streams should require a lower bit rate other
>> than the number of pixels encoded per second is lower. That isn't an
>> efficiency gain at all, and it is an efficiency gain that you need
>> to find to prove this assertion.
>
> I vaguely remember going through four different possible types of
> video material (still picture, constantly changing picture, some
> movement, something else) and suggesting very simply how you could
> include the gains due to interlacing within the codec itself,
> avoiding interlacing.
>
> You replied that H.264 didn't work like that.


I didn't say it didn't work like that. You were suggesting that for
moving objects you could lower the bit rate so that the bit rate of
1080p could be reduced to the bit rate required for 1080i. The thing
that doesn't work with this scenario is that the vast majority of data
consumed is for encoding things that are moving, so you'd be effectively
just drastically degrading the picture quality.


> Seeing how you've just
> replied to Dave Farrance's succinct re-stating of this issue, I don't
> think it will help either of us if I go through it again.


True.


>>>> I've read the majority of this book about H.264 and MPEG-4 video
>>>> coding:
>>>>
>>>> http://tinyurl.com/6na24
>>>>
>>>> and I have not seen one reference to these new codecs performing
>>>> better on progressive streams than on interlaced streams. Moreover,
>>>> H.264 (which is the video codec that will be used in future) works
>>>> identically on interlaced and progressive images in that it works
>>>> identically on the macroblocks and sub-macroblocks presented to it.
>>>
>>> Do you mean it treats the frames as if they were progressive (so it
>>> just encodes two fields as one picture), or it treats subsequent
>>> fields as if they were subsequent progressive frames, possibly
>>> accounting for the vertical displacement (so it encodes two fields
>>> as two pictures). Or a combination of both. (DV uses a combination
>>> of both, IIRC, depending on the content, but it's hardly an
>>> intelligent codec).
>>
>> I think it can do both.
>
> I love that "think"!


From:

http://www.ebu.ch/trev_293-schaefer.pdf

Page 2:

"The two fields of an interlaced frame, which are separated in time by a
field period (half the time of a frame period), may be coded separately
as two field pictures or together as a frame picture."

I knew the two fields could be coded separately. What I *thought* was
that they can also be coded together, and they can, as stated
above. What I don't know is how the decoder handles the latter
situation, because I've not got to that bit in my book.
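
In case it helps anyone else following this, the field/frame picture
distinction is nothing more exotic than the following (my own numpy
sketch, sizes as for 1080i):

import numpy as np

frame = np.zeros((1080, 1920))     # one interlaced frame, woven

top_field = frame[0::2]            # 540 x 1920, one field period earlier
bottom_field = frame[1::2]         # 540 x 1920, one field period later

# "Two field pictures": hand top_field and bottom_field to the 16x16
# macroblock stage as separate pictures.
# "One frame picture": hand 'frame' itself to the macroblock stage.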


> We're all idiots who shouldn't dare to comment on
> this, but you've read a book about video coding, and you even _think_
> you know how existing codecs handle interlacing - but heaven forbid
> anyone else should suggest that future codecs might not benefit from
> interlacing!


I have no argument with 1080p being the ideal format, but I just don't
think we're going to see it in Europe, or not on DTT. And you've
also said that you think we're more likely to get 1080p if we go via
720p, but the difference in bit rate is greater between 1080p and 720p
than it is to go from 1080i to 1080p, so I think we're less likely to
see 1080p if we go via 720p.


>> What I meant, though, is that the encoder takes 16 x 16 pixel
>> macroblocks (or sub-macroblocks) and encodes them, whether they're
>> interlaced or progressive, and it doesn't do anything different on
>> either format, because once it's being encoded it doesn't know that
>> there's any difference.
>
> That's how I read some of the MPEG-2 and Dirac stuff, though I vaguely
> thought MPEG-2 could switch to field-based coding if it wanted to. Dirac
> can't yet. DV can.
>
> If you think about what this means is happening inside the codec -
> especially what information it will be throwing away - and think about
> what your deinterlacer is going to have left to work with, then you
> realise that my suggestion that a codec should be able to do better
> isn't that far-fetched.
>
> Stop reading books and actually think through what will happen! Think
> about what an interlaced frame (or two subsequent interlaced fields)
> looks like to a lossy coder, and then you should realise why it seems
> so incredible to everyone else that it's better to interlace and then
> give the codec this rubbish, than to give the codec the full
> progressive picture and let it choose its own compromises.


I don't disagree that the encoder is the best place to select what to
keep and what to discard. My objection is that 720p is the lower
quality route, and it is the route favoured by the broadcasters because
it allows lower bit rates. And I'm not at all convinced that you can
effectively halve the bit rate of 1080p and keep the same quality as
1080i.

Here's my overview opinion on the 3 formats:

720p - significantly lower resolution than 1080i, less detail /
sharpness, improved motion portrayal, slightly lower bit rate than 1080i

1080i - significantly higher resolution than 720p, more detail /
sharpness, degraded motion portrayal, slightly higher bit rate than 720p

1080p - the best of both worlds, but it needs a far higher bit rate than
either of the other formats

And because of 1080p's bit rate requirements I don't think it's
feasible, and nor do I think that tweaking the codec in whatever way you
tweak it is going to get the bit rate reduced by enough while
maintaining the same quality as 1080i.


> If it can't do at least as well as interlacing+codec then it's just
> not that bright!
>
> Or more likely, it's refusing to trash the image in the way that
> interlacing does, because it's not quite bright enough to realise the
> ways in which fast motion can have reduced resolution, while other
> fast changes can't.
>
>> It's not just at very high bit rate. Taking figures from the SVT
>> WideXGA picture quality test results:
>>
>> New Mobile - 720p is better at the lowest tested bit rate of 10 Mbps,
>> 1080i is better at 12Mbps and above
>>
>> Stockholm Plan - 1080i is better at all bit rates
>>
>> Knightsfields - 720p is better at the lowest tested bit rate of 10
>> Mbps, 1080i is better at about 12Mbps and above
>>
>> Park Run - 720p is better at all bit rates
>>
>> And remember that I propose that 720p be used for sport.
>
> Would you like me to upload crappy MPEGs of the material somewhere so
> you can at least see what it contains?


Yeah, if you like.


> There's almost nothing
> happening in three of the clips! If you think Park run is sport, then
> it's obvious you haven't watched it! ;-)


If they're running and the background is changing quickly, then it might
as well be sport.


>> No, I haven't pointed to any test that shows that interlacing is
>> better with the new codecs, because I haven't looked, because my
>> understanding is that the encoding "engine" works identically for
>> both interlaced and progressive pictures. Moreover, I don't trust
>> any forthcoming EBU test comparing the 2 formats, because it's
>> trivially easy to rig such a test by choosing the right source
>> material, just as I don't trust Ofcom's and the DAB industry's blind
>> listening test for a new MPEG encoder.
>
> Paranoia is showing here!


What, you think it's paranoid not to trust broadcasters? Is it paranoid
to disagree with the DAB broadcasters' view that DAB provides "superb
digital quality sound"?


> I have to tell you, looking at the SVT
> material, that the 720p material is worse (format aside) than the other
> stuff - maybe it was filmed slightly later in the day? Maybe the
> cameraman wasn't quite sure how to set the camera aperture
> correction for that format. I don't know. I can tell you though: that
> SVT test was certainly not rigged to favour 720p!!!


I should hope not, seeing as it was well and truly trounced.


> Once 1080p50 production equipment is available, then a really fair
> test can be carried out.


Unbiased source material? You really think so?

Dave Farrance

unread,
Feb 9, 2005, 2:54:48 PM2/9/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>Dave Farrance wrote:
>"It is trivial to prove that in the world of advanced codecs and
>progressive displays that there is no bitrate reduction gained from
>interlacing."
>
>So, all I need to do is to provide a single counter-example to *prove*
>that your example is false. So here it is:

Except that the interlacing can still be grouped with the codec in any
example that you give which ends with a progressive display.

>> No it didn't, and this is the fundamental point here. It didn't show
>> that there would be a bitrate reduction for progressive compared to
>> interlaced, because I hadn't got that far. I was leaving that for a
>> later argument.
>
>
>Excuse me? This was your assertion:
>
>"It is trivial to prove that in the world of advanced codecs and
>progressive displays that there is no bitrate reduction gained from
>interlacing."

Quite so... Hence the next paragraph which put it in terms that are
perfectly clear.

>> It merely proved what I said, which was there was no bitrate reduction
>> for interlaced compared to progressive. There is a difference. In
>> other words it proved that progressive could have the same bitrate as
>> interlaced.
>
>I've just had a look at your post, and you do not mention the image
>dimensions anywhere, i.e. you don't say whether you're comparing 1080i
>with 1080p or 1080i with 720p.

Same resolution, since the interlacing is merely being grouped with the
compression without changing the function in the steps that I gave.


>
>If you compare 1080i with 1080p, then all else being equal (same
>advanced encoder set up identically) then the bit rate saving for
>interlaced will be approximately a factor of 2.

With the overall codec that is the combination of the interlacing and
the compression, there's no bitrate change. Thus that becomes, in
effect, the state-of-the-art codec.

>> If you added interlacing to the system, and found that it resulted in
>> a bitrate reduction without loss of quality, that would merely
>> demonstrate that the codec was not state-of-the-art,

>Sorry, this is preposterous. I honestly do not think you know what you
>are saying, because if you did then you would know how utterly
>ridiculous that comment is.

No. As is clarified with the rest of the paragraph:

>> because the
>> interlacing could be logically grouped with the codec. In other
>> words, with a state-of-the-art codec, you can't get a bitrate
>> reduction without loss of quality through interlacing.

<abuse deleted>

--
Dave Farrance

Dave Farrance

unread,
Feb 9, 2005, 3:13:48 PM2/9/05
to
"davidr...@postmaster.co.uk" <davidr...@postmaster.co.uk> wrote:

>Dave Farrance wrote:
>> If you added interlacing to the system, and found that it resulted in a
>> bitrate reduction without loss of quality, that would merely demonstrate
>> that the codec was not state-of-the-art, because the interlacing could
>> be logically grouped with the codec. In other words, with a
>> state-of-the-art codec, you can't get a bitrate reduction without loss
>> of quality through interlacing.
>>
>> Does that help?
>
>It helps me to realise that I'm sane! ;-)
>
>It may well be that we don't have such state-of-the-art codecs for
>years (though, I guess, by definition, the best we have at any given
>time is always "state of the art", but sometimes what we have is still
>rubbish!). However, the principle you show is so blinding simple that I
>can't see any logical or even practical argument against it.

It's easy enough to argue against it if you happen to be someone that
doesn't understand it, though ;)

It's a basic principle that I frequently have to use in my job as an
electronic engineer: i.e. take two black boxes with interconnections
between each other and to the rest of the world, each having a complete
functional description in terms of its interconnections, and then draw a
new black box around both of them. A functional description for the new
box can be derived from the previous functional descriptions, except
that you can now completely disregard the internal interconnections
between the two previous boxes, because they are not part of the new
functional description.

It's intuitively obvious to most scientists and design engineers, but
not to the General Public. It seems that the human brain doesn't work
that way, unless you've developed the appropriate thought processes. So
I have to say that it's understandable that most people will never get
it.

>...


>So putting PAL into an MPEG encoder as coded luma-only is just silly.
>It might work, but _if_ it does, and _if_ it gives efficiency/quality
>gains, then that just _proves_ that the MPEG encoder is handling colour
>compression _very_ inefficiently.
>
>Draw your own comparison with interlacing! ;-)

Yep :)

--
Dave Farrance

DAB sounds worse than FM

unread,
Feb 9, 2005, 4:20:28 PM2/9/05
to
Dave Farrance wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Dave Farrance wrote:
>> "It is trivial to prove that in the world of advanced codecs and
>> progressive displays that there is no bitrate reduction gained from
>> interlacing."
>>
>> So, all I need to do is to provide a single counter-example to
>> *prove* that your example is false. So here it is:
>
> Except that the interlacing can still be grouped the with codec in any
> example that you give which ends with a progressive display.


So what?


>>> No it didn't, and this is the fundamental point here. It didn't show
>>> that there would be a bitrate reduction for progressive compared to
>>> interlaced, because I hadn't got that far. I was leaving that for a
>>> later argument.
>>
>>
>> Excuse me? This was your assertion:
>>
>> "It is trivial to prove that in the world of advanced codecs and
>> progressive displays that there is no bitrate reduction gained from
>> interlacing."
>
> Quite so... Hence the next paragraph which put it in terms that are
> perfectly clear.
>
>>> It merely proved what I said, which was there was no bitrate
>>> reduction for interlaced compared to progressive. There is a
>>> difference. In other words it proved that progressive could have
>>> the same bitrate as interlaced.
>>
>> I've just had a look at your post, and you do not mention the image
>> dimensions anywhere, i.e. you don't say whether you're comparing
>> 1080i with 1080p or 1080i with 720p.
>
> Same resolution, since the interlacing is merely being grouped with
> the compression without changing the function in the steps that I
> gave.


I just don't believe what I'm reading here. You are actually suggesting
that 1080p can be transmitted at exactly the same bit rate as 1080i
without any loss in quality. If you, or anybody else, achieved that,
then they would become a very, very, very rich man.


>> If you compare 1080i with 1080p, then all else being equal (same
>> advanced encoder set up identically) then the bit rate saving for
>> interlaced will be approximately a factor of 2.
>
> With the overall codec that is the combination of the interlacing and
> the compression, there's no bitrate change.


So, you're proposing changing the block diagram of the codec, and just
this little change reduces bit rate by half without changing the level
of picture quality?

You also expect us to just take it as read that your scheme halves the
bit rate requirements, even though you have not given ANY explanation as
to why this would be the case. It just is.

Okay, I'm going to change a block diagram too:

Original block diagram:

Petrol => conventional petrol engine => exhaust => exhaust gases to
atmosphere

Modified block diagram:

Petrol => conventional petrol engine & exhaust => solid gold bars

It really is as simple as that!

Why does it work? Just take my word for it, it's state-of-the-art, so it
works superbly.


> Thus that becomes, in
> effect, the state-of-the-art codec.
>
>>> If you added interlacing to the system, and found that it resulted
>>> in a bitrate reduction without loss of quality, that would merely
>>> demonstrate that the codec was not state-of-the-art,
>
>> Sorry, this is preposterous. I honestly do not think you know what
>> you are saying, because if you did then you would know how utterly
>> ridiculous that comment is.
>
> No. As is clarified with the rest of the paragraph:


It would help if you could clarify exactly how and why there would be a
halving of the bit rate requirement for 1080p. Until you do, then all
you've achieved is posting something along the lines of a perpetual
motion machine.

DAB sounds worse than FM

unread,
Feb 9, 2005, 5:11:24 PM2/9/05
to
Dave Farrance wrote:
> "davidr...@postmaster.co.uk" <davidr...@postmaster.co.uk>
> wrote:
>
>> Dave Farrance wrote:
>>> If you added interlacing to the system, and found that it resulted
>>> in a bitrate reduction without loss of quality, that would merely
>>> demonstrate that the codec was not state-of-the-art, because the
>>> interlacing could be logically grouped with the codec. In other
>>> words, with a state-of-the-art codec, you can't get a bitrate
>>> reduction without loss of quality through interlacing.
>>>
>>> Does that help?
>>
>> It helps me to realise that I'm sane! ;-)
>>
>> It may well be that we don't have such state-of-the-art codecs for
>> years (though, I guess, by definition, the best we have at any given
>> time is always "state of the art", but sometimes what we have is
>> still rubbish!). However, the principle you show is so blinding
>> simple that I can't see any logical or even practical argument
>> against it.
>
> It's easy enough to argue against it if you happen to be someone that
> doesn't understand it, though ;)


Doesn't understand WHAT? Your proposal currently consists of changing:

Source -> Interlace -> Compress

to

Source -> Interlace&Compress

and you claim that there is no loss in quality even when using the same
bit rate.

YOU HAVE PROVIDED ABSOLUTELY NO DESCRIPTION OR EXPLANATION AS TO WHY
THIS SCHEME SAVES 50% OF THE BIT RATE THAT 1080p CURRENTLY CONSUMES.

Until you do that, your "scheme" remains in the perpetual motion machine
category.


> It's a basic principle that I frequently have to use in my job as an
> electronic engineer: i.e. take two black boxes with interconnections
> between each other and to the rest of the world, each having a
> complete functional description in terms of its interconnections, and
> then draw a new black box around both of them. A functional
> description for the new box can be derived from the previous
> functional descriptions, except that you can now completely disregard
> the internal interconnections between the two previous boxes, because
> they are not part of the new functional description.


Yeah, super. Just like if you've got a negative feedback control system:


input --->(+)----> G ----+----> output
           ^             |
           |             |
           +----- H <----+

you can combine that all into one equation that describes its transfer
function, T:

T = G / (1 + G H)

Big deal. What has that got to do with digital video codecs????????
Nothing whatsoever, that's what.


>> So putting PAL into an MPEG encoder as coded luma-only is just silly.
>> It might work, but _if_ it does, and _if_ it gives efficiency/quality
>> gains, then that just _proves_ that the MPEG encoder is handling
>> colour compression _very_ inefficiently.
>>
>> Draw your own comparison with interlacing! ;-)
>
> Yep :)


Uh?

Dave Farrance

unread,
Feb 10, 2005, 3:45:42 AM2/10/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>So, you're proposing changing the block diagram of the codec, and just
>this little change reduces bit rate by half without changing the level
>of picture quality?

No. I said there was no bitrate change if you change the diagram.

Maybe if I try rewriting my point yet another way, it will help. I'll
start off by phrasing it as a question:

Is it possible that a codec in a progressive system can compress the
bitrate down to that achieved by an interlaced-and-compressed system
without losing any quality?

The answer is yes, because all you've got to do is draw a box around the
interlacing and compression, and call THAT a codec, because what you've
got in that box IS a codec, logically speaking. This does not imply any
change of function or bitrate, just a change in the way that you name
the parts of the system. No change in technology - this is merely a
logical argument.
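
If it's any clearer in code, the whole of step one is just function
composition (pseudo-Python; the bodies deliberately don't matter -
that's the point):

def interlace(frames):
    ...                             # any interlacer at all

def compress(fields):
    ...                             # any field-based encoder at all

def new_codec(frames):              # the box drawn around both
    return compress(interlace(frames))

Viewed from outside, new_codec() takes progressive video in and puts
the interlaced system's bitstream out, so a progressive codec hitting
that bitrate/quality point exists by construction. It's steps two and
three that let you do better.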

I don't think that there's anything to gain by trying to explain that
further, and it was only supposed to be the first step in the argument
about why progressive is better.

I'll now go on to step two in the argument. If you consider the
interlacing and compression as a single functional block, i.e. the new
codec, then you no longer need to preserve the format of the data
between the original interlacing and the original compression, because
that's now completely internal to the block. That is, by optimising the
component count within the new block, by having it handle its internal
data in a different way, you could retain exactly the same input/output,
but the uncompressed interlaced signal can completely disappear in the
process.

Step three in the argument (which has already been given by David
Robinson) is that if you optimise a block in that way, you can nearly
always IMPROVE its functionality. In this case it seems reasonable that
if the new codec makes use of the half of the lines that it had
previously thrown away, and uses those for motion estimation and edge
detection etc., then it could make a better job of the quality and
compression.

The same argument applies to the decompression/de-interlacing block -
the new decoding codec.

And given the way that you've answered my previous explanations, I'd
better reiterate that steps one and two in the argument do not
imply a bitrate change, or a quality change, just that a progressive
system can have the SAME bitrate/quality as an interlaced system. It's
step THREE that explains why the progressive system can be better.

--
Dave Farrance

Dave Farrance

unread,
Feb 10, 2005, 3:55:24 AM2/10/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>Doesn't understand WHAT? Your proposal currently consists of changing:
>
>Source -> Interlace -> Compress
>
>to
>
>Source -> Interlace&Compress
>
>and you claim that there is no loss in quality even when using the same
>bit rate.
>
>YOU HAVE PROVIDED ABSOLUTELY NO DESCRIPTION OR EXPLANATION AS TO WHY
>THIS SCHEME SAVES 50% OF THE BIT RATE THAT 1080p CURRENTLY CONSUMES.

Draw a box around the interlacing and compression and call that a codec.
I can't put it any simpler than that. No change in bitrate or
functionality. It's just that logically speaking, you now have a
progressive system by renaming the parts.

--
Dave Farrance

Stone Free

unread,
Feb 10, 2005, 7:58:17 AM2/10/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote in news:6zsOd.399
$w%6....@newsfe5-win.ntli.net:


>>> Anyway, back to watching TV on my computer monitor with its inherent
>>> interlaced-to-progressive conversion. ;)
>>
>> Funny - I was just watching the interlaced-to-progressive conversion
>> of a Samsung PAL progressive DVD player yesterday, and it was truly
>> awful! It's great with film sources, but give it some interlaced
>> video and it really sucks!
>
>
> VHS? Who cares about VHS?
>
>

Er, I'm confused - where exactly in that paragraph did he mention VHS?

He was saying that when you give the DVD player a disk with interlaced
video encoded on it rather than film, it does a terrible job.

Christopher Key

unread,
Feb 10, 2005, 8:19:28 AM2/10/05
to
Dave Farrance wrote:

> Here's the proof...
>
> An interlacing system with compression looks like this:
>
> Source -> Interlace -> Compress -> Transport -> Decompress ->
> De-interlace -> Display
>
> Now group the interlacing and compression blocks together, and the
> decompression and de-interlacing blocks together:
>
> Source -> Interlace&Compress -> Transport -> Decompress&De-interlace
> -> Display.
>
> Now we can see that the Interlacing & Compression block is itself a
> compression codec by definition. i.e. it takes the source video and
> compresses it with some loss to a reduced bitrate. The same applies to
> the decompression and de-interlacing block. So it reduces conceptually
> to a progressive system with the same transported bitrate.
>
> Source -> Compress -> Transport -> Decompress -> Display
>
> QED.
>


This sounds like one of the most sensible things I've read so far. I'm
amazed that the interlaced / progressive argument is still running for HD,
and that the above hasn't long since been the way that modern codecs
operate.

Clearly the job of any codec is to pass the visual information through a
bandwidth-limited channel maintaining the highest perceived quality using
whatever means it can. Surely, for some theoretical 'optimal' codec, the
best technique is to supply it with the best possible picture, and to let it
decide how best to compress it and what to discard. It might be that for a
few frames, the best approach *is* to apply interlacing, but to force this
upon the codec seems utter madness. The number of times where it is best to
interlace the entire picture, as opposed to just some region or something
far more subtle, is vanishingly small. Any compression prior to the codec
can at best only maintain the perceived quality. Anything that was
'non-optimal', i.e. something that the codec wouldn't have done, will
clearly result in a reduction in quality. If it was 'optimal' then the
codec would have done it anyway, and there is no point in having the extra
step.

The above all relates to a theoretical codec that is compressing for one
specific type of display, but I can see no reason why the results can't be
applied to a codec under development today. It surely can't be beyond the
wit of developers to detect in real time whether it is best to apply
interlacing or not, or even to develop a codec that uses it selectively
throughout the picture. If this really is asking too much, then surely
having it as part of the specification, but simply as an on/off option for
the compressor, would allow for it to become automatic without affecting
the decompressor at all.
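
For what it's worth, H.264 does already allow something along these
lines: picture-adaptive and macroblock-adaptive frame/field coding
(PAFF and MBAFF), which let the encoder switch between frame and field
coding per picture or per macroblock pair. A toy Python sketch of that
kind of decision (the difference measure and threshold here are
invented for illustration; a real encoder would decide by
rate-distortion cost):

def field_or_frame(top_field_mb, bottom_field_mb, threshold=8.0):
    # If the two fields of a macroblock pair differ strongly, the content
    # is moving and coding the fields separately tends to win; static
    # content keeps more vertical correlation when coded as a frame.
    total = sum(abs(a - b)
                for row_a, row_b in zip(top_field_mb, bottom_field_mb)
                for a, b in zip(row_a, row_b))
    count = sum(len(row) for row in top_field_mb)
    return "field" if total / count > threshold else "frame"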

Finally, I suppose this is all a bit of a moot point if the codecs that are
to be used for HD are already decided, but would hopefully still be an
applicable strategy when 2nd generation HD is under development.


Chris Key


Kevin Bracey

unread,
Feb 10, 2005, 8:18:59 AM2/10/05
to
In message <Xns95F983F40FE94sp...@216.168.3.50>
Stone Free <spam_KillKillKillstone_fr...@hotmail.com> wrote:

> "DAB sounds worse than FM" <dab...@low.quality> wrote in news:6zsOd.399
> $w%6....@newsfe5-win.ntli.net:
>

> >> Funny - I was just watching the interlaced-to-progressive conversion
> >> of a Samsung PAL progressive DVD player yesterday, and it was truly
> >> awful! It's great with film sources, but give it some interlaced
> >> video and it really sucks!
> >
> > VHS? Who cares about VHS?
> >
> Er, I'm confused: where exactly in that paragraph did he mention VHS?
>
> He was saying that when you give the DVD player a disk with interlaced
> video encoded on it rather than film, it does a terrible job.

Poor Steve's not too hot on the reading comprehension. I suspect stopping
to properly read the post he's replying to breaks into the rhythm of his
rant.

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd Tel: +44 (0) 1223 503464
182-190 Newmarket Road Fax: +44 (0) 1728 727430
Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/

DAB sounds worse than FM

unread,
Feb 10, 2005, 8:20:00 AM2/10/05
to
Dave Farrance wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> So, you're proposing changing the block diagram of the codec, and
>> just this little change reduces bit rate by half without changing
>> the level of picture quality?
>
> No. I said there was no bitrate change if you change the diagram.
>
> Maybe if I try rewriting my point yet another way, it will help. I'll
> start off by phrasing it as a question:
>
> Is it possible that a codec in a progressive system can compress the
> bitrate down to that achieved by an interlaced and compressed system
> without losing any quality.
>
> The answer is yes, because all you've got to do is draw a box around
> the interlacing and compression, and call THAT a codec, because what
> you've got in that box IS a codec, logically speaking. This does not
> imply any change of function or bitrate, just a change in the way
> that you name the parts of the system. No change in technology - this
> is merely a logical argument.


Look, let's go back to what you initially asserted:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing.

Here's the proof...

An interlacing system with compression looks like this:

Source -> Interlace -> Compress -> Transport -> Decompress ->
De-interlace -> Display

Now group the interlacing and compression blocks together, and the
decompression and de-interlacing blocks together:

Source -> Interlace&Compress -> Transport -> Decompress&De-interlace ->
Display.

Now we can see that the Interlacing & Compression block is itself a
compression codec by definition. i.e. it takes the source video and
compresses it with some loss to a reduced bitrate. The same applies to
the decompression and de-interlacing block. So it reduces conceptually
to a progressive system with the same transported bitrate.

Source -> Compress -> Transport -> Decompress -> Display

QED."


Firstly, here's a dictionary definition of the word 'proof':

http://www.onelook.com/?w=proof&ls=a

"noun: a formal series of statements showing that if one thing is true
something else necessarily follows from it"

You drawing a box around the interlacer and codec and renaming that a
codec and then saying that "it reduces conceptually to a progressive
system" does not constitute proof that progressive-scanned video can
provide the same level of quality, at the same bit rate and at the same
resolution, as an interlaced system.

I'm sorry, but it really does not constitute ANY proof whatsoever.

If you want to try and prove this assertion then you need to provide
some evidence, or logical arguments, that show how and why the combined
interlacer and codec performs so well.


> I don't think that there's anything to gain by trying to explain that
> further, and it was only supposed to be the first step in the argument
> about why progressive is better.
>
> I'll now go on to step two in the argument.


It would actually help if you could realise that step one of the
argument has not been accepted.


> If you consider the
> interlacing and compression as a single functional block, I.e. the new
> codec, then you no longer need to preserve the format of the data
> between the original interlacing and the original compression, because
> that's now completely internal to the block.


It already is like this, because the codec works on blocks of pixels
identically whether the source is interlaced or progressive.


> That is, by optimising
> the component count within the new block,


Optimising the component count? This is not an electronic circuit.


> by having it handle its
> internal data in a different way, you could retain exactly the same
> input/output, but the uncompressed interlaced signal can completely
> disappear in the process.
>
> Step three in the argument (which has already been given by David
> Robinson) is that if you optimise a block in that way, you can nearly
> always IMPROVE its functionality. In this case it seems reasonable
> that if the new codec makes use of the half of the lines that it had
> previously thrown away, using them for motion estimation and edge
> detection etc., then it could make a better job of both quality and
> compression.


Have you ever stopped to consider that the new H.264 video codec is only
twice as efficient as MPEG-2, despite the fact that MPEG-2 is about 10
years old, and that the improvement in compression efficiency that you
suggest is trivial to attain is actually FAR from being a triviality?


> The same argument applies to the decompression/de-interlacing block -
> the new decoding codec.


Erm, the term for it is "decoder", not "decoding codec". Do you not know
what "codec" stands for?


> And given the way that you've answered my previous explanations, I'd
> better reiterate that steps one and two in the argument do not
> imply a bitrate change, or a quality change, just that a
> progressive system can have the SAME bitrate/quality as an interlaced
> system. It's step THREE that explains why the progressive system can
> be better.


I suggest that before you move on to step 3 you actually consider
whether step 1 is feasible. Here's a hypothetical problem:

H.264 consumes the vast majority of its bit rate by encoding motion
vectors and the transformed and quantised prediction residuals -- i.e.
it consumes the vast majority of its bit rate due to objects moving. Say
you have a block of 32 x 16 pixels that all move upwards by 20 lines
from one frame to the next, and you have to encode this movement.
Assuming that the encoder chooses not to split these 2 macroblocks (a
macroblock is a block of 16 x 16 samples) into sub-macroblocks, it'll be
encoded as follows:

Interlaced
-----------

The 16 pixels in each row for all the even numbered lines would be
combined into a macroblock; the motion vector will be calculated; the
prediction will be subtracted from the reference pixels; the prediction
residual (the result after subtracting the prediction) will be
transformed using a DCT, quantised and variable length coded.

Then 20 ms later the same happens for the odd-numbered lines.

So, in 20 ms (from time = 0 to time = 20 ms), 2 motion vectors have been
encoded and 2 macroblocks of prediction residuals have been encoded.


Progressive
------------

The 16 pixels in each row of the first 16 rows would be combined into a
macroblock;
the motion vector will be calculated; the prediction will be subtracted
from the reference pixels; the prediction residual (the result after
subtracting the prediction) will be transformed using a DCT, quantised
and variable length coded.

Then the same happens for the macroblock containing pixels in the 17th
to 32nd rows.

Then 20 ms later the same process occurs for both macroblocks.

So, in 20 ms (from time = 0 to time = 20 ms), 4 motion vectors have been
encoded and 4 macroblocks of prediction residuals have been encoded.


Differences
-----------

There are 4 motion vectors that need to be calculated for progressive
and only 2 motion vectors for interlaced.

There are 4 macroblock prediction residuals that need to be encoded for
progressive and only 2 macroblock prediction residuals for interlaced.

So, all else being equal, interlaced requires half the bit rate compared
to progressive.
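
Spelling out that tally in Python (this just reproduces the counting
above on its own assumptions; whether each motion vector and residual
block costs the same bits either way is the real argument):

# Reproducing the macroblock tally above. Assumption: the 32 x 16 block
# is 16 pixels wide by 32 lines tall, as the "first 16 rows" / "17th to
# 32nd rows" description implies; 16x16 macroblocks, 50 Hz, no
# sub-macroblock splitting.

W, H, MB = 16, 32, 16

mbs_per_field = (H // 2) * W // (MB * MB)   # half the lines -> 1 macroblock
mbs_per_frame = H * W // (MB * MB)          # all the lines  -> 2 macroblocks

interlaced_units  = 2 * mbs_per_field       # 2 fields from t=0 to t=20ms
progressive_units = 2 * mbs_per_frame       # 2 frames over the same window

print(interlaced_units, progressive_units)  # 2 4: half the vectors/residuals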

And your assumption is that interlaced saves no bit rate whatsoever even
for the same 1920x1080 resolution.


QUESTION:

Please explain how your scheme improves compression-efficiency by a
factor of 2 while not reducing quality.

In other words: how does your scheme achieve the Holy Grail of video
coding when the best brains in video coding have failed?

When answering, bear in mind that after the prediction residuals have
been DCT-transformed they are losslessly (entropy) encoded, and the same
lossless encoding is applied identically to both interlaced and
progressive macroblocks -- because the lossless encoder doesn't have a
clue whether the source was interlaced or progressive.

Also bear in mind that any other post-DCT transformation processes will
also apply identically to both interlace- or progressive-sourced
streams.

Also bear in mind the following fact:

http://www.vcodex.com/h264_transform.pdf

page 4:

"The wide range of quantizer step sizes makes it possible for an encoder
to accurately and flexibly control the trade-off between bit rate and
quality."

and halving the post-quantizer bit rate will drastically degrade the
picture quality.

I await with bated breath the details of your historic coding scheme.

DAB sounds worse than FM

unread,
Feb 10, 2005, 8:30:11 AM2/10/05
to


Using progressive for a given resolution and a given picture quality
approximately doubles the bit rate compared to using interlaced. Drawing
a box around the interlacing and compression and calling that a codec
doesn't explain why your new scheme halves the bit rate for a given
resolution and given level of picture quality.

Explain how your new scheme achieves the Holy Grail.

DAB sounds worse than FM

unread,
Feb 10, 2005, 8:35:17 AM2/10/05
to
Christopher Key wrote:
> Dave Farrance wrote:
>
>> Here's the proof...
>>
>> An interlacing system with compression looks like this:
>>
>> Source -> Interlace -> Compress -> Transport -> Decompress ->
>> De-interlace -> Display
>>
>> Now group the interlacing and compression blocks together, and the
>> decompression and de-interlacing blocks together:
>>
>> Source -> Interlace&Compress -> Transport -> Decompress&De-interlace
>> -> Display.
>>
>> Now we can see that the Interlacing & Compression block is itself a
>> compression codec by definition. i.e. it takes the source video and
>> compresses it with some loss to a reduced bitrate. The same applies
>> to the decompression and de-interlacing block. So it reduces
>> conceptually to a progressive system with the same transported
>> bitrate. Source -> Compress -> Transport -> Decompress -> Display
>>
>> QED.
>>
>
>
> This sounds like one of the most sensible things I've read so far.


It would be great if it worked, but it doesn't work. That just happens
to be a slight snag with this proposal...

To use an analogy, it's like having a Rolls Royce with a Mini engine in
it; on the outside it seems fantastic, but its performance would be
dreadful.

DAB sounds worse than FM

unread,
Feb 10, 2005, 8:38:59 AM2/10/05
to


Granted. I took: "give it some interlaced video and it really sucks" to
mean that he was playing back VHS. Not sure why I thought that.

DAB sounds worse than FM

unread,
Feb 10, 2005, 8:45:03 AM2/10/05
to
Kevin Bracey wrote:
> In message <Xns95F983F40FE94sp...@216.168.3.50>
> Stone Free
> <spam_KillKillKillstone_fr...@hotmail.com>
> wrote:
>
>> "DAB sounds worse than FM" <dab...@low.quality> wrote in
>> news:6zsOd.399 $w%6....@newsfe5-win.ntli.net:
>>
>>>> Funny - I was just watching the interlaced-to-progressive
>>>> conversion of a Samsung PAL progressive DVD player yesterday, and
>>>> it was truly awful! It's great with film sources, but give it some
>>>> interlaced video and it really sucks!
>>>
>>> VHS? Who cares about VHS?
>>>
>> Er, I'm confused: where exactly in that paragraph did he mention VHS?
>>
>> He was saying that when you give the DVD player a disk with interlaced
>> video encoded on it rather than film, it does a terrible job.
>
> Poor Steve's not too hot on the reading comprehension.


How do you explain my 2 x 1st class degrees + MSc if my reading and
comprehension aren't too hot?


> I suspect stopping to properly read the post he's replying to breaks
> into the rhythm of
> his rant.


I accept that I mis-interpreted what he said, and in hindsight I agree
it was rather dozy.

I reckon I'd probably just read one of Dave Farrance's cretinous posts
and was in a state of shock.

Christopher Key

unread,
Feb 10, 2005, 9:43:06 AM2/10/05
to
DAB sounds worse than FM wrote:

> Using progressive for a given resolution and a given picture quality
> approximately doubles the bit rate compared to using interlaced.
> Drawing a box around the interlacing and compression and calling that
> a codec doesn't explain why your new scheme halves the bit rate for a
> given resolution and given level of picture quality.
>
> Explain how your new scheme achieves the Holy Grail.

I can only assume that you're misunderstanding what Dave is saying,
specifically that:

1080p compressed by some theoretical codec will always match or outperform
1080i compressed into the same bandwidth.


Chris Key


Christopher Key

unread,
Feb 10, 2005, 9:59:39 AM2/10/05
to
DAB sounds worse than FM wrote:

> It would be great if it worked, but it doesn't work. That just happens
> to be a slight snag with this proposal.......
>
> To use an analogy, it's like having a Rolls Royce with a Mini engine
> in it; on the outside it seems fantastic, but its performance would be
> dreadful.

Sorry, I really don't follow this analogy.

I think a better one may be as follows:

John wants to send a spoken message in English to Bob who also only speaks
English. The carrier of the message however is Francois, who only speaks
French.

Now, say to send his message, John had it translated into German, then from
German into French. Francois would be able to understand it, and relay it to
Bob, who could then translate it back to English. You would however expect
the meaning to be somewhat lost, and any of the finer nuances in the
original would certainly have gone.

Would it not be far better for John to have had the message translated
directly into French in the first place? This wouldn't restore the original
English, but it would certainly keep far more of the sense of the original
message.

Does this make sense?


Chris Key


Andrew Hodgkinson

unread,
Feb 10, 2005, 10:07:01 AM2/10/05
to
DAB sounds worse than FM wrote:

> Using progressive for a given resolution and a given picture quality
> approximately doubles the bit rate compared to using interlaced.

You'd be right were it not for the confusion that's going on about the
deinterlacing stage. Because our hypothetical display device (TV) in our
hypothetical example does deinterlacing, it is showing the viewer 1080p
video, not 1080i. Both examples start with 1080p, and both end with 1080p.

Imagine you start with 1080p - 50 frames a second at 1920 by 1080 each.
You want to broadcast this at some target data rate. You have two ways
of doing it.

One is to take that 1920x1080x50 frames/sec content and encode it with
some lossy CODEC. Let's say MPEG 2. You broadcast the data stream. The
TV at the other end decodes the MPEG 2 stream and displays the output
frames directly.

Now let's add some new software to both our MPEG 2 encoder, and the TV's
MPEG 2 decoder. In the encoder, the extra software takes the odd
numbered 1920x1080 progressive frames, and throws away every odd
numbered line in that frame. It then sends this on for MPEG encoding. It
takes the even numbered frames, and throws away every even numbered
line. It sends this for MPEG encoding. In effect, we encode 1920x540x50
frames/sec video. Let's call this new software in the encoder an
"interlacer".

In the TV's decoder, the standard MPEG 2 component decompresses frames
spitting out 1920x540 resolution pictures 50 times a second. Our new bit
of software takes these, say, in pairs, and analyzes motion between
them. It uses motion prediction and interpolation algorithms to
reconstruct a full 1920x1080 frame - one full frame every 50th of a
second. Obviously, half of each frame is a guess, but an educated one.
We thus show on screen full 1920x1080x50 progressive frames/second
video. We call the new software in our TV a "deinterlacer".
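
The shape of those two bits of software, as a Python sketch (a frame is
a list of rows of pixel values; the deinterlacer below just averages
neighbouring lines, which is far cruder than the motion-based
reconstruction described above):

def interlacer(frames):
    # Keep even rows of even-numbered frames and odd rows of odd-numbered
    # frames: 1080 rows in, 540 rows out, same frame rate.
    return [frame[i % 2::2] for i, frame in enumerate(frames)]

def deinterlacer(fields):
    # Rebuild full frames by averaging neighbouring lines -- an educated
    # guess, with none of the motion analysis a real deinterlacer would use.
    frames = []
    for i, field in enumerate(fields):
        frame = []
        for j, row in enumerate(field):
            if i % 2 == 0:                      # field holds the even rows
                below = field[j + 1] if j + 1 < len(field) else row
                frame.append(row)
                frame.append([(a + b) // 2 for a, b in zip(row, below)])
            else:                               # field holds the odd rows
                above = field[j - 1] if j > 0 else row
                frame.append([(a + b) // 2 for a, b in zip(above, row)])
                frame.append(row)
        frames.append(frame)
    return frames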

Thus our two options are BOTH this:

1080p video -> encode -> broadcast -> decode -> 1080p video

It's just that option two includes a rather odd encoder and decoder,
where the lossy encoding includes a step of throwing away half the lines
in each frame, and the decoder therefore includes a step of
reconstructing those lines based on image processing algorithms.

This is what the OP was saying. Interlace and deinterlace can be
considered part of the CODECs. We still start with THE SAME format, and
end up displaying THE SAME format. It's just that one of our CODECs takes
the rather crude approach of throwing away half of the picture lines in
each frame, leaving the decoder with a bit of a nasty guessing game to put
them back again.

If we had, say, 16Mbit/sec to play with, in our first example we'd have
to encode 50 lots of 1920 x 1080 frames in 16Mbit/sec. In our second
example we'd also have to encode 50 lots of 1920 x 1080 frames in
16Mbit/sec. It's just that in our second case, we begin our encoding
process by throwing half of the information away outright. Compare this
to our first example, where we try to use more modern methods of lossy
encoding, taking the whole picture into consideration and working out
what parts can be thrown away with minimal visible impact, rather than
just some dumb "chuck half away" approach.

Now, which method gives the better picture quality is a good question.
Certainly, both have artifacts. A progressive encoder and decoder, based
on MPEG 2 for example, will have all the familiar macroblocking
problems. An encoder and decoder which included interlace and
deinterlace steps would presumably display less macroblocking, because
half the data was thrown away to start with so there's less picture left
to encode, but it'd have a whole host of new artifacts related to
deinterlacing (or if you display the interlaced video directly, you've
visibly reduced temporal resolution and line twitter). Again, we're all
familiar with such artifacts.

There is no reason to suppose that deinterlacing will not become in time
more advanced. However there is also no reason to suppose that CODECs
themselves would not become more advanced. And would it not make more
sense to spend all the time on one problem, rather than splitting
research engineers between two tasks - making better encoders, or making
better deinterlacers?

--
TTFN, Andrew.

"Hold on tight, lad, and think of Lancashire Hotpot!" - A Grand Day Out

DAB sounds worse than FM

unread,
Feb 10, 2005, 10:18:48 AM2/10/05
to
Christopher Key wrote:
> DAB sounds worse than FM wrote:
>
>> It would be great if it worked, but it doesn't work. That just
>> happens to be a slight snag with this proposal.......
>>
>> To use an analogy, it's like having a Rolls Royce with a Mini engine
>> in it; on the outside it seems fantastic, but its performance would
>> be dreadful.
>
> Sorry, I really don't follow this analogy.


The analogy is that Dave Farrance's scheme looks great from a distance,
but if you actually tried to use this scheme it would perform very
poorly.

Interlacing halves the bit rate relative to progressive for a given
resolution, so Dave Farrance's scheme needs to double the compression
efficiency, and when you consider that the new H.264 video codec doubles
the compression efficiency relative to MPEG-2 and MPEG-2 is about 10
years old, then Dave Farrance's scheme is, I'm afraid, too good to be
true.


> I think a better one may be as follows:
>
> John wants to send a spoken message in English to Bob who also only
> speaks English. The carrier of the message however is Francois, who
> only speaks French.
>
> Now, say to send his message, John had it translated into German,
> then from German into French. Francois would be able to understand
> it, and relay it to Bob, who could then translate it back to English.
> You would however expect the meaning to be somewhat lost, and any of
> the finer nuances in the original would certainly have gone.
>
> Would it not be far better for John to have had the message translated
> directly into French in the first place. This wouldn't restore the
> original English, but it would certainly keep far more of the sense
> of the original message.
>
> Does this make sense?


No.

Christopher Key

unread,
Feb 10, 2005, 10:33:57 AM2/10/05
to
DAB sounds worse than FM wrote:
> Christopher Key wrote:
>> DAB sounds worse than FM wrote:
>>
>>> It would be great if it worked, but it doesn't work. That just
>>> happens to be a slight snag with this proposal.......
>>>
>>> To use an analogy, it's like having a Rolls Royce with a Mini engine
>>> in it; on the outside it seems fantastic, but its performance would
>>> be dreadful.
>>
>> Sorry, I really don't follow this analogy.
>
>
> The analogy is that Dave Farrance's scheme looks great from a
> distance, but if you actually tried to use this scheme it would
> perform very poorly.
>
> Interlacing halves the bit rate relative to progressive for a given
> resolution, so Dave Farrance's scheme needs to double the compression
> efficiency, and when you consider that the new H.264 video codec
> doubles the compression efficiency relative to MPEG-2 and MPEG-2 is
> about 10 years old, then Dave Farrance's scheme is, I'm afraid, too
> good to be true.
>

Dave's scheme quite trivially doubles the efficiency by simply ditching half
of the lines, specifically by interlacing. All that is happening is that the
interlacing / deinterlacing process is being subsumed into the codec.
Improvements are however possible, as the combination of the two processes
allows for potentially better efficiency.

>
>> I think a better one may be as follows:
>>
>> John wants to send a spoken message in English to Bob who also only
>> speaks English. The carrier of the message however is Francois, who
>> only speaks French.
>>
>> Now, say to send his message, John had it translated into German,
>> then from German into French. Francois would be able to understand
>> it, and relay it to Bob, who could then translate it back to English.
>> You would however expect the meaning to be somewhat lost, and any of
>> the finer nuances in the original would certainly have gone.
>>
>> Would it not be far better for John to have had the message
>> translated directly into French in the first place. This wouldn't
>> restore the original English, but it would certainly keep far more
>> of the sense of the original message.
>>
>> Does this make sense?
>
>
> No.

Which bits are causing the problem?

Chris Key


DAB sounds worse than FM

unread,
Feb 10, 2005, 11:12:22 AM2/10/05
to
Andrew Hodgkinson wrote:
> DAB sounds worse than FM wrote:
>
>> Using progressive for a given resolution and a given picture quality
>> approximately doubles the bit rate compared to using interlaced.
>
> You'd be right were it not for the confusion that's going on about the
> deinterlacing stage. Because our hypothetical display device (TV) in
> our hypothetical example does deinterlacing, it is showing the viewer
> 1080p video, not 1080i. Both examples start with 1080p, and both end
> with 1080p.


Yes, I know what he meant. What I'm disputing is his claim that
integrating the interlacing with the codec can miraculously save 50% of
the bit rate and still end up with 1080p at the far end. And what really
pisses me off is that he just will not give ANY details about WHY this
would provide the same level of picture quality at the same
resolution -- interlacing may be crude, but it's bloody effective.


> If we had, say, 16Mbit/sec to play with, in our first example we'd
> have to encode 50 lots of 1920 x 1080 frames in 16Mbit/sec. In our
> second example we'd also have to encode 50 lots of 1920 x 1080 frames
> in 16Mbit/sec. It's just that in our second case, we begin our
> encoding process by throwing half of the information away outright.
> Compare this to our first example, where we try to use more modern
> methods of lossy encoding,


This is the problem, though: what are these "modern methods of lossy
encoding" that miraculously halve the bit rate for the same picture
quality?


> taking the whole picture into
> consideration and working out what parts can be thrown away with
> minimal visible impact, rather than just some dumb "chunk half away"
> approach.


But the problem is that the "dumb chuck half away" approach does halve
the bit rate, whereas 1080p, even with the interlacer combined into the
codec, has to get by with half of the bit rate per frame.


> Now, which method gives the better picture quality is a good question.


The vast majority of the bit rate consumed by the H.264 encoder will be
due to encoding motion vectors and prediction residuals. You have HALF
the bit rate per pixel to play with, because you're NOT throwing away half the
lines. This WILL inevitably result in a drastic reduction in picture
quality compared to when interlacing is used.


<snip>

> There is no reason to suppose that deinterlacing will not become in
> time more advanced. However there is also no reason to suppose that
> CODECs themselves would not become more advanced. And would it not
> make more sense to spend all the time on one problem, rather than
> splitting research engineers between two tasks - making better
> encoders, or making better deinterlacers?


Don't get me wrong, I think 1080p is the ideal format, and it is simply
Dave Farrance's argument that I consider to be ludicrous, in no small
part due to the fact that he does not or can not provide any details
other than to say that drawing a new frigging block diagram
automatically saves 50% bit rate!

Just consider that the last 10 years of video codec research has
produced H.264 which is twice as efficient as MPEG-2, and now I'm
expected to believe that you can halve the bit rate again with H.264?
Not a chance.

You, literally, do not get something for nothing.

DAB sounds worse than FM

unread,
Feb 10, 2005, 11:12:57 AM2/10/05
to


There's no mis-understanding, he's just wrong. Sorry.

DAB sounds worse than FM

unread,
Feb 10, 2005, 11:58:11 AM2/10/05
to


Right, this is actually the idea I had a few days ago but didn't want to
mention it on here in case I ever decided to try and develop it myself,
but now it's been mentioned then I might as well anyway.

Basically, though, I didn't even understand WTF Dave Farrance was going
on about, partly because he wouldn't expand on his scheme at all other
than to say re-drawing the frigging block diagram automatically halves
the bit rate without the quality suffering.

This scheme is basically interlacing + side-channel information to help
a de-interlacer reconstruct the frame intelligently rather than throwing
half the information away and expecting a de-interlacer to make its best
attempt with information that is missing. But you don't get something
for nothing, and this scheme will not be able to provide the same
picture quality as pure interlaced at a given resolution and at the
same bit rate. It really is expecting something for absolutely nothing,
and you can't do that. The side-channel information will require a
certain amount of bit rate, and the pertinent question is just how much
bit rate that would be.

You've also got to consider that 1920x1080 is 2.25 times 1280x720, so
even after interlacing halves it there's an inherent 12.5% (2.25 / 2 =
1.125) bit rate penalty. So, firstly, the public service broadcasters
that the EBU represents would have to kiss goodbye to the bit rate
savings and accept the bit rate for the interlacer side-channel info.

The overall conclusion, though, is that 1280x720 is a waste of time.


>>> I think a better one may be as follows:
>>>
>>> John wants to send a spoken message in English to Bob who also only
>>> speaks English. The carrier of the message however is Francois, who
>>> only speaks French.
>>>
>>> Now, say to send his message, John had it translated into German,
>>> then from German into French. Francois would be able to understand
>>> it, and relay it to Bob, who could then translate it back to
>>> English. You would however expect the meaning to be somewhat lost,
>>> and any of the finer nuances in the original would certainly have
>>> gone. Would it not be far better for John to have had the message
>>> translated directly into French in the first place. This wouldn't
>>> restore the original English, but it would certainly keep far more
>>> of the sense of the original message.
>>>
>>> Does this make sense?
>>
>>
>> No.
>
> Which bits are causing the problem?


It isn't analogical to the situation that is being discussed. Sorry.

DAB sounds worse than FM

unread,
Feb 10, 2005, 12:07:43 PM2/10/05
to
DAB sounds worse than FM wrote:

> Right, this is actually the idea I had a few days ago but didn't want
> to mention it on here in case I ever decided to try and develop it
> myself, but now it's been mentioned then I might as well anyway.


In case anybody doubts whether I did have this idea, I wrote this on the
3rd Feb:

"Yeah, forgive me if I don't go into detail about i/p conversion, cos I
think I've come up with a good idea."

Chris

unread,
Feb 10, 2005, 7:25:19 PM2/10/05
to
Why all this bother about interlaced formats? It's pointless!

Sports - use 720p50 for fast motion reproduction
Films & drama - use 1080p25, or p24 to match film rates

Internal production - use 1080p50, scaling down to 720p50 or decimating
to 1080p25 for broadcast as appropriate.

1080p25 = 1920x1080x25 = 51,840,000 pixels/second
1080p24 = 1920x1080x24 = 49,766,400 pixels/second
720p50 = 1280x720x50 = 46,080,000 pixels/second
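
(Easily checked, e.g. in Python:)

# Pixel rate = width * height * frames per second.
for name, (w, h, fps) in [("1080p25", (1920, 1080, 25)),
                          ("1080p24", (1920, 1080, 24)),
                          ("720p50",  (1280,  720, 50))]:
    print(name, w * h * fps)   # 51840000, 49766400, 46080000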

1080i50 is a dead-end format, and makes too much work for the display
panel, which would be more than likely progressively scanned anyway.

Christopher Key

unread,
Feb 10, 2005, 7:28:29 PM2/10/05
to
DAB sounds worse than FM wrote:
> DAB sounds worse than FM wrote:
>
>> Right, this is actually the idea I had a few days ago but didn't want
>> to mention it on here in case I ever decided to try and develop it
>> myself, but now it's been mentioned then I might as well anyway.
>
>
> In case anybody doubts whether I did have this idea, I wrote this on
> the 3rd Feb:
>
> "Yeah, forgive me if I don't go into detail about i/p conversion, cos
> I think I've come up with a good idea."

If you ever try to develop it, then sincerely best of luck with it.

Chris Key


DAB sounds worse than FM

unread,
Feb 10, 2005, 8:06:34 PM2/10/05
to
Chris wrote:

> 1080i50 is a dead-end format, and makes too much work for the display
> panel, which would be more than likely progressively scanned anyway.


Countries that have adopted 1080i:

USA, Japan, Korea, China, Australia


Countries that have adopted 720p:

None. But the BBC want to use it.


Total population of countries that have adopted 1080i = 1713 million
(29% of world's population)

(according to:
http://www.photius.com/wfb1999/rankings/population_0.html)


Draw your own conclusions.

DAB sounds worse than FM

unread,
Feb 10, 2005, 8:14:07 PM2/10/05
to


I probably won't, but thanks anyway.

davidr...@postmaster.co.uk

unread,
Feb 11, 2005, 4:50:05 AM2/11/05
to
DAB sounds worse than FM wrote:

> Yes, I know what he meant. What I'm disputing is his claim that
> integrating the interlacing with the codec can miraculously save 50%
> of the bit rate and still end up with 1080p at the far end.

He's not saving anything, he hasn't changed anything, he hasn't
suggested anything! All he's said is that, as step one of his
_explanation_, group the interlacing+encoder and the
decoder+deinterlacing together. Nothing changes. Not a sausage. You're
_still_ sending 1080i, the output is _still_ 1080p (because it's
deinterlaced) and the bitrate is exactly the same.

There are now at least three posts explaining this, and at least seven
posts from you saying it's a load of rubbish. It can't be a load of
rubbish - Dave hasn't _done_ or _changed_ anything yet - all he's done
is put the decoder and deinterlacer in the same box, but the devices
themselves are the same as they were.

I think he was being quite polite to you when he suggested it was a
concept that the General Public might not grasp, because actually it's
a dead easy concept that you can see any day in Currys! (Though he was
talking about diagrams, rather than reality). You take two devices and
shove them in the same box.

Initially, you don't change either of them, but over time (if you'd let
him get to steps 2 and 3), you can optimise both of them and cut out
any unnecessary bits.

Cheers,
David.

davidr...@postmaster.co.uk

unread,
Feb 11, 2005, 5:01:19 AM2/11/05
to
DAB sounds worse than FM wrote:
> davidr...@postmaster.co.uk wrote:

> > Read this very carefully: the limit for any interlaced system is
> > half the line count. That's the spacing (thus, frequency) at which
> > a static image can make the entire screen flash at frame rate due
> > to interlacing. Yes, that spacing is smaller for more lines,
> > because it's proportional to the line spacing!!! However, if you
> > need to limit the resolution by factor X relative to the line count
> > to avoid twitter in SD, then you have to limit the resolution by
> > exactly the same factor X relative to the line count to avoid
> > twitter in HD.
>
> I disagree.

If you disagree that a pattern of horizontal lines white/black on
alternate lines gives rise to a flashing screen when interlaced, then
you're disagreeing with reality.

You need a 0.5 factor to prevent this - the 0.7 factor will actually
allow it to happen a bit, but because such patterns are rare (!!!) it's
a good subjective trade off to give a subjectively sharper picture with
a little flashing/twittering sometimes. It's true that you might choose
a different subjective trade off for a different line count, and that
_for_a_given_picture_ twitter will be less for a higher line count
(haven't we been here before?!) - but that doesn't change the
resolution limit. And, of course, with all those lines and an otherwise
gorgeous (virtually faultless) still picture you might decide that
twitter (even small amounts) is subjectively even more annoying than
with SD, and decide to clamp down on it even more, and use 0.5!

> > It's exactly the same
> > proportionality - the only gain is that the lines are smaller in
> > HD, thus the frequency is higher. Good! But that doesn't mean that
> > a factor of 0.7 (say) with SD will become 0.9 with HD (as you have
> > tried to suggest) - it won't.
>
> It will.

Then when there's no high frequency components in the image, it'll look
fine, and when there are loads it'll flash like mad!

> > It'll still be 0.7 (if that is the compromise
> > you choose; 0.7 still leaves visible twitter!).
>
> Wrong.

Well, there's no arguing against that, is there? I've watched the
result on a TV and seen that it's a problem, but you've misunderstood a
book and decided that it's not. What else can I say?

Cheers,
David.

Kevin Bracey

unread,
Feb 11, 2005, 7:57:59 AM2/11/05
to
In message <1108116079.1...@z14g2000cwz.googlegroups.com>
"davidr...@postmaster.co.uk" <davidr...@postmaster.co.uk> wrote:

Tell me about it. We produce boxes that will (among other things) often want
to display web pages on a TV screen. Because of flicker caused by
interlacing, you can't just display a frame as is. The boxes usually apply a
flicker filter to the image to overcome this, which consists of applying a
filter such that:

Pixel(x,y) = 1/4*Pixel(x,y-1) + 1/2*Pixel(x,y) + 1/4*Pixel(x,y+1)

This loses a lot of vertical detail, but there's no practical alternative if
you don't want all the sharp edges in typical graphics to flicker horribly.
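
In Python terms the filter amounts to this (a frame as a list of rows
of luma values; clamping at the top and bottom rows is just one
possible boundary choice, not necessarily what our boxes do):

def flicker_filter(frame):
    # Vertical [1/4, 1/2, 1/4] low-pass: each output line is a weighted
    # average of the line above, itself, and the line below.
    out = []
    for y, row in enumerate(frame):
        above = frame[max(y - 1, 0)]
        below = frame[min(y + 1, len(frame) - 1)]
        out.append([(a + 2 * p + b) // 4
                    for a, p, b in zip(above, row, below)])
    return out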

If you happen to have a 480p or 576p progressive display, tough - you're not
going to get 480 or 576 lines of detail because it's already been filtered
for the benefit of interlaced displays.

If we move on to a 1080i system, we'll still have to apply the same filter,
which will still lose the same percentage of resolution. The only difference
would be that images could be displayed "larger" to get more detail (i.e.
scaled to occupy more lines - using the 1080 lines to gain resolution rather
than just screen area, contrary to what computers have usually done when
going from 640x480 to 1280x1024).

But if we chose instead to output 720p, we could output it totally
unfiltered, giving a full 720 lines of resolution, as well as much smoother
scrolling etc. This would definitely be preferable, and is almost certainly
the route we'd go down for our boxes.

DAB sounds worse than FM

unread,
Feb 11, 2005, 9:31:07 AM2/11/05
to
Kevin Bracey wrote:
> In message <1108116079.1...@z14g2000cwz.googlegroups.com>
> "davidr...@postmaster.co.uk"
> <davidr...@postmaster.co.uk> wrote:
>
>> DAB sounds worse than FM wrote:
>>>> It'll still be 0.7 (if that is the compromise
>>>> you choose; 0.7 still leaves visible twitter!).
>>>
>>> Wrong.
>>
>> Well, there's no arguing against that, is there? I've watched the
>> result on a TV and seen that it's a problem, but you've
>> misunderstood a book and decided that it's not. What else can I say?
>
> Tell me about it. We produce boxes that will (among other things)
> often want
> to display web pages on a TV screen.


You might have a relevant point if we were talking about displaying web
pages on TV screens; we're not, we're talking about displaying broadcast
TV on TV screens.

DAB sounds worse than FM

unread,
Feb 11, 2005, 10:05:58 AM2/11/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>> davidr...@postmaster.co.uk wrote:
>
>>> Read this very carefully: the limit for any interlaced system is
>>> half the line count. That's the spacing (thus, frequency) at which
>>> a static image can make the entire screen flash at frame rate due to
>>> interlacing. Yes, that spacing is smaller for more lines, because
>>> it's proportional to the line spacing!!! However, if you need to
>>> limit the resolution by factor X relative to the line count to
>>> avoid twitter in SD, then you have to limit the resolution by
>>> exactly the same factor X relative to the line count to avoid
>>> twitter in HD.
>>
>> I disagree.
>
> If you disagree that a pattern of horizontal lines white/black on
> alternate lines gives rise to a flashing screen when interlaced, then
> you're disagreeing with reality.
>
> You need a 0.5 factor to prevent this - the 0.7 factor will actually
> allow it to happen a bit, but because such patterns are rare (!!!)
> it's a good subjective trade off to give a subjectively sharer
> picture with a little flashing/twittering sometimes. It's true that
> you might chose a different subjective trade off for a different line
> count, and that _for_a_given_picture_ twitter will be less for a
> higher line count (haven't we been here before?!) - but that doesn't
> change the resolution limit.


Yes, we have been here many times before, because I've quoted from my
book many times before: You apply a twitter filter that Charles Poynton
says changes the resolution by a factor of about 0.7.


> And, of course, with all those lines and
> an otherwise gorgeous (virtually faultless) still picture you might
> decide that twitter (even small amounts) is subjectively even more
> annoying than with SD, and decide to clamp down on it even more, and
> use 0.5!


As I've quoted many times: twitter is inversely proportional to line
count. So, if you increase the number of lines the twitter obviously
reduces. The only common-sense conclusion is that with HDTV, because
twitter is improved, the interlace factor can be increased. But you're
suggesting that it should decrease!


>>> It's exactly the same
>>> proportionality - the only gain is that the lines are smaller in HD,
>>> thus the frequency is higher. Good! But that doesn't mean that a
>>> factor or 0.7 (say) with SD will become 0.9 with HD (as you have
>>> tried to suggest) - it won't.
>>
>> It will.
>
> Then when there's no high frequency components in the image, it'll
> look fine, and when there are loads it'll flash like mad!


I've just played a file that was recorded off Euro1080 on satellite,
and there's masses of detail in it (although it plays upside down for
some unknown reason). They use 1080i, and I did not see any twitter
whatsoever, let alone "flashing like mad".


>>> It'll still be 0.7 (if that is the compromise
>>> you choose; 0.7 still leaves visible twitter!).
>>
>> Wrong.
>
> Well, there's no arguing against that, is there? I've watched the
> result on a TV and seen that it's a problem, but you've misunderstood
> a book and decided that it's not. What else can I say?


*You* are accusing *me* of misunderstanding the quotes in my book???

Here's the quotes you're claiming *I've* mis-understood:

"Twitter is introduced, however, by vertical detail whose scale
approaches the scan-line pitch. Twitter can be reduced to tolerable
levels by reducing the vertical detail somewhat, to perhaps 0.7 times."

and

"Twitter and scan-line visibility are inversely proportional to the
count of image rows"

I'm not really sure what there is to mis-understand about the above
quotes?

How on earth can you argue that twitter increases when you go from SD to
HD when it is stated that twitter is inversely proportional to the
number of lines???

Kevin Bracey

unread,
Feb 11, 2005, 10:16:11 AM2/11/05
to
In message <Lk3Pd.257$vc3...@newsfe5-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> Kevin Bracey wrote:
> > In message <1108116079.1...@z14g2000cwz.googlegroups.com>
> > "davidr...@postmaster.co.uk"
> > <davidr...@postmaster.co.uk> wrote:
> >
> >> DAB sounds worse than FM wrote:
> >>>> It'll still be 0.7 (if that is the compromise
> >>>> you choose; 0.7 still leaves visible twitter!).
> >>>
> >>> Wrong.
> >>
> >> Well, there's no arguing against that, is there? I've watched the
> >> result on a TV and seen that it's a problem, but you've
> >> misunderstood a book and decided that it's not. What else can I say?
> >
> > Tell me about it. We produce boxes that will (among other things) often
> > want to display web pages on a TV screen.
>
> You might have a relevant point if we were talking about displaying web
> pages on TV screens; we're not, we're talking about displaying broadcast
> TV on TV screens.

Well, there's no problem displaying broadcast TV only because the
broadcasters are prefiltering the pictures to reduce the vertical resolution.

It's quite instructive to see such a device in an unfiltered state - it shows
exactly how bad the flicker problem is if you don't have vertical filtering,
and illustrates that the filtering must be already done at source in
broadcast TV pictures.

It appears you don't fully appreciate how necessary filtering is when you
interlace a progressive picture. You have to remove a significant proportion
of the vertical high-frequency information, regardless of the total number of
lines.

The only way to avoid this is if you know that all interlaced display
devices will have their own filtering built in. If it could be specified that
all 1080i devices that actually are interlaced (if there are any?) must
include their own flicker filters, then the filter could be left out at the
broadcast end. That would add the need for a field buffer and digital
processing into the interlaced set.

But that would be barmy. You'd be sending a signal that wasn't suitable for
direct display either on interlaced or progressive devices. At least the
other way it's optimal for 1080-line CRTs.

Kevin Bracey

unread,
Feb 11, 2005, 10:40:37 AM2/11/05
to
In message <qR3Pd.282$vc3...@newsfe5-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> As I've quoted many times: twitter is inversely proportional to line
> count.

That's wrong, unless you've got a very narrow definition of "twitter".

If you define twitter only as the vertical judder caused by a single sharp
vertical light->dark transition, then maybe it's true. Example:

field 1 field 2
********* *********
********* *********
********* ********* x
--------- ----> x ---------
--------- ---------
--------- ---------

The "x" marks the location of the apparent boundary between the regions,
which bounces up and down at 25Hz.

As the gap between lines decreases, the size of the "bounce" will decrease.
So going from 576 lines to 1080 lines, changing no other parameters, will
reduce the visibility of that effect, as you say.

But 1080i screens will tend to be bigger than 576i screens - the trend has
certainly been upwards for many years, anyway. That is working in the
opposite direction, making the judder more visible. A 38-inch 1080i set would
have the same apparent judder as a 20-inch 576i set.

[ Just to spell it out for Steve, what with the reading comprehension
difficulties, I'm not saying the judder will be more visible on a typical
1080i set than a typical 576i set, I'm just saying that a typical 1080i set
will be bigger than a typical 576i set, so the judder will not typically be
reduced by as much as 576/1080 ].

But most importantly, twitter is about more than just single edges. In the
pathological case, given a high-frequency vertical grating (or a weatherman's
jacket) with frequency Fs/2:

field 1 field 2
********* *********
--------- ---------
********* *********
--------- ----> ---------
********* *********
--------- ---------


That results in a flat 25Hz strobe over the whole area, and is totally
independent of the number of lines. Hope you haven't got any epileptics
watching.

To remove the potential for that effect requires a vertical filter to reduce
the resolution to 0.7-odd (to tame it) or 0.5 (if you want to totally
eliminate it). It doesn't matter how big or what resolution your picture is.
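
The line-count independence is easy to demonstrate, e.g. in Python
(toy model: one value per line, 255 for a white line, 0 for a black one):

LINES = 1080   # try 576: the outcome is exactly the same
grating = [255 if y % 2 == 0 else 0 for y in range(LINES)]

even_field = grating[0::2]   # all 255 -> a solid white field
odd_field  = grating[1::2]   # all 0   -> a solid black field
print(set(even_field), set(odd_field))   # {255} {0}: whole-screen 25Hz strobe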

Really, this is pretty basic stuff, and I'm amazed you're finding it so hard
to comprehend what David Robinson is saying. I'm mainly attempting this as a
personal test of my technical writing skills - if I can get DSWTFM to
understand this, then I'm sorted.

DAB sounds worse than FM

unread,
Feb 11, 2005, 11:11:07 AM2/11/05
to
Kevin Bracey wrote:
> In message <Lk3Pd.257$vc3...@newsfe5-gui.ntli.net>
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Kevin Bracey wrote:
>>> In message <1108116079.1...@z14g2000cwz.googlegroups.com>
>>> "davidr...@postmaster.co.uk"
>>> <davidr...@postmaster.co.uk> wrote:
>>>
>>>> DAB sounds worse than FM wrote:
>>>>>> It'll still be 0.7 (if that is the compromise
>>>>>> you choose; 0.7 still leaves visible twitter!).
>>>>>
>>>>> Wrong.
>>>>
>>>> Well, there's no arguing against that, is there? I've watched the
>>>> result on a TV and seen that it's a problem, but you've
>>>> misunderstood a book and decided that it's not. What else can I
>>>> say?
>>>
>>> Tell me about it. We produce boxes that will (among other things)
>>> often want to display web pages on a TV screen.
>>
>> You might have a relevant point if we were talking about displaying
>> web pages on TV screens; we're not, we're talking about displaying
>> broadcast TV on TV screens.
>
> Well, there's no problem displaying broadcast TV only because the
> broadcasters are prefiltering the pictures to reduce the vertical
> resolution.


Correct.


> It's quite instructive to see such a device in an unfiltered state -
> it shows exactly how bad the flicker problem is if you don't have
> vertical filtering,


Stephen Neal has already said (and I think I've read a similar comment
in my book) that CGI needs more twitter filtering than typical broadcast
video.


> and illustrates that the filtering must be
> already done at source in broadcast TV pictures.


This has never been in doubt.


> It appears you don't fully appreciate how necessary filtering is when
> you interlace a progressive picture.


How, exactly, have you come to this conclusion? I have quoted from my
book on numerous occasions that you need to vertically filter the source
material and suffer a 0.7 times loss in vertical resolution. If that is
not appreciating how necessary filtering is, then I don't know WTF is.


> You have to remove a significant
> proportion of the vertical high-frequency information, regardless of
> the total number of lines.


Twitter is inversely proportional to the number of lines. How many times
does that need to be repeated?


> The only way to avoid this is if you know that all interlaced display
> devices will have their own filtering built in.


No, you pre-filter at the broadcast end.


> If it could be
> specified that all 1080i devices that actually are interlaced (if
> there are any?) must include their own flicker filters, then the
> filter could be left out at the broadcast end.


Twitter filtering will be done at the broadcast end.


> That would add the
> need for a field buffer and digital processing into the interlaced
> set.
>
> But that would be barmy. You'd be sending a signal that wasn't
> suitable for direct display either on interlaced or progressive
> devices. At least the other way it's optimal for 1080-line CRTs.


Twitter filtering will be done at the broadcast end.

DAB sounds worse than FM

unread,
Feb 11, 2005, 11:17:37 AM2/11/05
to
Kevin Bracey wrote:
> In message <qR3Pd.282$vc3...@newsfe5-gui.ntli.net>
> "DAB sounds worse than FM" <dab...@low.quality> wrote:

> [ Just to spell it out for Steve, what with the reading comprehension
> difficulties

> Really, this is pretty basic stuff, and I'm amazed you're finding it


> so hard to comprehend what David Robinson is saying. I'm mainly
> attempting this as a personal test of my technical writing skills -
> if I can get DSWTFM to understand this, then I'm sorted.


Oh dear. Bye.

Kevin Bracey

unread,
Feb 11, 2005, 11:48:47 AM2/11/05
to
In message <vO4Pd.294$vc3...@newsfe5-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> Twitter is inversely proportional to the number of lines. How many times
> does that need to be repeated?

Veracity is not increased by number of repetitions.

We've done our best to explain why it isn't inversely proportional. How about
you explain why it is? I'd be particularly interested to know how alternating
line patterns are less of a problem with more lines.

DAB sounds worse than FM

unread,
Feb 11, 2005, 12:00:36 PM2/11/05
to
Kevin Bracey wrote:

> But 1080i screens will tend to be bigger than 576i screens - the
> trend has certainly been upwards for many years, anyway. That is
> working in the opposite direction, making the judder more visible. A
> 38-inch 1080i set would have the same apparent judder as a 20-inch
> 576i set.


Who has, or buys, a 20" TV set for their main set these days?

A reasonably good indicator of what people are buying is the number of
models in a given area. For instance, have a look at what widescreen TVs
24-7 electrical sell:

http://www.247electrical.co.uk/epages/twentyfourseven.filereader?420cdc3c000b5c7a273f0a020f02055b+EN/catalogs/tv

(if that link goes out of date then go to Audio Visual => TVs and select
Widescreen TVs in the drop-down list)

10 x 21" - 24"
56 x 25" - 28"
58 x 29" - 32"
8 x 33" - 36"

(Note to pedants: those numbers might not be exact because I counted
them quickly)

The vast majority are 28" or 32", so let's say the average widescreen TV
is about 30" and typical DTT transmissions are 720 x 576 resolution.

The area of a 30" widescreen TV is

30^2 * sin(arctan(9/16)) * cos(arctan(9/16)) = 900 * 0.4273 = 385 sq.
inches

So, pixel density is 720 x 576 / 385 = 1077 pixels/sq. inch

For the same pixel density for HD -- and hence same level of twitter
visibility -- you'd need a screen area of:

1920 x 1080 / 1077 = 1925 sq. inches

Screen diagonal = sqrt (1925 / 0.4273) = 67"

Therefore, any TVs with diagonal screen sizes less than 67" will have
less twitter visibility than for SDTV. Or, in other words, twitter will
be reduced for 1080i unless someone has thought that it would be a good
idea to buy a TV set the size of a fking wall.
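
The sums check out, e.g. in Python (using the same pixels-per-square-inch
criterion as above):

from math import atan, sin, cos, sqrt

k = sin(atan(9 / 16)) * cos(atan(9 / 16))   # area factor for a 16:9 diagonal
area_sd = 30 ** 2 * k                       # ~385 sq. inches (30" set)
density = 720 * 576 / area_sd               # ~1077 pixels per sq. inch
area_hd = 1920 * 1080 / density             # ~1925 sq. inches
print(sqrt(area_hd / k))                    # ~67 inch diagonal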

DAB sounds worse than FM

unread,
Feb 11, 2005, 12:01:34 PM2/11/05
to
Kevin Bracey wrote:
> In message <vO4Pd.294$vc3...@newsfe5-gui.ntli.net>
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Twitter is inversely proportional to the number of lines. How many
>> times does that need to be repeated?
>
> Veracity is not increased by number of repetitions.
>
> We've done our best to explain why it isn't inversely proportional.
> How about you explain why it is? I'd be particularly interested to
> know how alternating line patterns are less of a problem with more
> lines.


See other post.

Googolplex

unread,
Feb 11, 2005, 12:22:37 PM2/11/05
to
DAB sounds worse than FM wrote:
> Googolplex wrote:
>
>>davidr...@postmaster.co.uk wrote:
>>
>>>DAB sounds worse than FM wrote:
>>
>>>>David, I cannot keep copying paragraphs from my book, because it
>>>>takes ages. *Please* do some research into what twitter is. I
>>>>respect your views far more than virtually everybody else on
>>>>these groups
>>>
>>>
>>>Yes, I know, which probably is the only reason we can have this
>>>discussion without you telling me to fk off! Which is quite worrying
>>>really!
>>>
>>
>>You just know that if we were still in the pre-colour era, with PAL
>>about to be introduced, that DSWTFM would be campaigning
>
>
>
> Firstly, are you actually aware that David Robinson completely agrees
> with me about the state of the audio quality on DAB? It seems not.
>
>

Yes, I am aware of that, and I agree with him too (w.r.t. the state of
the audio quality on non-speech channels on DAB). But I wasn't talking
about DAB.

>
>>that PAL is
>>crap because it gives artifacts and reduces the luma bandwidth
>>available. Not only that but it affects viewers who wish to remain
>>watching in black and white! He'd probably be campaigning for all new
>>colour TV receivers to be equipped with two tuners so as to allow the
>>chroma information to be broadcast on a different UHF channel to the
>>luma, halving the number of available UHF channels at a stroke. And
>>no-one would be disagreeing that PAL has some downsides, but they
>>would be tirelessly pointing out to him that it is a compromise.
>
>
>
> Don't even attempt to patronise me.
>
> You paint me as being someone who is unreasonable. If you look at this
> page:
>
> http://www.digitalradiotech.co.uk/wasted_dab_multiplex_capacity.htm
>
> read the following sentence (below the local DAB mux table):
>
> "So, out of a total of 183 stations that could be using 160kbps or above
> (including 3 BBC stations and the Digital One stations), we currently
> have only 6 stations using 160kbps or above!!"
>
> Compromise? Where is the compromise when THE SPACE IS NOT USED?
>

Again, who was talking about DAB? I was referring to your hatred of 720p.

Kevin Bracey

unread,
Feb 11, 2005, 12:41:45 PM2/11/05
to
In message <Uw5Pd.310$vc3...@newsfe5-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> Kevin Bracey wrote:
>
> > But 1080i screens will tend to be bigger than 576i screens - the
> > trend has certainly been upwards for many years, anyway. That is
> > working in the opposite direction, making the judder more visible. A
> > 38-inch 1080i set would have the same apparent judder as a 20-inch
> > 576i set.
>
> Who has, or buys, a 20" TV set for their main set these days?

> [snip].

Sigh. Even when I put in a sentence trying to deter him from spouting off on
a bizarre pedantic tangent missing the point, he still does.

Allow me to repeat that sentence from the post you're replying to.

[ Just to spell it out for Steve, what with the reading comprehension
difficulties, I'm not saying the judder will be more visible on a typical
1080i set than a typical 576i set, I'm just saying that a typical 1080i set
will be bigger than a typical 576i set, so the judder will not typically be
reduced by as much as 576/1080 ].

And of course, you totally ignored the most important part of the post, which
was the second half.

davidr...@postmaster.co.uk

unread,
Feb 11, 2005, 12:54:31 PM2/11/05
to
DAB sounds worse than FM wrote:
> Kevin Bracey wrote:
> > In message <vO4Pd.294$vc3...@newsfe5-gui.ntli.net>
> > "DAB sounds worse than FM" <dab...@low.quality> wrote:
> >
> >> Twitter is inversely proportional to the number of lines. How many
> >> times does that need to be repeated?
> >
> > Veracity is not increased by number of repetitions.
> >
> > We've done our best to explain why it isn't inversely proportional.
> > How about you explain why it is? I'd be particularly interested to
> > know how alternating line patterns are less of a problem with more
> > lines.
>
> See other post.

Which one? I've mentioned the "black/white every other horizontal line
interlaced = flashing screen" vertical resolution limit in at least 3
posts, and you haven't answered it at all.

Here Kevin mentions it again. Rather than quoting from your book, how
about trying to understand the issue?

Cheers,
David.

DAB sounds worse than FM

unread,
Feb 11, 2005, 1:00:35 PM2/11/05
to
Kevin Bracey wrote:
> In message <Uw5Pd.310$vc3...@newsfe5-gui.ntli.net>
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Kevin Bracey wrote:
>>
>>> But 1080i screens will tend to be bigger than 576i screens - the
>>> trend has certainly been upwards for many years, anyway. That is
>>> working in the opposite direction, making the judder more visible. A
>>> 38-inch 1080i set would have the same apparent judder as a 20-inch
>>> 576i set.
>>
>> Who has, or buys, a 20" TV set for their main set these days?
>
>> [snip].
>
> Sigh. Even when I put in a sentence trying to deter him from spouting
> off on a bizarre pedantic tangent missing the point, he still does.


Bizarre pedantic tangent? We are supposed to be talking about twitter,
no? I don't think you've grasped what the phrase "going off on a
tangent" means.


> Allow me to repeat that sentence from the post you're replying to.
>
> [ Just to spell it out for Steve, what with the reading comprehension
> difficulties,


BTW, my qualifications will be superior to yours, which implies that I
am very likely to be superior to you at reading comprehension. Hope that
helps.


> I'm not saying the judder will be more visible on a
> typical 1080i set than a typical 576i set, I'm just saying that a
> typical 1080i set will be bigger than a typical 576i set, so the
> judder will not typically be reduced by as much as 576/1080 ].


Given the same viewing distance, it is the pixel density that is the
important parameter for determining whether twitter is increased or
diminished. Hence I showed you that the pixel density is significantly
higher for HDTV than it is for SDTV, unless someone buys a TV the size
of a wall.

If you cannot figure that out, then you must be thick.


> And of course, you totally ignored the most important part of the
> post, which was the second half.


Well, I ignored the dreadful diagrams in both the first and second
halves of the post.

I noticed this:

"That results in a flat 25Hz strobe over the whole area, and is totally
independent of the number of lines. Hope you haven't got any epileptics
watching."

which was to do with the 2nd diagram, but because the diagram was so bad
then I ignored that, too.

Then I saw this:

"To remove the potential for that effect requires a vertical filter to
reduce
the resolution to 0.7-odd (to reduce it) or 0.5 (if you want to totally
eliminate it). It doesn't matter how big or what resolution your picture
is."

And decided that that is the typical anti-interlacing bullshit party
line, which didn't warrant a reply.

Sorry about that.

Kevin Bracey

unread,
Feb 11, 2005, 12:58:48 PM2/11/05
to
In message <Ox5Pd.311$vc3...@newsfe5-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> Kevin Bracey wrote:
> > In message <vO4Pd.294$vc3...@newsfe5-gui.ntli.net>
> > "DAB sounds worse than FM" <dab...@low.quality> wrote:
> >
> >> Twitter is inversely proportional to the number of lines. How many
> >> times does that need to be repeated?
> >
> > Veracity is not increased by number of repetitions.
> >
> > We've done our best to explain why it isn't inversely proportional.
> > How about you explain why it is? I'd be particularly interested to
> > know how alternating line patterns are less of a problem with more
> > lines.
>
> See other post.

The one where you totally ignored the issue of alternating line patterns and
went off on a tangent about screen measurements? Or did I miss one where
you've addressed this?

davidr...@postmaster.co.uk

unread,
Feb 11, 2005, 1:14:29 PM2/11/05
to
DAB sounds worse than FM wrote:

> As I've quoted many times: twitter is inversely proportional to line
> count. So, if you increase the number of lines the twitter obviously
> reduces. The only common-sense conclusion is that with HDTV, because
> twitter is improved, the interlace factor can be increased. But
> you're suggesting that it should decrease!

It's only inversely proportional for a given image. If the image has
significant high-frequency components relative to the line count (i.e.
the filter is actually needed), then you'll have _more_ twitter if you
reduce the filter as you have suggested.

> I've just played a file that was recorded off Euro1080 on satellite,
> and there's masses of detail in it (although it plays upside down for
> some unknown reason) and they use 1080i, and I did not see any twitter
> whatsoever, let alone "flashing like mad".

Why, do you have a 1920x1080 interlaced display? If you are
deinterlacing, then there could be no flashing at all. Are you certain
that Euro1080 are allowing 0.9x the vertical line frequency through
anyway? I would expect them to be using a sensible vertical filter, not
the 0.9 you suggest. They may have even more filtering than you expect
in both directions to help the MPEG codec.

> *You* are accusing *me* of misunderstanding the quotes in my
> book???????????
>
> Here's the quotes you're claiming *I've* mis-understood:
>
> "Twitter is introduced, however, by vertical detail whose scale
> approaches the scan-line pitch. Twitter can be reduced to tolerable
> levels by reducing the vertical detail somwhat, to perhaps 0.7
times."
>
> and
>
> "Twitter and scan-line visibility are inversely proportional to the
> count of image rows"
>
> I'm not really sure what there is to mis-understand about the above
> quotes?

Easy: the second quote is for a given image. It means you can keep the
same image, use more lines, keep the anti-twitter factor the same, and
the twitter will be less. When you talk about X being proportional to
Y, it's assumed that any other factors are held constant.

You're claiming that X is still proportional to Y, even if we mess
around with two other factors at the same time!

> How on earth can you argue that twitter increases when you go from SD
> to HD when it is stated that twitter is inversely proportional to the
> number of lines?????????????????????

I didn't Steve. I gave an example suggesting that, with otherwise
excellent HD material, it may be subjectively preferable to eradicate
twitter altogether, which requires a more effective filter. I didn't say
you had to, I said you could.

However, you have claimed that you can have a less effective filter,
maintaining an even higher resolution than the increase in line count
would suggest, and twitter will not get any worse than SD. This is
simply junk. It'll work for pictures with little high frequency content
(where the filter is doing nothing anyway) but near and at the limiting
case, it's obvious what a less effective filter is going to do - it's
going to make the screen flash!

I've given you the example three times, Kevin has given it to you
twice.

I've even forgotten the relevance of this - I think you were trying to
pretend the vertical resolution of 1080i approached 1080p, soundly
beating 720p - whereas the vertical resolution of 1080i is probably
similar to 720p.

Cheers,
David.

DAB sounds worse than FM

unread,
Feb 11, 2005, 1:23:36 PM2/11/05
to
Kevin Bracey wrote:
> In message <Ox5Pd.311$vc3...@newsfe5-gui.ntli.net>
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Kevin Bracey wrote:
>>> In message <vO4Pd.294$vc3...@newsfe5-gui.ntli.net>
>>> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>>>
>>>> Twitter is inversely proportional to the number of lines. How many
>>>> times does that need to be repeated?
>>>
>>> Veracity is not increased by number of repetitions.
>>>
>>> We've done our best to explain why it isn't inversely proportional.
>>> How about you explain why it is? I'd be particularly interested to
>>> know how alternating line patterns are less of a problem with more
>>> lines.
>>
>> See other post.
>
> The one where you totally ignored the issue of alternating line
> patterns and went off on a tangent about screen measurements? Or did
> I miss one where you've addressed this?


I'd happily respond if you could describe WTF you're going on about
better than:

"If you define twitter only as the vertical judder caused by a single
sharp
vertical light->dark transition, then maybe it's true. Example:

field 1 field 2
********* *********
********* *********
********* ********* x
--------- ----> x ---------
--------- ---------
--------- ---------


The "x" marks the location of the apparent boundary between the regions,
which bounces up and down at 25Hz."

Are you referring to the same effect as on the images of the Park Run on
here:

http://www.ebu.ch/trev_301-editorial.html

If so, that is not twitter.

If you're going on about a vertical light to dark transition that is not
moving, then why would anything change between fields, e.g.:


vertical line:

|
|
|
|
|
|
|

If the line doesn't move then you will see:

|
|
|
|
|
|
|

That is, no twitter, no artefacts whatsoever.

Basically, explain better what you're trying to convince me of, or just
sack it.

Oh, and slagging me off won't get you anywhere cos all I'll do is do the
same back. But to be honest it's a fking waste of my time, so fking
leave it out.

DAB sounds worse than FM

unread,
Feb 11, 2005, 2:02:47 PM2/11/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>> Kevin Bracey wrote:
>>> In message <vO4Pd.294$vc3...@newsfe5-gui.ntli.net>
>>> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>>>
>>>> Twitter is inversely proportional to the number of lines. How many
>>>> times does that need to be repeated?
>>>
>>> Veracity is not increased by number of repetitions.
>>>
>>> We've done our best to explain why it isn't inversely proportional.
>>> How about you explain why it is? I'd be particularly interested to
>>> know how alternating line patterns are less of a problem with more
>>> lines.
>>
>> See other post.
>
> Which one? I've mentioned the "black/white every other horizontal line
> interlaced = flashing screen" vertical resolution limit in at least 3
> posts, and you haven't answered it at all.


The twitter filter is used precisely to mitigate this kind of effect.
Here, read an alternative description:

http://www.videosystems.com/e-newsletters/HDVatWork_1_10_05/

"When a CCD adds row-pairs, the image is essentially passed through a
low-pass filter. (A filter that removes high frequencies representing
fine detail.) This filter helps prevent "interline flicker." Interline
flicker is "aliasing" that occurs when horizontal image detail (signal
frequency) approaches that of the CCD row-structure (sampling
frequency).

In plain English, this means that when a sharp, horizontal edge (or
line) is imaged, you may see a 30Hz flicker on it. The flicker is caused
by the horizontal edge being represented by a single interlace scan-line
that is displayed 30 times per second.

If a horizontal edge moves vertically, the flicker moves up or down the
screen causing "interline twitter." Both twitter and flicker are
reduced, but not eliminated, when horizontal edges (or lines) are
softened by the low-pass filter. Essentially an edge is smeared across
two lines so that during display, the edge is displayed at 60Hz. Twitter
and flicker are two of the most objectionable artifacts of interlace
scanning.

Unfortunately, a price must be paid for the benefits of row-pair
summing. The low-pass filter decreases an image's effective vertical
resolution by about 25 percent -- from 540 lines to about 405 lines per
field. Therefore, the FX1/Z1's effective vertical resolution should be
about 810 lines when operating in 1080i60 mode. And, indeed, the
measured resolution of the FX1 is almost 800 lines."


> Here Kevin mentions it again. Rather than quoting from your book, how
> about trying to understand the issue?


I do understand that the twitter filter is used to limit the fine
detail. What else is the 0.7 factor for??
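
A minimal sketch of that row-pair summing in Python/NumPy, treating a
frame as a bare 2-D array of luma samples (this illustrates the
principle only, not the FX1's actual processing):

import numpy as np

def row_pair_sum(frame):
    # Average each row with the row below it: the crude vertical
    # low-pass filter described in the quote above.
    out = frame.astype(float)
    out[:-1] = (frame[:-1] + frame[1:]) / 2.0
    return out

# Worst case for twitter: a one-line-high white stripe on black.
frame = np.zeros((8, 4))
frame[3, :] = 255.0

# Unfiltered, the stripe lands entirely in one field, so an interlaced
# display flashes it on and off at the field-pair rate.
print(frame[0::2][:, 0], frame[1::2][:, 0])   # one field dark, one lit

# Filtered, both fields carry the stripe at half amplitude: the edge is
# smeared across two lines, so it is refreshed in every field.
soft = row_pair_sum(frame)
print(soft[0::2][:, 0], soft[1::2][:, 0])     # 127.5 in both fields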

DAB sounds worse than FM

unread,
Feb 11, 2005, 2:49:49 PM2/11/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>
>> As I've quoted many times: twitter is inversely proportional to line
>> count. So, if you increase the number of lines the twitter obviously
>> reduces. The only common-sense conclusion is that with HDTV,
>> because twitter is improved, the interlace factor can be increased.
>> But you're suggesting that it should decrease!
>
> It's only inversely proportional for a given image. If the image has
> significant high-frequency components relative to the line count (i.e.
> the filter is actually needed), then you'll have _more_ twitter if you
> reduce the filter as you have suggested.


The way I visualise it is that fine detail needs to be lowpass filtered
so that the twitter isn't objectionable. But when the number of rows
increases, the fine detail that would give rise to twitter becomes
physically smaller, i.e. its scale is inversely proportional to the
number of rows. So, if the twitter was tolerable with 576 lines, then,
for the same level of tolerability, the lowpass filter can be relaxed
when there are more lines.


>> I've just played a file that was recorded off Euro1080 on satellite,
>> and there's masses of detail in it (although it plays upside down
>> for some unknown reason) and they use 1080i, and I did not see any
>> twitter whatsoever, let alone "flashing like mad".
>
> Why, do you have a 1920x1080 interlaced display?


My monitor is progressive, obviously.


> If you are
> deinterlacing, then there could be no flashing at all.


When does this flashing occur then?


> Are you certain
> that Euro1080 are allowing 0.9x the vertical line frequency through
> anyway?


I didn't say they were using a value of 0.9x.


> I would expect them to be using a sensible vertical filter,
> not the 0.9 you suggest.


How the fk do you know what they're using?


> They may have even more filtering than you
> expect in both directions to help the MPEG codec.


And they might not.


>> *You* are accusing *me* of misunderstanding the quotes in my
>> book???????????
>>
>> Here's the quotes you're claiming *I've* mis-understood:
>>
>> "Twitter is introduced, however, by vertical detail whose scale
>> approaches the scan-line pitch. Twitter can be reduced to tolerable
>> levels by reducing the vertical detail somewhat, to perhaps 0.7
>> times."
>>
>> and
>>
>> "Twitter and scan-line visibility are inversely proportional to the
>> count of image rows"
>>
>> I'm not really sure what there is to mis-understand about the above
>> quotes?
>
> Easy: the second quote is for a given image. It means you can keep the
> same image, use more lines, keep the anti-twitter factor the same, and
> the twitter will be less.


Who says that the twitter needs to be less?


> When you talk about X being proportional to
> Y, it's assumed that any other factors are held constant.


Yes, but there's a 3-way trade-off between the amount of twitter, the
number of lines, and the interlace factor (which limits the vertical
resolution). Given that twitter is only perceived a certain proportion
of the time, it's not for you to say that twitter has to be reduced in
proportion with the increase in the number of lines, because it might be
better (and I think it would be better) to relax the twitter filter to
increase vertical resolution. Basically, you can accept a little more
twitter and gain vertical resolution, provided you don't relax the
filter too far. That would probably be the best way to go.


> You're claiming that X is still proportional to Y,


The distance between scan-lines reduces proportionally with the increase
in the number of lines. So, if some pixels effectively move up and down
and that can be perceived, then if the distance between scan-lines is
reduced proportionally to the increase in the number of lines, then the
twitter is reduced.


> even if we mess
> around with two other factors at the same time!


3-way (or more) trade-offs are very common.


>> How on earth can you argue that twitter increases when you go from
>> SD to HD when it is stated that twitter is inversely proportional to
>> the number of lines?????????????????????
>
> I didn't Steve. I gave an example suggesting that, with otherwise
> excellent HD material, it may be subjectively preferable to eradicate
> twitter altogether, which requires a more effective filter. I didn't
> say you had to, I said you could.


Right, I agree, you could. But it's not necessarily what you want to do.
It's a 3-way trade-off, and if the twitter becomes significantly less
perceivable due to the narrower line-pitch, then it is certainly not out
of the question that the vertical filter can be relaxed *and* the
twitter significantly reduced.


> However, you have claimed that you can have a less effective filter,
> maintaining an even higher resolution than the increase in line count
> would suggest, and twitter will not get any worse than SD.


Correct.


> This is
> simply junk.


It's a 3-way trade-off and with 3-way trade-offs, if one parameter (the
number of lines) significantly improves, then the other 2 parameters can
both be improved. You don't have to improve just one and keep the other
constant.


> It'll work for pictures with little high frequency
> content (where the filter is doing nothing anyway) but near and at
> the limiting case, it's obvious what a less effective filter is going
> to do - it's going to make the screen flash!


So you keep saying.


> I've given you the example three times, Kevin has given it to you
> twice.


That example doesn't take into consideration the vertical lowpass
filter.


> I've even forgotten the relevance of this - I think you were trying to
> pretend the vertical resolution of 1080i approached 1080p


Pretend? Do me a fking favour. I have never said that 1080i has the same
vertical resolution as 1080p.


>, soundly beating 720p - whereas the vertical resolution of 1080i is
>probably
> similar to 720p.


Using a figure for the interlace factor of 0.7, then the vertical
resolution would be

1080 x 0.7 = 756

Alternatively:

"Unfortunately, a price must be paid for the benefits of row-pair
summing. The low-pass filter decreases an image's effective vertical
resolution by about 25 percent -- from 540 lines to about 405 lines per
field. Therefore, the FX1/Z1's effective vertical resolution should be
about 810 lines when operating in 1080i60 mode. And, indeed, the
measured resolution of the FX1 is almost 800 lines."

which would put the figure at 0.75, and

1080 x 0.75 = 810

((756 - 720)/720) x 100 = 5% improvement

((810 - 720)/720) x 100 = 12.5% improvement

You may call it "similar", I call it lower.
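
The same sum over a range of interlace factors, as a quick sketch (0.7
and 0.75 are the figures cited above; the other values are purely
illustrative):

# Effective vertical resolution of 1080i for a given interlace factor,
# compared against 720 progressive lines.
for factor in (0.5, 0.6, 0.7, 0.75, 0.9):
    effective = 1080 * factor
    change = (effective - 720) / 720 * 100
    print(f"factor {factor}: {effective:.0f} lines ({change:+.1f}% vs 720p)")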

Dave Farrance

unread,
Feb 11, 2005, 3:01:31 PM2/11/05
to
Andrew Hodgkinson <ahod...@rowing.org.uk> wrote:

>This is what the OP was saying. Interlace and deinterlace can be
>considered part of the CODECs. We still start with THE SAME format, and
>displaying THE SAME format. It's just that one of our CODECs takes the
>rather crude method of throwing away half of the picture lines in each
>frame, leaving the decoder with a bit of nasty guessing game to put them
>back again.

Thanks for your exposition - quite a model of clarity. If the
comprehension wasn't there in one case, for whatever reason, then it was
no reflection on the writing. :)

--
Dave Farrance

Dave Farrance

unread,
Feb 11, 2005, 4:13:59 PM2/11/05
to
"Christopher Key" <cj...@cam.ac.uk> wrote:

>This sounds like one of the most sensible things I've read so far. I'm
>amazed that the interlaced / progressive argument is still running for HD,
>and that the above hasn't long since been the way that modern codecs
>operate.

Thanks. It isn't actually the way that any advanced video codec
operates, nor even a desirable way; it was just an illustration of
theoretical capabilities.

>... Anything that was
>'non optimal', ie something that the codec wouldn't have done will clearly
>result in a reduction in quality. If it was 'optimal' then the codec would
>have done it anyway, and there is no point in having the extra step.

Exactly so :)
>
>The above all relates to a theoretical codec that is compressing for one
>specific type of display, but I can see no reason why the results can't be
>applied to a codec under development today. It surely can't be beyond the
>wit of developers to detect in real time whether it is best to apply
>interlacing or not, or even to develop a codec that uses it selectively
>throughout the picture. If this really is asking too much, then surely
>having it as part of the specification, but simply as an on/off option for
>the compressor would allow for it to become automatic without affecting the
>decompressor at all.

Actually, I wonder if we're already there as regards a baseline
specification. H.264's data symbol definitions are fixed, of course, but
it's a rich enough symbol set, that the current encoder prediction
schemes only begin to tap its potential. Now that it's been adopted by
broadcasters, it'll be interesting to see how it develops.

--
Dave Farrance

DAB sounds worse than FM

unread,
Feb 11, 2005, 4:18:39 PM2/11/05
to
Dave Farrance wrote:
> Andrew Hodgkinson <ahod...@rowing.org.uk> wrote:
>
>> This is what the OP was saying. Interlace and deinterlace can be
>> considered part of the CODECs. We still start with THE SAME format,
>> and displaying THE SAME format. It's just that one of our CODECs
>> takes the rather crude method of throwing away half of the picture
>> lines in each frame, leaving the decoder with a bit of nasty
>> guessing game to put them back again.
>
> Thanks for your exposition - quite a model of clarity.


Indeed it was. It was basically the absolute opposite of your posts
where you simply said that re-drawing the block diagram proves that you
can provide the same level of picture quality at the same resolution for
progressive as for interlaced. Total, absolute, complete and utter
nonsense.


> If the comprehension wasn't there in one case, for whatever reason,
> then it
> was no reflection on the writing. :)


I take it that you now accept that your original scheme (that I will
codename Alchemy) was absurd and that your supposed proof was never in
fact a proof, but rather a figment of your imagination?

Dave Farrance

unread,
Feb 11, 2005, 5:15:17 PM2/11/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>I take it that you now accept that your original scheme (that I will
>codename Alchemy) was absurd and that your supposed proof was never in
>fact a proof, but rather a figment of your imagination?

Curious.

Well, thanks David, Andrew, and Christopher for your heroic attempts to
explain my reasoning to Mr FM, but I'd guess that we're dealing with an
"issue" a bit more fundamental than poor reading comprehension here, so
any further attempts are likely to be wasted.

--
Dave Farrance

DAB sounds worse than FM

unread,
Feb 12, 2005, 8:54:42 AM2/12/05
to
Dave Farrance wrote:
> "Christopher Key" <cj...@cam.ac.uk> wrote:

>> The above all relates to a theoretical codec that is compressing for
>> one specific type of display, but I can see no reason why the
>> results can't be applied to a codec under development today. It
>> surely can't be beyond the wit of developers to detect in real time
>> whether it is best to apply interlacing or not, or even to develop a
>> codec that uses it selectively throughout the picture. If this
>> really is asking too much, then surely having it as part of the
>> specification, but simply as an on/off option for the compressor
>> would allow for it to become automatic without affecting the
>> decompressor at all.
>
> Actually, I wonder if we're already there as regards a baseline
> specification. H.264's data symbol definitions are fixed, of course,
> but it's a rich enough symbol set, that the current encoder prediction
> schemes only begin to tap its potential.


"H.264's data symbol definitions are fixed"
"it's a rich enough symbol set"


I find that an imposter can always be spotted by his use of incorrect
terminology.

DAB sounds worse than FM

unread,
Feb 12, 2005, 9:01:34 AM2/12/05
to


Absolutely! The fundamental issue here is:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

and you then went on to reduce the bit rate by using interlacing, but
passed-off your scheme as being progressive!! AND, your scheme would not
reduce the bit rate to the same level as pure interlacing anyway.

In other words: nothing that you have said has been correct!!! To me,
that is a fundamental issue.

Dave Farrance

unread,
Feb 12, 2005, 12:38:15 PM2/12/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>Dave Farrance wrote:
>> Actually, I wonder if we're already there as regards a baseline
>> specification. H.264's data symbol definitions are fixed, of course,
>> but it's a rich enough symbol set, that the current encoder prediction
>> schemes only begin to tap its potential.
>
>"H.264's data symbol definitions are fixed"
>"it's a rich enough symbol set"
>
>I find that an imposter can always be spotted by his use of incorrect
>terminology.

Please understand that I'm not trying to be rude here, but would I be
right in assuming that all of your knowledge is academic, and that
you've never actually done any engineering work in this field?

It might help if you understood that those of us that have done design
work with video processing (or any design engineering task) often
consider it from a more abstract viewpoint than you're used to. We have
to, because we must go through all the requirements for a design before
tackling the design work itself, and that tends to enforce an
understanding of what is necessary and what is arbitrary. Not that it's
anything that need concern you, but I thought that I should point out
where we're coming from.

--
Dave Farrance


davidr...@postmaster.co.uk

unread,
Feb 12, 2005, 1:20:46 PM2/12/05
to
DAB sounds worse than FM wrote:

> BTW, my qualifications will be superior to yours, which implies that
> I am very likely to be superior to you at reading comprehension. Hope
> that helps.

I'm sure anyone who doesn't know you is never going to believe that you
have "two first class degrees and a masters" from the evidence they see
on here.

Anyone with an ounce of tact, diplomacy, intelligence, understanding,
compassion, respect, humility, or even kindness is unlikely to write
what you've just written.

Which begs the question: why do you do it? It's not like it's going to
win the argument - it might make the person you're arguing with go
away, but not because they're wrong or are intimidated by your
qualifications.

We seem to have gone a long way from interlacing. If you understand
what Dave was originally trying to say with his "draw a box around the
codec and interlacing", what's your answer to it?

Cheers,
David.

DAB sounds worse than FM

unread,
Feb 12, 2005, 1:41:54 PM2/12/05
to
Dave Farrance wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Dave Farrance wrote:
>>> Actually, I wonder if we're already there as regards a baseline
>>> specification. H.264's data symbol definitions are fixed, of course,
>>> but it's a rich enough symbol set, that the current encoder
>>> prediction schemes only begin to tap its potential.
>>
>> "H.264's data symbol definitions are fixed"
>> "it's a rich enough symbol set"
>>
>> I find that an imposter can always be spotted by his use of incorrect
>> terminology.
>
> Please understand that I'm not trying to be rude here, but would I be
> right in assuming that all of your knowledge is academic, and that
> you've never actually done any engineering work in this field?


Correct. But at least the academic work that I've done gives me a good
foundation to understand digital video, whereas I do not believe you
have such knowledge. The reason I say this is that you do not seem able
to provide any details about your scheme other than to say that
re-drawing a block diagram solves all ills.


> It might help if you understood that those of us that have done design
> work with video processing


Please say exactly what video processing work you have done.


> (or any design engineering task) often
> consider it from a more abstract viewpoint than you're used to.


Look, this is your original claim:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

That claim is false, and you have never given any evidence to suggest
that it is true. Forget abstract viewpoints. Here's an analogy to your
claim:

Nearly all music stations on DAB use 128kbps, which provides low audio
quality, and using the MP2 codec on DAB requires in the region of
192kbps for good quality audio. Let's say that the audio going into the
encoder is sampled at 16-bit accuracy and the audio is lowpass filtered
with a cutoff frequency of 16kHz. So, we've got a block diagram as
follows:

16kHz LPF => ADC => encoder => output

Now, to follow your analogy, simply re-drawing the above block diagram
will cure all DAB's audio quality problems at a stroke as follows:

16kHz LPF => ADC & encoder => output

But that achieves absolutely nothing.


> We
> have to, because we must go through all the requirements for a design
> before tackling the design work itself, and that tends to enforce an
> understanding of what is necessary and what is arbitrary. Not that
> it's anything that need concern you, but I thought that I should
> point out where we're coming from.


I was taught Project Management at uni (in fact, unfortunately, I got a
double helping of it), and our 4th year group project required us to do
all the project management malarkey. So I know what the process is. But,
put simply, just stating a requirement of a system does not
automatically make what is desired achievable. This is what you do not
seem to be able to grasp. If things were as easy as you seem to think
they are, then re-drawing block diagrams could make petrol engines
environmentally friendly, jets silent, etc etc. Some things just are not
possible no matter how much you would like them to be.

I repeat: you are wrong. But I'm also open-minded to new ideas. So, if
you think your scheme can do what you think it can do, then you should
be able to describe how it can be done. Is that too much trouble?

Dave Farrance

unread,
Feb 12, 2005, 3:17:29 PM2/12/05
to
"DAB sounds worse than FM" <dab...@low.quality> wrote:

>Dave Farrance wrote:
>> Please understand that I'm not trying to be rude here, but would I be
>> right in assuming that all of your knowledge is academic, and that
>> you've never actually done any engineering work in this field?
>
>Correct. But at least the academic work that I've done gives me a good
>foundation to understand digital video, whereas I do not believe you
>have such knowledge. The reason I say this is that you do not seem able
>to provide any details about your scheme other than to say that
>re-drawing a block diagram solves all ills.

It was really your misunderstanding of what I was saying that left me
with no other conclusion than the fact that you hadn't developed the
mind-set to do practical work in the field. Sorry, but there it is.

>Please say exactly what video processing work you have done.

The video processing design work that I've done has mostly been for
avionics - for example, DSP coding for data capture and formatting the
video output, and FPGA design for extracting the video data from RAM and
creating the drive waveforms for a CRT glass. I've also done a fair
amount of data communication design work, which is conceptually similar
to multimedia video delivery systems.

>I repeat: you are wrong. But I'm also open-minded to new ideas. So, if
>you think your scheme can do what you think it can do, then you should
>be able to describe how it can be done. Is that too much trouble?

The others posting in this thread did their best to explain it to you,
and still you didn't understand it. So as I've already said, any further
attempt would be wasted. So good luck with whatever your actual field of
work is, and I'm out of this thread.

--
Dave Farrance

Andrew Hodgkinson

unread,
Feb 12, 2005, 7:15:45 PM2/12/05
to
DAB sounds worse than FM wrote:

> 16kHz LPF => ADC => encoder => output

Well, OK, let's go with that analogy.

This thread has many arguments running through it. I'm not going to
discuss here whether 720p is better or worse than 1080i. I am going to
consider whether or not it's possible that one CODEC performs better
when interlacing is used as part of the method of data rate reduction,
whilst another CODEC performs better when interlace is not used as part
of the method of data rate reduction. For my purposes, "performs better"
means: a CODEC which provides at least equivalent or improved perceived
visible quality at either the same or a lower bitrate than the CODEC to
which it is compared.

Right. Let's say I have a piece of CD-sourced music (roughly 22kHz, or
1411.2kbps) and I want to encode it at 64kbps. I have two pieces of
software - Lame, an MP3 encoder, and the Ogg Vorbis AoTuV tuned encoder.

1) If I encode with Lame then the codec might, by default,
automatically low-pass filter the audio to 16kHz. This is because
the MP3 algorithms are pretty crude by today's standards (though
undoubtedly revolutionary in their day) and the quality management
algorithm determines that an unacceptable amount of audible
artifacts would be present at a full 22kHz. We end up with a 64kbps
MP3 file, which I play back, upsampling to 22kHz again so that
my external DAC can play the stream. The audio sounds a bit muffled,
but there aren't that many other intrusive audible artifacts.

- Now, I could use command line switches to force the MP3 encoder to
produce a 64kbps stream using full 22kHz data. But presumably, the
encoder's decision to low-pass filter the stream was based on a
belief that the result of not doing this filtering would be a lower
quality stream. It'd still be a 64kbps stream, just one with so
many high frequency "slushing" artifacts that it would be
unpleasant to listen to.

2) If I encode with AoTuV, the codec, as part of its operations, does
by default no low pass filtering. Its algorithms are more advanced
than MP3, and it can achieve a good quality rendition of the full
bandwidth of the music. I end up with a 64kbps Ogg Vorbis file - the
same bitrate as the MP3 file - but this is encoded at a full 22kHz.
On playback, no upsampling is required. The audio sounds passably
similar to the CD original; artifacts are noticeable, but not unduly
intrusive.

- I could use command line switches to force the encoder to low-pass
filter the data but this would needlessly reduce the quality of the
output. It'd still be a 64kbps stream with few intrusive artifacts,
bar one big new one - it'd sound more muffled, for no good reason.

How is it possible for both files to be at 64kbps, yet one had to
introduce a 16kHz filter whilst the other didn't? Simply, better
mathematics.

One CODEC performed better when a low pass filter was used. Another
CODEC performed better when a low pass filter was not used. So now let's
return to the video domain.

Suppose I have some uncompressed 1080p material that I want to encode at
a bitrate of 10Mbps.

I could use an MPEG 2 encoder. For sake of argument, let's say that this
encoder decides to downconvert the video to 1080i in order to make its
life easier. It is an inefficient encoder by modern standards; were it
to operate on full sized frames (1080p) rather than half sized frames
(by decimation to 1080i) it would lead to very blocky, nasty quality
video on subsequent decompression. So the MPEG encoder decimates to
1080i, and I get my 10Mbps stream. When I play it back, I deinterlace it
back to 1080p, because my plasma panel can't display interlaced video
natively. I get video with a reasonable amount of blocky artifacts, but
marred by interlace and deinterlace artifacts.

As just mentioned, I could force the MPEG 2 encoder to operate on 1080p
material but the result would be very blocky and unpleasant to watch.

So, next I feed the same 1080p material into an H264 encoder. This
encoder does not decimate to 1080i, as the encoder's quality management
algorithms believe an acceptable result can be produced at full 1080p.
So I get my 10Mbps stream again, but this time I don't have to upconvert
it to progressive to display on my TV, as it's already in 1080p format.
I get video with - hopefully! - a reasonable amount of blocky artifacts,
but no deinterlace artifacts.

I could force the H264 encoder to operate on 1080i material and the
result would suffer from fewer direct compression artifacts, however
this would be outweighed by the artifacts introduced by interlacing and
deinterlacing.

How is it possible for two streams to be at 10Mbps, yet one had to use
interlace and deinterlace steps whilst the other didn't? The same way
that the Ogg Vorbis file didn't need low pass filtering: better mathematics.

Whether you prefer the MPEG 2 version with its interlace/deinterlace
artifacts, or the H264 version with just its own compression artifacts,
is in part down to personal opinion. There are also analysis tools that
can produce quantitative measurements of how far the decoded 1080p
result differs from the uncompressed 1080p source.

As the original poster said, there is not nor has there ever been ANY
ABSOLUTE TECHNICAL REQUIREMENT to use more bandwidth to broadcast 1080i
material than 1080p, because the encoders are lossy. They will operate
at whatever bitrate we ask them to, within their specified limits.

Better encoding makes it possible to convey more information in a given
amount of bandwidth. I can compress PNG images losslessly at ten
selectable compression levels, which gives me a range of different file
sizes, just because more advanced (and CPU intensive) maths happens to
be used for the cases that give smaller files. Similarly, I can easily
imagine something that would do better than MPEG in the same bandwidth,
allowing a perceived equivalent quality of video operating in 1080p
throughout the chain, compared with MPEG 2 which has to interlace and
deinterlace within the chain.

One last example. Suppose you were handed a black box into which you fed
uncompressed 1080p video. You were given another black box which had an
HDMI output, that supplied 1080p video too. The first black box
transmits the picture to the second. It transmits at 10Mbps. Would you
even *care* (at a shallow level) if our black boxes:

(1) interlace, encode with MPEG 2, transmit, decode with MPEG 2, and
deinterlace; or
(2) encode with H264, transmit, and decode with H264...

...so long as the perceived quality is the same? And would you not be
annoyed if someone only ever gave you 1080i material, because they
thought that you couldn't possibly transmit 1080p in 10Mbps - wouldn't
you rather they let your black boxes make that decision?

Interlace is just one of many ways of lossy encoding source data!
Something that converts 1080p to 1080i is a form of lossy encoder.
Something that converts 1080i to 1080p is a form of decoder. Something
that can do both is therefore a lossy CODEC.
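
To make that concrete, here's a minimal Python/NumPy sketch of interlace
treated as a lossy encode/decode pair. The "decoder" just line-doubles
each field, about the crudest guess available; real deinterlacers
interpolate or look at neighbouring fields, but the discarded
information is gone either way:

import numpy as np

def interlace_encode(frames):
    # Lossy "encoder": keep even rows on even-numbered frames, odd rows
    # on odd-numbered frames. Half of the samples are simply discarded.
    return [frame[(i % 2)::2] for i, frame in enumerate(frames)]

def deinterlace_decode(fields):
    # "Decoder": guess the full frame back by line-doubling each field.
    frames = []
    for i, field in enumerate(fields):
        frame = np.repeat(field, 2, axis=0)
        if i % 2:                        # odd fields sit one line lower
            frame = np.roll(frame, 1, axis=0)
        frames.append(frame)
    return frames

rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (1080, 1920)) for _ in range(4)]
restored = deinterlace_decode(interlace_encode(frames))
error = np.mean([np.abs(a - b).mean() for a, b in zip(frames, restored)])
print(f"mean reconstruction error: {error:.1f}")  # nonzero: it is lossy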

Whether or not 720p in a given available amount of bandwidth produces
better video than 1080i in the same amount of bandwidth is an
interesting question. Whether or not a 1080p stream decimated to 1080i
and fed through MPEG 2 encoding produces better or worse results than
1080p fed directly through H264 (or DivX, or VP6, or Matroska, or
whatever) is another interesting question.

Without real-world material encoded into test streams and double blind
tested we just can't answer the above. Consequently, this aspect of the
current discussion is inherently pointless. No consensus can be reached
unless all our current subjective opinions on the matter happen by
chance to converge.

--
TTFN, Andrew

"Hold on tight, lad, and think of Lancashire Hotpot!" - A Grand Day Out

Andrew Hodgkinson

unread,
Feb 12, 2005, 7:37:53 PM2/12/05
to
Christopher Key wrote:

> 1080p compressed by some theoretical codec will always match or
> outperform 1080i compressed into the same bandwidth.

I actually disagreed with that for a while before I added some
qualifiers in my head. Would I be correct in rephrasing what you
said as follows?

There can in theory exist an encoder, such that 1080p compressed
by that encoder will always match or outperform 1080i compressed
by that encoder into the same bandwidth.

This is because in order to compare the results of compression of 1080i
versus 1080p by the encoder we'd have to use the same source material.
That source material would logically be 1080p, as 1080i is of a lower
resolution. The creation of the 1080i stream, by "pure interlacing" of
the 1080p source, can be considered part of a lossy encoding mechanism
in and of itself. But there may exist an encoding method that, shock
horror, has a better way than interlacing of squashing the 1080p source
data into the available bandwidth.

Note that this theoretical encoder is certainly *not* MPEG 2, but *may*
be H264 (I've not tried the latter, but have tried the former).

Andrew Hodgkinson

unread,
Feb 12, 2005, 8:25:21 PM2/12/05
to
DAB sounds worse than FM wrote:

> So, pixel density is 720 x 576 / 385 = 1077 pixels/sq. inch

Er, I thought we were talking about line twitter here...

> 1920 x 1080 / 1077 = 1925 sq. inches

...and there it is again. Where does the 720 in the first case, and the
1080 in the second case, come into it?

Suppose I had a telly that showed 576 scanlines, just like your first
example. But imagine it only showed, say, 100 pixels on each line. Would
it give less, or more line twitter than a 720 pixels-per-line set?

Surely it would give exactly the same amount.

> [ Snip proof saying a 30" SDTV has equivalent twitter to 67" HDTV ]

Perhaps I'm missing something important, but it seems to me that line
twitter has nothing to do with pixel density, and everything to do with
scanline density.

Now to compare this I could just do "1080/576 * 30" to get my HDTV
equivalent size to a 30" 16:9 SDTV display - 56.25". However, perhaps
that boiled down calculation could do with some explaining, so I'll go
around the houses by converting diagonals into widths and heights,
getting scanline densities, then going back to a diagonal again.

At 576 lines, the scanline density in lines per inch of a 16:9 TV is:

30" diagonal => sqrt((9 * 30)^2 / 337) = 14.71" high => 39.16

Therefore, 1080 scanlines / 39.16 scanlines per inch gives 27.58" high,
at 16:9 ratio that's (16 * 27.58) / 9 = 49.03" wide, for a diagonal of
exactly 56.25 inches (if you avoid rounding errors).
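
Or, as a sketch for anyone who wants to play with the numbers (same
assumptions: true 16:9 panels, and only scanline density matters):

import math

def height_from_diagonal(diagonal, w=16, h=9):
    # Height of a w:h screen from its diagonal: h*d / sqrt(w^2 + h^2).
    return diagonal * h / math.hypot(w, h)

density = 576 / height_from_diagonal(30)   # ~39.16 scanlines per inch

hd_height = 1080 / density                 # ~27.58 inches
hd_diagonal = hd_height * math.hypot(16, 9) / 9
print(round(hd_diagonal, 2))               # 56.25, i.e. 1080/576 * 30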

Some way off a 67" diagonal, but for sure, that's still a very big TV!
It looks like 42" is a pretty popular panel size at the moment (no stats
for that, it's just a guess), so for an equivalent viewing distance,
you'd definitely get some reduced twitter at 1080i versus 576i.

But this is all assuming you could actually find an interlace scanning
HDTV ready set in the first place, and further, an HDTV panel that
actually shows all 1080 scanlines without scaling :-)

Christopher Key

unread,
Feb 12, 2005, 9:33:32 PM2/12/05
to
Andrew Hodgkinson wrote:
> Christopher Key wrote:
>
>> 1080p compressed by some theoretical codec will always match or
>> outperform 1080i compressed into the same bandwidth.
>
> I actually disagreed with that for a while before I added some
> qualifiers in my head. Would I be correct in rephrasing what you
> said as follows?
>
> There can in theory exist an encoder, such that 1080p compressed
> by that encoder will always match or outperform 1080i compressed
> by that encoder into the same bandwidth.
>

I thought about the phrasing for a little while. I wasn't keen to refer
to the same encoder for both interlaced and progressive: operating on
different source material, they wouldn't necessarily operate in the same
manner. I think I'd probably rephrase it as:

    There will in theory always exist an encoder such that 1080p
    compressed by that encoder will always match or outperform
    1080i compressed by any other encoder into the same
    bandwidth, provided that both are to be displayed on identical
    displays.

It can probably also be extended to give a far more general rule:

    There will in theory always exist an encoder such that video
    compressed by that encoder will always match or outperform
    any video derived solely from the original and compressed by
    any other encoder into the same bandwidth, provided that both
    are to be displayed on identical displays.


> This is because in order to compare the results of compression of
> 1080i versus 1080p by the encoder we'd have to use the same source
> material. That source material would logically be 1080p, as 1080i is
> of a lower resolution. The creation of the 1080i stream, by "pure
> interlacing" of the 1080p source, can be considered part of a lossy
> encoding mechanism in and of itself. But there may exist an encoding
> method that, shock horror, has a better way than interlacing of
> squashing the 1080p source data into the available bandwidth.
>
> Note that this theoretical encoder is certainly *not* MPEG 2, but
> *may* be H264 (I've not tried the latter, but have tried the former).

I'd certainly agree with all the above. I'll have to have a look into
the workings of H264 to see what it's really capable of.


Kennedy McEwen

unread,
Feb 13, 2005, 6:17:10 AM2/13/05
to
In article <cumea0$b35$1...@gemini.csx.cam.ac.uk>, Christopher Key
<cj...@cam.ac.uk> writes

>
>It can probably also be extended to give a far more general rule:
>
> There will in theory always exist an encoder such that video
> compressed by that encoder will always match or outperform
> any video derived solely from the original and compressed by
> any other encoder into the same bandwidth, provided that both
> are to be displayed on identical displays.
>
I am pretty certain that violates the second law of thermodynamics in
the case where the "other" encoder already achieves maximum entropy.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)

Christopher Key

unread,
Feb 13, 2005, 8:12:37 AM2/13/05
to
Kennedy McEwen wrote:
> In article <cumea0$b35$1...@gemini.csx.cam.ac.uk>, Christopher Key
> <cj...@cam.ac.uk> writes
>>
>> It can probably also be extended to give a far more general rule:
>>
>> There will in theory always exist an encoder such that video
>> compressed by that encoder will always match or outperform
>> any video derived solely from the original and compressed by
>> any other encoder into the same bandwidth, provided that both
>> are to be displayed on identical displays.
>>
> I am pretty certain that violates the second law of thermodynamics in
> the case where the "other" encoder already achieves maximum entropy.

I'm not sure that it does necessarily, as it states that it 'will always
*match* or outperform'. In the case where the combination of the transform
occurring to the video and the subsequent encoding is optimal, then clearly
the best that can be done is to match the performance. This is however
trivially achieved by simply performing exactly the same process.

I suppose that there is a slight problem with the wording however. In the
case where the transform is a null transform, then the encoders will be the
same. It might be better rephrased without the word 'other':

    There will in theory always exist an encoder such that video
    compressed by that encoder will always match or outperform
    any video derived solely from the original and compressed by
    any encoder into the same bandwidth, provided that both are
    to be displayed on identical displays.

Chris Key


DAB sounds worse than FM

unread,
Feb 13, 2005, 1:19:29 PM2/13/05
to
Andrew Hodgkinson wrote:
> Christopher Key wrote:
>
>> 1080p compressed by some theoretical codec will always match or
>> outperform 1080i compressed into the same bandwidth.
>
> I actually disagreed with that for a while before I added some
> qualifiers in my head. Would I be correct in rephrasing what you
> said as follows?
>
> There can in theory exist an encoder, such that 1080p compressed
> by that encoder will always match or outperform 1080i compressed
> by that encoder into the same bandwidth.


There's an infinite number of algorithms that haven't been invented, so
yes, *in theory* that would be right.

But the goal-posts seem to have moved onto another pitch here. Let's
just repeat what Dave Farrance said originally:

"Me: >So, there may be arguments about the quality of i/p conversion for
>1080i, but as far as resolution and picture sharpness are concerned
>then 1080i is the clear winner, and by some margin.

Dave Farrance: Except that all the stuff you quoted predated advanced
codecs that can compress as well as interlacing, only with better
quality. Advanced codecs can't handle pre-interlaced video very
efficiently. A better comparison would be between 1080i and 1080p, when
compressed into the same bandwidth with an advanced codec."

That speaks for itself. And he then had the cheek to try and make out
that he can prove that the current advanced video codecs provide the
same level of picture quality for 1080p as they do for 1080i for the
same bit rate!:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

and his proof merely consisted of re-drawing the block diagram where the
interlacer and codec are combined, and this somehow proved that the
current advanced video codecs can perform in the way he suggests.


> This is because in order to compare the results of compression of
> 1080i versus 1080p by the encoder we'd have to use the same source
> material. That source material would logically be 1080p, as 1080i is
> of a lower resolution. The creation of the 1080i stream, by "pure
> interlacing" of the 1080p source, can be considered part of a lossy
> encoding mechanism in and of itself.


Yes, but throwing away half the lines does not consume *any* bits, and
there lies the problem. Interlacing discards 50% of the lines at zero
bit cost, whereas progressive keeps them all. So, for a given-sized
moving object, interlacing has twice as many bits available to encode it
as progressive does. Bit rate is directly related to picture quality, so
the picture quality of progressive goes down for a given resolution.


> But there may exist an encoding
> method that, shock horror, has a better way than interlacing of
> squashing the 1080p source data into the available bandwidth.


Shannon proved that codes exist that allow the information transmission
rate to be arbitrarily close to Shannon's channel capacity. That was
about 60 years ago, and we still haven't found one yet.....


> Note that this theoretical encoder is certainly *not* MPEG 2, but
> *may* be H264 (I've not tried the latter, but have tried the former).


Here's the hypothetical problem that I asked Dave Farrance to comment
on, but nobody has tried to provide any explanation as to how the
problems with this scheme can be resolved. So rather than debating
hypothetical codecs that might not appear for 1, 2 or more decades, I
think it would be better to consider H.264, because we will be using
it for the next decade or so once we actually start using it. (BTW, the
sarcasm isn't aimed at you; it was aimed at Dave Farrance, because he
simply would not provide any details about why his scheme provided
performance improvements that the best minds in video coding couldn't
manage.)

"Here's a hypothetical problem:

H.264 consumes the vast majority of its bit rate by encoding motion
vectors and the transformed and quantised prediction residuals -- i.e.
it consumes the vast majority of its bit rate due to objects moving. Say
you have a block of 32 x 16 pixels that all move, say, upwards by 20
lines from one frame to the next, and you have to encode this movement.
Assuming that the encoder chooses not to split these 2 macroblocks (a
macroblock is a block of 16 x 16 samples) into sub-macroblocks it'll be
encoded as follows:

Interlaced
-----------

The 16 pixels in each row for all the even numbered lines would be
combined into a macroblock; the motion vector will be calculated; the
prediction will be subtracted from the reference pixels; the prediction
residual (the result after subtracting the prediction) will be
transformed using a DCT, quantised and variable length coded.

Then 20 ms later the same happens for the odd-numbered lines.

So, in 20 ms (from time = 0 to time = 20 ms), 2 motion vectors have been
encoded and 2 macroblocks of prediction residuals have been encoded.


Progressive
------------

The 16 pixels in the first 16 rows would be combined into a macroblock;
the motion vector will be calculated; the prediction will be subtracted
from the reference pixels; the prediction residual (the result after
subtracting the prediction) will be transformed using a DCT, quantised
and variable length coded.

Then the same happens for the macroblock containing pixels in the 17th
to 32nd rows.

Then 20 ms later the same process occurs for both macroblocks.

So, in 20 ms (from time = 0 to time = 20 ms), 4 motion vectors have been
encoded and 4 macroblocks of prediction residuals have been encoded.


Differences
-----------

There are 4 motion vectors that need to be calculated for progressive
and only 2 motion vectors for interlaced.

There are 4 macroblock prediction residuals that need to be encoded for
progressive and only 2 macroblock prediction residuals for interlaced.

So, all else being equal, interlaced requires half the bit rate compared
to progressive.

And your assumption is that interlaced saves no bit rate whatsoever even
for the same 1920x1080 resolution.


QUESTION:

Please explain how your scheme improves compression-efficiency by a
factor of 2 while not reducing quality.

In other words: how does your scheme achieve the Holy Grail of video
coding when the best brains in video coding have failed?

When answering, bear in mind that after the prediction residuals have
been DCT-transformed they are losslessly (entropy) encoded, and the same
lossless encoding is applied identically to both interlaced and
progressive macroblocks -- because the lossless encoder doesn't have a
clue whether the source was interlaced or progressive.

Also bear in mind that any other post-DCT transformation processes will
apply identically to both interlace- and progressive-sourced streams.

Also bear in mind the following fact:

http://www.vcodex.com/h264_transform.pdf

page 4:

"The wide range of quantizer step sizes makes it possible for an encoder
to accurately and flexibly control the trade-off between bit rate and
quality."

and halving the post-quantizer bit rate will drastically degrade the
picture quality."
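
To make the hypothetical concrete, here is a toy Matlab sketch of one
macroblock going through the pipeline described above, plus the unit
counts for the 20 ms window. (dct2 is from the Image Processing Toolbox,
H.264 really uses a 4x4 integer transform rather than a floating-point
DCT, and all the numbers are illustrative -- this is a sketch of the
principle only.)

% One macroblock through the hybrid coding pipeline (principle only):
current    = rand(16);                 % 16x16 luma macroblock being coded
prediction = rand(16);                 % motion-compensated prediction
residual   = current - prediction;     % prediction residual
coeffs     = dct2(residual);           % transform (H.264: 4x4 integer DCT)
levels     = round(coeffs / 8);        % quantised levels -> entropy coder

% Units coded in one 20 ms window for the 16x32-pixel moving region:
mv_int  = 2;  mb_int  = 2;   % interlaced: 1 macroblock per field x 2 fields
mv_prog = 4;  mb_prog = 4;   % progressive: 2 macroblocks per frame x 2 frames
fprintf('progressive codes %dx the vectors and residual blocks\n', ...
        mv_prog / mv_int)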

DAB sounds worse than FM

unread,
Feb 13, 2005, 1:37:16 PM2/13/05
to
davidr...@postmaster.co.uk wrote:
> DAB sounds worse than FM wrote:
>
>> BTW, my qualifications will be superior to yours, which implies that
>> I am very likely to be superior to you at reading comprehension.
>> Hope that helps.
>
> I'm sure anyone who doesn't know you is never going to believe that
> you have "two first class degrees and a masters" from the evidence
> they see on here.
>
> Anyone with an ounce of tact, diplomacy, intelligence, understanding,
> compassion, respect, humility, or even kindness is unlikely to write
> what you've just written.
>
> Which begs the question: why do you do it?


That's simple: I only ever do it when someone is calling me thick.
Here's some of Kevin Bracey's comments in this thread so far:

"Poor Steve's not too hot on the reading comprehension."

"Just to spell it out for Steve, what with the reading comprehension
difficulties"

"Really, this is pretty basic stuff, and I'm amazed you're finding it so
hard
to comprehend what David Robinson is saying. I'm mainly attempting this
as a
personal test of my technical writing skills - if I can get DSWTFM to
understand this, then I'm sorted."

"Just to spell it out for Steve, what with the reading comprehension
difficulties"

If you think I should just allow Kevin Bracey to patronise me whenever
he feels like it, then you have got another thing coming.


> We seem to have gone a long way from interlacing. If you understand
> what Dave was originally trying to say with his "draw a box around the
> codec and interlacing", what's your answer to it?


Errrrrrrrrrrrrrrrrrrrrrrm, do you not bother to read my fking posts?

Here's some of the answers I've given to his "scheme":

"Here is your claim:

"It is trivial to prove that in the world of advanced codecs and
progressive displays that there is no bitrate reduction gained from
interlacing."

So, all I need to do is to provide a single counter-example to prove
that your claim is false. So here it is:

You specifically claim that there "is no bitrate reduction gained from
interlacing". So, comparing 1080i25 with 1080p50:

Because the H.264 codec consumes the vast majority of its bit rate
encoding motion vectors and macroblock prediction residuals, and because
any moving object covering a given number of pixels requires twice as
many macroblock prediction residuals and motion vectors for 1080p as for
1080i (simply because an object's height in pixels within any given
interlaced field is half its height in a progressive frame), 1080i saves
50% of the bit rate of 1080p, all else being equal.

If you read this introductory article on the new H.264/AVC video codec
(this is the advanced codec you're referring to, BTW, even though you
probably don't even know its name):

http://www.ebu.ch/trev_293-schaefer.pdf (543 KB)

it says this on page 1:

"The VCL design - as in any prior ITU-T and ISO/IEC JTC1 standard since
H.261 [2] - follows the so-called block-based hybrid video-coding
approach. The basic source-coding algorithm is a hybrid of inter-picture
prediction, to exploit the temporal statistical dependencies, and
transform coding of the prediction residual to exploit the spatial
statistical dependencies."

And on here:

http://www.vcodex.com/h264_overview.pdf

it says on page 1:

"The basic functional elements (prediction, transform, quantization,
entropy encoding) are little different from previous standards (MPEG1,
MPEG2, MPEG4, H.261, H.263); the important changes in H.264 occur in the
details of each functional element."

So, basically, the new advanced video codec that you're referring to
works in essentially the same manner as the MPEG-2 codec; its efficiency
improvements are due to:

"There is no single coding element in the VCL that provides the majority
of the dramatic improvement in compression efficiency, in relation to
prior video coding standards. Rather, it is the plurality of smaller
improvements that add up to the significant gain."

And if there was a significant difference in the handling of interlaced
and progressive images, then you really would expect that article about
H.264 in the EBU Tech Review to mention it. Searching that document for
the terms: "interlace", "interlaced" or "interlacing" produces 2 "hits"
for the word "interlaced":

"Technical overview of H.264/AVC

The H.264/AVC design [2] supports the coding of video (in 4:2:0 chroma
format) that contains either progressive or interlaced frames, which may
be mixed together in the same sequence. Generally, a frame of video
contains two interleaved fields, the top and the bottom field. The two
fields of an interlaced frame, which are separated in time by a field
period (half the time of a frame period), may be coded separately as two
field pictures or together as a frame picture. A progressive frame
should always be coded as a single frame picture; however, it is still
considered to consist of two fields at the same instant in time."

Where's the mention of this big improvement in coding efficiency?

Again, quoting from:

http://www.vcodex.com/h264_overview.pdf

"the important changes in H.264 occur in the details of each functional
element"

But these functional elements work IDENTICALLY on macroblocks whether
the pixels are from an interlaced field or a progressive frame. To these
functional elements, a macroblock is simply a matrix of numbers. Nothing
more, nothing less.

An example of the improvement in efficiency is the use of arithmetic
coding instead of Huffman coding for the lossless entropy coding.
Arithmetic coding is more efficient than Huffman coding. And that's it.
The arithmetic coding isn't more efficient for progressive than for
interlaced, because it's just a routine for encoding data efficiently.
It does not and can not differentiate between whether the numbers it is
encoding are from a progressive or interlaced source."
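
To illustrate the kind of gain involved, here's a toy Matlab sketch with
a made-up three-symbol source, comparing the entropy bound (which
arithmetic coding approaches) against the average length of a Huffman
code for the same source:

% A skewed 3-symbol source, chosen purely for illustration.
p      = [0.8 0.15 0.05];
H      = -sum(p .* log2(p));    % entropy: ~0.88 bits/symbol
L_huff = sum(p .* [1 2 2]);     % Huffman code lengths 1, 2, 2: 1.20 bits/symbol
fprintf('entropy %.2f vs Huffman %.2f bits/symbol\n', H, L_huff)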

-----------------------------------------------------------------------------

"I've just had a look at your post, and you do not mention the image
dimensions anywhere, i.e. you don't say whether you're comparing 1080i
with 1080p or 1080i with 720p.

If you compare 1080i with 1080p then, all else being equal (the same
advanced encoder set up identically), the bit rate saving for interlaced
will be approximately a factor of 2.

If you compare 1080i with 720p then, due to the difference in image
size, there is a bit rate saving for 720p because fewer pixels have to
be encoded:

1920 x 1080 x 25 = 51.84 Mpixels/s

1280 x 720 x 50 = 46.08 Mpixels/s

51.84 / 46.08 = 1.125

i.e. 1080i requires approximately a 12.5% increase in bit rate.

But you're comparing apples with oranges, because 1080i has 2.25 times
as many pixels on the screen as 720p -- that's where you get the 12.5%,
because 2.25 / 2 = 1.125."
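
(The same arithmetic as a quick Matlab check:)

% Raw pixel rates of the two broadcast formats.
r_1080i25 = 1920 * 1080 * 25;   % 51.84 Mpixels/s
r_720p50  = 1280 * 720 * 50;    % 46.08 Mpixels/s
fprintf('1080i/720p pixel-rate ratio = %.3f\n', r_1080i25 / r_720p50)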

----------------------------------------------------------------------------

"It would help if you could clarify exactly how and why there would be a
halving of the bit rate requirement for 1080p. Until you do, then all
you've achieved is posting something along the lines of a perpetual
motion machine."

---------------------------------------------------------------------------

"Firstly, here's a dictionary definition of the word 'proof':

http://www.onelook.com/?w=proof&ls=a

"noun: a formal series of statements showing that if one thing is true
something else necessarily follows from it"

Your drawing a box around the interlacer and codec and renaming that a
codec, and then saying that "it reduces conceptually to a progressive
system", does not constitute proof that progressive-scanned video can
provide the same level of quality, at the same bit rate and the same
resolution, as an interlaced system.

I'm sorry, but it really does not constitute ANY proof whatsoever.

If you want to try and prove this assertion then you need to provide
some evidence, or logical arguments, showing how and why the combined
interlacer and codec performs so well."

-----------------------------------------------------------------------------

and there are other posts which I can't be arsed copying.

I also posted this hypothetical problem that NOBODY has bothered to
address:

[The full "Interlaced / Progressive / Differences / QUESTION" working is
snipped here -- it is quoted in full earlier in this post, along with
the quote from http://www.vcodex.com/h264_transform.pdf, page 4.]

I await with bated breath the details of your historic coding scheme."


And you have the cheek to say I haven't fking answered his half-baked
scheme???

DAB sounds worse than FM

unread,
Feb 13, 2005, 1:48:52 PM2/13/05
to
Dave Farrance wrote:
> "DAB sounds worse than FM" <dab...@low.quality> wrote:
>
>> Dave Farrance wrote:
>>> Please understand that I'm not trying to be rude here, but would I
>>> be right in assuming that all of your knowledge is academic, and
>>> that you've never actually done any engineering work in this field?
>>
>> Correct. But at least the academic work that I've done gives me a
>> good foundation to understand digital video, whereas I do not
>> believe you have such knowledge. The reason I say this is that you
>> do not seem able to provide any details about your scheme other than
>> to say that re-drawing a block diagram solves all ills.
>
> It was really your misunderstanding of what I was saying that left me
> with no other conclusion than the fact that you hadn't developed the
> mind-set to do practical work in the field. Sorry, but there it is.


I couldn't care less what mind-set you think I do or don't have. This is
what you said at the start of this post:

"Me: > So, there may be arguments about the quality of i/p conversion
for
> 1080i, but as far as resolution and picture sharpness are concerned
> then
> 1080i is the clear winner, and by some margin.

You: Except that all the stuff you quoted predated advanced codecs that

can
compress as well as interlacing, only with better quality. Advanced
codecs can't handle pre-interlaced video very efficiently. A better
comparison would be between 1080i and 1080p, when compressed into the
same bandwidth with an advanced codec."

But advanced codecs cannot compress as well as interlacing, only with
better quality, for a given resolution. That was your assertion, and it
was wrong. That should have been the end of the story, but you've
continued to talk nonsense throughout without giving a shred of
information to back up your assertion that these advanced video codecs
work the way you say they do or perform as you seem to think they will.

>> I repeat: you are wrong. But I'm also open-minded to new ideas. So,
>> if you think your scheme can do what you think it can do, then you
>> should be able to describe how it can be done. Is that too much
>> trouble?
>
> The others posting in this thread did their best to explain it to you,
> and still you didn't understand it.


I don't have a problem understanding things; I have a problem accepting
claims that you have made that I consider to be totally and utterly
incorrect.


> So as I've already said, any
> further attempt would be wasted. So good luck with whatever your
> actual field of work is, and I'm out of this thread.


That would be very convenient. If you had any confidence in your
convictions, then you wouldn't duck out of this thread.


--

davidr...@postmaster.co.uk

unread,
Feb 14, 2005, 5:23:13 AM2/14/05
to
I'll tell you what Steve - you've got a very high boredom threshold
(that's not meant to be an insult - I'm suggesting everyone else has
got bored and you're still going strong!).

DAB sounds worse than FM wrote:

> davidr...@postmaster.co.uk wrote:

> > Which begs the question: why do you do it?
>
> That's simple: I only ever do it when someone is calling me thick.
> Here's some of Kevin Bracey's comments in this thread so far:
>
> "Poor Steve's not too hot on the reading comprehension."

[etc.]

but you didn't understand what he said!

> Here's some of the answers I've given to his "scheme":
>
> "Here is your claim:
>
> "It is trivial to prove that in the world of advanced codecs and
> progressive displays that there is no bitrate reduction gained from
> interlacing."
>
> So, all I need to do is to provide a single counter-example to prove
> that your claim is false. So here it is:

It's not a very well-worded sentence you've picked to argue against. Or
maybe it's too well worded!

He's showing that any "advanced" codec should not gain anything from
independent interlacing _because_, at the simplest level, you could let
the codec do the interlacing itself (step 1), and then _it_ could
decide when it was better to interlace and when not to (step 2).
Finally, the approach could be further refined until you could hardly
call it interlacing at all (step 3).

This is the concept. What follows is that any codec which gains from
blind interlacing being carried out separately simply isn't "advanced",
if you want to use the word that way.
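
Put crudely, the whole argument is just min(a, b) <= a. As a toy Matlab
sketch, with completely made-up costs:

% A codec free to choose per region cannot do worse than one forced to
% interlace: it always takes the cheaper option (illustrative numbers).
cost_forced_interlace = 100;   % bits to code a region after blind interlacing
cost_progressive      = 120;   % bits to code the same region progressively
cost_adaptive = min(cost_forced_interlace, cost_progressive);
fprintf('adaptive %d <= forced interlace %d\n', ...
        cost_adaptive, cost_forced_interlace)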

However, your example is interesting...

> I also posted this hypothetical problem that NOBODY has bothered to
> address:

(Well, you know, some of us take days off! ;-) )

That's true in that scenario. However, note that the progressive
version uses twice the bitrate _because_ it sends twice as much
information. That means (with twice the information!) you're actually
sending a higher-quality image (and if you are not, then that
information should be thrown away instead - otherwise just how stupid is
this codec?!). The question, surely, is whether we can do anything
within a video codec that will allow us to at least match the interlace
bitrate and quality without actually using interlacing, accepting that
the interlaced version really does send less information than the (in
your example, higher-bitrate) progressive version.

For separate interlacing to be useful, you either need (a) a stupid
codec, or (b) an "advanced" codec where there is _never_ anything
better you can do than to interlace.

But if you find (b), then I'd suggest you're not looking at an advanced
codec!

Take your above example. How shall we solve it? OK - trivial idea -
halve the vertical resolution of the motion vectors and prediction
residuals in the progressive version. That will match the interlaced
quality and bitrate perfectly, won't it? Or - another trivial idea -
don't send the prediction residuals or motion vectors for every other
frame, and interpolate the motion vectors you do have in the decoder.
That might be a good approach for some content, and a terrible approach
for other content. If it's just a smooth pan, it'll work perfectly -
and better than the interlaced version unless you have a _perfect_
deinterlacer.
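
Here's that second idea as a toy Matlab sketch (assuming a perfectly
smooth pan and one-dimensional motion vectors, purely for illustration):

% Send motion vectors for even frames only; the decoder interpolates.
mv_sent = [0 4 8 12];                     % vectors for frames 0, 2, 4, 6
mv_all  = interp1(0:2:6, mv_sent, 0:6);   % decoder fills in frames 1, 3, 5
disp(mv_all)                              % 0 2 4 6 8 10 12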


Does this make any sense? Do you realise that, by definition, an
advanced codec cannot benefit from separate interlacing? Even a
quite stupid codec would benefit from being able to switch between
interlaced and progressive on a content-dependent basis, and a more
advanced codec could use all kinds of tricks (even interlacing - for
parts of the frame!!!) to send all the information as efficiently as
possible.

Forcing the codec to use interlacing all the time (and only letting it
"see" the interlaced version) must be a handicap - unless the codec is
too stupid to do better than interlacing.

> QUESTION:
>
> Please explain how your scheme improves compression-efficiency by a
> factor of 2 while not reducing quality.

Did I pass? Hint: the answer isn't No! ;-) Hint 2: (in case you missed
it) the fully progressive version you propose uses twice the bitrate
because it sends twice the information - which is not necessary to
match the quality of the "half the information already thrown away"
interlaced version! So all you are saying is that, the way you describe
an "advanced" codec, it is not able to throw away half the information
more efficiently than interlacing does beforehand. This says something
about interlacing, but more about the codec!

> I await with bated breath the details of your historic coding scheme.

Sorry, I can't claim to have contributed to any coding schemes,
historic or otherwise.

> And you have the cheek to say I haven't fking answered his half-baked
> scheme???

It was more a question of how the fundamentals of electronic design and
circuit theory can be wrong. Remember, Dave clarified the statement
that an optimal codec couldn't be worse than interlace+existing codec.
It could well be better. As explained above, it hinges on one question:
Is it _ever_ better to do something with the video data other than to
interlace it?

You appear to be in a minority of one when answering that question!

If it is ever better to do something other than interlace, you had
better give the codec the progressive version and let it decide what to
do!

If the codec cannot decide what to do, then it is not a very good
codec!

Cheers,
David.

DAB sounds worse than FM

unread,
Feb 14, 2005, 9:00:27 AM2/14/05
to
Stevie sounds worse than AM wrote:


http://www.bartleby.com/59/3/imitationist.html

Kevin Bracey

unread,
Feb 14, 2005, 8:05:08 AM2/14/05
to
In message <IK6Pd.1760$ss....@newsfe1-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> Kevin Bracey wrote:
> >
> > The one where you totally ignored the issue of alternating line
> > patterns and went off on a tangent about screen measurements? Or did
> > I miss one where you've addressed this?
>
> I'd happily respond if you could describe WTF you're going on about
> better than:
>
> "If you define twitter only as the vertical judder caused by a single
> sharp
> vertical light->dark transition, then maybe it's true. Example:
>
> field 1 field 2
> ********* *********
> ********* *********
> ********* ********* x
> --------- ----> x ---------
> --------- ---------
> --------- ---------
>
>
> The "x" marks the location of the apparent boundary between the regions,
> which bounces up and down at 25Hz."
>
> Are you referring to the same effect as on the images of the Park Run on
> here:
>
> http://www.ebu.ch/trev_301-editorial.html
>
> If so, that is not twitter.

I assume you mean Figure 3? That's combing on a moving image, and no, that's
not what I'm talking about.

> If you're going on about a vertical light to dark transition that is not
> moving, then why would anything change between fields, e.g.:
>
> vertical line:
>
> |
> |
>
> If the line doesn't move then you will see:
>
> |
> |
>
> That is, no twitter, no artefacts whatsoever.

Okay, terminology problem here. By "vertical light->dark transition" I meant
a light area vertically above a dark area - the boundary being horizontal, as
in the diagram above. Sorry if that wasn't clear. Refer back to my diagram
above. If the diagram isn't clear, maybe you're using a proportional font -
select a monospaced one.

The left hand side shows a small pixel grid of the whole (progressive) frame.
It is a stationary image, with the top half being light and the bottom half
being dark. The transition is sharp and unfiltered. When interlacing occurs,
the picture will be sent as two alternating fields, shown on the right. As
the fields alternate, the boundary between the light and the dark will bounce
up and down. As the line pitch decreases so will this effect decrease.
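
A quick Matlab sketch of the same worst case, using a six-line frame
(the sizes are illustrative only):

% Six-line stationary frame, light over dark, split into two fields.
frame  = [ones(3, 9); zeros(3, 9)];   % rows 1-3 light, rows 4-6 dark
field1 = frame(1:2:end, :);           % source lines 1, 3, 5
field2 = frame(2:2:end, :);           % source lines 2, 4, 6
% Last light row in each field as displayed -- the edge alternates
% between two positions, one line pitch apart, at the field rate:
fprintf('field 1 edge after row %d, field 2 edge after row %d\n', ...
        find(field1(:, 1), 1, 'last'), find(field2(:, 1), 1, 'last'))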

Now, I've dug out my Poynton.

If you look at Poynton's definition of twitter in the Raster Scanning
chapter, he describes both this problem of "extremely rapid up-and-down
motion" and the worse one of flicker over a large area. His Figure 6.6 is
showing the same thing I was trying to show with my second diagram in the
previous post. In his test scene, the first field picks out all the black
lines, the second field picks out all the white lines. Thus the resulting
image will flash between fields of black and white.

I was trying to point out this particular example as showing up the logical
fallacy in your belief that the 0.7 filter factor can be reduced for images
with more lines. That problem of alternating line patterns remains the same
whether it's a 20 line image like his diagram or a 1080 line image like HDTV.
The same level of inter-line filtering is required to eliminate that effect.
This was an attempt at proof by contradiction - showing an effect that is
truly independent of number of lines.

I now think I can do better. I can show the mathematical fallacy in your
argument.

The line visibility of a 20-line image is vastly higher than that of a
1080-line image - the lines are fatter. And thus the visibility of the vertical judder
of a single boundary is also higher. But if we apply the "same" 0.7 filter to
both - you're actually filtering the 20-line image much more in spatial
terms. The filter smooths out detail smaller than 1/14th of the screen height
for the 20-line picture and detail smaller than 1/756th of the screen height
for the 1080-line picture.

So the apparent inconsistency between Poynton's statement "Twitter and
scan-line visibility are inversely proportional to the count of image rows"
and the "fixed" 0.7 filter factor is resolved. When you filter a 1080 line
image by 0.7 you're applying a much finer filter (low pass below Height/756)
than if you filter a 576 line image by 0.7 (low pass below Height/403). Your
twitter visibility is inversely proportional, BUT SO IS THE 0.7 FILTER
VISIBILITY.
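
In Matlab terms (the same arithmetic as above):

% Spatial size of the "same" 0.7 filter at two different line counts.
lines = [20 1080];
kept  = 0.7 * lines;   % resolvable lines left after filtering: 14 and 756
fprintf('finest surviving detail: 1/%d and 1/%d of picture height\n', ...
        kept(1), kept(2))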

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd Tel: +44 (0) 1223 503464
182-190 Newmarket Road Fax: +44 (0) 1728 727430
Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/

DAB sounds worse than FM

unread,
Feb 14, 2005, 7:50:53 AM2/14/05
to
davidr...@postmaster.co.uk wrote:
> I'll tell you what Steve - you've got a very high boredom threshold
> (that's not meant to be an insult - I'm suggesting everyone else has
> got bored and you're still going strong!).


I'm bored too. And spending hours arguing over such things is not a
constructive use of my time, so I'm out of here.

Stevie sounds worse than AM

unread,
Feb 14, 2005, 7:54:17 AM2/14/05
to
davidr...@postmaster.co.uk wrote

> I'll tell you what Steve - you've got a very high boredom threshold
> (that's not meant to be an insult - I'm suggesting everyone else has
> got bored and you're still going strong!).

Heeheehee

Stevie IS Ferrous Cranus

http://redwing.hutman.net/%7Emreed/warriorshtm/ferouscranus.htm

DAB sounds worse than FM

unread,
Feb 14, 2005, 12:09:42 PM2/14/05
to


Okay.


>> If you're going on about a vertical light to dark transition that is
>> not moving, then why would anything change between fields, e.g.:
>>
>> vertical line:
>>
>> |
>> |
>>
>> If the line doesn't move then you will see:
>>
>> |
>> |
>>
>> That is, no twitter, no artefacts whatsoever.
>
> Okay, terminology problem here. By "vertical light->dark transition"
> I meant a light area vertically above a dark area - the boundary
> being horizontal, as in the diagram above. Sorry if that wasn't
> clear. Refer back to my diagram above. If the diagram isn't clear,
> maybe you're using a proportional font - select a monospaced one.


Yes, I was viewing it in OE. On here it is clear:

http://tinyurl.com/64kjc

So, you've got 6 lines. If the light-dark transition isn't moving, then
you will always see:

*********
*********
*********
---------
---------
---------


> The left hand side shows a small pixel grid of the whole
> (progressive) frame. It is a stationary image, with the top half
> being light and the bottom half being dark. The transition is sharp
> and unfiltered.


But it's unfiltered. In reality, a vertical lowpass filter is used.


> When interlacing occurs, the picture will be sent as
> two alternating fields, shown on the right. As the fields alternate,
> the boundary between the light and the dark will bounce up and down.
> As the line pitch decreases so will this effect decrease.


Exactly my point.


> Now, I've dug out my Poynton.
>
> If you look at Poynton's definition of twitter in the Raster Scanning
> chapter, he describes both this problem of "extremely rapid up-and
> down motion" and the worse one of flicker over a large area.


Yes.


I started a thread on sci.image.processing:

http://tinyurl.com/5vuha

and I'm afraid they disagree with you, with the assumption being that
the screen occupies the same field of vision for SD as for HD -- hence
my calculation of pixel density, which you accused of being
"irrelevant".

Basically, it's all very well giving these extreme examples, but in
reality when do you ever see alternating black and white lines? And if
there were alternating black and white lines that were each, say, the
height of 2 scan-lines, could the human visual system actually
resolve such high spatial frequencies? I think the answer is probably
not. If you've got Matlab, run this script:

% Build a 200x400 image of alternating black and white horizontal lines.
alt_lines = zeros(200, 400);
for i = 1:200
    if mod(i, 2) == 0
        alt_lines(i, :) = ones(1, 400);   % even rows: white
    else
        alt_lines(i, :) = zeros(1, 400);  % odd rows: black (already zero)
    end
end

imshow(alt_lines)                         % one screen pixel per sample
imwrite(alt_lines, 'alternating_lines.jpg', 'jpg')

If you haven't got Matlab, here's the JPEG the above script creates:

http://www.digitalradiotech.co.uk/images/alternating_lines.jpg

Looking at that image from a distance of about 2 feet on my 17" CRT
monitor (Trinitron tube, so good for detail) at a resolution of 1280x960
just shows a grey rectangle. At 1 foot away you can sort of see that the
image is made up of horizontal lines, but you would definitely say that
they're all grey lines (unless you knew otherwise). And only when you're
very, very close to the image can you see that there may be alternating
lines, but it still looks overwhelmingly grey.

So, what would happen if there were alternating black and white lines on
HDTV scanned using interlacing? Unless you were watching the TV with
your head about 1 foot away from the screen then I think you'd just see
a big grey screen.

Kevin Bracey

unread,
Feb 14, 2005, 12:51:19 PM2/14/05
to
In message <qX4Qd.198$YO3...@newsfe4-gui.ntli.net>

"DAB sounds worse than FM" <dab...@low.quality> wrote:

> Kevin Bracey wrote:
> >
> > [Poynton's] Figure 6.6 is showing the same thing I was trying to show

You've only got 1 reply addressing that on there, and I believe he's missed
the point that the 0.7 filter is already a finer filter on an HDTV picture -
like the twitter, it's already been reduced in effect by the tighter line
pitch of HDTV. It would be an error to reduce its effect again.

So for HDTV you have twitter of smaller angular size, so you apply a low-pass
filter with a cutoff of smaller angular size, but that smaller angular size
is still 1.4 scanlines, because the scanlines are smaller.

I'm going to cross-post this to sci.image.processing so they can have
visibility of this discussion.

> http://www.digitalradiotech.co.uk/images/alternating_lines.jpg
>
> Looking at that image from a distance of about 2 feet on my 17" CRT
> monitor (Trinitron tube, so good for detail) at a resolution of 1280x960
> just shows a grey rectangle. At 1 foot away you can sort of see that the
> image is made up of horizontal lines, but you would definitely say that
> they're all grey lines (unless you knew otherwise). And only when you're
> very, very close to the image can you see that there may be alternating
> lines, but it still looks overwhelmingly grey.

Indeed, but that's a progressive display, so there's no problem.

> So, what would happen if there were alternating black and white lines on
> HDTV scanned using interlacing? Unless you were watching the TV with
> your head about 1 foot away from the screen then I think you'd just see
> a big grey screen.

No, you still don't get this. When the interlacing happens, in this worst
case, the black lines all end up on one field, the white lines all end up on
the other field. Thus the whole screen is white for 1/50 of a second, then
black for 1/50 of a second, repeated.

In this case the line pitch does not matter, because all that ever appears on
the screen is flat white or flat black. The screen never actually displays an
alternate black/white striped pattern, so the viewer's ability to discern any
such lines is irrelevant.
