
Star Wars II HDCAM explained (finally a definitive answer)


Michele Tavares

May 5, 2002, 1:10:30 AM
I hope this resolves a lot of crap for many people.

I attended the STAR WARS thing in Indianapolis today & got to ask a
blunt technical question of both JOHN KNOLL (inventor of Photoshop and
head visual FX supervisor) and the producer Rick McCallum.

The high def footage was heavily processed with a custom-made FILM
EMULATOR. This explains why the footage looks so much better than any
other High Def footage transferred to a 35mm print (e.g. "Ali"). We even
saw the 8 minute demo reel from ShoWest on the DLP Texas Instruments
projector. It looked phenomenal, and it had a 35mm film stock look to
it, which prompted the question.

They did in fact use a process of emulating several different film
stocks on the High Def footage - specifically to deal with DEPTH OF
FIELD LIMITATIONS ON THE CAMERA.

So the panavision lenses are not the end-all, be-all of the "look" of
Star Wars, but a high-end, proprietary film-look process involving
film grain, color changes, and blurring was the secret recipe to
ILM's latest masterpiece.
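
A minimal sketch of those three operations - blur, color shift, grain - in
Python/NumPy. This is purely illustrative and assumes nothing about ILM's
actual emulator; the function name and every parameter value here are made up:

import numpy as np
from scipy.ndimage import gaussian_filter

def film_look(rgb, grain_sigma=0.02, blur_sigma=0.8, lift=0.02, gamma=1.1):
    """Toy "film look": soften, bend the tone curve, then add grain.

    rgb: float image in [0, 1], shape (height, width, 3).
    All parameter values are arbitrary, for illustration only.
    """
    out = gaussian_filter(rgb, sigma=(blur_sigma, blur_sigma, 0))  # blur
    out = np.clip(lift + (1.0 - lift) * out, 0.0, 1.0) ** gamma    # color/tone shift
    out = out + np.random.normal(0.0, grain_sigma, out.shape)      # additive "grain"
    return np.clip(out, 0.0, 1.0)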

Personally, I wanted to know technically WHY it looked so good, but
the reality is that I think the STORY is more important than the
format it was shot on. This looks like a much better story than Phantom Menace.

- M
www.sonnyboo.com

Gary Eickmeier

May 5, 2002, 1:34:51 AM

Michele Tavares wrote:

> So the panavision lenses are not the end-all, be-all of the "look" of
> Star Wars, but a high-end, proprietary film-look process involving
> film grain, color changes, and blurring was the secret recipe to
> ILM's latest masterpiece.
>
> Personally, I wanted to know technically WHY it looked so good, but
> the reality is that I think the STORY is more important than the
> format it was shot on. This looks like a much better story than Phantom Menace.

So you add grain, change color, and blur the image to make it look
better? How does that work? Actually, I was hoping they wouldn't muck
it up like that, to see just how a movie would look in video vs film.
But if they think they want to emulate film instead of improving upon
it, then we're sunk. In any case, I am going to try like hell to see
it projected digitally at the Pleasure Island 24 near Disney. Should
be interesting.

Gary Eickmeier

Tony Raven

May 5, 2002, 4:21:45 AM

"Michele Tavares" <mtav...@tavtel.com> wrote in message
news:186b40a.02050...@posting.google.com...

>
> So the panavision lenses are not the end-all, be-all of the "look" of
> Star Wars, but a high-end, proprietary film-look process involving
> film grain, color changes, and blurring was the secret recipe to
> ILM's latest masterpiece.
>

I'm inspired and writing some software to add hiss, crackle and scratch
noises to my CD's when I play them. Boy will those CD's sound good when
I've finished.

Tony

fin...@lickin-good.com

May 5, 2002, 4:19:53 AM
Adobe Photoshop came from the Aldus Photostyler software originally
developed by ULEAD. Adobe has nothing to do with it beyond the fact that
they developed it further. According to ULEAD Knoll had nothing to do with
Photoshop in the beginning. Inventor NO, developer maybe YES. Get your
facts straight.


"Michele Tavares" <mtav...@tavtel.com> wrote in message
news:186b40a.02050...@posting.google.com...

Stephane Brousseau

May 5, 2002, 5:46:42 AM
<fin...@lickin-good.com> wrote in message news:JW5B8.206190...
> "Michele Tavares" <mtav...@tavtel.com> wrote in message
> news:186b40a.02050...@posting.google.com...
> > I hope this resolves a lot of crap for many people.
> >
> > I attended the STAR WARS thing in Indianapolis today & got to ask a
> > blunt technical question of both JOHN KNOLL (inventor of Photoshop and
> > head visual FX supervisor) and the producer Rick McCallum.
>
> Adobe Photoshop came from the Aldus Photostyler software originally
> developed by ULEAD. Adobe has nothing to do with it beyond the fact that
> they developed it further. According to ULEAD Knoll had nothing to do with
> Photoshop in the beginning. Inventor NO, developer maybe YES.
>
> Get your facts straight.

Photoshop was already at version 3.0
when Adobe acquired Aldus and killed PhotoStyler (1994)


http://www.storyphoto.com/multimedia/multimedia_photoshop.html

John Fromes

May 5, 2002, 11:56:41 AM
<fin...@lickin-good.com> wrote in message news:<JW5B8.206190$nc.27...@typhoon.tampabay.rr.com>...

> Adobe Photoshop came from the Aldus Photostyler software originally
> developed by ULEAD. Adobe has nothing to do with it beyond the fact that
> they developed it further. According to ULEAD Knoll had nothing to do with
> Photoshop in the beginning. Inventor NO, develorper mayber YES. Get your
> facts straight.

Why don't you get your head out of your ass & try to follow what the
point of the story was?

r.crowley

May 5, 2002, 2:04:34 PM
"Tony Raven" <ju...@raven-family.com> wrote

> I'm inspired and writing some software to add hiss, crackle and scratch
> noises to my CD's when I play them. Boy will those CD's sound good when
> I've finished.

If you want to "intercut" them seamlessly with old black vinyl, then that is
exactly right!

Which is the point of "processing" HD video when intercutting it with film,
isn't it?


Erik Harris

May 5, 2002, 3:41:25 PM

No kidding. After seeing his reply, I had to re-read the first paragraph of
the original post just to see how his reply had _ANY_ relevance at all to the
original post. Since the PhotoShop thing was just a minor point in the
beginning, I didn't even catch it on the first skim. :)

Erik Harris n$wsr$ader@$harrishom$.com
http://www.eharrishome.com ICQ: 2610172

To avoid Spam-bots, my address at the top is INCORRECT.
Change each dollar sign to an "e".

Erik Harris

May 5, 2002, 3:46:53 PM

Wasn't "When Clones Attack!" shot entirely on HD, and not intercut with film?
If so, why degrade the quality to make it look more like film? Similarly,
why limit yourself to 24fps when shooting in HD, when the eventual transition
to HD seemingly opens the door to better frame rates? I never understood the
film industry's insistence on sticking to that even when new standards come
along, given that 24fps makes for really crappy looking pans (watching full
screen pans is almost painful to my eyes in some movies).

I'm not trying to be a contrary smartass, here. I'm honestly curious what
the point is of trying to make a high tech production look like an old
film-based low frame rate production. It makes sense to drop down to 24fps
for the majority of the theaters still using film reels, but it seems like it
would make sense to shoot at a higher frame rate and display at a higher
frame rate in the few lucky theaters equipped with high end DLP systems, and
to keep the HD quality instead of making it look like film for those
theaters.

David McCall

May 5, 2002, 5:57:03 PM
Every time a new technology comes along, it seems to be important
to attempt to emulate an older technology, e.g. the harpsichord tried to
sound like a lute, a pipe organ tried to sound like an orchestra,
electronic organs (predecessors of today's synthesizers) tried to
sound like pipe organs, and synthesizers tried to sound like harpsichords.

Some people just have a hard time accepting new technology for
what it is, instead of just adopting it as a new technology.

David


Simon Bunker

May 5, 2002, 9:36:38 PM
I wonder if adding some noise would get rid of some aliasing artefacts
maybe? Any digital system always suffers from aliasing - could be that it is
too good.

I think 24P systems have become popular as it is a known standard and
people know how to deal with it. It's a good starting point at least
otherwise we get the crazy video systems (moving into MPEG too) with several
different frame rates.

Besides, these cameras can only do progressive at 24 - 30 frames per second
and then drop down to traditional interlaced (yuck) above that, don't they?

Simon

--
http://www.rendermania.com/
UIN 11123737

"Erik Harris" <n$wsr$ader@$harrishom$.com> wrote in message
news:qp2bdug2hnatlk42a...@4ax.com...

Michele Tavares

May 5, 2002, 11:38:48 PM
> Wasn't "When Clones Attack!" shot entirely on HD, and not intercut with film?
> If so, why degrade the quality to make it look more like film?

It is purely an AESTHETIC that the audience is used to & expects. Have
you ever noticed any deviation from the norm generally is not accepted?
By making the HD footage look like film, they have made the movie
look more like what audiences expect, not a VIDEO look.

And the Photoshop thing is what is quoted on a STARWARS.COM
documentary; sorry, I didn't do my homework before quoting someone
else.

It is my belief that some people are more interested in
self-gratification than adding anything to the argument at hand
(VIDEO vs. FILM).

- M.

www.sonnyboo.com

David Crossman

May 5, 2002, 5:13:24 PM
In an article, Erik Harris <n$wsr$ader@$harrishom$.com> wrote:

> I'm not trying to be a contrary smartass, here. I'm honestly
> curious what the point is of trying to make a high tech
> production look like an old film-based low frame rate production.
> It makes sense to drop down to 24fps for the majority of the
> theaters still using film reels, but it seems like it
> would make sense to shoot at a higher frame rate and display
> at a higher frame rate in the few lucky theaters equipped with
> high end DLP systems, and to keep the HD quality instead of making
> it look like film for those theaters.

I seem to remember the guy who 'invented' IMAX saying that he
had done some research that indicated a frame rate of between
50 and 60 fps was the optimum in delivering a maximum something
to audience's brains. (Can someone help me out here?)

--
David Crossman DGGB, LRPS
-------------------------
http://www.dareks.fsnet.co.uk/

Jon Carroll

May 6, 2002, 3:39:30 AM
On Sun, 5 May 2002 22:13:24 +0100, David Crossman
<da...@NOdareks.fsnet.co.ukyah> wrote:

>
>I seem to remember the guy who 'invented' IMAX saying that he
>had done some research that indicated a frame rate of between
>50 and 60 fps was the optimum in delivering a maximum something
>to audience's brains. (Can someone help me out here?)

Wasn't that Showscan?

--
-Jon Carroll
dra...@infi.net
"But HoHos are a vital part of my cognitive process!!"- Xander

Larry Wilson

May 6, 2002, 11:01:25 AM

"Michele Tavares" <mtav...@tavtel.com> wrote in message
news:186b40a.02050...@posting.google.com...

So basically, it's what I suspected; 24p looks good on screen--but only
after a whole lot of jiggery-pokery on Lucasfilm's part.

It'll be interesting if George Lucas' muscle is enough to get DLP systems
into more theatres. From what I understand, he won't let theatres have
Episode III unless they're DLP-equipped. At the rate DLP's rollout is going,
it's not going to happen.

Maybe he'll drop his expectations like he did with Episode I (theatres
getting it only had to have digital sound instead of both THX certification
AND digital sound).


Michele Tavares

May 6, 2002, 1:44:09 PM
> It'll be interesting if George Lucas' muscle is enough to get DLP systems
> into more theatres. From what I understand, he won't let theatres have
> Episode III unless they're DLP-equipped. At the rate DLP's rollout is going,
> it's not going to happen.

They are NOT forcing Episode III to be digital only. This was
addressed. They want more DLP projectors out there, but they are NOT
going to force anyone.

- M.
www.sonnyboo.com

Erik Harris

May 6, 2002, 5:25:19 PM
On 5 May 2002 20:38:48 -0700, mtav...@tavtel.com (Michele Tavares) wrote:

>> Wasn't "When Clones Attack!" shot entirely on HD, and not intercut with film?
>> If so, why degrade the quality to make it look more like film?

>It is purely an AESTHETIC that the audience is used to & expects.

With all due respect, that in itself is a lame excuse. The audience is used
to and expects this because that's all that's used, and all that's available
in movie theaters. It's silly to say that if something _better_ is available
that it's better to stick with the older, lower quality standards because of
audience expectations. With this rationale, CD's and DVD's would have never
been put into production, because the audience was used to and expected the
quality of audio and video cassette tapes. HDTV wouldn't be starting to
slowly catch on, because people are used to NTSC in the US (and PAL or SECAM
elsewhere). Microwave ovens would never have gone anywhere, because people
were used to reheating something taking 10 minutes or so, and boiling water
requiring a pan and stove.

There have been plenty of posts listing some good answers to my question, and
I appreciate them.

>And the Photoshop thing is what is quoted on a STARWARS.COM
>documentary; sorry, I didn't do my homework before quoting someone
>else.

>It is my belief that some people are more interested in
>self-gratification than adding anything to the argument at hand (VIDEO vs.
>FILM).

No need to loose your venom on me, Michele. I didn't take issue with the
PhotoShop comment. That was someone else. My question was an honest
question. I made that clear enough in my post that everyone except you
seemed to understand the spirit in which it was asked.

I don't care if you don't do your homework before quoting someone else, but I
DO care if you don't do your homework and start attacking ME because of
something someone ELSE said. If you can't keep track of who wrote what
(which is fine), don't go making ASSumptions that might possibly result in
you flaming someone who doesn't deserve it.

Erik Harris

May 6, 2002, 5:29:43 PM
On Mon, 06 May 2002 15:01:25 GMT, "Larry Wilson" <duk...@hotmail.com> wrote:

>It'll be interesting if George Lucas' muscle is enough to get DLP systems
>into more theatres. From what I understand, he won't let theatres have
>Episode III unless they're DLP-equipped. At the rate DLP's rollout is going,
>it's not going to happen.

Of course it's not gonna happen. He'll realize how much more money he can
make by not doing that, and he'll cave in. Similar rumors were flying about
Episode I only being released to THX certified theaters (Talk about a scam!
"I'll only let you pay to rent this reel if you first pay my company a
fortune to put our THX logo on it"). This rumor was everywhere, confirmed
from all sorts of independent sources, and then it just kinda disappeared.

>Maybe he'll drop his expectations like he did with Episode I (theatres
>getting it had to only have digital sound instead of both THX certification
>AND digital sound).

Did he even stick to his guns on the digital sound requirement? I'm pretty
sure it played in Ithaca, NY, which has two of the worst theaters I've ever
been in. They're hardly even equipped with _stereo_ sound, as it is. :) (I
was at school there when IV-VI were re-released, and after seeing IV in
Ithaca, I went to Rochester to see the other two). Maybe one of the theaters
there has digital sound but just really crappy speakers and got away with it,
I dunno..

Erik Harris

May 6, 2002, 5:17:37 PM
On Sun, 5 May 2002 22:13:24 +0100, David Crossman
<da...@NOdareks.fsnet.co.ukyah> wrote:

I've heard similar numbers, though I never heard it attributed to any one
person in particular. In any case, 24fps is definitely NOT optimum, as
anyone with half-decent eyes can tell when watching a scene pan in a movie
theater. :)

Mike Kujbida

May 6, 2002, 10:02:14 PM

"Jon Carroll" <dra...@infi.net> wrote in message
news:3cdd3321...@news.infi.net...

> On Sun, 5 May 2002 22:13:24 +0100, David Crossman
> <da...@NOdareks.fsnet.co.ukyah> wrote:
>
> >
> >I seem to remember the guy who 'invented' IMAX saying that he
> >had done some research that indicated a frame rate of between
> >50 and 60 fps was the optimum in delivering a maximum something
> >to audience's brains. (Can someone help me out here?)
>
> wasn't that showscan?
>

Douglas Trumbull was playing around with that at least 20 years ago. As I
recall, the initial problem was that it was too lifelike (is that possible?)
and people watching the movie were getting physically ill. The technology
is apparently still being used for things like amusement rides and
simulators. Do a net and newsgroup search for it and you'll get several
hits.

Mike Kujbida


David Mullen

May 6, 2002, 11:07:54 PM
> > > Wasn't "When Clones Attack!" shot entirely on HD, and not intercut with film?
> > > If so, why degrade the quality to make it look more like film?
> >
> > It is purely an AESTHETIC that the audience is used to & expects.
>
> With all due respect, that in itself is a lame excuse. The audience is used
> to and expects this because that's all that's used, and all that's available
> in movie theaters.

When you're talking about applying some sort of "film look" processing to a
digital image, even though this can be considered "degradation", to the
minds of those who prefer a film look the attitude isn't that they are
destroying the image but making it more aesthetically pleasing; therefore
it is not a degradation except in the most technical sense of the word.

Look, DP's use softer lenses or diffusion filters or smoke, etc. all the
time when shooting and don't really consider this "degradation" but the
application of photographic style in order to create the proper look and
mood that the story requires. The point of making movies is NOT to show off
a technology but to tell stories. Otherwise, all movies would have to look
like Kodak advertisements.

As for whether Lucas should have gone for a more direct, "raw" HD image
versus processing it to look more like film, one argument is simply that he
didn't want the look to drift TOO far from that established by earlier "Star
Wars" films.

I also don't consider such image processing in post to "improve" the HD
image to be "cheating" because no doubt every HD post house will be offering
similar processes to clients within a few years. All that ultimately
matters is if you like the end results anyway, not what tricks you employed
to get there.

David Mullen


Su

May 6, 2002, 11:52:33 PM
Bravo. Hear, hear.

Every effects house does something to the footage they work on. The fact that
someone at ILM gave the process a name "FILM EMULATOR" does not make it
super-special. Sure, it's proprietary, but pixels are pixels and there's only
so much you can do to them. Anybody can (and everybody will) figure out how to
do whatever John Knoll was talking about.

Su

MK70

May 7, 2002, 1:13:31 PM
Don't forget some of the old technology was indeed replaced by new
technology.

Black plastic record > Audio CD
Walkman > MP3s
VHS Camcorder > DV Camcorder
APS Film Camera > Digital Camera
CRT Monitor > LCD Monitor

Actually, I don't think HDCAM is very high-tech.
The CCD is actually the same kind of CCD that we are using in digital cameras.
The resolution of HDCAM is mediocre if you compare it with a digital camera.
Just a few years back, a Kodak digital camera was selling at $25,000 plus.
Now you can get a DC with the same resolution for less than $2,500.
In addition, CMOS is coming to challenge the quality of CCD.

If the price of HDCAM becomes so affordable,
who will still shoot a full feature on 35mm film?


Alan Boucek

May 7, 2002, 1:58:39 PM
Exactly,

We do this all the time. We do all sorts of things to make CG elements
fit into photographed plates, or to make all-CG shots cut seamlessly into
photographic shots. It's art, not science. Any experienced compositor
with a few readily available pieces of software can come up with a
pretty good film-look post process.

The point of post-processing HD images is to do the kinds of things
that DPs do with film cameras. Since you can't do most of that
in-camera with the current HD systems, you do it in post.


Su <s...@bake.org> wrote in message news:<3CD7500C...@bake.org>...

John S. Dyson

May 7, 2002, 3:50:01 PM

"MK70" <no...@nomail.com> wrote in message news:ab91hp$9h9$1...@news.ctimail.com...

> > Every time a new technology comes along, it seems to be important
> > to attempt to emulate an older technology, e.g. the harpsichord tried to
> > sound like a lute, a pipe organ tried to sound like an orchestra,
> > electronic organs (predecessors of today's synthesizers) tried to
> > sound like pipe organs, and synthesizers tried to sound like harpsichords.
> >
> > Some people just have a hard time accepting new technology for
> > what it is, instead of just adopting it as a new technology.
> >
> > David
> >
> Don't forget some of the old technology was indeed replaced by new
> technology.
>
> Black plastic record > Audio CD
> Walkman > MP3s
> VHS Camcorder > DV Camcorder
> APS Film Camera > Digital Camera
> CRT Monitor > LCD Monitor
>
> Actually, I don't think HDCAM is very high-tech.
> The CCD is actually the same kind of CCD that we are using in digital cameras.
>
Digital Camera CCDs don't have to clock NEARLY as fast as HDTV. They
are conceptually similar animals, but different. My own SDTV camera has
720+H pels by 986+V pels, and uses DSP stuff to improve the picture. I
tend to agree that something like a 1280x720 CCD shouldn't have to cost
$25K, but to be able to clock at 60/30Hz (or even 24Hz) is a different animal
than one that needs to clock out images at 5/sec.
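
Some back-of-envelope arithmetic on that readout-rate point (the resolution
and rates below are assumed round numbers, not specs from any real camera):

# A 1280x720 sensor clocked at video rates vs. a still camera's 5 frames/sec.
pixels = 1280 * 720
print(f"HDTV readout: {pixels * 60 / 1e6:.1f} Mpixels/sec")   # ~55.3
print(f"Still camera: {pixels * 5 / 1e6:.1f} Mpixels/sec")    # ~4.6, ~12x slower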

John

Jon Carroll

May 7, 2002, 5:19:47 PM

Except, the HDCAM cameras have three CCDs for better color quality,
and most if not all digital cameras don't, which means those digital
cameras have effectively 1/3rd of their pixel resolution in color
resolution.
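
That "1/3rd" figure follows from single-chip cameras sampling one primary per
photosite through a color mosaic filter. A quick illustration, assuming a
standard Bayer layout (each 2x2 tile = 2 green, 1 red, 1 blue) on a
hypothetical 1920x1080 chip:

sites = 1920 * 1080
green, red, blue = sites // 2, sites // 4, sites // 4  # Bayer mosaic split
print(f"green {green}, red {red}, blue {blue}")        # vs. 2073600 per channel
print("A 3-CCD camera samples all three primaries at every one of those sites.")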

Erkki Halkka

May 7, 2002, 6:45:08 PM
to Tony Raven
Tony Raven wrote:

> I'm inspired and writing some software to add hiss, crackle and scratch
> noises to my CD's when I play them. Boy will those CD's sound good when
> I've finished.

Don't forget to add distortion too.

CU
--
- Eki

http://www.akmp-program.fi
http://www.kolumbus.fi/erkki.halkka/

*************************************
******** warp9.to/soapwish **********
*************************************

Michele Tavares

May 7, 2002, 9:39:30 PM
> With all due respect, that in itself is a lame excuse.

It's also factual. Stuff shot on video, or more importantly anything
that LOOKS like it was shot on video, is perceived as "cheap". Why do
you think even sitcoms shot on crappy sound stages are still shot on
film? They get transferred to video & edited on video, and broadcast
in video, but they originate on FILM because of an AESTHETIC. Because
it is EXPECTED by the audience.

Similarly, they added a FILM-like look to Star Wars II. It is a
texture choice. It's no different than choosing a different type of
film stock because of the grain or coloration look. Why is this
"lame" ?


> I don't care if you don't do your homework before quoting someone else, but I
> DO care if you don't do your homework and start attacking ME because of
> something someone ELSE said. If you can't keep track of who wrote what
> (which is fine), don't go making ASSumptions that might possibly result in
> you flaming someone who doesn't deserve it.

Lower your caffeine intake. This is a public message board & not a
personal email to you.

- M.
www.sonnyboo.com

Su

May 7, 2002, 11:58:02 PM
Actually, three-camera sitcoms *are* generally shot on video.

su

David Mullen

May 8, 2002, 12:38:09 AM
Another point I want to make is that anyone here who is a cinematographer
knows that a post process will not turn bad photography into good
photography, so if "Attack of the Clones" looks halfway decent, don't think
that this was because it was "saved" by ILM. The quality of the lighting,
compositions, filtration, creative exposure, etc. combined with the basic
image qualities of Sony's 24P HD will be a bigger factor in the final
onscreen quality (or lack thereof) than anything ILM could do after-the-fact
to the image (other than replace it with huge gobs of CGI.)

David Mullen


S.E.E. Films

May 8, 2002, 12:55:31 AM
There are a couple of reasons why we try to make video look or react the way
film does.

1) Video tends to be much cleaner than film, and the depth of field is
considerably greater in video than film. As cinematographers we want to be
able to control where the eye looks. In some cases we do not want a deep
depth of field because we want the audience to focus their attention on
something very specific in the frame.
2) There is an obvious difference in the way film looks as compared to video,
and people consider film to be a realistic representation of life. When
color film was first introduced to filmmaking, filmmakers chose not to use
it because people associated color with fantasy and the black-and-white stuff
was associated with realism. Take a look at The Wizard of Oz: Kansas
(reality) is in black and white and Oz is in color.


"Erik Harris" <n$wsr$ader@$harrishom$.com> wrote in message
news:qp2bdug2hnatlk42a...@4ax.com...



Michael D. Most

May 8, 2002, 1:25:15 AM

"Su" <s...@bake.org> wrote in message news:3CD8A2DC...@bake.org...

> Actually, three-camera sitcoms *are* generally shot on video.

That's not the case. I don't have the exact numbers available, but for a
number of years (through most of the 90's to the present day) the vast
majority of multicamera (generally 4, not 3, although many people still call
them "3 camera") sitcoms, especially those on the 4 major networks, have
been shot on film - in fact, I can't think of a sitcom currently on CBS,
NBC, or ABC that isn't on film, with only one exception ("According To Jim"
on ABC, which switched to 24p HD video the middle of this season). That's
beginning to change a bit these days with the advent of 24p HD video, so I
imagine the balance will tip in favor of video over the next season or two.
Then again, I believe that the majority of all prime time television will be
shot on 24p HD video eventually. I give it another 2 seasons.

> > Why do
> > you think even sitcoms shot on crappy sound stages are still shot on
> > film?

What makes you think that multicamera sitcoms are shot on "crappy" stages?
In fact, they're shot on some of the nicest stages in town, especially
compared to many single camera shows that are shot in warehouses. Most of
the sitcoms are shot on "real" stages on the major film lots.

Mike Most

Jack of All Asses

May 8, 2002, 2:33:58 AM
On Sun, 5 May 2002 05:46:42 -0400, "Stephane Brousseau"
<so...@not.email.com> wrote:

> <fin...@lickin-good.com> wrote in message news:JW5B8.206190...
> > "Michele Tavares" <mtav...@tavtel.com> wrote in message
> > news:186b40a.02050...@posting.google.com...
> > > I hope this resolves a lot of crap for many people.
> > >
> > > I attended the STAR WARS thing in Indianapolis today & got to ask a
> > > blunt technical question of both JOHN KNOLL (inventor of Photoshop and
> > > head visual FX supervisor) and the producer Rick McCallum.
> >
> > Adobe Photoshop came from the Aldus Photostyler software originally
> > developed by ULEAD. Adobe has nothing to do with it beyond the fact that
> > they developed it further. According to ULEAD Knoll had nothing to do with
> > Photoshop in the beginning. Inventor NO, developer maybe YES.
> >
> > Get your facts straight.
>
> Photoshop was already at version 3.0
> when Adobe acquired Aldus and killed PhotoStyler (1994)
>
> http://www.storyphoto.com/multimedia/multimedia_photoshop.html

Photoshop was in Beta in 1963, running off of tape reels.
Didn't you see the photo of Oswald holding the rifle?

Su

May 8, 2002, 9:53:19 AM
Huh. I could've sworn I saw video cameras at WB the other day. Oh well. Do
they shoot on 16?

I also wanted to add that the decision to add some extra processing to the Star
Wars footage was certainly made BEFORE they shot it. I sort of doubt
they shot the whole thing and then decided "oh that looks like shit, what can
we do to save it". Rather, it was probably something more like "Well, if we
shoot HD we will save this much $$, and if we spend $ to treat the footage
we'll get something that is acceptable to all concerned."

Every production undergoes decisions of this nature. Movies have a finite
budget and schedule, and the more that Star Wars could save on film stock, the
more they'd have left over for...CG sets... or whatever.

Well anyway I am supposed to test the PanaSony camera in a week or two, so I
will be able to stop talking out the proverbial ass and quote y'all some actual
experience.

su

"Michael D. Most" wrote:

> \

Erkki Halkka

May 8, 2002, 4:25:51 PM
to Erik Harris
Erik Harris wrote:

> With all due respect, that in itself is a lame excuse. The audience is used
> to and expects this because that's all that's used, and all that's available
> in movie theaters.

Exactly. What's your point??

> It's silly to say that if something _better_ is available
> that it's better to stick with the older, lower quality standards because of
> audience expectations.

But it often is. An example from the TV world: video is technically
superior to film (when it comes to broadcast), and has been for some
while now, but all the higher end TV shows are done on film (or at least
on something that *looks* like film). Why? People connect the film look with
quality.

> With this rationale, CD's and DVD's would have never
> been put into production, because the audience was used to and expected the
> quality of audio and video cassette tapes.

Don't you remember what's going on with the audiophiles?? It's just
recently that the Hi-Fi folks are starting to accept digital audio.

Here's a LOOOOOONG mail from another thread from a long time ago in a far
away newsgroup debating the Film/Video issue (especially what's the
biggest difference between them when watching the stuff on TV):

Paste start:
*********************************
Lemme start by stating that even though what I write further down in
this message may give a different impression, I like the film look ;-)

Also, all I state later applies only to TV.

Sswitaj wrote:
> >I mean, even though, I'm almost sure that to get that awesome look of film,
> >takes more than just a difference in FPS; lighting, for one, can have a huge
> >impact on how things will look too (right?),
>
> Damn Straight.

Yep. But this (lighting and other production values) applies to both mediums
- it doesn't explain the difference in the looks.
The biggest difference IS the FPS.

> Conventional video has much worse resolution both spatially and in depth than
> 35mm film. Numbers vary, and people argue loudly, but figure that a frame of
> 35mm film contains maybe 8 times the usable data of a frame of broadcast
> quality video.

Of course, all that advantage is lost when transferring to video or DVD
or broadcast.

Actually, film is BY DEFINITION of poorer technical quality than video.
The 24 FPS frame rate actually means it has only 40% of the motion
information of video.

Also, film scanners aren't much more than fancy video cameras. So,
shooting film means converting formats, and generation loss which ALWAYS
causes loss of data. Film footage is usually color corrected so that
its color and contrast response is no longer linear. Technically
speaking this is a defect.

Okay, now, why is this technically inferior way of acquiring footage
perceived as the higher quality one?

Well, my educated guess is this - because theatrical movies are shot on
film. The "effort per second" on them is many times higher than on the
video stuff we're used to seeing.

So, we simply are used to the fact that film look equals quality.

The same thing applies to analogue audio recording - what people
perceive as "warmth" is actually noise, distortion and bad frequency
response.

BTW, here's a resolution test I did while discussing the same subject on
another group (after all, this IS one of my favourite subjects ;-)

http://www.akmp-program.fi/Eki/Temp/Resolution.html

As mentioned on the page, noise caused by tape generations is clearly
visible in the film example, as are the edge sharpening filters applied to
the footage.

> > I wonder, if video could've or would've turned out to produce the same
> > results, that people wouldn't have been using it over film for many
> > many years already, considering how cheap video is to work with.
>
> Well, there are places where electronic imaging has already supplanted film
> based imaging because it can produce a better image (not video yet, either
> conventional or HD).

We shoot 99% of the commercials and other higher-end work on video. We just
process it to look like film.

On PAL it's really easy - one just has to de-interlace the footage to
get 25 FPS "progressive scan". (Films are also transferred at 25 FPS in
PAL countries - they are actually 4% shorter here ;-)

Here's how to do that in post (without plugins)

http://www.kolumbus.fi/erkki.halkka/Interlace/Interlace.html

I shoot without detail sharpening, as this is an effect that can be
applied in post if necessary - it's much harder (read: impossible) to
remove in-camera edge sharpening. The sharpening is what causes those
nasty "video-like" edges, especially on the highlights.

I nowadays use a Panasonic DVCPro50 camera that has adjustable gamma etc.
- allowing me to do some preliminary color correction in the camera.
Also, with today's soft/hardware, video footage can be surprisingly well
color corrected in post.

Now, to get that final touch, I usually add blur and grain to the video
footage - the same thing as with the analogue audio: noise and
loss of high frequency information.
*********************************
Paste end:

Here's a post i wrote on the same subject year or 2(3??) ago, on yet
another
thread far far away -


Paste start:
*********************************
> Steve Rogers wrote:
>
> > Or is there another option I might be able to consider that will give me the
> > film look, and keep my budget down?
>
> *****************
> An anti flame disclaimer
>
> Most things in this post apply only to video or broadcast release, not
> the silver screen.
> *****************
>
>
> The important things are:
>
> 1. Even though it's to be shot in video, light it as you would light
> film, i.e. don't be afraid of contrast etc.
>
> 2. Adjust the video camera's "detail" (sharpness) setting to or almost
> to minimum. It's a post process that can be reproduced in the editing
> stage if needed.
>
> 3. The biggest technical difference (when the end result is for video
> release or broadcast) between shooting film and video is just the
> different frame rate, though some may claim otherwise. Regular video has
> 50 fields/second (60 NTSC) and film is 24 frames/second.
>
> De-interlace the footage in the post, or better yet, merge the fields.
> (works better with PAL - you get 25 fps; in NTSC one gets 30 fps.) This
> can also sometimes be done in the camera, depending on the model (called
> frame or film mode).
>
> 4. Film is also often grainier and softer than video. If you are bold
> enough, boost the camera's gain setting to get instant film grain - it's
> safer to leave it to post though... Adjusting camera detail helps the
> smoothness issue, but you may want to blur the footage a bit in the post
> too.
>
> Mixing sharp and blurred versions of the footage can sometimes help
> getting the characteristics of film - you can add bloom/glow this way
> too.
>
> 5. Careful color correction helps a lot too. Though some people claim
> film has "better colors and contrast" than video, this is simply not true
> (Technically speaking). Film does have more resolution and better color
> and luminosity reproduction BEFORE it gets transferred to video.
>
> A film to video transfer is actually done with a "video camera". The key
> to "better colors" is better color correction. Many transfer facilities
> can apply the same correction to video footage as they can to film.
>
> There are several dedicated plugins available for film look (e.g.
> CineLook, FilmFX), but the effect can usually be done quite well with
> just the basic editing apps.
*********************************
Paste end:
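
A minimal sketch of the field-merge de-interlace mentioned in both pastes,
assuming the frame arrives as a NumPy array with the two fields woven into
alternating rows. This is an illustration under those assumptions, not any
particular product's algorithm:

import numpy as np

def merge_fields(frame):
    """Blend a frame's two interlaced fields into one progressive frame.

    frame: (height, width, channels); even rows are field 1, odd rows are
    field 2. Blending trades the 50/60 Hz field motion for one image per
    frame - the "film motion" effect described above.
    """
    f1 = frame[0::2].astype(np.float32)          # field 1 (even lines)
    f2 = frame[1::2].astype(np.float32)          # field 2 (odd lines)
    merged = (f1 + f2) / 2.0                     # average the two fields
    return np.repeat(merged, 2, axis=0).astype(frame.dtype)  # restore height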


Erik Harris

May 8, 2002, 6:09:11 PM
On 7 May 2002 18:39:30 -0700, mtav...@tavtel.com (Michele Tavares) wrote:

>> With all due respect, that in itself is a lame excuse.
>
>It's also factual. Stuff shot on video, or more importantly anything
>that LOOKS like it was shot on video is percieved as "cheap".

That's not because it's at a higher frame rate. "Video" (by which I assume
you mean anything sticking to the NTSC/PAL/SECAM regional standards, or VHS
tape) has limitations that film doesn't, such as pretty low resolution, and
being interlaced (which essentially chops the vertical resolution in half).
I wasn't talking about switching to an overall inferior standard. I was
talking about using the emerging digital technology to _improve_ on movie
projection standards. A movie shot on a format that offers quality
comparable to film at a higher frame rate wouldn't be perceived as "cheap,"
except perhaps by complete boneheads. :) "Video" does not offer that.

>Why do you think even sitcoms shot on crappy sound stages are still shot on
>film ? They get transferred to video & edited on video, and broadcast

Available equipment? Media longevity? I dunno. I doubt it's so they can
have the "advantage" of a lower framerate.

>Similarly, they added a FILM-like look to Star Wars II. It is a
>texture choice. It's no different than choosing a different type of
>film stock because of the grain or coloration look. Why is this
>"lame" ?

_That_ is not lame (though it's not really relevant to my question), but it's
not what you originally said, and not the excuse that I referred to as lame.

>> I don't care if you don't do your homework before quoting someone else, but I
>> DO care if you don't do your homework and start attacking ME because of

>Lower your caffine intake. This is a public message board & not a
>personal email to you.

No, this is a newsgroup, not a message board, since you seem to want to pick
at nits. :) Nitpicking aside, it's generally accepted that when you follow up
to and quote a specific person's message, that your reply is directed at that
person. The fact that newsgroups are public just means that everyone can see
the response. That's why I perceived your attack as being directed towards
me.

Erik Harris

May 8, 2002, 6:45:47 PM
On Wed, 08 May 2002 23:26:32 +0300, Erkki Halkka <erkki....@kolumbus.fi>
wrote:

>Don't you remember what's going on with the audiophiles?? It's just
>recently that the Hi-Fi folks are starting to accept digital audio.

Depends on the audiophiles you talk to, I guess. From my vantage point, it
seems that they're looking to figure out whether DVD-Audio or SACD will land
at the top of the heap. Pretty "current" stuff. At the same time, they're
also among the only people left who like vinyl, but that's for some pretty
legitimate reasons and doesn't indicate that they don't accept digital audio,
just that it's not their _only_ choice, as it is for the masses now (and may
soon be even for many more audiophiles, with higher quality digital audio
standards like DVD-A and SACD).

>Here's a LOOOOOONG mail from another thread from long time ago in a far
>away newsgroup debating the Film/Video issue (especially what's the
>biggest difference between them when watching the stuff on TV):

Interesting message. It tells me that the main reason for sticking with film
is many people's stubborn nature: sticking to familiarity at the
expense of the superior options that are available.

Thaddeus Beier

May 8, 2002, 7:00:13 PM
Erik Harris wrote:
>
> On 7 May 2002 18:39:30 -0700, mtav...@tavtel.com (Michele Tavares) wrote:
>
>
> >Why do you think even sitcoms shot on crappy sound stages are still shot on
> >film ? They get transferred to video & edited on video, and broadcast
>
> Available equipment? Media longevity? I dunno. I doubt it's so they can
> have the "advantage" of a lower framerate.
>

The people that I have talked to in the TV and film business insist that
they shoot stuff on film because the 24-frames-per-second of film, with
the abomination of 3:2 pulldown, is what people associate with high quality,
and 60-frames-per-second is associated with low quality.

At PDI, when we were doing fully-CG commercials, we did them at 24 FPS
because it then had the film judder of 3:2 pulldown -- objectively it was
worse, but it was perceived to be better.

It's reality, and will persist whether you believe it or not :)
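
For anyone who hasn't run into 3:2 pulldown: 24 film frames are spread over
60 video fields by holding frames for 2 and 3 fields alternately. The cadence
is standard; the code below is just a toy illustration of it:

def pulldown_2_3(frames):
    """Map 24 fps film frames onto 60i fields with the 2:3 cadence."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))  # 2, 3, 2, 3, ...
    return fields

print(pulldown_2_3(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D'] - 4 frames -> 10 fields,
# so 24 frames/sec -> 60 fields/sec, the uneven hold causing the judder.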

thad

Erkki Halkka

May 8, 2002, 7:21:06 PM
to Erik Harris
Erik Harris wrote:
>
> That's not because it's at a higher frame rate.

If we talk about the "look", the frame rate is the biggest difference.

> "Video" (by which I assume
> you mean anything sticking to the NTSC/PAL/SECAM regional standards, or VHS
> tape) has limitations that film doesn't, such as pretty low resolution, and
> being interlaced (which essentially chops the vertical resolution in half).

Nope. It's not that simple. Interlaced footage has exactly the same
resolution as non-interlaced footage when the subject is not moving. And
when the subject is moving, the motion effectively masks the difference.
For the viewer, both look more or less the same, resolution-wise.

Have you actually *seen* the difference between 24 fps progressive scan
and 25/30 fps interlaced footage??



> >Why do you think even sitcoms shot on crappy sound stages are still shot on
> >film ? They get transferred to video & edited on video, and broadcast
>
> Available equipment? Media longevity? I dunno. I doubt it's so they can
> have the "advantage" of a lower framerate.

The film look is the main thing, I'd say. That and color correction
possibilities. And the film look comes mainly from the framerate.

I've done tests with film shot at 50 fps (we work in PAL here) and then
speeded that up by 200% - it looked quite a lot like video.


Mark Spatny

May 8, 2002, 7:22:24 PM
On Tue, 07 May 2002 03:07:54 GMT, "David Mullen"
<dav...@earthlink.net> wrote:

> because no doubt every HD post house will be offering
>similar processes to clients within a few years.

At least once a month a client says to me "Can you do Film Look?"
Lucas isn't doing anything new. People have been processing video to
look like film for years. The only difference is that ILM is doing it
to HD footage, with their own proprietary software. Big deal.

As you say, it's an artistic choice. There is no "right" or "wrong"
about it.

Unlike Jar Jar, which we all know is simply wrong, period.

Erkki Halkka

May 8, 2002, 7:30:14 PM
to vfxpr...@nospamhotmail.com
Mark Spatny wrote:

> Unlike Jar Jar, which we all know is simply wrong, period.

Yep on all accounts.

;-)

Mr. Boy

May 8, 2002, 8:30:51 PM
"Michael D. Most" <mike...@attbi.com> wrote in message news:<%E2C8.71139$071.22...@typhoon1.we.ipsvc.net>...

> "Su" <s...@bake.org> wrote in message news:3CD8A2DC...@bake.org...
> > Actually, three-camera sitcoms *are* generally shot on video.
>
> That's not the case. I don't have the exact numbers available, but for a
> number of years (through most of the 90's to the present day) the vast
> majority of multicamera (generally 4, not 3, although many people still call
> them "3 camera") sitcoms, especially those on the 4 major networks, have
> been shot on film - in fact, I can't think of a sitcom currently on CBS,
> NBC, or ABC that isn't on film, with only one exception ("According To Jim"
> on ABC, which switched to 24p HD video the middle of this season).

The changeover started in the early-to-mid 80's, actually. "Cheers"
was shot on film from Day 1, when at the same time "Night Court" was
entirely video.

> That's
> beginning to change a bit these days with the advent of 24p HD video, so I
> imagine the balance will tip in favor of video over the next season or two.
> Then again, I believe that the majority of all prime time television will be
> shot on 24p HD video eventually. I give it another 2 seasons.
>

Fox's "Titus" recently switched to 24p as well, and "The Education of
Max Bickford" is a drama series shot on 24p, and it actually looks
fairly nice (though it is probably tweaked as well).

-Mr. Boy
http://www.birdlingbrains.com/

Mr. Boy

May 8, 2002, 8:37:15 PM
Su <s...@bake.org> wrote in message news:<3CD92E5F...@bake.org>...

>
> Well anyway I am supposed to test the PanaSony camera in a week or two, so I
> will be able to stop talking out the proverbial ass and quote y'all some actual
> experience.
>
> su

Chris-

When you test it out, look closely at depth-of-field differences. I
saw Allen Daviau's side-by-side comparison of 24p and 35mm, and the two
things I noticed that differed between the two were that the 24p was a
bit more contrasty, and 24p's depth-of-field is huge (which, in my
opinion, is a huge turn-off). Indeed, I haven't tested the cameras
myself - this is just what I noticed from the DVD Sony provided.
(They really should've said what the exposures were and what film
stock they were using for the 35mm - would've been very helpful.
Because if the 24p has depth-of-field that deep when shooting wide
open, that's just terrible; whereas if they were shooting at f/5.6,
it's more acceptable.)

Has anyone else seen that footage?

-Mr. Boy
http://www.birdlingbrains.com/

matt

May 8, 2002, 9:35:40 PM
I agree with all of you that film is perceived to be better ONLY
because that's what most of us grew up with. You paid to see film and
got video for free on TV. I've worked on projects where we tried some
"film look" on crappy 8mm consumer video in some scenes while the rest
of the project was shot on expensive Betacam. At screenings folks
would say they liked the grainy "16mm" scenes and disliked the
cheap "camcorder" scenes. Even adding a silly fake "letterbox" onto
Beta tape makes most people feel like they're getting more for their
money.
As for TV shows - did it occur to you that highly paid film camera
experts don't wanna lose their jobs to video folks, so they're
proclaiming that film is still the way to go? Kodak's stock is
plummeting - why EVER shoot film (for stills) when you get 300 dpi from
(almost) free digital pictures?

Gary Pollard

May 8, 2002, 9:34:32 PM
"Tony Raven" <ju...@raven-family.com> wrote in message
news:ab2pss$638$1...@news6.svr.pol.co.uk...

>
> "Michele Tavares" <mtav...@tavtel.com> wrote in message
> news:186b40a.02050...@posting.google.com...
> >
> > So the panavision lenses are not the end-all, be-all of the "look" of
> > Star Wars, but a high-end, proprietary film-look process involving
> > film grain, color changes, and blurring was the secret recipe to
> > ILM's latest masterpiece.

>
> I'm inspired and writing some software to add hiss, crackle and scratch
> noises to my CD's when I play them. Boy will those CD's sound good when
> I've finished.

Plenty of the vinyl fans out there still seem to think so.

Gary

Gary Pollard

May 8, 2002, 9:37:17 PM
"Michele Tavares" <mtav...@tavtel.com> wrote in message
news:186b40a.02050...@posting.google.com...
> > Wasn't "When Clones Attack!" shot entirely on HD, and not intercut with film?
> > If so, why degrade the quality to make it look more like film?
>
> It is purely an AESTHETIC that the audience is used to & expects. Have
> you ever noticed any deviation from the norm generally is not accepted?
> By making the HD footage look like film, they have made the movie
> look more like what audiences expect, not a VIDEO look.

From what I've seen on the big screen, Star Wars looks MUCH better than I
expected video to look at that size. And certainly much better than
"Vidocq", which was also shot on 24P. When "video" looks that good, the talk
of "degrading" the image is ridiculous.

Gary

Gary Pollard

May 8, 2002, 9:39:08 PM
"Erik Harris" <n$wsr$ader@$harrishom$.com> wrote in message
news:umsdduolaqqfmcf3o...@4ax.com...
> On Sun, 5 May 2002 22:13:24 +0100, David Crossman
> <da...@NOdareks.fsnet.co.ukyah> wrote:

>
> >In an article, Erik Harris <n$wsr$ader@$harrishom$.com> wrote:
> >
> >> It makes sense to drop down to 24fps for the majority of the
> >> theaters still using film reels, but it seems like it
> >> would make sense to shoot at a higher frame rate and display
> >> at a higher frame rate in the few lucky theaters equipped with

> >> high end DLP systems, and to keep the HD quality instead of making
> >> it look like film for those theaters.
>
> >I seem to remember the guy who 'invented' IMAX saying that he
> >had done some research that indicated a frame rate of between
> >50 and 60 fps was the optimum in delivering a maximum something
> >to audience's brains. (Can someone help me out here?)
>
> I've heard similar numbers, though I never heard it attributed to any one
> person in particular.

The guy who did the experiments and devised the process was special FX guy
Douglas Trumbull. It was reported in American Cinematographer at the time.

Gary

Paul Layman

May 8, 2002, 9:55:34 PM
I live in a city that has a theater with DLP projectors.
Would it be worthwhile trying to see the movie there?

Paul

M. Cooney

May 8, 2002, 11:54:42 PM

"Su" <s...@bake.org> wrote in message news:3CD92E5F...@bake.org...

> Huh. I could've sworn I saw video cameras at WB the other day. Oh well. Do
> they shoot on 16?
>
> Well anyway I am supposed to test the PanaSony camera in a week or two, so I
> will be able to stop talking out the proverbial ass and quote y'all some actual
> experience.
>
> su


Su, darn, I forgot the name of the camera - Viper, I think - but someone came
out with a hi-def camera that uses three 9-megapixel CCDs, so unlike the Sony
camera, where the chrominance is lower resolution than the luminance, it uses
RGB instead.

I was always wondering: if such a camera came out, it would be used for
certain plate and especially bluescreen work; it might make better mattes
than regular 35mm cameras, and way better than the Sony.


Michael D. Most

May 9, 2002, 12:19:49 AM

"Mr. Boy" <yev...@mrboy.com> wrote in message
news:7f3d638d.02050...@posting.google.com...

> When you test it out, look closely at depth-of-field differences. I
> saw Allen Daviau's side-by-side comparison of 24p and 35mm, and the two
> things I noticed that differed between the two were that the 24p was a
> bit more contrasty, and 24p's depth-of-field is huge (which, in my
> opinion, is a huge turn-off). Indeed, I haven't tested the cameras
> myself - this is just what I noticed from the DVD Sony provided.
> (They really should've said what the exposures were and what film
> stock they were using for the 35mm - would've been very helpful.
> Because if the 24p has depth-of-field that deep when shooting wide
> open, that's just terrible; whereas if they were shooting at f/5.6,
> it's more acceptable.)

The depth of field in current 24p video cameras has nothing to do with
electronic capture, but everything to do with physics. Current cameras use
2/3" CCD imagers, less than 1/4 the size of a 35mm film frame. The smaller
the imaging plane, the wider the lens for the same field of view. Wider lens
= more depth of field. You can use the longest lenses possible and shoot
wide open, but the depth of field is going to be far deeper than 35mm no
matter what you do. The only way this will be addressed is with a larger CCD
(or other electronic imaging device). That will probably happen, but not for
quite a while.
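
Mike's physics point can be put in rough numbers with the standard
hyperfocal-distance approximation. The focal lengths and circles of confusion
below are assumed, approximate equivalents for matched framing - illustrative
values, not measurements:

def depth_of_field(f_mm, f_stop, subject_m, coc_mm):
    """Near/far limits of acceptable focus (hyperfocal approximation)."""
    s = subject_m * 1000.0                       # subject distance in mm
    H = f_mm ** 2 / (f_stop * coc_mm) + f_mm     # hyperfocal distance in mm
    near = H * s / (H + (s - f_mm))
    far = H * s / (H - (s - f_mm)) if (s - f_mm) < H else float("inf")
    return near / 1000.0, far / 1000.0           # back to meters

# Matched framing at 3 m, f/2.8: a 2/3" chip needs roughly a 19mm lens where
# 35mm uses 50mm, and its circle of confusion shrinks in proportion.
for label, f_mm, coc in [('2/3" CCD', 19.0, 0.010), ("35mm film", 50.0, 0.025)]:
    near, far = depth_of_field(f_mm, 2.8, 3.0, coc)
    print(f"{label}: in focus from {near:.2f} m to {far:.2f} m")
# The 2/3" chip holds roughly three times as much depth in focus.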

Mike Most

Michael D. Most

May 9, 2002, 12:24:44 AM

"matt" <funny...@mac.com> wrote in message
news:56c3f148.0205...@posting.google.com...

> As for TV shows - did it occur to you that highly paid film camera
> experts don't wanna lose their jobs to video folks, so they're
> proclaiming that film is still the way to go?

No, it never occurred to me because I've worked in the film/television
industry for almost 25 years and nobody I've ever known feels that way.
People involved in making television programs simply want to create the best
imagery they can with the best tools they're allowed to use.

Mike Most

Michele Tavares

May 9, 2002, 1:49:41 AM
Paul Layman <lay...@att.net> wrote in message news:<3CD9D72F...@att.net>...

> I live in city that has a theater with the DLP projectors.
> Would it be worthwhile trying to see the movie there?

Based on seeing the trailer in DLP and then in 35mm on Spider-Man -
HELL YES. The colors are much more vibrant and the sharpness is
ungodly good.

Check it out.

- M
www.sonnyboo.com

David Mullen

May 9, 2002, 1:58:07 AM
> But it often is. An example from the TV world: Video is technically
> superior to film (when it comes to broadcast)

On what planet??? 35mm-to-video is superior to video acquisition in almost
any category you wish to name -- color depth & saturation, especially in
dark areas, resolution, latitude, ability to resize the image, lack of
compression artifacts, etc.

I just recently saw some comparison tests in an HD post suite comparing HD
video acquisition to 35mm-to-HD transfers that were quite thorough in terms
of shooting resolution charts, doing over and underexposure tests, etc., and
35mm beat HD acquisition in every category worth naming. Even in terms of
grain, 35mm was ultimately finer-grained than HD, which was noisier once you
tried to match the color saturation of 35mm negative (Kodak Vision 250D in
this case.)

In what way is video superior in image quality when viewed on TV compared to
35mm-to-video transfers?

David Mullen


M. Cooney

May 9, 2002, 3:46:17 AM

That new camera I found out about - tried finding it again - it's called
the Viper FilmStream by Thomson. I downloaded the brochure, lots of hype,
but I sorta like that trick it uses where you can shoot 2.37 widescreen
without cropping or anamorphics, using some fancy sampling technique. I
wonder if any FX companies have tested this for bluescreen work.


Jon Carroll

unread,
May 9, 2002, 6:27:14 AM5/9/02
to
On Thu, 09 May 2002 07:46:17 GMT, "M. Cooney" <mco...@earthlink.net>
wrote:

I'd rather use the dual HD-SDI link to get the uncompressed 4:4:4
output from the camera... much more promising :) but then, you might
have misunderstood it and called it 'hype'

--
-Jon Carroll
dra...@infi.net
"But HoHos are a vital part of my cognitive process!!"- Xander

Jess

unread,
May 9, 2002, 8:41:06 AM5/9/02
to
Does anyone know what Spider-Man was shot with? (format, stocks, etc....)

I thought it was a great movie, and lots of stuff to remind me of the
evil dead trilogy :-)

~Jess

Erkki Halkka

unread,
May 9, 2002, 9:52:11 AM5/9/02
to David Mullen
David Mullen wrote:

> On what planet???

Ours.

Gotta say though, I wasn't talking that much about real-life work, but
theory. The goal being as neutral an image as possible.

Meaning no color/gamma etc. correction in the post, no changes in the
framing... which of course are the areas where film starts to shine
(though one CAN pan/scan HD video to some extent without losing
resolution).

I should have emphasized that more, I guess. There are many other
benefits of shooting film in addition to the ones you mentioned, like
better lenses, large variety of cameras (some of which are cheap enough
to put in dangerous places), possibility to use the same footage in
theaters, world wide compatibility (and availability of equipment) etc.

The fact still is that it's possible to make video look like film. It's
much harder to make film look like video.

It's the law of entropy - it's easier to cause chaos than order, easier
to add artifacts than to remove them.

> 35mm-to-video is superior to video acquisition in almost
> any category you wish to name --

If shooting at 60 fps, maybe...

> color depth & saturation, especially in
> dark areas,

We're not looking for a lot of saturation, but for linear color
reproduction.

> resolution,

The resolution (Talking D1 here) ends up being 720*576 (PAL) or 720*480
(NTSC) in both.

> latitude,

The latitude ends up being from 0 to 100% luminosity in both. In
telecine all that extra info of film gets lost or compressed to the same
color space.

> ability to resize the image, lack of
> compression artifacts, etc.

As I said in the beginning, talking about a correctly composed shot
here...

Remember that the film transfer has to be recorded to a tape. Even if
one transfers uncompressed to a workstation, the master ends up being
compressed video.

> I just recently saw some comparison tests in an HD post suite comparing HD
> video acquisition to 35mm-to-HD transfers that were quite thorough in terms
> of shooting resolution charts, doing over and underexposure tests, etc., and
> 35mm beat HD acquisition in every category worth naming. Even in terms of
> grain, 35mm was ultimately finer-grained than HD, which was noisier once you
> tried to match the color saturation of 35mm negative (Kodak Vision 250D in
> this case.)

Again, talking about correct exposure and as linear as possible
transfer. AFAIK video wins there.

> In what way is video superior in image quality when viewed on TV compared to
> 35mm-to-video transfers?

Mainly the motion information. 60% (50% PAL) more of it.

Now, if people shot film at 60 fps (50 fps PAL) , things would be a
little different...

Erik Harris

unread,
May 9, 2002, 6:11:31 PM5/9/02
to
On Thu, 09 May 2002 02:21:14 +0300, Erkki Halkka <erkki....@kolumbus.fi>
wrote:

>> "Video" (by which I assume


>> you mean anything sticking to the NTSC/PAL/SECAM regional standards, or VHS
>> tape) has limitations that film doesn't, such as pretty low resolution, and
>> being interlaced (which essentially chops the vertical resolution in half).
>
>Nope. It's not that simple. Interlaced footage has exactly the same
>resolution as non-interlaced footage when the subject is not moving. And
>when the subject is moving, the motion effectively masks the difference.
>For the viewer, both look more or less the same resolutionwise.

On a television, yes, but what about the source footage? Isn't 35mm movie
film a heck of a lot higher resolution than VHS? I can't imagine blowing a
VHS picture up to movie screen sizes. (Even at TV screen sizes, it kinda
sucks)

>Have you actually *seen* the difference between 24 fps progressive scan
>and 25/30 fps interlaced footage??

Of course. Anyone who has been to a movie theater and who has seen video has
seen the difference. Personally, I find 24fps lacking for a lot of
applications (panning being the worst). I guess the majority of the public
is too set in its ways to care enough about actual quality, though, and
choose familiarity over quality.

>I've done tests with film shot at 50 fps (we work in PAL here) and then
>speeded that up by 200% - it looked quite a lot like video.

I'm not sure what you mean by this. At 100fps, it looked like video, and at
50fps it didn't? Huh?

Rob Mora

unread,
May 9, 2002, 6:17:05 PM5/9/02
to
> I just recently saw some comparison tests in an HD post suite comparing HD
> video acquisition to 35mm-to-HD transfers that were quite thorough in terms
> of shooting resolution charts, doing over and underexposure tests, etc., and
> 35mm beat HD acquisition in every category worth naming. Even in terms of
> grain, 35mm was ultimately finer-grained than HD, which was noisier once you
> tried to match the color saturation of 35mm negative (Kodak Vision 250D in
> this case.)

Did you see the latest edition of Hidef.org's magazine? American
Production Services did a head-to-head between the Arri 35m, Panasonic
DVCPRO HD and the Sony F900 24p. The results were pretty interesting.
You can download a PDF of the mag at www.hidef.org.

Rob

Erkki Halkka

unread,
May 9, 2002, 6:40:08 PM5/9/02
to Erik Harris
Erik Harris wrote:

> On a television, yes, but what about the source footage? Isn't 35mm movie
> film a heck of a lot higher resolution than VHS? I can't imagine blowing a
> VHS picture up to movie screen sizes. (Even at TV screen sizes, it kinda
> sucks)

Yep, VHS sucks ;-)

Though there's no way to directly compare VHS's resolution (or film's
resolution, for that matter) to digital video, VHS is said to equal
quarter-resolution (360*288 PAL) digital video - roughly 1/16th of the
resolution of a 2K film scan.

Professional digital video sucks less - D1 video has roughly 1/4th of
the resolution of the much-used 2K film scans, just a little less than a
1K film scan (the film itself has better resolution).

> >I've done tests with film shot at 50 fps (we work in PAL here) and then
> >speeded that up by 200% - it looked quite a lot like video.
>
> I'm not sure what you mean by this. At 100fps, it looked like video, and at
> 50fps it didn't? Huh?

Oh, my explanation was missing a little... when footage that's shot at
50 fps is telecined to video, the result is half speed slomo video -
this will have to be speeded up by 200% to make it "real time" again.
This speedup will also result in interlaced footage, even though it
originates as frame based film. (So it will have 50 fields/second, just
like video footage).
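
In code, that compile step is just a weave. A rough Python sketch, treating each frame as a list of scanline rows; the even-field-first order is an assumption, real gear may be upper- or lower-field first:

# Weave pairs of 50 fps film frames into 25 fps interlaced frames.
def interlace(frames_50fps):
    video = []
    for a, b in zip(frames_50fps[0::2], frames_50fps[1::2]):
        woven = [a[row] if row % 2 == 0 else b[row]    # even rows from frame a,
                 for row in range(len(a))]             # odd rows from frame b
        video.append(woven)                            # one interlaced frame
    return video                                       # 25 fps, 50 fields/s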


Simon Bunker

unread,
May 9, 2002, 8:00:33 PM5/9/02
to

> >I've done tests with film shot at 50 fps (we work in PAL here) and then
> >speeded that up by 200% - it looked quite a lot like video.

But PAL is 25fps... 50 fields per second isn't the same. And NTSC's 29.97fps
is positively evil in all respects.

Simon

--
http://www.rendermania.com/
UIN 11123737

Simon Bunker

unread,
May 9, 2002, 8:05:30 PM5/9/02
to
Erm... telecines maybe, but film scanners are very different beasts indeed.
I guess you're not familiar with CFC's Northlight scanner? Very nice indeed
from what I hear.

Simon

> Also, film scanners aren't much more than fancy video cameras. So,
> shooting film means converting formats, and generation loss which ALWAYS
> causes loss of data. Film footage is usually color corrected so that
> it's color and contrast response is no longer linear. Technically
> speaking this is a defect.

Simon Bunker

unread,
May 9, 2002, 8:10:04 PM5/9/02
to

> > In what way is video superior in image quality when viewed on TV compared to
> > 35mm-to-video transfers?
>
> Mainly the motion information. 60% (50% PAL) more of it.
>
> Now, if people shot film at 60 fps (50 fps PAL) , things would be a
> little different...

ARGHHH! Please stop saying this! PAL is 25 FRAMES per second, NTSC 29.97

Simon

> CU

Erik Harris

unread,
May 9, 2002, 8:51:57 PM5/9/02
to
On Fri, 10 May 2002 01:10:04 +0100, "Simon Bunker" <si...@rendermania.com>
wrote:

>> Now, if people shot film at 60 fps (50 fps PAL) , things would be a
>> little different...

>ARGHHH! Please stop saying this! PAL is 25 FRAMES per second, NTSC 29.97

Maybe his "f" stands for "fields." :-)

This brings up another question I've always had. Aside from the semantic
difference, what IS the difference between a frame and a field? Is the only
real difference explained by the fact that fields are interlaced together?
Both a frame and a field represent the scene at a moment in time. Both
frames and fields are displayed sequentially (frames on progressive displays,
fields on interlaced displays), and if you do a deinterlace on a 30 frame /
60 field per second video in such a way as to retain all of the motion data
(meaning that both fields are kept and vertically upsampled), isn't the
result a 60 frame per second video? In the past, I've even seen people refer
to one of the fields in a frame as being an image or scene field, and the
other being a motion field, but aside from whether they're on odd or even
scanlines, how do two adjacent fields differ from each other? (or from
temporally adjacent frames in a noninterlaced video)

I hope you'll excuse my intentionally inaccurate use of 30 and 60 in place of
the more cumbersome and hardly-different 29.97 and 59.94 figures. :-)

Jon Carroll

unread,
May 10, 2002, 12:52:09 AM5/10/02
to
On Fri, 10 May 2002 01:05:30 +0100, "Simon Bunker"
<si...@rendermania.com> wrote:

>Erm... telecines maybe, but film scanners are very different beasts indeed.
>I guess you're not familiar with CFC's Northlight scanner? very nice indeed
>from what I hear.
>

some film scanners use CCDs, others use other methods.

Jon Carroll

unread,
May 10, 2002, 12:58:05 AM5/10/02
to
On Fri, 10 May 2002 00:51:57 GMT, Erik Harris
<n$wsr$ader@$harrishom$.com> wrote:

>On Fri, 10 May 2002 01:10:04 +0100, "Simon Bunker" <si...@rendermania.com>
>wrote:
>
>>> Now, if people shot film at 60 fps (50 fps PAL) , things would be a
>>> little different...
>
>>ARGHHH! Please stop saying this! PAL is 25 FRAMES per second, NTSC 29.97
>
>Maybe his "f" stands for "fields." :-)
>
>This brings up another question I've always had. Aside from the semantic
>difference, what IS the difference between a frame and a field? Is the only
>real difference explained by the fact that fields are interlaced together?
>Both a frame and a field represent the scene at a moment in time. Both
>frames and fields are displayed sequentially (frames on progressive displays,
>fields on interlaced displays), and if you do a deinterlace on a 30 frame /
>60 field per second video in such a way as to retain all of the motion data
>(meaning that both fields are kept and vertically upsampled), isn't the
>result a 60 frame per second video? In the past, I've even seen people refer
>to one of the fields in a frame as being an image or scene field, and the
>other being a motion field, but aside from whether they're on odd or even
>scanlines, how do two adjacent fields differ from each other? (or from
>temporally adjacent frames in a noninterlaced video)

Two fields are displaced from one another by 1/50 or 1/60 of a
second. Yes, the frames are still sequential, but each field is also
sequential to the one preceding it. Yes, you can take the fields,
separate them, and double the resolution vertically to get 60
effective frames per second, but you do suffer resolution loss.

Basically, a fielded display shows things like this: field 1 frame 1,
field 2 frame 1; field 1 frame 2, field 2 frame 2; field 1 frame 3,
field 2 frame 3, etc. When the original video has fields in it, each
field lasts 1/60 of a second (NTSC) - exposure, of course, is further
divided by the shutter rate.
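
A rough Python sketch of that separate-and-double step - the so-called "bob" deinterlace. A frame here is just a list of scanline rows, and the even-field-first order is an assumption:

# "Bob" deinterlace: split each interlaced frame into its two fields and
# line-double each field, giving twice the frame rate at half the detail.
def bob(interlaced_frames):
    progressive = []
    for frame in interlaced_frames:
        for start in (0, 1):              # field 1 = even rows, field 2 = odd
            doubled = []
            for row in frame[start::2]:
                doubled += [row, row]     # repeat each line to fill the frame
            progressive.append(doubled)
    return progressive                    # 60 frames/s from 30i input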

Su

unread,
May 10, 2002, 12:42:13 AM5/10/02
to
Well, the project I am testing the camera for is one which might benefit from
increased depth of field. A bigger issue for me is the resolving power of lenses.
The smaller CCD puts a greater burden on the quality of the glass than even 35mm.

Anyway, we'll see.

su

Su

unread,
May 10, 2002, 10:14:37 AM5/10/02
to
I don't think anyone uses tubes anymore. I would bet (just short of eating my
hat, thank you) that every commercial and non-commercial device out there that
scans at 2K or above is using a CCD. The Spirit, Lightning, Genesis, the old
CFC scanner, the Northlight, all use linear arrays. I'm not disbelieving you-
I'm just curious if there's something out there I don't know about.

As far as the assertion that "film scanners are basically video cameras", well,
there are some similarities (they both use CCDs) but the differences end
there. Video cameras use planar CCD arrays, while any self respecting film
scanner uses a linear array. Also, the cost of the CCD alone in good film
scanners is 3-5K, more than the average prosumer camera. Scanners are far more
similar to optical printers, with the CCD being in front of the projected
image. Other models share a little with laser recorders in the way they move
the film across the CCD. Some even use the exact same screw-drive sleds as the
Arri recorder (a very cool little thing - accurate to nanometers, I'm told).

I could go on, but I have a day job :)

Su "the dark currant"

(that was a very bad film scanner joke...)

Michael D. Most

unread,
May 10, 2002, 11:16:37 AM5/10/02
to

"Su" <s...@bake.org> wrote in message news:3CDBD665...@bake.org...

> I don't think anyone uses tubes anymore. I would bet (just short of eating my
> hat, thank you) that every commercial and non-commercial device out there that
> scans at 2K or above is using a CCD

The Cintel C-Reality (capable of 4K scanning, according to the manufacturer)
is tube based, at least it was last time I looked.

Mike Most

Erik Harris

unread,
May 10, 2002, 3:40:27 PM5/10/02
to
On Fri, 10 May 2002 04:58:05 GMT, dra...@infi.net (Jon Carroll) wrote:

>two fields are dislocated from one another by 1/50 or 1/60 of a
>second. yes, the frames are still seqential, but each field is
>sequential to the one preceding it. yes, you can take the fields,
>seperate them, and double the resolution vertically to get 60
>effective frames per second, but you do suffer resolution loss.

How would this result in a loss of resolution? You're interpolating to add
data, not to take it away. In a sense, it'll "increase" the spatial
resolution, but since it's interpolated, that increase isn't really "real."

As I understand it, this is exactly what a progressive scan TV does to video
(double the temporal resolution and interpolate the spatial resolution), and
that's pretty much universally seen as a GOOD thing, not a process that
incurs loss.

>basically, a fielded display shows things like this: Field 1 frame 1 ,
>field 2 frame 1; field 1 frame 2, field 2 frame 2; field 1 frame 3,

In other words, the only real difference between a field and a frame is the
way it's stored. Two fields are essentially two "frames" merged into one via
interlacing, and there's little functional difference between the two (both
represent a "snapshot" of the scene in the same way).

Robert Gruen

unread,
May 10, 2002, 5:59:54 PM5/10/02
to
I'm probably the least qualified to chime in on this topic but....

One of the things that I am mastering with my Canon SLR is the depth of
field effects that film is capable of. I believe this to be very important
in cinematography because we see in three dimensions but
project in two. Properly limiting the depth of field allows the
cinematographer to pop the subject out of the background. This is not only
useful, it's essential for scenes that are shot EXT.-DAY, where the sun
overpowers any attempt at key/back/fill. This is very evident in Jaws
(beach scenes), if anyone wants an example. My understanding is that only
the best digital still cameras (like the Canon EOS-D) can come close to
making the dramatic depth of field effects that film can. True, or can DV
accomplish this yet? I recall reading an article that stated the still
camera genre gives the maker the ability to use a CCD that 'resets' itself
to a null value between shots (not that they all do this), while a video CCD
must always be 'hot', and an oscillating crystal drives a sampling process to
read it. Correct?

Another thing that bears keeping in mind is that pixels and grains are
slightly different, especially in the time domain. A pixel is always in the
same place, it just holds a different state in that same location (color,
intensity), while a grain is randomly dispersed, and hold the intensity of
it's respective color only. When viewed in the time domain, the apparent
position of a single grain is in fact the relative combination of multiple
grains in the same general location. Impact?

Finally, in terms of frame rates, human brains process visual images at
22 fps(?), but we can see 'jumps' when gross camera panning motions happen.
Isn't this the same phenomenon that rescues us from nausea? Nausea happens
in the human brain when the visual input is dissociated from the
equilibrium information sent by the inner ear. Different people have
different tolerances for this, so you must minimize the effect if you want
to entertain a broad audience (I'm afflicted, I can't read in a moving car
to save my life). I recall once being told that at near 22 fps speeds the
brain was not 'fooled' into believing the motion that was being visually
presented to it, so the viewer wouldn't become nauseous. This seems to be
reiterated in other posts here as well.

Bob Gruen


"David Mullen" <dav...@earthlink.net> wrote in message
news:eyHB8.1890$663.1...@newsread2.prod.itd.earthlink.net...
> > >> Wasn't "When Clones Attack!" shot entirely on HD, and not intercut
with
> film?
> > >> If so, why degrade the quality to make it look more like film?
> >
> > >It is purely an AESTHETIC that the audience is used to & expects.
> >
> > With all due respect, that in itself is a lame excuse. The audience is
> used
> > to and expects this because that's all that's used, and all that's
> available
> > in movie theaters.
>
> When your talking about applying some sort of "film look" processing to a
> digital image, even though this can be considered "degradation", to the
> minds of those who prefer a film look, the attitude isn't that they are
> destroying the image but making it more aesthetically pleasing, so
therefore
> it is not a degradation except in the most technical sense of the word.
>
> Look, DP's use softer lenses or diffusion filters or smoke, etc. all the
> time when shooting and don't really consider this "degradation" but the
> application of photographic style in order to create the proper look and
> mood that the story requires. The point of making movies is NOT to show
off
> a technology but to tell stories. Otherwise, all movies would have to
look
> like Kodak advertisements.
>
> As for whether Lucas should have gone for a more direct, "raw" HD image
> versus processing it to look more like film, one argument is simply that
he
> didn't want the look to drift TOO far from that established by earlier
"Star
> Wars" films.
>
> I also don't consider such image processing in post to "improve" the HD
> image to be "cheating" because no doubt every HD post house will be
offering
> similar processes to clients within a few years. All that ultimately
> matters is if you like the end results anyway, not what tricks you
employed
> to get there.
>
> David Mullen
>
>


Erkki Halkka

unread,
May 11, 2002, 4:29:28 AM5/11/02
to Simon Bunker
Simon Bunker wrote:

> ARGHHH! Please stop saying this! PAL is 25 FRAMES per second, NTSC 29.97

And each of those frames consists of two fields.

Film is frame based, so the only way to get the same amount of motion
information would be to shoot at double the frame rate and compile each
video frame from two film frames.

Erkki Halkka

unread,
May 11, 2002, 4:57:06 AM5/11/02
to dra...@infi.net
Jon Carroll wrote:

> Two fields are displaced from one another by 1/50 or 1/60 of a
> second. Yes, the frames are still sequential, but each field is also
> sequential to the one preceding it. Yes, you can take the fields,
> separate them, and double the resolution vertically to get 60
> effective frames per second, but you do suffer resolution loss.

How can you play that back?

;-)

Okay, actually it could be played back, e.g. on a computer monitor
(which is not interlaced), and yes, one would lose half of the
resolution.

> basically, a fielded display shows things like this: Field 1 frame 1 ,
> field 2 frame 1; field 1 frame 2, field 2 frame 2; field 1 frame 3,
> field 2 frame 3, etc. When the original video has field in it, each
> field is 1/60 of a second (NTSC), of course, divided by the shutter
> rate

Oh yes, I've so far left out the shutter rate for simplicity ;-)

Erkki Halkka

unread,
May 11, 2002, 5:01:50 AM5/11/02
to Erik Harris
Erik Harris wrote:

> How would this result in a loss of resolution? You're interpolating to add
> data, not to take it away. In a sense, it'll "increase" the spatial
> resolution, but since it's interpolated, that increase isn't really "real."

Each field only has 1/2 of the resolution of a full frame.

If video footage is slowed down by 50%, each field is shown for the full
duration of a frame. This is essentially the same thing as is discussed
here. The loss of resolution is clearly visible.


Erkki Halkka

unread,
May 11, 2002, 5:09:35 AM5/11/02
to Simon Bunker
Simon Bunker wrote:

> But PAL is 25fps... 50 fields per second isn't the same.

Yes it is.

> And NTSC's 29.97fps
> is positively evil in all respects.

;-)

Erkki Halkka

unread,
May 11, 2002, 5:16:24 AM5/11/02
to s...@bake.org
Su wrote:
>
> Also, the cost of the CCD alone in good film
> scanners is 3-5K, more than the average prosumer camera.

That's why I called them "fancy" ;-)

> Scanners are far more
> similar to optical printers, with the CCD being in front of the projected
> image. Other models share a little with laser recorders in the way the move
> the film across the CCD.

Yep, I should have talked about telecines rather than film scanners...
and of course I exaggerated a little to get the point through -

Erkki Halkka

unread,
May 11, 2002, 5:16:42 AM5/11/02
to Erik Harris
Erik Harris wrote:

> Maybe his "f" stands for "fields." :-)

Nope, it stood for frames. What I was saying is that you can get 25 (or
29.97) interlaced frames a second by shooting film at 50 (or 59.94)
frames per second and compiling each video frame from two film frames -
each film frame going to one field.

Easiest way to do this is to shoot film at 50 or *roughly* 60 frames a
second, telecining it to video (giving half speed slomo when played
back) and then speeding the footage up by 200% in the post.

I've actually tried this, and the footage looked more or less like
video.

> This brings up another question I've always had. Aside from the semantic
> difference, what IS the difference between a frame and a field?

1. A single field consists of half of a frame, vertically.

2. Each field also represents half of the motion within a frame.

3. If you think of a frame as a digital image, each odd row of pixels
make up the odd field, and the even rows make up the even field.

4. Even though each of the fields only have 1/2 of the vertical
resolution, they are combined so that a frame actually is of full
resolution.

5. The difference between frames and fields based footage is only
visible if the subject is moving.

6. The motion effectively masks out the loss of resolution with moving
footage. (Points 1-4 are sketched in code just below.)
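
Purely illustrative Python, with strings standing in for scanlines:

# A frame's even rows are one field, its odd rows the other; weaving the
# two fields back together restores the full-resolution frame.
frame = ["scanline %d" % r for r in range(480)]
even_field, odd_field = frame[0::2], frame[1::2]   # half the lines each

rebuilt = [None] * len(frame)
rebuilt[0::2], rebuilt[1::2] = even_field, odd_field
assert rebuilt == frame        # weaving is lossless for a static subject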

> Both
> frames and fields are displayed sequentially (frames on progressive displays,
> fields on interlaced displays), and if you do a deinterlace on a 30 frame /
> 60 field per second video in such a way as to retain all of the motion data
> (meaning that both fields are kept and vertically upsampled), isn't the
> result a 60 frame per second video?

Video playback always runs at the standard frame rate of the
television system. Actually, film footage is also played back field by
field -

In PAL, both fields come from the same moment in time (each film frame
is transferred to one video frame), but in NTSC the 24 fps footage is
dispersed so that some of the video frames carry one full film frame, and
some are compiled from the fields of two different film frames.
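
That NTSC dispersal is the classic 3:2 pulldown. A minimal Python sketch of the cadence (ignoring the extra 0.1% slowdown to 23.976):

# 3:2 pulldown: 4 film frames (A,B,C,D) become 5 video frames (10 fields).
def pulldown(film_frames):
    fields = []
    for i, f in enumerate(film_frames):
        fields += [f] * (2 + i % 2)       # cadence: 2, 3, 2, 3, ... fields
    return [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]

print(pulldown(list("ABCD")))
# [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')] - two mixed frames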

> I hope you'll excuse my intentionally inaccurate use of 30 and 60 in place of
> the more cumbersome and hardly-differentt 29.97 and 59.94 figures. :-)

This is what I've been doing all along too ;-)


Erkki Halkka

unread,
May 11, 2002, 6:06:45 AM5/11/02
to Rob Mora
Rob Mora wrote:
>
> Did you see the latest edition of Hidef.org's magazine? American
> Production Services did a head-to-head between the Arri 35m, Panasonic
> DVCPRO HD and the Sony F900 24p. The results were pretty interesting.
> You can download a PDF of the mag at www.hidef.org.

Hey, thanks for the cool link!

You don't happen to know if these examples are available somewhere, with
slightly lower compression? Even with the higher quality pdf, the
compression artifacts make it hard to see any differences... if there
are any.

Su

unread,
May 11, 2002, 11:32:23 AM5/11/02
to
Like I said, I don't think anyone uses a tube... (har)

So many people have steered me away from that machine I have never bothered to
test it. Who's got one that's actually using it for high res scanning
(somewhere besides the floor of NAB?)

MK70

unread,
May 11, 2002, 9:48:07 PM5/11/02
to
>Digital Camera CCDs don't have to clock NEARLY as fast as HDTV. They
>are conceptually similar animals, but different. My own SDTV camera has
>720+H pels by 986+V pels, and uses DSP stuff to improve the picture. I
>tend to agree that something like a 1280x720 CCD shouldn't have to cost
>$25K, but to be able to clock at 60/30Hz (or even 24Hz) is a different animal
>than one that needs to clock out images at 5/sec.

Digital cameras use the same kind of CCD as your SDTV camera.
They have the same sensitivity to light.
A digital camera can use a very high shutter speed at a similar film speed
rating to real film.
The real problem is the data I/O speed.
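
A quick back-of-the-envelope calculation shows the I/O gap. 8 bits per sample and 3 samples per pixel are assumed, before any compression:

# Rough uncompressed data rates, in megabytes per second.
def mb_per_sec(w, h, fps, bytes_per_pixel=3):
    return w * h * bytes_per_pixel * fps / 1e6

print("SD  720x486 at 30i : %6.1f MB/s" % mb_per_sec(720, 486, 30))
print("HD 1920x1080 at 24p: %6.1f MB/s" % mb_per_sec(1920, 1080, 24))
print("Still cam at 5/sec : %6.1f MB/s" % mb_per_sec(2268, 1512, 5))

On those assumptions the HD camcorder has to move roughly three times the data of a fast still camera, and it has to do it continuously.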

CCDs convert light intensity variations into voltage variations -
hence, an analog signal.
An analog-to-digital chip converts the analog signals into digital signals.
DSPs are used to compress the digital signals to a smaller data size, so the
signal can be reduced to accommodate the limited data I/O speed.
A DSP isn't a very expensive component, and multiple DSPs can easily be
combined to increase the processing power.

The electronic parts are cheap, and the technology isn't state of the art.
The main driving force behind the high price tag is the small quantity; it
is simply economics.

>Except, the HDCAM cameras have three CCDs for better color quality,
>and most if not all digital cameras dont, which means those digital
>cmeras have effectively 1/3rd of their pixel resolution in color
>resolution.

Yes, but a digital still camera indeed has a single CCD with better resolution
than one individual CCD of an HDTV camcorder. If you could take the 3 CCD chips
out of 3 digital cameras and put them into a single DV camcorder,
you would have a DV camcorder with HDTV resolution. Remember, a mid-range
digital camera with 2.1 megapixels costs less than $300 now.
(1920x1080 = 2,073,600, or about 2.1 megapixels)

The new Sigma SD9 digital camera with a FOVEON X3 (CMOS) chip is expected to
sell at around $3000. The FOVEON X3 chip combines 3 layers of light
sensors in a single chip, so it has a true (no interpolation) resolution of
2268 x 1512 = 3,429,216, or about 3.4 megapixels. In addition, the FOVEON X3
eliminates the prism that is required to split the light onto 3
individual CCDs, and this will help reduce the cost.

John S. Dyson

unread,
May 12, 2002, 12:29:52 AM5/12/02
to

"MK70" <no...@nomail.com> wrote in message news:abkh63$ct9$1...@news.ctimail.com...

> >Digital Camera CCDs don't have to clock NEARLY as fast as HDTV. They
> >are conceptually similar animals, but different. My own SDTV camera has
> >720+H pels by 986+V pels, and uses DSP stuff to improve the picture. I
> >tend to agree that something like a 1280x720 CCD shouldn't have to cost
> >$25K, but to be able to clock at 60/30Hz (or even 24Hz) is a different animal
> >than one that needs to clock out images at 5/sec.
>
> Digital Cameras are using the same CCD as your SDTV camera.
> They have the same sensitivity of light.
> DC can use a very high shutter speed at the similiar film speed rating as
> real film.
> The real problem is the data I/O speed.
>
They often DO NOT use the same CCD. The larger area CCDs optimized
for still photography have differing design requirements. Just because you
might be able to find a 1920x1080 CCD that is capable of 60FPS, that doesn't
mean that is the same sensor used in your 1920x1080 CCD camera. You'll
pay LOTS MORE for the faster sensor. Take a look at the KODAK sensor
site for examples of the fast and slow sensors.

John

David McCall

unread,
May 12, 2002, 1:03:31 AM5/12/02
to

I hope this guy is wrong, or half of what I thought I knew is wrong.

"MK70" <no...@nomail.com> wrote in message news:abkh63$ct9$1...@news.ctimail.com...

> >Digital Camera CCDs don't have to clock NEARLY as fast as HDTV. They
> >are conceptually similar animals, but different. My own SDTV camera has
> >720+H pels by 986+V pels, and uses DSP stuff to improve the picture. I
> >tend to agree that something like a 1280x720 CCD shouldn't have to cost
> >$25K, but to be able to clock at 60/30Hz (or even 24Hz) is a different animal
> >than one that needs to clock out images at 5/sec.
>

I doubt that clocking is nearly as big a challenge as getting
that many pixels to match decently. There is only
so much compensation you would want to do.

> Digital Cameras are using the same CCD as your SDTV camera.
> They have the same sensitivity of light.
> DC can use a very high shutter speed at the similiar film speed rating as
> real film.
>

Most modern cameras can simulate a high shutter speed, but that
isn't exactly what you need when shooting 24P. A fairly slow shutter
speed is usually better because it allows action to be blurred to
smooth out the motion (motion blur). A higher shutter speed would tend to
enhance the jittery motion that such low frame rates are known for.
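
The arithmetic, for what it's worth - a film-style rotary shutter exposes each frame for shutter_angle/360 of the frame period:

# Per-frame exposure time from frame rate and shutter angle.
def exposure_sec(fps, shutter_angle=180.0):
    return shutter_angle / 360.0 / fps

print("24p, 180 degrees: 1/%.0f s" % (1 / exposure_sec(24)))      # 1/48 s
print("24p,  90 degrees: 1/%.0f s" % (1 / exposure_sec(24, 90)))  # 1/96 s, choppier
print("60i, 180 degrees: 1/%.0f s" % (1 / exposure_sec(60)))      # 1/120 s per field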

> The real problem is the data I/O speed.
>

The real problem is R & D and lack of demand.

> CCD convert light intensity variation into electric voltage variation,
> hence, an analog signal.
> An Analog to Digital chip convert the analog signals into digital signals.
> DSP are used to compress the digital signals into smaller data size, so the
> signal can be reduce to accomodate the limited data I/O speed.
> DSP isn't a very expensive component, and multiple DSP can be easily
> combined to increase the processing power.
>

OK

> The electronic parts are cheap, and the technology isn't state of the art.
> The main driving force behind the high price tag is the small quantity; it
> is simply economics.
>

Sure, among other things.

> >Except, the HDCAM cameras have three CCDs for better color quality,
> >and most if not all digital cameras dont, which means those digital
> >cmeras have effectively 1/3rd of their pixel resolution in color
> >resolution.
>

Most professional video cameras, SDTV or HDTV, as well as the
better consumer cameras, have 3 chips. Only the lesser
consumer cameras, very small cameras (lipstick cams),
and many still cameras use a single chip.
This is true of analog and digital cameras.

> Yes, but Digital Camera indeed has a single CCD with better resolution than
> one individual CCD of the HDTV Camcorder. If you can take 3 CCD chips out of
> 3 Digital Cameras and put them into a single DV Camcorder,
> you will have a DV Camcorder with HDTV resolution. Remember, a mid-range
> Digital Camera with 2.1 Mega Pixels costs less than $300 now.
> (1920x1080 = 2073600 or about 2.1 Mega Pixels)
>

DV is expected to be 720 x 480 pixels. No matter what chips
you put behind the lens, it will still be the same resolution,
in terms of pixel count.

> The new Sigma SD9 Digital Camera with a FOVEON X3 (CMOS) chip is expected to
> be selling at around $3000. The FOVEON X3 chip combines 3 layer of light
> censors in a single chip so their have true (no interpolation) resolution of
> 2268 x 1512 = 3429216 or about 3.4 Mega Pixels. In addition, the FOVEON X3
> will eliminate the prism that is required to split the light and image on 3
> individual CCD, and this will help reduce the cost.
>

That sounds nice.


Erik Harris

unread,
May 12, 2002, 12:48:31 PM5/12/02
to
On Fri, 10 May 2002 17:59:54 -0400, "Robert Gruen" <bgr...@mindspring.com>
wrote:

>I'm probably the least qualified to chime in on this topic but....
> One of the things that I am mastering with my Cannon SLR is the depth of

You're most certainly more qualified than me to chime in on this particular
topic, but...

>Finally, in terms of frame rates, human brains process visual images at
>22 fps(?), but we can see 'jumps' when gross camera panning motions happen.

...I've been studying and working in image-processing type fields for a
number of years now, and I've never heard this before. Where on Earth did
you hear that the HVS processes motion at 22fps?? There may be some
legitimate scientific work to back that claim up, but I've never heard of it,
and all of the estimates I've heard are much higher. 22fps seems too high to
serve as a _minimum_ requirement for fooling the eye into seeing motion as
opposed to a series of pictures, but is far too low to serve as any kind of
maximum effective frame rate.

The mere fact that "we can see 'jumps' when gross camera panning motion
happens" at frame rates anywhere near your supposed optimal rate suggests
that you're wrong. Or, if you're right, it would suggest that we should see
the same thing when turning our heads or surveying a scene in real life.
That's not the case, though.

>I recall once being told that at near 22 fps speeds the brain was not 'fooled'
>into believing the motion that was being visually presented to it, so the viewer
>wouldn't become nauseous.

Which further tells you that the HVS can't be processing motion at 22fps, if
this is too slow to fool the brain into believing the motion. But on the
other hand, NOT fooling the brain totally defeats the purpose of video/motion
pictues. Dunno about you, but I want to go see the Star Wars Ep II MOVIE,
not a Star Wars Ep II slideshow. :-) And the only real difference between a
movie and a slideshow is that with a movie, my brain is fooled into thinking
that it's seeing motion, and with a slideshow, my brain _knows_ that it's
just seeing a series of pictures. (I suppose I should say HVS, since even
with a movie, I understand the fact that it's a series of pictures, but the
HVS sees it as motion)

Erik Harris

unread,
May 12, 2002, 12:52:50 PM5/12/02
to
On Sat, 11 May 2002 12:01:50 +0300, Erkki Halkka <erkki....@kolumbus.fi>
wrote:

>Erik Harris wrote:

>> How would this result in a loss of resolution? You're interpolating to add
>> data, not to take it away. In a sense, it'll "increase" the spatial
>> resolution, but since it's interpolated, that increase isn't really "real."

>Each field only has 1/2 of the resolution of a full frame.

That's true. Each field only _HAS_ 1/2 of the resolution of a full frame,
meaning that the other half _ISN'T_THERE_. That is to say that when you
convert interlaced at 30fps (60 fields/sec) to progressive at 60fps, you're
not losing any resolution, because the extra vertical resolution doesn't
exist in the original interlaced picture. It's not there to be lost!

>If video footage is slowed down by 50%, each field is shown for the full
>duration of a frame. This is essentially the same thing as is discussed
>here. The loss of resolution is clearly visible.

No, this is not what I was talking about (slowing down? I never said
anything about slow motion). And furthermore, this in itself doesn't result
in a loss of resolution either. It mucks up the picture visually because you
see the temporally adjacent fields interlaced together at the same time, but
it doesn't lose any resolution. But that's not the same as deinterlacing and
outputting at 60fps, which is what converting to progressive scan (the right
way) does, and what you claimed results in a loss of resolution.

Erik Harris

unread,
May 12, 2002, 12:54:16 PM5/12/02
to
On Sat, 11 May 2002 11:57:06 +0300, Erkki Halkka <erkki....@kolumbus.fi>
wrote:

>> separate them, and double the resolution vertically to get 60
>> effective frames per second, but you do suffer resolution loss.

>How can you play that back?

Progressive scan television (which, at least according to the one person here
who answered my progressive scan question earlier, operates at 60 fps) or
computer monitor.

>Okay, actually it could be played back, i.e. on a computer monitor
>(which are not interlaced), and yes, one would loose half of the
>resolution.

No, it would not result in any resolution loss.

Steve Kraus

unread,
May 12, 2002, 1:10:04 PM5/12/02
to
> The new Sigma SD9 Digital Camera with a FOVEON X3 (CMOS) chip is expected to
> be selling at around $3000. The FOVEON X3 chip combines 3 layer of light
> censors in a single chip so their have true (no interpolation) resolution of
> 2268 x 1512 = 3429216 or about 3.4 Mega Pixels. In addition, the FOVEON X3
> will eliminate the prism that is required to split the light and image on 3
> individual CCD, and this will help reduce the cost.

This will also solve one of the issues regarding use of the HD cameras
for cinema. Both "Jackpot" and "SW:EP II" were released in the 2.39 : 1
anamorphic Scope format. The native format of these HD cameras is 1.78
: 1 (what video people, who apparently cannot divide, refer to as 16 :
9). Not a shape that was ever a cinema standard! Panavision wanted to
fit the camera with 1.34 squeeze ratio anamorphic lenses but because of
the need for a long back focal length that "reaches" back through the
prism block it got to be a mess with relay optics and the end result was
that resolution was being compromised. Note that while film has
substantially more rez than HD video (really now, this was intended as a
consumer format!) the small size of the CCDs jams such resolution as it
has into a very small area which taxes the optical characteristics of
the lenses to the limit. Anyway, they gave up on using anamorphic
optics and ended up achieving the widescreen shape simply by cropping
the capture data. Thus they were only using 1920 by (approx) 800 pixels
making for some pretty sad resolution.
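
The cropped pixel count falls straight out of the aspect ratios - an illustrative check:

# Active lines left after cropping a 16:9 HD raster to 2.39:1 'scope.
full_w, full_h = 1920, 1080
scope_h = int(round(full_w / 2.39))          # ~803 active lines
print("Scope crop: %d x %d" % (full_w, scope_h))
print("Raster used: %.0f%%" % (100.0 * scope_h / full_h))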

Eliminating the prism on future camera designs will enable a great deal
more flexibility in lenses including using anamorphics. If they would
design a chip with the same area as a 35mm film frame then lenses would
be directly interchangeable.

They really need to start the design of an Ultra Definition format
specifically for cinema instead of using a glorified consumer format
with an odd frame shape that has never been seen in any theatre. 1920
pixels across is simply not enough. 3000 might do the trick but 4000
ought to be the goal.

Erkki Halkka

unread,
May 12, 2002, 6:11:16 PM5/12/02
to Erik Harris
Erik Harris wrote:

> That's true. Each field only _HAS_ 1/2 of the resolution of a full frame,
> meaning that the other half _ISN'T_THERE_. That is to say that when you
> convert interlaced at 30fps (60 fields/sec) to progressive at 60fps, you're
> not losing any resolution, because the extra vertical resolution doesn't
> exist in the original interlaced picture. It's not there to be lost!

Actually, it is. Remember that in interlaced footage, each field is
shown only in either odd or even scanlines. Two fields will create a
full resolution frame when combined.

Now, if you take one field and make it fill a full progressive scan
frame, you only have half of the vertical resolution available (the same
field will fill both odd and even scanlines), and interpolating will not
give back that missing half.


Slowing down by 50% was just a real-world example of having one
field/frame.

I was talking about slowing down footage because I had never heard of
60fps progressive scan televisions (mentioned in your other post)...
what on earth are those used for?


Erkki Halkka

unread,
May 12, 2002, 6:13:05 PM5/12/02
to Erik Harris
Erik Harris wrote:
>
> On Sat, 11 May 2002 11:57:06 +0300, Erkki Halkka <erkki....@kolumbus.fi>
> wrote:
>
> >> separate them, and double the resolution vertically to get 60
> >> effective frames per second, but you do suffer resolution loss.
>
> >How can you play that back?
>
> Progressive scan television (which, at least according to the one person here
> who answered my progressive scan question earlier, operates at 60 fps) or
> computer monitor.

I didn't know that 60 fps televisions exist... what are they used for?



> >Okay, actually it could be played back, i.e. on a computer monitor
> >(which are not interlaced), and yes, one would loose half of the
> >resolution.
>
> No, it would not result in any resolution loss.

Yes it would, see my other post.


Simon Bunker

unread,
May 12, 2002, 8:34:50 PM5/12/02
to
I guess in a perfect world we would have a CCD at 2048 x 1556 pixels -
standard 2K images which most people are familiar dealing with (full app,
flat) and can go straight out to a film recorder. There isn't usually a need
to go to 4K unless it is a background matte painting that is going to come
under close scrutiny or you are going to be shooting out onto 70mm. Plus
most post houses would freak if they got all of their material as 4K -
pipelines usually are not designed to handle that much information!

Quite a few films are shot flat and then cropped to Cinemascope
(Panavision - whatever you want to call it) at the standard 2.35:1 aspect
ratio. Going down from the 1.31 Academy aspect ratio means you crop the
image to 2048 x 870 pixels, so compared with HD you are only losing 128
horizontal and 70 vertical pixels (although it is nice if you don't have
to!). You have to remember film in still cameras passes through the gate
horizontally (same as VistaVision cameras), whereas normal film cameras move
the film vertically, so you only have to deal with a 24mm width (minus sound
track) as opposed to 36mm.

Another thing that has come out of digital stills cameras is the use of 35mm
"equivalant" lenses to account for the shorter focal lengths needed, but due
to the expense of optics it would be nice not to have to go out and learn a
completely new set. If you base work on field of view rather than focal
length then things should work out the same, but as I do not work in
production I do not know if this change would be available at reasonable
cost or if these lenses are developed enough to not introduce much
distortion. This is analogous to talking about T-stops or f/numbers (almost the
same thing... depends how anal you want to get :o), which are independent of
the focal length (and in fact derived from it and the diameter of the
optic).

It would be nice to have 35mm-sized CCD arrays, but then moving the charge
off the chip fast enough whilst the shutter is closed is a stumbling block.
You either have to have very small chips or
interlace the pixels with a buffer, which also affects resolution.

By the way, the Foveon chips look like great technology - it will be nice when
they get into production. Also, you could get higher frame rates and better
resolution with something like Fuji's hexagonal CCD arrays (used in still
cameras) - although the chip would more likely get smaller!

The other nice part about digital cinema is DLP technology. I don't think
the current resolutions go very high (couldn't find any good figures outside
of home cinema), but I liked the look of the two films I have seen projected
with it - the images are very crisp and contrasty.

By the way doesn't the 16:9 format come from a compromise between the 2.35:1
and 1.85:1 widescreen formats??

Simon

--
http://www.rendermania.com/
UIN 11123737

"Steve Kraus" <scr...@BLOCKERfilmteknik.com> wrote in message
news:3CDEA2...@BLOCKERfilmteknik.com...

Su

unread,
May 13, 2002, 1:35:46 AM5/13/02
to
16:9 is essentially the same as 1.85. Yes, 16/9=1.78, but only people with
nothing better to do are going to argue about it (when they are not arguing about
the difference between 2.35 and 2.39)

You are right about the small CCD. That is a design flaw, IMHO.

As far as "sad resolution" goes, it's no worse than what one gets with super 35,
(which is 871 pixels) given the extra optical step, so the end result is not far
off from the 817 that you get in HD. Films shot in HD have the benefit of being
shot flat, but recorded squeezed so as to avoid the indignity of further
degradation by yet another piece of glass.

Despite what people are saying on this NG there are plenty of interviews floating
around the web (see
http://www.cinematographer.com/article/mainv/0,7220,30827,00.html ) about how
great HD worked on EP2. Who really knows how much of this is hype? I do know
many many people who have seen EP2 and not a one of them has complained about
image quality. I'm sure we'll all have our own opinions next week, anyway.

I'm on a project now that may be shot on film or HD, but will be released at a
2.35 aspect. Even if (when) we shoot on film the squeeze will be handled
digitally and the result shot directly to IP.

4K is overkill for current film stocks, for electronic projection, for
cost-effective production, and for what people need to see. imho

su

Rob Mora

unread,
May 13, 2002, 9:52:32 AM5/13/02
to
Erkki Halkka <erkki....@kolumbus.fi> wrote in message news:<3CDCED35...@kolumbus.fi>...

> You don't happen to know if these examples are available somewhere, with
> slightly lower compression? Even with the higher quality pdf, the
> compression artifacts make it hard to see any differencies... if there
> are any.


Not sure. You can, however, sign up for their free print version. I
have it and it shows what you're talking about.

Rob

Erik Harris

unread,
May 13, 2002, 8:36:41 PM5/13/02
to
On Mon, 13 May 2002 01:11:16 +0300, Erkki Halkka <erkki....@kolumbus.fi>
wrote:

>Actually, it is. Remember that in interlaced footage, each field is
>shown only in either odd or even scanlines. Two fields will create a
>full resolution frame when combined.

>Now, if you take one field and make it fill a full progressive scan
>frame, you only have half of the vertical resolution available (the same
>field will fill both odd and even scanlines), and interpolating will not
>give back that missing half.

That's a good point, though it only applies to still areas of the screen.
The information is still there, though, it's just in the adjacent frame. I
don't know that at 60fps, you'd be able to notice the "loss" on anything but
a "video photo," and even that might not be noticeable at 60fps.

It'd be an interesting experiment, I guess.

>I was talking about slowing down footage because i had never heard of
>60fps progressive scan televisions (mentioned in your other post)...
>what in earth are those used for?

For watching television, silly. :) Many EDTV and HDTV formats are
progressive (there's the low end 480p format that I believe Fox is using,
which I believe is referred to as EDTV, not good enough to be considered
Hi-Def, the 720p format, and the 1080i format, among others not as commonly
used, and probably others outside the US). Many will also convert interlaced
signals to progressive, which prompted my question earlier about progressive
scan TVs' frame rates (to which only one person knew an answer, so hopefully
it's accurate). Interlaced TV's are yesterday's technology, get with the
times! :-)

Erik Harris

unread,
May 13, 2002, 8:38:35 PM5/13/02
to
On Mon, 13 May 2002 01:13:11 +0300, Erkki Halkka <erkki....@kolumbus.fi>
wrote:

>> >Okay, actually it could be played back, e.g. on a computer monitor
>> >(which is not interlaced), and yes, one would lose half of the
>> >resolution.

>> No, it would not result in any resolution loss.

>Yes it would, see my other post.

It's a good point, though I suppose technically it's more of a "displacement"
than a loss, since no information is LOST, but is reproduced in a different
frame (associated with the different field).

My main reason for replying to this message, though, is to point out that
you're double-posting. Every one of your posts is coming through twice. I
noticed it last time around, but assumed it was a one-time fluke. But you're
doing it this time, too.
