On Thursday, 17 May 2018 09:06:47 UTC-4, Zellyn wrote:
> Thomas,
>
> While I have your attention and obvious NTSC-decoding knowledge, I was curious about one thing in particular in the OpenEmulator NTSC rendering. It actually renders the monochrome input canvas to a decoded canvas of the same size. I was surprised by that. I expected the subtle fringing etc. to require more resolution than one output pixel per input pixel. I guess the scanlines, shadow mask, chroma- and luma-bandwidth blurring obscure it enough to smother the subtleties. I was curious on your opinions, though.
I'm self-taught, over the last few years, so may or may not be trustworthy. That disclaimer aside: four samples per colour cycle is, by coincidence, the recommended capture rate for digital preservation of analogue content.
If you'll accept the hand-waving explanation: Nyquist-Shannon says that to preserve fully the content of a signal of bandwidth n Hz with discrete samples, you need to sample at a rate of 2n Hz. The colour part of a composite signal is really two signals of n Hz added together, though, one offset by 90 degrees from the other. So you need to sample two streams at 2n Hz each, taking them in turn.
Which means sampling at 4n Hz. The rate the Apple produces pixels at.
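To make that concrete, here's a minimal sketch (mine, not code from either emulator) of quadrature demodulation at exactly four samples per colour cycle. At that rate the cos/sin reference carriers collapse to {1, 0, -1, 0} and {0, 1, 0, -1}, so recovering the two chroma components amounts to picking samples with alternating signs:

// Minimal sketch, not taken from either emulator: demodulating the two
// quadrature colour components from composite samples taken at exactly
// four samples per colour cycle.
#include <cstddef>
#include <vector>

struct IQ { float i, q; };   // the two quadrature chroma components

// `samples` holds composite (luma + chroma) values; `phase` is the
// subcarrier phase of samples[0] in quarter-cycles (0-3), as recovered
// from the colour burst.
std::vector<IQ> demodulate(const std::vector<float> &samples, int phase) {
    static const float cos4[4] = {1.0f, 0.0f, -1.0f, 0.0f};
    static const float sin4[4] = {0.0f, 1.0f, 0.0f, -1.0f};

    std::vector<IQ> out(samples.size());
    for (std::size_t n = 0; n < samples.size(); ++n) {
        const int p = (phase + static_cast<int>(n)) & 3;
        // Each product would then be low-pass filtered to separate the
        // chroma component from luma; that filtering is omitted here.
        out[n].i = samples[n] * cos4[p];
        out[n].q = samples[n] * sin4[p];
    }
    return out;
}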
> One other thing I was curious about: in your emulator, you're actually decoding the signal as a continuous input, right? Are you doing horizontal and vertical sync with front-porch etc. and color sync by locking to an emulator-produced reference color signal? Are you using the GPU for all that, or just pumping it through some kind of more realtime output filter? I wouldn't know how to go about using a GPU for a continuous signal: OpenEmulator definitely treats the input as a two-dimensional texture, even though the filtering is all horizontal.
This is an attempted summary of a much longer, boring version:
Machines produce a continuous signal, but it's described as a series of segments: "I output sync for n units of time", "I output data for m units of time", etc. They're also allowed just to say "I output a colour burst of amplitude X and phase Y". The rule is that any description is acceptable as long as it's an accurate but more compact description of the real signal. So the CRT has to classify syncs itself, because there's only one sync level, but it usually gets away with colour bursts by just looking at where it thinks a burst would need to be and either seeing one already identified or seeing nothing. Which saves some time.
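Purely as an illustration, with made-up names rather than anything from a real emulator, those compact segment descriptions might look something like this:

#include <cstdint>

enum class SegmentType {
    Sync,          // "I output sync for `duration` units of time"
    Data,          // "I output data for `duration` units of time"
    ColourBurst    // "I output a colour burst of amplitude X and phase Y"
};

struct Segment {
    SegmentType type;
    uint32_t duration;        // length of the segment, in the machine's time units
    float amplitude, phase;   // meaningful only for ColourBurst
    const uint8_t *data;      // meaningful only for Data
};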
But otherwise, the CPU tracks syncs and bursts and so on, and posts to the GPU a series of runs: a start (x, y), an end (x, y), the colour subcarrier amplitude and phase at the start of that run or else the fact that no colour burst was detected, and a reference to the data that comprises the run.
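Again as an illustration only, with hypothetical field names of my own choosing, a run record of that shape might be:

#include <cstdint>

struct Run {
    float start_x, start_y;    // where the run begins on the notional raster
    float end_x, end_y;        // where it ends
    bool has_colour_burst;     // false if no burst was detected for this run
    float burst_amplitude;     // subcarrier amplitude at the start of the run
    float burst_phase;         // subcarrier phase at the start of the run
    uint32_t data_offset;      // reference into the shared buffer of source data
    uint32_t data_length;      // number of source bytes the run covers
};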
In the Apple's case the data is, naturally enough, 1-bit: seven pixels per byte, in double high resolution. So the GPU converts that to full composite, decodes the colours, and paints between the specified coordinates.
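A rough CPU-side illustration of that expansion step, with assumed names (the real version would live in a shader): each byte carries seven 1-bit pixels, emitted least significant bit first, which become on/off composite levels ready for colour decoding.

#include <cstdint>
#include <vector>

std::vector<float> expand_to_composite(const std::vector<uint8_t> &bytes) {
    std::vector<float> composite;
    composite.reserve(bytes.size() * 7);
    for (uint8_t b : bytes) {
        // Seven pixels per byte, least significant bit first.
        for (int bit = 0; bit < 7; ++bit)
            composite.push_back(((b >> bit) & 1) ? 1.0f : 0.0f);
    }
    return composite;
}

The result could then be fed through something like the demodulate() sketch above to recover the colours before painting between the run's start and end coordinates.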
I don't think you could helpfully use the GPU for more than that, because tracking syncs and bursts is a purely sequential process rather than a particularly parallel one.
I also don't think you gain anything, for a whole Apple II frame, from doing it this way versus just processing the whole frame already composed in 2D. It's really more useful for machines like the Atari 2600, ZX80/81, Amstrad CPC et al., where the programmer has direct control over sync placement and timing.