I am trying to work out a formula for constructing color from artifact
color on Apple II video. For now I can only concentrate on three pixels at
a time on one horizontal line. Twelve cycles of the 14MHz clock display 12
monochrome dots, while each of the corresponding three 3.58MHz color
reference cycles covers color on a group of four dots.
A group of four dots corresponds to one phase shift, with an angle between
0 and 360 degrees. From the phase shift we can obtain the number of degrees.
What I still need is a formula that translates that number of degrees into
RGB. There is the YIQ formula, but it does not take a number of degrees as
input, so it can't be used directly to convert to RGB. I and Q are like
R - Y and B - Y, but there is no green signal; G - Y has to be derived by
combining R - Y and B - Y. R - Y, G - Y, and B - Y are the signals sent to
the electron guns for light. With a 10X to 30X magnifier you may be able to
see the individual R, G, and B dot lights through the monitor glass.
I thought the RGB signals might be used to produce the individual RGB
pixels directly, without converting through the YIQ formula.
Do you know a formula, using sine/cosine and the number of degrees, that
translates into RGB signals? I have spent time searching Google while
trying to find information on the 3.58MHz color reference. I did find one
such formula, but it only covers the electron guns; it does not explain how
a formula can be applied to the phase shift and its number of degrees.
Please let me know if you know the formula.
Bryan Parkoff
> Do you know a formula, using sine/cosine and the number of degrees, that
> translates into RGB signals?
I believe this may be what you're after!?!
<http://www.ntsc-tv.com/images/tv/vector.gif>
Regards,
--
| Mark McDougall | "Electrical Engineers do it
| <http://members.iinet.net.au/~msmcdoug> | with less resistance!"
Mark,
Thank you for the information. I have seen this before, but it does not
answer my question. I want to know how phase shift produces color. For
example, 16 colors below.
0000   0  Black        No Luminance
1000   1  Deep Red       0 Degrees
0100   2  Dark Blue     90 Degrees
1100   3  Purple        45 Degrees
0010   4  Dark Green   180 Degrees
1010   5  Dark Gray    50% Luminance
0110   6  Medium Blue  135 Degrees
1110   7  Light Blue    90 Degrees, Luminance of "3"
0001   8  Brown        270 Degrees
1001   9  Orange       315 Degrees, Luminance of "2"
0101  10  Light Gray   50% Luminance
1101  11  Pink           0 Degrees, Luminance of "3"
0011  12  Green        225 Degrees
1011  13  Yellow       270 Degrees, Luminance of "3"
0111  14  Aquamarine   180 Degrees, Luminance of "3"
1111  15  White        100% Luminance
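One way to sanity-check the degrees in this table is to treat each 4-bit pattern as one cycle of the 3.58MHz subcarrier sampled at 14MHz (four samples per cycle) and read luminance and phase off the fundamental of its DFT. A minimal Python sketch; the function name and the convention that bit position 0 sits at 0 degrees are my own assumptions, not Apple's:

```python
import cmath
import math

def subcarrier(bits):
    """Treat a 4-bit pattern as one cycle of the 3.58 MHz subcarrier
    sampled at 14.318 MHz (4 samples per cycle).  Returns (luminance,
    phase_deg); phase_deg is None when the pattern has no subcarrier
    component at all (a gray)."""
    n = len(bits)
    # First DFT bin = amplitude and phase of the fundamental (the chroma vector)
    c = sum(b * cmath.exp(-2j * math.pi * k / n) for k, b in enumerate(bits))
    luma = sum(bits) / n
    if abs(c) < 1e-9:
        return luma, None
    # Negate so a lone 1 bit moving right through the group advances the phase
    return luma, round(-math.degrees(cmath.phase(c))) % 360

print(subcarrier([1, 0, 0, 0]))  # (0.25, 0)    Deep Red
print(subcarrier([1, 1, 0, 0]))  # (0.5, 45)    Purple
print(subcarrier([1, 0, 0, 1]))  # (0.5, 315)   Orange
print(subcarrier([1, 0, 1, 0]))  # (0.5, None)  Dark Gray
```

Under that convention the angles in the table come out self-consistent: each pattern's fundamental phase matches the listed degrees.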
Think about Deep Red, Dark Blue, and Purple. Say you are displaying Deep
Red and want Purple instead. You have to add Dark Blue to the screen; Deep
Red and Dark Blue are joined together, like an "OR", to become Purple:
Deep Red OR Dark Blue = Purple.
Another example: you want Light Blue, so you have to add luminance on top
of Purple; then Purple becomes Light Blue. Purple OR Dark Green (with
luminance) = Light Blue.
One more example: you want to display Orange, which contains Deep Red and
Brown. Deep Red OR Brown = Orange. You can see two artifact colors in one
pixel (four dots, or pulses, per phase shift). The two artifact colors are
Deep Red and Orange.
The remaining question is why Deep Red appears on the left of this pixel.
The answer might be the two zero bits between the one bits, because of the
lack of luminance.
This is why I am trying to translate the number of degrees into the 16
individual RGB pixel colors.
Do you see what I am trying to clarify in detail? Do you know the
formula or equation?
Bryan Parkoff
> Think about Deep Red, Dark Blue, and Purple. Say you are displaying Deep
> Red and want Purple instead. You have to add Dark Blue to the screen; Deep
> Red and Dark Blue are joined together, like an "OR", to become Purple:
> Deep Red OR Dark Blue = Purple.
I think your logic is not quite valid for a system based on luminance
and chrominance. You can think this way for RGB because it is 'additive'
- adding and subtracting components of R,G & B give you colours ranging
from black to white.
From my limited understanding, the phase shift as you move around the
colour wheel is directly related to wavelength. You can't simply add
phase offsets together to give the colour that corresponds to the sum of
the angles.
> This is why I am trying to translate the number of degrees into the 16
> individual RGB pixel colors.
> Do you see what I am trying to clarify in detail? Do you know the
> formula or equation?
Have you read this..
<http://www.ee.washington.edu/conselec/CE/kuhn/ntsc/95x4.htm>
It should have the information you need.
eg. Looking at the Vector Scope reference I linked in my previous post...
For RED, Q = 0.21, I = 0.6.
Theta = atan(0.21/0.6) = 19.3 deg.
Now I axis is 57 deg from the burst = 123 deg.
So RED = 123 - 19.3 = 103.7 deg.
Which corresponds to RED on the "VECTOR SCOPE".
Similarly for yellow... Q = 0.21-0.52 = -0.31, I = 0.6-0.28 = 0.32
Theta = atan(-0.31/0.32) = -44 deg.
Yellow = 123 - (-44) = 167 deg.
etc.
Regards,
--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266
<http://www.poynton.com/ColorFAQ.html>
> The remaining question is why Deep Red appears on the left of this pixel.
> The answer might be the two zero bits between the one bits, because of the
> lack of luminance.
Here's what *I* understand about colour artifacting, although I've never
studied the specifics on the Apple 2 in any detail. So take this with a
grain of salt..
As you know, colour (chrominance) is encoded as a phase shift from the
colour burst reference modulated on top of the monochrome signal (luminance).
In an ideal world, this colour component might consist of a sine wave whose
phase can change instantaneously for each 'pixel', and by any multiple of an
infinitesimal amount. Naturally, circuits have bandwidth limits as does the
transmission spectrum so the phase changes have a finite 'resolution' and
the phase can't actually change instantaneously. Obviously, these
limitations still allow a reasonable quality picture to be displayed.
The Apple II (like other computers of that era) generates what should be
analogue video signals using digital approximations. Indeed, the very reason
computers have discrete 'pixels' is a by-product of this fact, whereas a TV
picture raster line has no such horizontal delineation.
You can, for example, crudely approximate a sine wave using a simple square
wave of the same frequency. If that square wave is passed through a low-pass
filter, the higher frequency components are filtered out and the resulting
output more closely resembles a sine wave.
Now, you can't change the phase of a sine wave using a square wave of the
same frequency. But if you choose, for example, a frequency 4 times higher,
and approximate the sine wave using 4 consecutive 1's followed by 4 0's,
then the resulting square wave would be exactly the same, but you can now
vary the phase by +/- 45 degrees by inserting or removing an extra 1 or 0
into the stream.
It gets more complicated when you start moving away from 4 consecutive 1's
and 0's. For example, if you toggled 1's and 0's every two clocks (rather
than 4), then you'd think that you've simply doubled the frequency of the
colour signal. However, colour is encoded as a phase shift - it's not
frequency modulated - so that, and the fact that the decoder is band limited
- means the decoder 'sees' the 'double frequency' as a constantly changing
phase. You'd no doubt end up with groups of repeating pixel colours.
The resolution of your 'clock' also limits how *quickly* you can
encode phase changes. If your stream changes from 0 to 1, the value is held
at 1 for the entire pixel, and the decoder can't 'see' how the waveform is
going to vary in future, so your next pixel is limited in some way by the
colour of the previous pixel. Depending on your clock resolution, it may
take 2 or more pixels to get from one colour to the next.
I don't know the specifics of the frequencies involved on the Apple 2, but I
*suspect* the artifacting is a result of being able to change the phase of
the signal by +/- 90 degrees only? Can anyone confirm?
Hopefully I haven't sold you a crock of sh*t here... I'm pretty sure it's
the gist of the mechanism if not 100% accurate.
So if you want to understand how to 'emulate' artifacting I think you need
to understand both (1) how colour is encoded on NTSC/PAL and (2) how the
apple generates the video signal.
If anyone knows better, please chime in!
Hello Mark,
Thank you again. Yes, I had already read that explanation before you
provided the address; it is complicated for me to understand. I do
understand how color on a circle works, as opposed to color in a square.
Think of color in a square: that is truly RGB. The darkest colors are at
the bottom of the square and the brightest at the top. You fill in 0-255
values for the R, G, and B signals, moving up the square from bottom to
top until you get the correct color.
Now think of color on a circle, which is what I mean by Deep Red, Dark
Blue, Dark Green, and Brown. The Apple IIgs emulates a color on the
circle, to display HGR and DHGR, using a phase shift of 0 to 360 degrees.
Place a one bit in position 0 and zero bits in positions 1 through 3 of
DHGR: the phase shift is 0 degrees and it displays a Deep Red pixel. That
is where the arrow on the color circle points to 0 degrees.
If you want Dark Blue instead of Deep Red, you move the arrow from 0
degrees to 90 degrees on the color circle by clearing the bit in position 0
and setting the bit in position 1, so the pattern is 0100 instead of 1000.
It is now 90 degrees instead of 0, and the Deep Red pixel becomes Dark Blue.
The same process of clearing and setting bits continues; it is as if a
single bit moves from phase shift to phase shift: 1000, 0100, 0010, 0001,
1000. Dark Blue changes to Dark Green at 180 degrees, and Dark Green to
Brown at 270 degrees. Then Brown changes back to Deep Red at 360 degrees,
which points at the same place as 0 degrees. It keeps running around the
color circle like a wheel, with the four colors rolling along the
horizontal line. The same applies to the group Purple, Blue, Green, and
Orange; to the group Dark Gray and Light Gray; and to the group Light Blue,
Aquamarine, Yellow, and Pink.
Do you know what I mean?
>> This is why I am trying to translate the number of degrees into the 16
>> individual RGB pixel colors.
>> Do you see what I am trying to clarify in detail? Do you know the
>> formula or equation?
>
> Have you read this..
> <http://www.ee.washington.edu/conselec/CE/kuhn/ntsc/95x4.htm>
>
> It should have the information you need.
Do you agree with the table of 16 colors, 4-bit patterns, and degrees
that I provided? Is it wrong?
> eg. Looking at the Vector Scope reference I link in my previous post...
> For RED, Q = 0.21, I = 0.6.
> Theta = atan(0.21/0.6) = 19.3 deg.
> Now I axis is 57 deg from the burst = 123 deg.
> So RED = 123 - 19.3 = 103.7 deg.
> Which corresponds to RED on the "VECTOR SCOPE".
> Similarly for yellow... Q=0.21-0.52=-0.31, i=0.6-0.28=0.32
> Theta = atan(-0.31/0.32) = -44 deg.
> Yellow = 123 - (-44) = 167 deg.
> etc.
How did you get the values of Q and I? Did you guess the values by
trying to pick the correct degree for the color? For example, say I is
orange and Q is green, and you intend to display an orange pixel. Use a
10X to 30X magnifier on the NTSC monitor where the orange pixel is. You
may be able to see the red, green, and blue dot lights behind the glass of
the NTSC monitor, so you can see stripes of orange and green. You might
notice Deep Red on the left.
You intend to translate a group of four pulses (one phase shift) into
four individual RGB pixels. How, without a formula? It is possible to
guess by translating the orange stripe and green stripe into two orange
pixels and two brown pixels of RGB. I am curious how Apple Computer, Inc.
derives RGB DHGR from NTSC. Using my oscilloscope I captured the entire
DHGR pixel structure into a DHGR table. It was a success, but it has an 8%
error rate because of color misalignment in the pixel structure. The error
is not in my design; it is in the way the logic works.
The only way to solve the 8% error is to follow the phase shift, using
the color circle and a degree-to-RGB formula. If that is impossible, I
will add a correction field to the DHGR table that detects the error and
corrects the color misalignment in the pixel structure automatically. My
screenshots of DHGR pictures in Microsoft Paint look 100% identical to the
true Apple IIgs screen on a VGA monitor, unlike KEGS32 and AppleWin.
Bryan Parkoff
Hello Mark again,
The Apple II family uses a crystal oscillator at 14.31818 MHz
(14,318,180 Hz). The 14MHz clock divided by 4 gives the 3.58MHz color
reference. A load/shift register transmits the one and zero bits serially
over the cable to the NTSC monitor at 14MHz. The NTSC monitor displays
monochrome pixels because it sees the pulse-by-pulse (bit-by-bit) stream as
luminance. When the 3.58MHz color reference is active, it catches each
group of four pulses and paints one of the 16 color pixels.
>
> Hopefully I haven't sold you a crock of sh*t here... I'm pretty sure it's
> the gist of the mechanism if not 100% accurate.
>
> So if you want to understand how to 'emulate' artifacting I think you need
> to understand both (1) how colour is encoded on NTSC/PAL and (2) how the
> apple generates the video signal.
Yes, I understand Apple II video perfectly; I studied it from the
Understanding the Apple II manual. The problem is that I have a hard time
understanding YIQ, the color circle, and the other formulas when I try to
convert or translate NTSC pixels to RGB pixels.
Bryan Parkoff
> Think of color in a square: that is truly RGB. The darkest colors are at
> the bottom of the square and the brightest at the top. You fill in 0-255
> values for the R, G, and B signals, moving up the square from bottom to
> top until you get the correct color.
An RGB colour space incorporates both chrominance and luminance, which is
why you have black and white included in the space. In YIQ colour space,
the 'wheel' mapped out by I & Q describes only the chrominance - it does
*not* include the luminance. Luminance (Y) represents the "power" or
intensity of the light and is encoded not in the phase shift but in the
amplitude of the grey-scale signal.
That's why there's no black or white on the "VECTOR SCOPE".
> Now think of color on a circle, which is what I mean by Deep Red, Dark
> Blue, Dark Green, and Brown. The Apple IIgs emulates a color on the
> circle, to display HGR and DHGR, using a phase shift of 0 to 360 degrees.
> Place a one bit in position 0 and zero bits in positions 1 through 3 of
> DHGR: the phase shift is 0 degrees and it displays a Deep Red pixel. That
> is where the arrow on the color circle points to 0 degrees.
I think you've sort of got the idea, but it's twisted a little. A group
of 4 dots does not represent a "phase shift". Phase shift is relative to
the colour burst, and is a property of a single pixel, not a group of
pixels. I *think* what may be confusing is that the Apple can only
adjust the phase of the signal by +/- 90 degrees for each bit, so it
would take 4 shifts (4 pixels) to move all the way around the colour
wheel. Of course, you can go backwards as well, so at most you need 2
shifts to get to any desired pixel value.
> The same process of clearing and setting bits continues; it is as if a
> single bit moves from phase shift to phase shift: 1000, 0100, 0010, 0001,
> 1000.
Look at the *shape* of the waveform produced by a bitstream.
eg. Say we have a colour burst encoded digitally at 4X the burst
frequency. It may look like this...
0011001100110011
So it takes 4 bits for one cycle, which looks more like a sine wave
after it's passed through a low-pass filter.
Now we choose a colour exactly *in phase* with the burst (which is a bit
confusing, because the I axis is defined as being 57 deg from the burst,
so our colour would be whatever is represented at -57 deg on the vector
scope)... the encoding would be:
burst: 00110011001100110011001100110011
IQ: 00110011001100110011001100110011
which is of course exactly the same as the burst.
Now say we wish to move 90 degrees around the colour circle. So we need
to shift the chrominance signal by 90 degrees. We then have...
burst: 00110011001100110011001100110011
IQ: 01100110011001100110011001100110
The above signals show the encoding over *several* pixels.
When you start changing the phase *every* pixel, it gets messy and
difficult to actually show the phase shift.
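The bitstreams above can be generated by rotating a base pattern. A small Python sketch, assuming (as in the examples above) a digital subcarrier of 4 samples per cycle, where rotating by one sample shifts the phase 90 degrees; the function name is my own:

```python
def chroma_stream(shift, cycles=8):
    """One cycle of the digital 'subcarrier' is 0011 (4 samples at 14 MHz).
    Rotating the pattern left by `shift` samples advances the phase by
    shift * 90 degrees relative to the burst."""
    base = "0011"
    rotated = base[shift % 4:] + base[:shift % 4]
    return rotated * cycles

print(chroma_stream(0))  # burst:   00110011001100110011001100110011
print(chroma_stream(1))  # +90 deg: 01100110011001100110011001100110
```

A shift of 4 wraps back to the burst itself, which is the "rolling wheel" of four colours described earlier in the thread.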
> Do you agree with the table of 16 colors, 4-bit patterns, and degrees
> that I provided? Is it wrong?
I haven't checked all your numbers but yes, I think some of them are wrong.
> How did you get the values of Q and I? Did you guess the values by
> trying to pick the correct degree for the color?
No, I didn't guess at all. It's all derived from the formula that
converts RGB to YIQ...
Y = 0.3R + 0.59G + 0.11B
which is the luminance or grey-scale component
Q = 0.21R - 0.52G + 0.31B
I = 0.6R - 0.28G - 0.32B
which give the two quadrature components of chrominance.
So for RGB red (255,0,0) which we scale to (1,0,0) for simplicity...
Y = 0.3*1 + 0.59*0 + 0.11*0 = 0.3
Q = 0.21*1 - 0.52*0 + 0.31*0 = 0.21
I = 0.6*1 - 0.28*0 - 0.32*0 = 0.6
I & Q are two vectors at right angles (quadrature). The representative
phase is the angle subtended when you join the vectors (this is the
angle on the colour wheel, which gives the hue), and the magnitude is how
far from the centre of the wheel the colour lies, which gives the
saturation. The *angle* is given by the arctan of the length of Q
divided by the length of I.
So for our example of red, with Q=0.21 and I=0.6, the phase is
arctan(0.21/0.6) = 19.3 deg.
Since the I axis is defined as being 57deg from the burst, looking at
the vector scope burst is at 180, so I is at 123deg. Subtracting 19.3
from 123 gives 103.7 which corresponds with the Red square on the vector
scope.
For yellow, which is RED+GREEN in RGB space, I used (255,255,0) to get
167 deg.
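The derivation above fits in a few lines of Python. This uses atan2 rather than a bare arctan so the quadrant comes out right; the 123-degree offset is the I axis position read off the vector scope as described above:

```python
import math

def rgb_to_hue_angle(r, g, b):
    """RGB (0..1) -> hue angle on the vector scope, per the derivation above:
    compute Q and I from RGB, take the arctangent, then offset from the
    I axis, which sits at 123 degrees from the burst on the scope."""
    q = 0.21 * r - 0.52 * g + 0.31 * b
    i = 0.60 * r - 0.28 * g - 0.32 * b
    theta = math.degrees(math.atan2(q, i))
    return 123.0 - theta

print(round(rgb_to_hue_angle(1, 0, 0), 1))  # 103.7 (red)
print(round(rgb_to_hue_angle(1, 1, 0)))     # 167   (yellow)
```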
> The remaining question is why Deep Red appears on the left of this pixel.
This goes back to my earlier post.
I believe artifacting like this occurs because the phase change is
limited to +/- 90 degrees. So you can't get from one side of the colour
wheel to the other (180 deg shift) in a single pixel - it takes two
pixels to get there. Hence the colour of the artifact is half-way
between the previous pixel and the next on the colour wheel. In effect
its colour is totally dependent on the pixels either side of it.
Mark,
Thanks again. Did you realize that there are 427 pixels per horizontal
line of NTSC? I have no idea where the number "427" comes from; I read it
on a website. The Apple II can only display 140 pixels per horizontal line
in color, in all of LGR, HGR, and DHGR, versus 560 pixels per horizontal
line in monochrome.
We can't call the 560 pixels color pixels because we have to follow the
3.58MHz color reference. A color pixel is defined as a group of 4 pulses
at 14MHz, but we can call them four sub-pixels per whole pixel, out of 140
pixels.
Bryan Parkoff
Mark,
OK. You used the RGB-to-YIQ formula. The thing is, you already know the
Apple IIgs' 16 RGB colors, so you put them into the RGB-to-YIQ formula to
get the degrees on the color circle. That is fine, but I do not intend to
start from RGB values; I don't want to use the Apple IIgs' 16 RGB colors,
I want to use the NTSC colors. I have decided to follow the 14MHz bit
stream while the 3.58MHz color reference is active. The color reference
decides which color is used, based on the 14MHz bit stream. From the color
information in the 14MHz bit stream, the number of degrees should be
obtainable through the 3.58MHz color reference. Then we can say YIQ
applies per group of four sub-pixels of a whole pixel. To simulate an NTSC
screen on an RGB monitor, we need the YIQ-to-RGB formula so it will show
RGB pixels. Because we use NTSC color, it will never be identical to the
Apple IIgs' 16 RGB colors.
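For the YIQ-to-RGB direction, Python's standard library already carries the inverse matrix in `colorsys`. A sketch of going from a number of degrees back to RGB; the 123-degree I-axis convention is carried over from earlier in the thread, and the sample luminance/saturation values are illustrative assumptions, not measured Apple II output:

```python
import colorsys
import math

def phase_to_rgb(phase_deg, luma, sat):
    """Hue angle on the vector scope (I axis at 123 degrees from the burst),
    plus luminance and saturation, -> RGB via the standard YIQ inverse."""
    theta = math.radians(123.0 - phase_deg)
    i = sat * math.cos(theta)  # project the chroma vector back onto I
    q = sat * math.sin(theta)  # ... and onto Q
    return colorsys.yiq_to_rgb(luma, i, q)  # clamps each channel to [0, 1]

# The 'red' worked earlier in the thread: Y=0.3, |IQ|=hypot(0.21, 0.6),
# hue angle 103.7 degrees -- this round-trips to roughly (1, 0, 0).
r, g, b = phase_to_rgb(103.7, 0.3, math.hypot(0.21, 0.6))
```

With saturation 0 the phase drops out entirely and you get a pure gray, which matches the 50%-luminance rows of the 16-color table.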
Do you know what I mean? Do you know a good book I can read and
study?
Bryan Parkoff
> OK. You used the RGB-to-YIQ formula. The thing is, you already know the
> Apple IIgs' 16 RGB colors, so you put them into the RGB-to-YIQ formula to
> get the degrees on the color circle. That is fine, but I do not intend to
> start from RGB values; I don't want to use the Apple IIgs' 16 RGB colors,
> I want to use the NTSC colors. I have decided to follow the 14MHz bit
> stream while the 3.58MHz color reference is active. The color reference
> decides which color is used, based on the 14MHz bit stream. From the color
> information in the 14MHz bit stream, the number of degrees should be
> obtainable through the 3.58MHz color reference. Then we can say YIQ
> applies per group of four sub-pixels of a whole pixel. To simulate an NTSC
> screen on an RGB monitor, we need the YIQ-to-RGB formula so it will show
> RGB pixels. Because we use NTSC color, it will never be identical to the
> Apple IIgs' 16 RGB colors.
Ahhhhhhhhhhhhhhhhh - I think I *finally* understand what you're trying
to do!?!? I've had it backwards all along...
Right, so you want to 'emulate' NTSC output on RGB, using the *exact*
same colours as you'd see on the NTSC monitor, including the
artifacting... right....
Now, 14MHz will allow you to generate 4 distinct chrominance values, at
phases 0, 90, 180 and 270 degrees. Presumably you can also generate 4
different levels of luminance, which gives you 4*4=16 different colours.
So I guess the key is knowing, from the 4 bits that define each
colour, which bits control the phase and which bits control the
luminance, and how. This is something that will be unique to the Apple
2, and not something a generic formula can give you.
Once you know that, you can at least choose the 16 'pure' values
directly from the colour wheel, given your 16 pairs of phase,luminance
values calculated theoretically.
Now the hard part, artifacts. The originating input is generated by
square waves, and low-pass filtered to be shaped more like sine waves.
But they're not perfect, and the phase comparator in the NTSC monitor is
going to 'see' the phase difference drift slightly across each pixel.
The problem is more pronounced when you need to switch the phase 180
degrees from one pixel to the next (opposite sides of the colour wheel).
This is why you get purple/violet fringing on white, for example - it
takes a while for the "true" phase to be seen by the comparator.
To calculate these artifacts I think you'd have to model the filtering
of the square wave and the bandwidth limitation on the phase comparator.
Trouble is, the 'fringing' is an analogue effect and to show this on an
RGB monitor would require several pixels for each Apple 2 pixel. I guess
it's fortunate that we have 1900x1600 monitors these days! ;)
Am I on the right track now?
> Thanks again. Did you realize that there are 427 pixels per horizontal
> line of NTSC? I have no idea where the number "427" comes from; I read it
> on a website.
Technically, NTSC doesn't have any concept of "pixels". The rate of change
of phase is limited by the frequency band allotted for transmission of the
chrominance. The raster line is just one big long analogue waveform.
Mark,
Correct. It is what I mean.
Bryan Parkoff
Do you know a good book with a better explanation of how NTSC artifact
color works? It might tell me how to emulate NTSC on VGA.
Bryan Parkoff
Bruce Artwick's "Applied Concepts in Microcomputer Graphics" explains NTSC
and how the Apple II creates color using artifacting.
http://www.amazon.com/gp/product/0130393223
Joshua
> Bruce Artwick's "Applied Concepts in Microcomputer Graphics" explains NTSC
> and how the Apple II creates color using artifacting.
Damn! My mother is due back from the states in 4 days... I don't really
want to pay $10 shipping for an $0.84 book...
Actual chroma subcarrier bandwidths are on the order of 1MHz, so the
band limiting is pretty severe.
> The Apple II (like other computers of that era) generates what should
> be analogue video signals using digital approximations. Indeed, the very
> reason computers have discrete 'pixels' is a by-product of this fact,
> whereas a TV picture raster line has no such horizontal delineation.
>
> You can, for example, crudely approximate a sine wave using a simple
> square wave of the same frequency. If that square wave is passed through
> a low-pass filter, the higher frequency components are filtered out and
> the resulting output more closely resembles a sine wave.
>
> Now, you can't change the phase of a sine wave using a square wave of
> the same frequency. But if you choose, for example, a frequency 4 times
> higher, and approximate the sine wave using 4 consecutive 1's followed
> by 4 0's, then the resulting square wave would be exactly the same, but
> you can now vary the phase by +/- 45 degrees by inserting or removing an
> extra 1 or 0 into the stream.
I think you mean 90 degrees...
And note that 1110 has the same phase as 0100, but three times the
luminance.
> It gets more complicated when you start moving away from 4 consecutive
> 1's and 0's. For example, if you toggled 1's and 0's every two clocks
> (rather than 4), then you'd think that you've simply doubled the
> frequency of the colour signal. However, colour is encoded as a phase
> shift - it's not frequency modulated - so that, and the fact that the
> decoder is band limited - means the decoder 'sees' the 'double
> frequency' as a constantly changing phase. You'd no doubt end up with
> groups of repeating pixel colours.
No; as soon as there is no 3.58MHz component in the signal, there is
no color. Alternating bit values of a 4 x 3.58MHz clock would produce
a 7MHz signal, and display as 50% gray.
> Also, the resolution of your 'clock' also limits how *quickly* you can
> encode phase changes. If your stream changes from 0 to 1, the value is
> held at 1 for the entire pixel, and the decoder can't 'see' how the
> waveform is going to vary in future, so your next pixel is limited in
> some way by the colour of the previous pixel. Depending on your clock
> resolution, it may take 2 or more pixels to get from 1 colour to the next.
>
> I don't know the specifics of the frequencies involved on the Apple 2,
> but I *suspect* the artifacting is a result of being able to change the
> phase of the signal by +/- 90 degrees only? Can anyone confirm?
Yes--but the artifacting results from the video stream containing
3.58MHz components.
> Hopefully I haven't sold you a crock of sh*t here... I'm pretty sure
> it's the gist of the mechanism if not 100% accurate.
>
> So if you want to understand how to 'emulate' artifacting I think you
> need to understand both (1) how colour is encoded on NTSC/PAL and (2)
> how the apple generates the video signal.
The actual mechanism of color encoding is rather complex, and requires
an understanding of vector algebra and the response of analog filters.
For a complete understanding, I'd recommend reading some books on
analog TV and electronics.
Many years ago, D. G. Fink's "TV System Engineering" (IIRC) was the
definitive text.
-michael
New, faster SUDOKU v2.0 solver for Apple II's!
Home page: http://members.aol.com/MJMahon/
"The wastebasket is our most important design
tool--and it's seriously underused."