
Streaming video over serial?


Joshua Bell

Mar 29, 2008, 12:38:59 PM
I have an idea for an "art" project, of sorts - streaming video from a PC
over serial to an Apple II. The idea is to present the appearance of a very
"retro client" version of the immersive 3D client/server application my
company creates on the Apple - just for fun.

The basic idea is as follows:

* Service on the contemporary machine that takes frame-grabs of an
application window at some frequency (say, 5Hz)
* The service then processes the frames to an Apple-friendly format.
Initially, probably uncompressed lo-res (40x48) display.
* The data is streamed out over serial to the Apple
* The Apple is running a fairly simple loop that reads and displays frames
on the lo-res screen
* The Apple is non-interactive while this is running.

A lo-res screen is 40x48 or 1920 pixels/frame; 4 bits/pixel = 7680
bits/frame. At 9600 bps that's about 1Hz refresh, which is a reasonable
lower bound.
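
For reference, here's that arithmetic in Python (a quick sketch; it assumes 10 bits on the wire per byte, i.e. 8 data bits plus start and stop):

```python
# Refresh rates for an uncompressed 40x48 lo-res frame (4 bits/pixel)
# at various serial speeds. Raw bit math only -- no protocol overhead.
PIXELS = 40 * 48                         # 1920 pixels/frame
BITS_PER_FRAME = PIXELS * 4              # 7680 bits = 960 bytes of payload
WIRE_BITS = (BITS_PER_FRAME // 8) * 10   # 9600 bits on the wire per frame

for bps in (9600, 19200, 115200):
    print(f"{bps:>6} bps -> {bps / WIRE_BITS:.1f} frames/sec")
```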

Questions:
* Can the IIc (a free one landed on my lap, which inspired this; sans power
brick tho) do faster than 9600 reliably? Googling around, 19200bps seems
possible (2Hz refresh) - 115k is only available on the IIgs correct?
* Has anyone already implemented this? In 2006, Frank M. was doing
experiments with pre-converted lo-res video (e.g.
http://groups.google.com/group/comp.sys.apple2/browse_thread/thread/4665141b6f417b64)
which is an inspiration, but not directly applicable.

Notes:
* For maximum ease of setup, it would be ideal if the Apple side of the code
could be bootstrapped like ADT
* Based on David Schmenk's HBCC game, full-screen updates to the lo-res
screen while doing lots of other processing seems entirely feasible, so I'm
assuming that there are CPU cycles available for intra-frame compression and
even inter-frame compression.
* As an added feature... if the scene stops changing (to within some
threshold), the code could switch from streaming a lo-res screen to a hi-res
screen. This would take ~8 seconds (at 9600bps), but the result would be
that after the scene becomes static for 8 seconds the image quality would
increase dramatically, until the next change. To maintain the low latency,
the hi-res stream would need to be interruptible (i.e. if change is detected by
the encoder, abort the transfer of the hi-res frame and resume lo-res
frames) requiring a modicum of protocol design. Compression here would also
be nice, but decompression on the Apple might be a killer.
* Obviously, there's the possibility of adding additional bells and
whistles, such as letting the Apple send keystrokes back "upstream" to drive
the source application, or sidechannel streams e.g. to display text status
displays on the Apple, trigger beeps, etc.

I haven't done any work on this and, truth be told, this is likely to be one
of those projects that never goes anywhere. But I wanted to throw out the
idea for comments. Is there any other interest in this?

aiia...@gmail.com

Mar 29, 2008, 1:21:19 PM
On Mar 29, 9:38 am, "Joshua Bell" <inexorablet...@hotmail.com> wrote:
> I haven't done any work on this and, truth be told, this is likely to be one
> of those projects that never goes anywhere. But I wanted to throw out the
> idea for comments. Is there any other interest in this?

I think it is a great idea... maybe have the option of :
instead of framegrabs of application, do framegrabs of a video file...

http://rich12345.tripod.com/htmlview/index.html

is an example of an image proxy server... A PC on the net
converts any graphic into apple II HGR, and then serves it
to the Apple II...

it would be neat to include conversion of VIDEO formats,
and stream them to the apple II via ethernet


:-)


Rich

David Wilson

Mar 29, 2008, 5:59:20 PM
On Mar 30, 3:38 am, "Joshua Bell" <inexorablet...@hotmail.com> wrote:
> Questions:
> * Can the IIc (a free one landed on my lap, which inspired this; sans power
> brick tho) do faster than 9600 reliably? Googling around, 19200bps seems
> possible (2Hz refresh) - 115k is only available on the IIgs correct?

The 6551 in a IIc can do 115kb/s - ADTPro allows this speed. Depending
on the vintage of your IIc the serial port may be slightly (3%) slow
and this may cause problems with some serial devices. My old IIc has
no problems uploading and downloading disk images with ADTPro.

Michael J. Mahon

Mar 30, 2008, 5:32:09 PM
Joshua Bell wrote:
> I have an idea for an "art" project, of sorts - streaming video from a PC
> over serial to an Apple II. The idea is to present the appearance of a very
> "retro client" version of the immersive 3D client/server application my
> company creates on the Apple - just for fun.

Sounds like great fun!

> The basic idea is as follows:
>
> * Service on the contemporary machine that takes frame-grabs of an
> application window at some frequency (say, 5Hz)
> * The service then processes the frames to an Apple-friendly format.
> Initially, probably uncompressed lo-res (40x48) display.
> * The data is streamed out over serial to the Apple
> * The Apple is running a fairly simple loop that reads and displays frames
> on the lo-res screen
> * The Apple is non-interactive while this is running.

Since the time to transfer a frame is certainly known as soon
as it is captured (even if it is, say, run-length compressed),
the frame rate can be made adaptive to the content. In other
words, if it can run faster, it does.

> A lo-res screen is 40x48 or 1920 pixels/frame; 4 bits/pixel = 7680
> bits/frame. At 9600 bps that's about 1Hz refresh, which is a reasonable
> lower bound.
>
> Questions:
> * Can the IIc (a free one landed on my lap, which inspired this; sans power
> brick tho) do faster than 9600 reliably? Googling around, 19200bps seems
> possible (2Hz refresh) - 115k is only available on the IIgs correct?
> * Has anyone already implemented this? In 2006, Frank M. was doing
> experiments with pre-converted lo-res video (e.g.
> http://groups.google.com/group/comp.sys.apple2/browse_thread/thread/4665141b6f417b64)
> which is an inspiration, but not directly applicable.

As David pointed out, the IIc is quite capable of 115kbps with just
a single extra POKE for configuration.

And what you propose is *very* much like video streaming from a disk
file, but with a serial link instead and possibly some decompression
code.

> Notes:
> * For maximum ease of setup, it would be ideal if the Apple side of the code
> could be bootstrapped like ADT
> * Based on David Schmenk's HBCC game, full-screen updates to the lo-res
> screen while doing lots of other processing seems entirely feasible, so I'm
> assuming that there are CPU cycles available for intra-frame compression and
> even inter-frame compression.
> * As an added feature... if the scene stops changing (to within some
> threshold), the code could switch from streaming a lo-res screen to a hi-res
> screen. This would take ~8 seconds (at 9600bps), but the result would be
> that after the scene becomes static for 8 seconds the image quality would
> increase dramatically, until the next change. To maintain the low latency,
> the hi-res stream would need to be interruptible (i.e. if change is detected by
> the encoder, abort the transfer of the hi-res frame and resume lo-res
> frames) requiring a modicum of protocol design. Compression here would also
> be nice, but decompression on the Apple might be a killer.

Not if it saves more data transfer time than it costs in processor
time. That's the tradeoff to examine, and it, of course, depends
critically on the actual serial data transfer rate. At 115kbps, it
may be necessary to forego decompression just to maintain the UART
transfer rate. For example, interrupt processing of UART data is
not the way to go at this rate--just simple polling and data storage.

There will be time in the loop to detect a simple protocol escape...
I'd recommend using it to trigger page-flipping at end of page and
any other things you might want to add. If you want to add keyboard
sensing, then that will cost another 6 cycles per received character
to test and a loss of some incoming characters when a keypress is
sensed and acted upon. A back of the envelope estimate indicates
that this will fit into the receive loop at 115kbps (~78 cycles/char).

The basic loop looks like it's about 30 cycles with a simple escape
test, so it shouldn't be critical as long as one byte value can be
dedicated to the escape function. And even in this case, a second
consecutive "escape" could result in a return to the storage loop
to store the "escape" byte:

loop   lda UAstat,x   ; UART have char?
       and #mask      ; Mask status
       bxx loop       ; -No, wait.
       lda UAdata,x   ; -Yes, get char.
       cmp #escape    ; Cmd escape?
       beq cmd        ; -Yes, do it.
       sta (zp),y     ; -No, store it
       iny            ;  and increment
       bne nocar      ;  store address.
       inc zp+1
nocar  lda kbd        ; Key pressed?
       bpl loop       ; -No.
       ...            ; -Yes, process key.
       jmp loop       ; Return to loop, having missed x chars.
<Since the sending machine cannot anticipate
this, it will be necessary to either "flush"
the in-process page, for which sync has been
lost, or keep accurate track of how many
characters were lost and resume the loop with
Y and (zp) incremented accordingly (to just
re-use the previous values for the skipped
bytes.>

cmd <receive another character as above>
<switch on char value to cmd routine>
<each cmd routine misses x chars, where x could be
zero for very simple commands, like page flips>
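
As a sanity check on the cycle budget above, here's the arithmetic in Python (the 1.023 MHz clock and 9 bit times per byte are the figures assumed upthread; this is an estimate, not a measurement):

```python
# Cycles available per received byte: 1.023 MHz 6502 vs. 115.2 kbps
# serial at 9 bit times per byte (8 data bits + 1 start bit minimum).
CPU_HZ = 1_023_000   # Apple II 6502 clock, approximately
BAUD = 115_200
BITS_PER_BYTE = 9

bytes_per_sec = BAUD / BITS_PER_BYTE        # 12,800 bytes/sec
cycles_per_byte = CPU_HZ / bytes_per_sec    # roughly 80 cycles/byte
print(f"{cycles_per_byte:.1f} cycles available per received byte")
# The ~30-cycle store loop plus a 6-cycle keyboard poll fits comfortably.
```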

When you decide to switch to hi-res, you'll be filling the hi-res
buffer while continuing to display the last full lo-res buffer, I
presume. So if you don't finish transferring the full hi-res
screen, the partial frame transfer would never be visible.

> * Obviously, there's the possibility of adding additional bells and
> whistles, such as letting the Apple send keystrokes back "upstream" to drive
> the source application, or sidechannel streams e.g. to display text status
> displays on the Apple, trigger beeps, etc.

Those can all be done, but will necessarily require either a "pause"
in data transmission, or, perhaps better, a pre-computed number of nulls
inserted in the stream after a protocol command to allow for processing
time on the Apple side. Each "long" command could finish in a routine
to receive bytes until a "data restart" byte is received, which would
then return to the main loop.

For highest speed, the protocol will need to be pretty "fragile",
with recovery from a data transmission error essentially awaiting
the next re-syncing event, but it should be fine for a local serial
link. Command decoding may need to include some redundancy to prevent
wild transfers of control (I would favor a direct vector through a
table with, perhaps, a requirement that the command byte be sent
twice, in both true and complement form to add some robustness.)

> I haven't done any work on this and, truth be told, this is likely to be one
> of those projects that never goes anywhere. But I wanted to throw out the
> idea for comments. Is there any other interest in this?

I hope you give it a shot--it would be fun, and might open the
door to other "streaming video" activities.

-michael

NadaPong: Network game demo for Apple II computers!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it's seriously underused."

David Wilson

Mar 30, 2008, 6:40:07 PM
On Mar 31, 8:32 am, "Michael J. Mahon" <mjma...@aol.com> wrote:
> There will be time in the loop to detect a simple protocol escape...
> I'd recommend using it to trigger page-flipping at end of page and
> any other things you might want to add. If you want to add keyboard
> sensing, then that will cost another 6 cycles per received character
> to test and a loss of some incoming characters when a keypress is
> sensed and acted upon. A back of the envelope estimate indicates
> that this will fit into the receive loop at 115kbps (~78 cycles/char).

The IIc is the only 8-bit Apple II that can have keyboard generated
interrupts. This reduces the cost of monitoring the keyboard latch to
zero cycles but increases the number of cycles used to actually handle
the key press.

Joshua Bell

Mar 30, 2008, 8:38:18 PM
Michael J. Mahon wrote:
> Since the time to transfer a frame is certainly known as soon
> as it is captured (even if it is, say, run-length compressed),
> the frame rate can be made adaptive to the content. In other
> words, if it can run faster, it does.

Yep. In this scenario, since it's doing realtime, maintaining a consistent
frame rate isn't necessary - "as fast as possible" is what's desired. The
closest analogy would be the "remote desktop" applications like VNC, which
attempt to give you a generic mechanism for the local display (and
optionally interactivity) of a remote computer's screen (or app window).

>> Compression here would also be nice, but
>> decompression on the Apple might be a killer.
>
> Not if it saves more data transfer time than it costs in processor
> time. That's the tradeoff to examine, and it, of course, depends
> critically on the actual serial data transfer rate. At 115kbps, it
> may be necessary to forego decompression just to maintain the UART
> transfer rate. For example, interrupt processing of UART data is
> not the way to go at this rate--just simple polling and data storage.

Back-of-the-envelope for 115kbps gives me about a 2Hz update rate for hi-res
graphics without compression. Given that, I'd probably skip lo-res entirely
since that rate is "good enough" for what I'm imagining.

You're right about compression. However, in the particular case I'm thinking
about, the graphics will not be amenable to the sort of decompression the
Apple could do in realtime, at least for intra-frame. Interframe compression
a la MPEG would be feasible to get higher than 2Hz (if less than a full
frame changes, transmit only the rectangle that does), but since it's an
interactive first-person immersive 3D environment the cases where higher
than 2Hz refresh are compelling are when the viewpoint is changing rapidly
(i.e. you're trying to navigate), which are the worst-case scenarios for
intra-frame compression.

(That's not to say that either the more complex intermixed lo-res/hi-res or
intra-frame compression wouldn't be useful in more general applications of
this notion, i.e. streaming arbitrary video. I just know enough about the
data in this case to shelve those ideas for now.)

> There will be time in the loop to detect a simple protocol escape...

I confess to not knowing enough about serial transmission to know how much
putting a handshake between frames would slow things down, but I have to
assume "not much". If we're transmitting a frame in 0.5 seconds, spending
0.01 second between frames for the client to say "user pressed 'W' key" is
feasible, so doing this within the frame loop is not necessary.

I'm also not sure how the sender knows when the receiver is done
pulling bits out of a buffer; to reduce perceived latency in this realtime
client/server app, the server should capture the frame to send as close to
when the client is done receiving the previous frame as possible. That's
probably serial communication 101, tho.

> When you decide to switch to hi-res, you'll be filling the hi-res
> buffer while continuing to display the last full lo-res buffer, I
> presume. So if you don't finish transferring the full hi-res
> screen, the partial frame transfer would never be visible.

Yep. (Again, it seems that at 115kbps I'd just skip lo-res which simplifies
things)

> For highest speed, the protocol will need to be pretty "fragile",
> with recovery from a data transmission error essentially awaiting
> the next re-syncing event

Agreed. Again, for this scenario, that's fine.

It's looking like the simplest client implementation is basically:
* initialize the UART to 115kbps
* start a loop -
* read from UART until a "start of frame" signature is seen
* start filling the non-visible hires page
* after 8192 bytes, flip hires pages
* jump to start of loop (i.e. wait for a signature)

The next thing to implement would be a simple 1-byte sync signal every 256
bytes; if not seen, assume there was a glitch somewhere and abort the
display of this frame, waiting for the next start-of-frame signature.
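
That client loop implies a matching sender-side framing. Here's a minimal Python sketch; the signature and sync byte values are made up for illustration, not part of any agreed protocol:

```python
START_OF_FRAME = bytes([0xFF, 0xFE])   # hypothetical 2-byte signature
SYNC = 0xFD                            # hypothetical per-256-byte sync mark
FRAME_SIZE = 8192                      # one full hi-res page, holes included

def frame_packet(screen: bytes, with_sync: bool = False) -> bytes:
    """Wrap one hi-res page in the framing sketched above."""
    if len(screen) != FRAME_SIZE:
        raise ValueError("expected a full 8192-byte hi-res page")
    if not with_sync:
        return START_OF_FRAME + screen
    out = bytearray(START_OF_FRAME)
    for i in range(0, FRAME_SIZE, 256):
        out += screen[i:i + 256]   # 256 bytes of screen data...
        out.append(SYNC)           # ...then one sync byte
    return bytes(out)
```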

I've never done serial programming on either side, but I'm assuming the
initialization is trivial and you've provided the guts of the loop already.
(Thanks!)

The hard part now seems like it's actually on the sending side, where we
need to tackle algorithms for generating decent color hi-res screens. I'd
probably start by assuming it's a 140x192 with a fixed 6 color palette, do
an error-diffusion dither, and ignore the artifacts from trying to put blue
and green within (etc) a byte. Which would make Rich happy, I'm assuming.
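
That dithering step might look something like this minimal Floyd-Steinberg sketch in Python. The RGB values for the six hi-res colors are rough guesses on my part, and the color-set byte artifacts are ignored, as described:

```python
PALETTE = [
    (0, 0, 0),        # black
    (255, 255, 255),  # white
    (32, 192, 0),     # green   (approximate RGB)
    (160, 48, 240),   # purple  (approximate RGB)
    (240, 96, 0),     # orange  (approximate RGB)
    (32, 96, 240),    # blue    (approximate RGB)
]

def nearest(px):
    """Index of the palette entry closest to px in RGB distance."""
    return min(range(len(PALETTE)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(px, PALETTE[i])))

def dither(pixels, w, h):
    """pixels: list of (r,g,b) tuples; returns list of palette indices."""
    buf = [list(p) for p in pixels]
    out = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            idx = nearest(buf[i])
            out.append(idx)
            err = [buf[i][c] - PALETTE[idx][c] for c in range(3)]
            # Distribute quantization error to as-yet-unvisited neighbors.
            for dx, dy, wt in ((1, 0, 7/16), (-1, 1, 3/16),
                               (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    j = ny * w + nx
                    for c in range(3):
                        buf[j][c] += err[c] * wt
    return out
```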

Joshua


Michael J. Mahon

Mar 30, 2008, 10:16:30 PM
Joshua Bell wrote:
> Michael J. Mahon wrote:
>
>>Since the time to transfer a frame is certainly known as soon
>>as it is captured (even if it is, say, run-length compressed),
>>the frame rate can be made adaptive to the content. In other
>>words, if it can run faster, it does.
>
>
> Yep. In this scenario, since it's doing realtime, maintaining a consistent
> frame rate isn't necessary - "as fast as possible" is what's desired. The
> closest analogy would be the "remote desktop" applications like VNC, which
> attempt to give you a generic mechanism for the local display (and
> optionally interactivity) of a remote computer's screen (or app window).
>
>
>>>Compression here would also be nice, but
>>>decompression on the Apple might be a killer.
>>
>>Not if it saves more data transfer time than it costs in processor
>>time. That's the tradeoff to examine, and it, of course, depends
>>critically on the actual serial data transfer rate. At 115kbps, it
>>may be necessary to forego decompression just to maintain the UART
>>transfer rate. For example, interrupt processing of UART data is
>>not the way to go at this rate--just simple polling and data storage.
>
>
> Back-of-the-envelope for 115kbps gives me about a 2Hz update rate for hi-res
> graphics without compression. Given that, I'd probably skip lo-res entirely
> since that rate is "good enough" for what I'm imagining.

Well, you'll need at least 9 bit times per byte, and that puts you
at 12,800 bytes/sec, or about 1.5 seconds per hi-res screen. That's
actually pretty slow.

Run-length compression could save you some, depending on whether you
compressed the actual screen or an XOR with the previous screen, and,
of course, on how much of the screen is changing each frame.

Compression would have to be *very* simple to fit in the <80 cycles
you have per byte, but since run-length uses character pairs, it might
work out well.
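
One way to read that pairing, sketched in Python: encode the XOR against the previous frame as (count, value) byte pairs, so unchanged regions collapse into long runs of zero. This is an illustrative encoding, not anything specified in the thread:

```python
def rle_xor(prev: bytes, cur: bytes) -> bytes:
    """Run-length encode cur XOR prev as (run length, value) pairs."""
    diff = bytes(a ^ b for a, b in zip(prev, cur))
    out = bytearray()
    i = 0
    while i < len(diff):
        j = i
        # Extend the run; cap at 255 so the count fits in one byte.
        while j < len(diff) and j - i < 255 and diff[j] == diff[i]:
            j += 1
        out += bytes([j - i, diff[i]])
        i = j
    return bytes(out)
```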

> You're right about compression. However, in the particular case I'm thinking
> about, the graphics will not be amenable to the sort of decompression the
> Apple could do in realtime, at least for intra-frame. Interframe compression
> a la MPEG would be feasible to get higher than 2Hz (if less than a full
> frame changes, transmit only the rectangle that does), but since it's an
> interactive first-person immersive 3D environment the cases where higher
> than 2Hz refresh are compelling are when the viewpoint is changing rapidly
> (i.e. you're trying to navigate), which are the worst-case scenarios for
> intra-frame compression.
>
> (That's not to say that either the more complex intermixed lo-res/hi-res or
> intra-frame compression wouldn't be useful in more general applications of
> this notion, i.e. streaming arbitrary video. I just know enough about the
> data in this case to shelve those ideas for now.)

Give up on anything more complicated than run-length compression of data
or data differences. 80 cycles per byte is a harsh mistress. ;-)

>>There will be time in the loop to detect a simple protocol escape...
>
>
> I confess to not knowing enough about serial transmission to know how much
> putting a handshake between frames would slow things down, but I have to
> assume "not much". If we're transmitting a frame in 0.5 seconds, spending
> 0.01 second between frames for the client to say "user pressed 'W' key" is
> feasible, so doing this within the frame loop is not necessary.

You don't want a handshake, since the only recovery is to keep going!

If the receiving machine is sent data, it simply acts on it without any
housekeeping handshakes. All "service" should simply be "best effort".

> I'm also not sure how the sender knows when when the receiver is done
> pulling bits out of a buffer; to reduce perceived latency in this realtime
> client/server app, the server should capture the frame to send as close to
> when the client is done receiving the previous frame as possible. That's
> probably serial communication 101, tho.

The receiver doesn't "pull" bits; the data is "sent" to the receiver by
the sender without handshaking.

Since the sender knows exactly when the end-of-frame 2-byte sequence
has been sent, it knows exactly when the receiver has displayed it.

>>When you decide to switch to hi-res, you'll be filling the hi-res
>>buffer while continuing to display the last full lo-res buffer, I
>>presume. So if you don't finish transferring the full hi-res
>>screen, the partial frame transfer would never be visible.
>
>
> Yep. (Again, it seems that at 115kbps I'd just skip lo-res which simplifies
> things)

You'd be surprised how much better a 16-color, 6 FPS display looks
than a 6-color 0.67 FPS display! (If you want 16-color hi-res, that's
double hi-res, and requires twice as long to send.)

At 6 FPS you can see motion pretty well, but at under 1 FPS it's
pretty hard unless things are changing quite slowly.

It's also quite easy to change, since a simple "switched" command
handler on the Apple II side can be told exactly which screen to
fill next and which to display now. There are really no smarts on
the Apple side at all--in fact, it doesn't even keep count of bytes
transferred, it just stores them in increasing addresses until told
to switch buffers and screens.

>>For highest speed, the protocol will need to be pretty "fragile",
>>with recovery from a data transmission error essentially awaiting
>>the next re-syncing event
>
>
> Agreed. Again, for this scenario, that's fine.
>
> It's looking like the simplest client implementation is basically:
> * initialize the UART to 115kbps
> * start a loop -
> * read from UART until a "start of frame" signature is seen
> * start filling the non-visible hires page
> * after 8192 bytes, flip hires pages
> * jump to start of loop (i.e. wait for a signature)

Actually, switching buffers and screens is fast enough that you can
probably do it between the end-of-frame command and the next character,
so no "wait for start of frame" is needed.

If you want to accept other, longer to process commands (like play a
sound on the Apple speaker), then the sending machine would anticipate
the delay and insert an appropriate number of in-line nulls and finish
(at just after the time when the Apple would be done in the worst case)
with a "resume data" byte that would send control back into the
"receive data" loop. This could even happen inside a frame.

> The next thing to implement would be a simple 1-byte sync signal every 256
> bytes; if not seen, assume there was a glitch somewhere and abort the
> display of this frame, waiting for the next start-of-frame signature.

No, just run open-loop. There's nothing the sender can do to improve
things in case of an error--this is real-time, after all--and you can
easily verify correct or incorrect operation by observing the screen.

> I've never done serial programming on either side, but I'm assuming the
> initialization is trivial and you've provided the guts of the loop already.
> (Thanks!)
>
> The hard part now seems like it's actually on the sending side, where we
> need to tackle algorithms for generating decent color hi-res screens. I'd
> probably start by assuming it's a 140x192 with a fixed 6 color palette, do
> an error-diffusion dither, and ignore the artifacts from trying to put blue
> and green within (etc) a byte. Which would make Rich happy, I'm assuming.

I agree completely. Palette reduction is non-trivial (maybe with an
animation, you can get some advantage from the fact that many colors do
not change from frame to frame).

Don't ignore the "color set" bit, or what you produce will be suitable
only for monochrome viewing. Trust me, with only 40 bytes across the
screen, changing random bytes' color sets makes a very visible mess.

Maybe a good way to start would be to make a monochrome version. The
conversion is much easier to write. ;-)

BTW, although as David points out, the IIc keyboard can generate
interrupts, I would very much recommend *against* using interrupts.
They will make your character receive loop non-deterministic and
random failures will occur. Further, if you skip keyboard (or any
other) interrupts, *all* Apple II's will run the code perfectly.

Joshua Bell

Mar 31, 2008, 12:36:10 AM
>> Back-of-the-envelope for 115kbps gives me about a 2Hz update rate
>> for hi-res graphics without compression. Given that, I'd probably
>> skip lo-res entirely since that rate is "good enough" for what I'm
>> imagining.
>
> Well, you'll need at least 9 bit times per byte, and that puts you
> at 12,800 bytes/sec, or about 1.5 seconds per hi-res screen.

Can you double check your math? I think you inverted the ratio...

8192 bytes/frame * 9 bits/byte = 74kbits/frame
115kbits/second / 74kbits/frame = 1.5 frame/second

So lower than the 2Hz I guessed, but > 1Hz, which is okay for me... unless I
am doing something stupid (it is just 9 bits of the transmission rate per
byte, yes?) Also, if we ignore the screen holes, it's only 7680 bytes/frame:

7680 bytes/frame * 9 bits/byte = 69kbits/frame
115kbits/second / 69kbits/frame = 1.7 frame/second

(This optimization might be worth it; the 40-byte scan line loop could be
unrolled if necessary.)

> That's actually pretty slow.

In my specific scenario, where I'm not actually streaming video but an
interactive application, quality actually trumps frame rate at about the 1Hz
rate. Since the app in question is interactive, the usual case will be long
periods of slowly changing content where quality is key, followed by bursts of
activity. I agree that for actual *usability* from an Apple client, dropping
to lores would be great, but it's not necessary in this case.

(That's not to say we shouldn't go all out and design it! I'm just
justifying my shortcuts.)

> Give up on anything more complicated than run-length compression of
> data or data differences. 80 cycles per byte is a harsh mistress.

Data differences, definitely. If only part of the screen is updating,
transmit the bounds and then the contents of the rectangle (on byte
boundaries, of course).
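
A sender-side Python sketch of that dirty-rectangle idea (the function name and tuple layout are mine, purely illustrative):

```python
W, H = 40, 192  # hi-res: 40 bytes per line, 192 lines

def dirty_rect(prev: bytes, cur: bytes):
    """Byte-aligned bounding box of differences, plus its contents.

    Returns ((top, bottom, left, right), payload), or None if the
    frames are identical.
    """
    rows = [y for y in range(H)
            if prev[y*W:(y+1)*W] != cur[y*W:(y+1)*W]]
    if not rows:
        return None
    top, bot = rows[0], rows[-1]
    cols = [x for x in range(W)
            if any(prev[y*W + x] != cur[y*W + x] for y in range(top, bot + 1))]
    left, right = cols[0], cols[-1]
    payload = bytes(cur[y*W + x]
                    for y in range(top, bot + 1)
                    for x in range(left, right + 1))
    return (top, bot, left, right), payload
```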

> If the sending machine is sent data, it simply acts on it without any
> housekeeping handshakes. All "service" should simply be "best
> effort".

Ah, right - I'm thinking of Ethernet, where at the lowest level you can't
count on delivery.

> The receiver doesn't "pull" bits, it is "sent" to the receiver by
> the sender without handshaking.
>
> Since the sender knows exactly when the end-of-frame 2-byte sequence
> has been sent, it knows exactly when the receiver has displayed it.

I guess I was expecting buffering to be occurring on some level... working
too high on the stack for too long, I guess. I'll read up on serial
programming before I ask more dumb questions. :)

> You'd be surprised how much better a 16-color, 6 FPS display looks
> than a 6-color 0.67 FPS display! (If you want 16-color hi-res, that's
> double hi-res, and requires twice as long to send.)
>
> At 6 FPS you can see motion pretty well, but at under 1 FPS it's
> pretty hard unless things are changing quite slowly.

Agreed - the lores video demos definitely inspired this. Again, due to the
*specific* nature of what I'm thinking of streaming, at about 1 FPS I think
the hires option will be superior. But I'll have to try it out to be sure.

> It's also quite easy to change, since a simple "switched" command
> handler on the Apple II side can be told exactly which screen to
> fill next and which to display now. There are really no smarts on
> the Apple side at all--in fact, it doesn't even keep count of bytes
> transferred, it just stores them in increasing addresses until told
> to switch buffers and screens.

That makes sense. Keep it simple!

> Actually, switching buffers and screens is fast enough that you can
> probably do it between the end-of-frame command and the next
> character, so no "wait for start of frame" is needed.

That's residue from me thinking about a dumb, error-prone protocol where you
aren't reliably getting bytes from one end to another in predictable form.
Serial = simple, got it!

> No, just run open-loop. There's nothing the sender can do to improve
> things in case of an error--this is real-time, after all--and you can
> easily verify correct or incorrect operation by observing the screen.

I was more thinking that the Apple could discard a frame entirely if the
results are bad, but if the error cases are more likely to be corrupt data
than out of sync reads (which would corrupt the whole rest of the stream)
then this isn't necessary. (Now, instead of thinking high level buffering,
I'm thinking low level latching onto a stream. Duh...) So it would need to
be an actual checksum... and this is probably overdesigning before we even
know what the error rate is. So.... scrap it!

> I agree completely. Palette reduction is non-trivial (maybe with an
> animation, you can get some advantage from the fact that many colors
> do not change from frame to frame).

We'll have to time it and see... I haven't actually coded up a diffusion
dither myself, but I'm expecting it can crank out 2Hz.

> Maybe a good way to start would be to make a monochrome version. The
> conversion is much easier to write. ;-)

Definitely phase 1! Phase 2 is probably "ignore the high bit and hope for
the best" and see how that looks. Phase 3 is where magic happens. I was also
pondering the style where alternate scan lines have alternate color set bits
set consistently. This would make pure-blue unachievable, but gives you a
larger palette of blended colors, and a screen resolution of 140x96.

David Wilson

Mar 31, 2008, 2:47:43 AM
On Mar 31, 3:36 pm, "Joshua Bell" <inexorablet...@hotmail.com> wrote:
> 8192 bytes/frame * 9 bits/byte = 74kbits/frame
> 115kbits/second / 74kbits/frame = 1.5 frame/second

If you want 6 colors you need to send 8 bit bytes with start and stop
bits making 10 bits in all. This is an 11% slowdown.

You can counter most of that by only sending 120 bytes of each 128
(skip the undisplayed 8 bytes per group of 3 scan lines). This gives a
6% speedup.

192 lines/frame * 40 bytes/line * 10bits/byte = 76.8kbit/frame => 1.5
frames/sec
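
The arithmetic from this exchange, checked both ways in Python (9 vs. 10 bit times per byte, full 8192-byte page vs. 7680 displayed bytes):

```python
BAUD = 115_200

for bits in (9, 10):
    for nbytes in (8192, 7680):
        fps = BAUD / (nbytes * bits)
        print(f"{bits} bits/byte, {nbytes} bytes/frame: {fps:.2f} frames/sec")
```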

Michael J. Mahon

Mar 31, 2008, 6:27:12 AM
Joshua Bell wrote:
>>>Back-of-the-envelope for 115kbps gives me about a 2Hz update rate
>>>for hi-res graphics without compression. Given that, I'd probably
>>>skip lo-res entirely since that rate is "good enough" for what I'm
>>>imagining.
>>
>>Well, you'll need at least 9 bit times per byte, and that puts you
>>at 12,800 bytes/sec, or about 1.5 seconds per hi-res screen.
>
>
> Can you double check your math? I think you inverted the ratio...

Yep--I inverted it!

> 8192 bytes/frame * 9 bits/byte = 74kbits/frame
> 115kbits/second / 74kbits/frame = 1.5 frame/second
>
> So lower than the 2Hz I guessed, but > 1Hz, which is okay for me... unless I
> am doing something stupid (it is just 9 bits of the transmission rate per
> byte, yes?) Also, if we ignore the screen holes, it's only 7680 bytes/frame:
>
> 7680 bytes/frame * 9 bits/byte = 69kbits/frame
> 115kbits/second / 69kbits/frame = 1.7 frame/second
>
> (This optimization might be worth it; the 40-byte scan line loop could be
> unrolled if necessary.)

I think it's worth it, but there's time to handle the table-look-up
within the character loop, so unrolling shouldn't be necessary.

>>That's actually pretty slow.
>
>
> In my specific scenario, where I'm not actually streaming video but an
> interactive application, quality actually trumps frame rate at about the 1Hz
> rate. Since the app in question is interactive, the usual case will be long
> periods of slowly changing content where quality is key, followed by bursts of
> activity. I agree that for actual *usability* from an Apple client, dropping
> to lores would be great, but it's not necessary in this case.
>
> (That's not to say we shouldn't go all out and design it! I'm just
> justifying my shortcuts.)

OK--you know your problem.

>>Give up on anything more complicated than run-length compression of
>>data or data differences. 80 cycles per byte is a harsh mistress.
>
>
> Data differences, definitely. If only part of the screen is updating,
> transmit the bounds and then the contents of the rectangle (on byte
> boundaries, of course).

That should work, but will require a full Y-base-address lookup table.
Within the loop, I'd expect default increments to scan along each line,
stepping to the next line as each one fills, until interrupted by
end-of-frame or another command.
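For reference, that Y-base-address table is easy to generate offline, since the hi-res row interleave follows a fixed formula (sketch for page 1 at $2000):

```python
# Build the 192-entry Y-base-address table for hi-res page 1 ($2000).
# Rows are interleaved: bit fields of y select the $400, $80 and $28
# strides, which is why a lookup table beats computing it per line.
def hires_base(y):
    return 0x2000 + (y & 7) * 0x400 + ((y >> 3) & 7) * 0x80 + (y >> 6) * 0x28

ybase = [hires_base(y) for y in range(192)]
print(hex(ybase[0]), hex(ybase[1]), hex(ybase[2]))  # 0x2000 0x2400 0x2800
```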

>>If the sending machine is sent data, it simply acts on it without any
>>housekeeping handshakes. All "service" should simply be "best
>>effort".
>
>
> Ah, right - I'm thinking Ethernet, where at the lowest level you can't
> count on delivery.
>
>
>>The receiver doesn't "pull" bits, it is "sent" to the receiver by
>>the sender without handshaking.
>>
>>Since the sender knows exactly when the end-of-frame 2-byte sequence
>>has been sent, it knows exactly when the receiver has displayed it.
>
>
> I guess I was expecting buffering to be occurring on some level... working
> too high on the stack for too long, I guess. I'll read up on serial
> programming before I ask more dumb questions. :)

The sending machine is likely to buffer the data stream, but on
the Apple side it goes straight to frame buffer. If the serial
buffer is truncated at the end of each frame, it should be easy
to know when the end-of-frame has been sent.

You'd like to sample the next frame to send in just enough time
to convert it and start to send it. I don't know what capabilities
are provided on the sending end to sense how much of the stream has
already been sent...

>>You'd be surprised how much better a 16-color, 6 FPS display looks
>>than a 6-color 0.67 FPS display! (If you want 16-color hi-res, that's
>>double hi-res, and requires twice as long to send.)
>>
>>At 6 FPS you can see motion pretty well, but at under 1 FPS it's
>>pretty hard unless things are changing quite slowly.
>
>
> Agreed - the lores video demos definitely inspired this. Again, due to the
> *specific* nature of what I'm thinking of streaming, at about 1 FPS I think
> the hires option will be superior. But I'll have to try it out to be sure.

Another consideration is how big a screen will be used for the
display. Actually, what angle the screen will subtend at the
viewer's eye. If the angle is small, lo-res looks pretty good,
but if you are closer to the display, hi-res will look much better.

>>It's also quite easy to change, since a simple "switched" command
>>handler on the Apple II side can be told exactly which screen to
>>fill next and which to display now. There are really no smarts on
>>the Apple side at all--in fact, it doesn't even keep count of bytes
>>transferred, it just stores them in increasing addresses until told
>>to switch buffers and screens.
>
>
> That makes sense. Keep it simple!
>
>
>>Actually, switching buffers and screens is fast enough that you can
>>probably do it between the end-of-frame command and the next
>>character, so no "wait for start of frame" is needed.

Using a bounding rectangle to set the limits for the default
memory addresses complicates the loop somewhat, but the saving
in transmission time is the gating parameter, so it's a good
thing (as long as it all fits into the byte loop timing, and
I'm pretty comfortable that it does).
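On the sending side, finding that bounding rectangle is cheap. A hypothetical sketch, treating each frame as 192 rows of 40 bytes (the function name and representation are my own):

```python
# Compute the dirty rectangle between two frames, in byte columns:
# compare rows, then find the leftmost/rightmost changed byte among
# the changed rows. Only this rectangle needs to be transmitted.
def dirty_rect(prev, curr, width=40, height=192):
    """prev/curr: lists of `height` rows of `width` bytes each."""
    rows = [y for y in range(height) if prev[y] != curr[y]]
    if not rows:
        return None                       # nothing changed: send nothing
    y0, y1 = rows[0], rows[-1]
    x0 = min(next(x for x in range(width) if prev[y][x] != curr[y][x])
             for y in rows)
    x1 = max(max(x for x in range(width) if prev[y][x] != curr[y][x])
             for y in rows)
    return x0, y0, x1, y1                 # inclusive byte-column bounds
```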

> That's residue from me thinking about a dumb, error-prone protocol where you
> aren't reliably getting bytes from one end to another in predictable form.
> Serial = simple, got it!
>
>
>>No, just run open-loop. There's nothing the sender can do to improve
>>things in case of an error--this is real-time, after all--and you can
>>easily verify correct or incorrect operation by observing the screen.
>
>
> I was more thinking that the Apple could discard a frame entirely if the
> results are bad,

That will usually happen automatically if you just restart loading
the next page buffer when an error is detected. As long as the
Apple doesn't receive end-of-frame, it never displays the partial
buffer.
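That policy is easy to model. This is only a toy sketch: the 2-byte end-of-frame marker and frame length are invented, and a real stream would have to keep the marker value out of the frame data:

```python
# Toy model of the receive policy above: bytes fill the back buffer,
# and pages flip only when a complete frame plus end-of-frame arrives.
EOF_MARK = b"\xff\xfe"   # hypothetical 2-byte end-of-frame sequence

def receive(stream, frame_len):
    back = bytearray()
    displayed = None
    i = 0
    while i < len(stream):
        if stream[i:i + 2] == EOF_MARK and len(back) == frame_len:
            displayed = bytes(back)   # "flip pages": show completed frame
            back = bytearray()
            i += 2
        else:
            back.append(stream[i])    # a partial frame is never displayed
            i += 1
    return displayed

frame = bytes(range(16))
assert receive(frame + EOF_MARK, 16) == frame   # complete frame shown
assert receive(frame[:10], 16) is None          # truncated: nothing shown
```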

> but if the error cases are more likely to be corrupt data
> than out of sync reads (which would corrupt the whole rest of the stream)
> then this isn't necessary. (Now, instead of thinking high level buffering,
> I'm thinking low level latching onto a stream. Duh...) So it would need to
> be an actual checksum... and this is probably overdesigning before we even
> know what the error rate is. So.... scrap it!

Frankly, the expected error rate is so low that checksums don't
really make much sense. If that assumption turns out to be wrong,
then a simple EOR checksum will prevent any bad frames from being
displayed.
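A simple EOR checksum like that costs one EOR per byte on each end; the Apple side would just keep a running EOR in the receive loop. Sketch:

```python
# EOR (XOR) checksum: the sender appends one byte so the receiver's
# running EOR over frame-plus-checksum comes out zero for a clean frame.
def eor_checksum(data: bytes) -> int:
    c = 0
    for b in data:
        c ^= b
    return c

frame = bytes([0x2A, 0x55, 0x7F])
wire = frame + bytes([eor_checksum(frame)])   # sender appends checksum
assert eor_checksum(wire) == 0                # clean frame passes
corrupted = bytes([wire[0] ^ 0x01]) + wire[1:]
assert eor_checksum(corrupted) != 0           # single-bit error is caught
```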

BTW, losing a byte looks pretty bad, since it shifts every following
byte--like losing horizontal hold. ;-)

>>I agree completely. Palette reduction is non-trivial (maybe with an
>>animation, you can get some advantage from the fact that many colors
>>do not change from frame to frame).
>
>
> We'll have to time it and see... I haven't actually coded up a diffusion
> dither myself, but I'm expecting it can crank out 2Hz.

Yes, especially at these low resolutions!

>>Maybe a good way to start would be to make a monochrome version. The
>>conversion is much easier to write. ;-)
>
>
> Definitely phase 1! Phase 2 is probably "ignore the high bit and hope for
> the best" and see how that looks. Phase 3 is where magic happens. I was also
> pondering the style where alternate scan lines have alternate color set bits
> set consistently. This would make pure-blue unachievable, but gives you a
> larger palette of blended colors, and a screen resolution of 140x96.

The bad news is that a lot of high-contrast vertical dithers look pretty
bad--unless you put a sheet of waxed paper over the display. ;-)

I'm sure there are some good "constrained" palette-reduction schemes,
but I've never had to do one like this... ;-)

Michael J. Mahon

Mar 31, 2008, 6:36:00 AM
David Wilson wrote:
> On Mar 31, 3:36 pm, "Joshua Bell" <inexorablet...@hotmail.com> wrote:
>
>>8192 bytes/frame * 9 bits/byte = 74kbits/frame
>>115kbits/second / 74kbits/frame = 1.5 frame/second
>
>
> If you want 6 colors you need to send 8 bit bytes with start and stop
> bits making 10 bits in all. This is an 11% slowdown.

Something in the back of my mind was telling me that 9 bit times
per byte was optimistic... I guess that's one cost of asynchronous
data transmission, but it seems like no stop bits would be needed...

> You can counter most of that by only sending 120 bytes of each 128
> (skip the undisplayed 8 bytes per group of 3 scan lines). This gives a
> 6% speedup.
>
> 192 lines/frame * 40 bytes/line * 10bits/byte = 76.8kbit/frame => 1.5
> frames/sec

Right--and a bounding rectangle for changes can improve it even
more. ;-)

sicklittlemonkey

Apr 1, 2008, 8:45:51 AM
On Mar 31, 1:36 pm, "Joshua Bell" <inexorablet...@hotmail.com> wrote:
> Definitely phase 1! Phase 2 is probably "ignore the high bit and hope for
> the best" and see how that looks. Phase 3 is where magic happens. I was also
> pondering the style where alternate scan lines have alternate color set bits
> set consistently. This would make pure-blue unachievable, but gives you a
> larger palette of blended colors, and a screen resolution of 140x96.

Even in phase 1 you could double the refresh rate of plain HGR by
halving the vertical resolution to 96 and just duplicating the lines.
Just a thought.

Cheers,
Nick.

Michael J. Mahon

Apr 1, 2008, 4:01:25 PM

True, but vertical resolution is the thing in shortest supply in
Apple graphics. ;-)

Michael J. Mahon

Apr 1, 2008, 4:06:50 PM

OK, how about doing it hierarchically...

Send every 4th line, and duplicate them, then the middle lines in
between if things haven't changed much since the sampling, then
the odd lines if still not much change.

That gives a 4x improvement in frame rate, and still keeps full
resolution for slow-changing screens.

Unfortunately, it also complicates frame buffer ping-ponging,
by requiring some additional stores, but it still should fit...
at the cost of making page-flipping take some extra time
to set up more zero-page pointers or dynamic addresses.
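The line schedule above can be written down directly (a sketch of the ordering only; the "has it changed much since sampling" test is left out):

```python
# Hierarchical refinement order: pass 0 sends every 4th line (duplicated
# on screen), pass 1 the lines midway between, pass 2 the odd lines.
def refinement_passes(height=192):
    pass0 = list(range(0, height, 4))   # 48 lines -> 4x frame rate
    pass1 = list(range(2, height, 4))   # refine if little has changed
    pass2 = list(range(1, height, 2))   # full 192-line resolution
    return pass0, pass1, pass2

p0, p1, p2 = refinement_passes()
assert sorted(p0 + p1 + p2) == list(range(192))  # each line sent once
```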

Roger Johnstone

Apr 4, 2008, 11:35:24 PM
In <8NmdncbgI-vj1G3a...@comcast.com> Michael J. Mahon wrote:
> Joshua Bell wrote:
>> Michael J. Mahon wrote:
>>
>> Yep. (Again, it seems that at 115kbps I'd just skip lo-res which
>> simplifies things)
>
> You'd be surprised how much better a 16-color, 6 FPS display looks
> than a 6-color 0.67 FPS display! (If you want 16-color hi-res, that's
> double hi-res, and requires twice as long to send.)
>
> At 6 FPS you can see motion pretty well, but at under 1 FPS it's
> pretty hard unless things are changing quite slowly.

If you want something in between don't forget double low-res mode: 80x48
with 16 colours, and available on the IIe, IIc and IIGS.

--
Roger Johnstone, Invercargill, New Zealand -> http://roger.geek.nz

Joshua Bell

Apr 10, 2008, 11:02:01 PM
Quick update on this project which should more properly be titled "streaming
desktop over serial":

* I have the Windows app prototyped; it grabs the screen (or part of the
screen, the UI for that is hacky at the moment). It converts the frames to
Apple II graphics in three phases: (1) scale down to 140x192, (2) dither to
hi-res palette (it is displayed in the GUI at this point so you can watch
what's happening), then (3) convert to hi-res format (standard 8192 byte
image format; we'll optimize transmission later). To verify, you can hit a
button to save an image to a file. (I've loaded images into CiderPress; they
look acceptable). You can also send it out over a selected serial port
(which I've tested with a virtual com port) - again, right now, you have to
hit a button.

* I picked up a power brick for my //c on eBay (arrived yesterday). Much to
the delight of my co-workers, the //c has spent the last day drawing hi-res
lissajous curves.

* I ordered a USB->RS232C and //c-friendly null-modem cable from
retrofloppy.com. The cables showed up just as I was leaving work today
(yay!) so I stayed a little bit later to try them out. I tried using ADTPro
(I have no floppies, but don't really need them for this)... but just before
it completed bootstrapping ProDOS it bluescreened my Windows XP box - twice!
Erm, I guess I'll try to reinstall the USB drivers tomorrow. (I probably
plugged the USB thingy in before I installed the driver, just like the
instructions always say not to.)

* I've taken the seed MJM posted for the Apple side's inner loop, and after
some FAQ scouring written a bunch of the assembly code and UART twiddling.
Once I have it complete and actually assembling I'll post it here for (much
needed) feedback.
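(For the curious: phase 3 of the conversion above boils down to bit-packing into the interleaved 8192-byte page. Here's a simplified monochrome sketch, 280x192 with 7 pixels per byte, LSB leftmost, high bit left clear; the real converter also has to choose the color-group bit per byte:)

```python
# Pack a 280x192 monochrome bitmap into the standard 8192-byte hi-res
# page image. Each byte holds 7 pixels; rows are interleaved in memory.
def hires_offset(y):
    return (y & 7) * 0x400 + ((y >> 3) & 7) * 0x80 + (y >> 6) * 0x28

def pack_hgr(bits):
    """bits: 192 rows of 280 zero/one pixels; returns the 8192-byte page."""
    page = bytearray(8192)
    for y in range(192):
        row = hires_offset(y)
        for bx in range(40):
            b = 0
            for i in range(7):
                b |= bits[y][bx * 7 + i] << i   # LSB is the leftmost pixel
            page[row + bx] = b
    return bytes(page)

white = [[1] * 280 for _ in range(192)]
assert pack_hgr(white)[0] == 0x7F   # 7 lit pixels, color bit clear
```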

Joshua


Michael J. Mahon

Apr 10, 2008, 11:13:53 PM

Way to go, Joshua--you're moving right along!

Joshua Bell

Apr 12, 2008, 3:28:21 PM
Another update, this time in graphical form:

http://www.calormen.com/tmp/a2screen.jpg

w00t!

(Left to right: that's my Windows laptop on the left showing the desktop,
with the screen grabber window showing what it just grabbed; USB/RS232C and
null modem cables c/o retrofloppy.com; stock Apple //c, with color monitor
showing the hires screen it just received.)

I borrowed the bootstrapping idea from ADTPro so you can do this from a
bare-metal //c. Which is good, as I don't have any floppies. :)

TODO:
* This is just static "press a button to send a frame" mode at the moment.
Need to make it stream.
* Windows app needs serious cleanup.
* Apple2 code is not doing page flipping correctly (probably minor logic
bug, but I'm headed out for the day)
* Sending keystrokes upstream is untested
* As calculated, with the transfer unoptimized, it takes just under a second
for one screen to come down at 115kbps. We can boost this with various
optimizations

Joshua Bell

Apr 12, 2008, 3:36:49 PM
"Joshua Bell" <inexora...@hotmail.com> wrote in message
news:p38Mj.2541$h75....@newssvr27.news.prodigy.net...

> Another update, this time in graphical form:
>
> http://www.calormen.com/tmp/a2screen.jpg

Sorry for the spam, but I forgot to do the obligatory "self-host" image:

http://www.calormen.com/tmp/a2meta.jpg

(That's taking the previous image showing on my desktop, cropping the
screen-grabbing area to just that, and sending it down to the Apple.)

Joshua

Michael J. Mahon

Apr 12, 2008, 7:30:57 PM

Very nice!

Joshua Bell

Apr 13, 2008, 3:34:39 PM
And after much delay:

http://www.youtube.com/watch?v=vAZHJa91JHk

(I had the video recorded yesterday, but downsizing/uploading proved a
problem so I had to re-record. At least this time I said "128 kilobytes"
instead of "128 megabytes"!)

aiia...@gmail.com

Apr 13, 2008, 5:11:43 PM

Very neat! what frame rate are you getting on the IIc?

The graphics conversion looks good.

Rich

aiia...@gmail.com

Apr 13, 2008, 5:23:11 PM
A few ideas :

1)reduce the resolution of your windows machine
2)reduce the resolution of the converter... have it output 140X95

I like the colors on the IIc version more than the peecee version :-)

can you show screenshots of the pc interface?

What speed is your PC?

I can think of a bunch of applications for your software...

Makes me want to try streaming ComputerEyes images
over tcp/ip


Rich

Joshua Bell

Apr 13, 2008, 5:46:58 PM
<aiia...@gmail.com> wrote in message
news:324e1586-9633-42ba...@v26g2000prm.googlegroups.com...

> Very neat! what frame rate are you getting on the IIc?

As calculated, it's about 1.4fps. See elsewhere in this thread for
calculations, and optimizations.

<aiia...@gmail.com> wrote in message
news:e6bd3e1b-8b34-4928...@n1g2000prb.googlegroups.com...


> 1)reduce the resolution of your windows machine

FYI, for the demo it was at native 1440x900.

That would make the GUI elements (buttons, text, icons) show up somewhat
more comprehensibly, but text wouldn't be readable unless you shrank the
display down insanely. It's irrelevant for the Second Life demo, anyway,
since the whole experience scales.

> 2)reduce the resolution of the converter... have it output 140X95

Already discussed. A more elaborate protocol would allow the server to say
"this frame hasn't changed much, let me just send alternate scan lines"

> can you show screenshots of the pc interface?

http://www.calormen.com/tmp/sg.png - it should be fairly self-explanatory. I
need a better name for the app though.

> What speed is your PC?

1.7 GHz, single core.

David Schmenk

Apr 13, 2008, 6:20:32 PM

Hey, the Apple IIc gets about the same framerate on SecondLife as my
Powerbook G4 ;-) The results are much better than I would have
imagined. Maybe you could fire up a web browser and show Rich what his
project might look like? Very cool,

Dave...

Michael J. Mahon

Apr 13, 2008, 9:08:48 PM

Way to go, Joshua--you get a prize for getting the rubber
to hit the road so fast!

From this point on, it's all "incremental improvement"! ;-)

Joshua Bell

Apr 24, 2008, 2:30:36 AM
It's up:

http://www.calormen.com/vnIIc/

Source coming soon.

aiia...@gmail.com

Apr 24, 2008, 2:24:23 PM

would be neat to see (on the IIc) the PC running AppleWin

Michael J. Mahon

Apr 24, 2008, 4:08:00 PM

THERE'S There's there's... TOO Too too... MUCH Much much...
ECHO Echo echo...

sfahey

Apr 24, 2008, 5:26:03 PM
To: Michael J. Mahon
On 4/24/08 3:08 PM, in article -4adnQEfS985dY3V...@comcast.com,

"Michael J. Mahon" <mjm...@aol.com> wrote:

> THERE'S There's there's... TOO Too too... MUCH Much much...
> ECHO Echo echo...
> ;-)

Has anyone seen Stephen Heumann around? His site that hosted VNCViewGS is
offline. I was hoping he'd still be working on it.

The last version I know of is available from Syndicomm as part of their
telecom starter kit ... but the program is free, and I'll try to get a copy
from Sheppy to post to A2C.

http://www.a2central.com/portal/?p=790
http://www.a2central.com/portal/?p=826
--- Synchronet 3.14a-Win32 NewsLink 1.85
A2Central.com - Your total source for Apple II computing.

Michael Kent

Apr 24, 2008, 10:31:56 PM
Michael J. Mahon <mjm...@aol.com> wrote:

> aiia...@gmail.com wrote:

>> would be neat to see (on the IIc) the PC running AppleWin

Only if you could run vnIIc on AppleWin!

> THERE'S There's there's... TOO Too too... MUCH Much much...
> ECHO Echo echo...

Kind of like putting your virtual memory on your RAMdisk. :)

Mike

PS: Way cool program!

Joshua Bell

Apr 25, 2008, 2:22:19 AM
<aiia...@gmail.com> wrote in message
news:d9a32403-395e-42a7...@u12g2000prd.googlegroups.com...

>
> would be neat to see (on the IIc) the PC running AppleWin

Ask and ye shall receive:

http://calormen.com/vnIIc/images/ - scroll to the bottom. I'll make the site
more attractive at some point.

I chose Karateka since it has bold colors. I'm actually surprised (once
again) at how good it looks, particularly the castle image. I used a feature
of the server which lets you drag-select a portion of the desktop to
stream - basically, cropped it to just the AppleWin client area.

The last image on that page expands the screen capture to the whole desktop
so you can see - on the Apple display - the Windows desktop, including the
AppleWin window, the capture program, and the capture program's thumbnail of
the Windows desktop, including the AppleWin window, the capture program, and
the capture program's thumbnail of the Windows desktop, including the
AppleWin window... and I think you run out of pixels at that level of
recursion.

Michael J. Mahon

Apr 25, 2008, 4:20:32 PM

Actually, that would be a good way of using extended RAM for
a 64KB-capable processor... Not as fast as bank-switching,
but infinitely versatile.

Michael J. Mahon

Apr 25, 2008, 4:23:24 PM

Kind of makes you think that in full-screen mode, with the image
conversion algorithm mapping a pure Apple II screen to an Apple II
screen, it might even look *identical* to the native display... ;-)

aiia...@gmail.com

Apr 26, 2008, 10:38:39 AM
On Apr 24, 1:08 pm, "Michael J. Mahon" <mjma...@aol.com> wrote:

> aiiad...@gmail.com wrote:
> > would be neat to see (on the IIc) the PC  running AppleWin
>
> THERE'S There's there's... TOO Too too... MUCH Much much...
> ECHO Echo echo...
> ;-)

That's the
point :-) . . . . .. . . . . . .... ..............................

aiia...@gmail.com

Apr 26, 2008, 10:42:05 AM
On Apr 25, 1:23 pm, "Michael J. Mahon" <mjma...@aol.com> wrote:
> Joshua Bell wrote:
> > <aiiad...@gmail.com> wrote in message

> >news:d9a32403-395e-42a7...@u12g2000prd.googlegroups.com...
>
> >> would be neat to see (on the IIc) the PC  running AppleWin
>
> > Ask and ye shall receive:
>
> >http://calormen.com/vnIIc/images/ - scroll to the bottom. I'll make the

> > site more attractive at some point.
>
> > I chose Karateka since it has bold colors. I'm actually surprised (once
> > again) at how good it looks, particularly the castle image. I used a
> > feature of the server which lets you drag-select a portion of the
> > desktop to stream - basically, cropped it to just the AppleWin client area.
>
> > The last image on that page expands the screen capture to the whole
> > desktop so you can see - on the Apple display - the Windows desktop,
> > including the AppleWin window, the capture program, and the capture
> > program's thumbnail of the Windows desktop, including the AppleWin
> > window, the capture program, and the capture program's thumbnail of the
> > Windows desktop, including the AppleWin window... and I think you run
> > out of pixels at that level of recursion.
>
> Kind of makes you think that in full-screen mode, with the image
> conversion algorithm mapping a pure Apple II screen to an Apple II
> screen, it might even look *identical* to the native display...  ;-)


If only AppleWin would map apple II pixels directly to VGA pixels.....

Michael J. Mahon

Apr 26, 2008, 1:20:00 PM

But in a full-screen and most other modes, there would be lots of
VGA pixels for each Apple pixel, and all the info is there to be
recovered. With decimation and color quantization, the recovery
could be perfect.

aiia...@gmail.com

Apr 28, 2008, 11:51:15 AM
On Apr 26, 10:20 am, "Michael J. Mahon" <mjma...@aol.com> wrote:

> aiiad...@gmail.com wrote:
> > On Apr 25, 1:23 pm, "Michael J. Mahon" <mjma...@aol.com> wrote:
>
> >>Joshua Bell wrote:
>
> >>><aiiad...@gmail.com> wrote in message
> >>>news:d9a32403-395e-42a7...@u12g2000prd.googlegroups.com...
>
> >>>>would be neat to see (on the IIc) the PC  running AppleWin
>
> >>>Ask and ye shall receive:
>
> >>>http://calormen.com/vnIIc/images/ - scroll to the bottom. I'll make the

> >>>site more attractive at some point.
>
> >>>I chose Karateka since it has bold colors. I'm actually surprised (once
> >>>again) at how good it looks, particularly the castle image. I used a
> >>>feature of the server which lets you drag-select a portion of the
> >>>desktop to stream - basically, cropped it to just the AppleWin client area.
>
> >>>The last image on that page expands the screen capture to the whole
> >>>desktop so you can see - on the Apple display - the Windows desktop,
> >>>including the AppleWin window, the capture program, and the capture
> >>>program's thumbnail of the Windows desktop, including the AppleWin
> >>>window, the capture program, and the capture program's thumbnail of the
> >>>Windows desktop, including the AppleWin window... and I think you run
> >>>out of pixels at that level of recursion.
>
> >>Kind of makes you think that in full-screen mode, with the image
> >>conversion algorithm mapping a pure Apple II screen to an Apple II
> >>screen, it might even look *identical* to the native display...  ;-)
>
> > If only AppleWin would map apple II pixels directly to VGA pixels.....
>
> But in a full-screen and most other modes, there would be lots of
> VGA pixels for each Apple pixel, and all the info is there to be
> recovered.  With decimation and color quantization, the recovery
> could be perfect.

Ok, emulator emulates apple II video, then a PC program decodes
the VGA pixels and turns it back into the 8192 bytes present at
$2000 in the emulated memory map....

Why not just break into the AppleWin code, and find a pointer
to the $2000 area of the memory map... Then just send that
8k over the serial line....

AppleWin feature request?

Rich

Michael J. Mahon

Apr 28, 2008, 2:42:45 PM
aiia...@gmail.com wrote:
> On Apr 26, 10:20 am, "Michael J. Mahon" <mjma...@aol.com> wrote:
>
>>aiiad...@gmail.com wrote:
>>
>>>On Apr 25, 1:23 pm, "Michael J. Mahon" <mjma...@aol.com> wrote:
>>
>>>>Joshua Bell wrote:
>>
>>>>><aiiad...@gmail.com> wrote in message
>>>>>news:d9a32403-395e-42a7...@u12g2000prd.googlegroups.com....

Now that really is an Apple II "network computer" interface!

Using NadaNet, you could make "slave displays" with a single
&POKE command to refresh a slave. ;-) (Of course, that's
with real Apple II's, not emulators.)

Joshua Bell

Jul 12, 2008, 8:48:47 PM
FYI, I updated the code slightly and published a new version - see
http://www.calormen.com/vnIIc/

* Keyboard support - I'm not sure I published a version that had this. Type
on the Apple, keystrokes go to Windows (unless the vnIIc app itself is the
active window); includes open/closed apple keys (mapped to ALT)
* Joystick/Paddle state is read, and translated to Windows mouse events
* You can select a subset of the screen to stream (click a "+" button, then
drag-select on the desktop)

The bulk of the time spent on this update today was in tracking down a
frelling stupid bug in an SSCPUT routine (if not ready, it would loop
forever rather than retry). Well, that, and making the protocol synchronous
rather than async; left async, the poor client would fall behind and madness
would ensue.

I experimented a few months back with mouse routines, so bolting mouse
support on at some point is likely to happen, but no-one should hold their
breath.
