Info-IBMPC Digest Friday, 28 August 1987 Volume 6 : Issue 59

This Week's Editor: Billy Brackenridge

Today's Topics:

BIOS Musings
Zenith 304 Port
720K disk as drive B:
Microsoft Linker and Semantics of OFFSET in MASM
Don't blame Linker
"Ethernet Transfer Rates for Real Products"
WP for Science and the Real World
Lotus 123 Clone
V20 and EMS Card
Set Time and Date on AT
Switching from Protected to Real Mode (3 Msgs)
Brief OS Promises
Problems with Mike Higgins' Com Port Driver
Inconsistent Modem
3Comm and IP/TCP
LongJmp and Interrupts
My stupidity on pausing until an interrupt, and signals!
Call for Papers Computer Simulation
Microprocessor and PC History
Today's Queries:
Querying disk interleave
PIBTERM & ULTRA UTLS
HP7470A Talking with Lotus 123
Pinout of 9 pin D-shell color video output needed
Single Density Format
3780 RJE support for PCs
Reading data from digitizer in Turbo-Pascal
Bullet286-ii from Wave Mate
Info on DBASE
Adding Second Hard Drive
Problems with Vaxmate
Doubledos vs. DOS 3.2.
Dbase Mail List Wanted


INFO-IBMPC BBS Phone Numbers: (213)827-2635 (213)827-2515

----------------------------------------------------------------------

From: Ya`akov Miles <multi%dac.triumf.cdn%ubc....@RELAY.CS.NET>
Subject: BIOS Musings

[This is in reference to BIOS.ASM in our lending library. -wab]

You may be interested in a history of where this BIOS came from, and how
it arrived in its present form. A heavily patched, partially-functional
BIOS (with no copyright statement, or other visible indication of origin)
was supplied with my IBM-PC/xt compatible 10 MHz motherboard. In order to
get my motherboard to function correctly, in other words, to work with the
parity interrupt enabled and to operate with the NEC "V20", it was necessary
to disassemble and thoroughly go through this "anonymous" BIOS, which was
hinted to have come from Taiwan, while limping along on a name-brand BIOS, as
supplied on my previous motherboard by a different vendor. In the course
of this disassembly, aided by comparison with the published IBM-PC/xt listings,
it became apparent that the synchronization on horizontal retrace in the
video INT 10h service was the root cause of the failure to operate with the
NEC "V20", and that correcting it to correspond with logic (as in IBM's bios)
caused the glitch to disappear. I am unable to account as to why several
name brand BIOS brands (excluding IBM's) had similar glitches - maybe they
they were produced from similar source code, although this seems unlikely.
In any case, as evidenced by DEBUG, some of these name-brand BIOSes were full
of machine-level patches - did the vendors ever bother to reassemble and
optimize the source code? The code that I examined was full of recursive
INT(errupt) instructions, which did not contribute to a screaming-fast BIOS.
Therefore, the assembly code was rearranged so as to eliminate some of the
unnecessary CALL, JMP, and especially INT instructions, as the optimization
proceeded with the later releases. The BIOS is (c) Anonymous, because there
was no indication of the original authors...

ps: While playing around with my 10 MHz motherboard, I encountered an unusual
program called HELPME.COM, which ran at a higher pitch than normal. Since
this program behaved normally on other (8 MHz) turbo motherboards, my
curiosity was aroused. This eventually led me to discover that the 10 MHz
motherboard was refreshed in hardware by channel 1 of the 8253 timer IC,
and that this channel appeared to be counting down from an unusually fast
oscillator. Maybe this could cause problems with other programs...
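
For anyone who wants to poke at this on their own board, here is a minimal C
sketch (added for illustration, not part of the original message) that samples
the 8253 channel 1 count from DOS. It assumes the usual BIOS setup in which
channel 1 is programmed for LSB-only operation, and it assumes Microsoft C
style inp()/outp() port routines; the stock refresh divisor on a 4.77 MHz PC
is 18, so the largest count observed should hover near that value, and a
markedly different number would point to the kind of oddity described above.

/* Illustrative sketch only: sample the 8253 channel 1 count, the DRAM
 * refresh-request timer on PC/XT-class boards.  Channel 1 counts down
 * from its reload value and wraps, so the largest count seen over many
 * samples approximates the refresh divisor (18 on a stock 4.77 MHz PC).
 * Assumes the BIOS programmed channel 1 for LSB-only reads and that the
 * compiler provides inp()/outp(); adjust for your compiler if not.
 */
#include <stdio.h>
#include <conio.h>              /* inp(), outp() in Microsoft C */

static unsigned read_channel1(void)
{
    outp(0x43, 0x40);           /* counter 1: latch current count */
    return inp(0x41) & 0xFF;    /* LSB-only mode: one read returns it */
}

int main(void)
{
    unsigned i, count, highest = 0;

    for (i = 0; i < 20000; i++) {
        count = read_channel1();
        if (count > highest)
            highest = count;
    }
    printf("Largest 8253 channel 1 count observed: %u\n", highest);
    return 0;
}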


------------------------------

Date: Sat, 22 Aug 87 11:16:32 EDT
From: Robert Bloom AMSTE-TEI 3775 <rbl...@apg-1.arpa>
Subject: Zenith 304 Port


> Does anyone know the correct addressing scheme for the Z-304 port
> on a Zenith Z-248 system? It is a 38 pin port, and they provide
> an adapter that brings those 38 pins out to a male "COM 3" port,
> and a female parallel port. I can't seem to find any decent
> documentation to show how to address these two ports.

With your address line, I assume this is a government Z248. If so,
check "Appendix 404" in your owners manual. Page 23 has a pin-out.
Page 24 has a MFM program to test the serial port. (To be honest,
the level of information in the appendix is well over my head - I
just use the thing.)


------------------------------


Date: Sat, 22 Aug 87 11:27 EDT
From: He...@MIT-Multics.ARPA
Subject: 720K disk as drive B:


I just installed a 720K drive (from Tigertronics) into a PC and
encountered the same problem. When I used DRIVER.SYS as described:
DEVICE=DRIVER.SYS /D:1 /F:2
(I think the /F selects 720K, and the /D only says "be drive #1".)
What happened was that everything thought the drive was D: or E:, I
forget which. Thing is, the DOS manual describes this as the *intended*
behavior!

But what the drivemaker suggested was using DRIVPARM=/D:1 /F:2 instead.
Now, this wasn't documented in my DOS 3.3 manual, but on the other hand,
I was running DOS 3.2 at the time. And it worked! You might give it a
try. As a last resort, just put ASSIGN B:=E: into your AUTOEXEC.BAT
and leave well enough alone.

Brian

------------------------------


Date: Sat, 22 Aug 87 12:27:30 PDT
From: microsof!pe...@beaver.cs.washington.edu
Subject: Microsoft Linker and Semantics of OFFSET in MASM
Cc: pe...@beaver.cs.washington.edu, reu...@beaver.cs.washington.edu


> [Try replacing anything of the form MOV ?X,OFFSET FOO with LEA ?X,FOO,
> Don't ask me why but the linker loves to generate bad code sometimes
> when you use the offset operator. -wab]

; Having spent two years working on the Microsoft linker, I always
; suffer from a knee-jerk reaction to remarks such as the editor's.
; So I apologize in advance for any flames, and I will try to give a
; small tutorial on the semantics of the OFFSET statement in MASM.
; The linker does not love "to generate bad code sometimes". More so
; than just about any other language utility, the linker simply does
; what it is told. The problem is that, sometimes through a lack of
; understanding, people tell it to do the wrong thing. I will be the
; first to admit that trying to understand the workings of MASM from
; the documentation is not always fruitful; MASM certainly has its
; share of quirks, and experience is the best teacher.
;
; Here is an annotated example you may find helpful:

NAME foomodule

foogroup GROUP fooseg

; Rule #1: Groups will get you in big trouble if you aren't
; careful.

fooseg SEGMENT WORD PUBLIC 'foodata'
ASSUME ds:foogroup

; Rule #2: If you put a segment in a group, always refer to
; it by the group unless you are trying to be very clever
; indeed.

PUBLIC foo
foo DW ?

; Rule #3: Believe it or not, the ASSUMEd contents of DS affect
; the way MASM emits the PUBlic DEFinition record for any data
; label. In this case, the hypothetical .OBJ file for the
; hypothetical .ASM file we are assembling will state that foo
; is in segment FOOSEG in group FOOGROUP. Had we omitted the
; ASSUME statement above, or had we said "ASSUME ds:fooseg"
; instead, the .OBJ would not mention any connection between foo
; and foogroup. This is potentially important when the linker
; resolves references to external symbols. More below.

fooseg ENDS


EXTRN foofar:FAR

; Rule #4: If you do not know the name of the segment that
; contains an external symbol, then don't put the EXTRN
; statement inside a segment. See rule #6.


foocode SEGMENT BYTE PUBLIC 'foocode'
ASSUME cs:foocode, ds:NOTHING, es:NOTHING, ss:NOTHING

; Rule #5: Be very conservative with ASSUMEs. It is best in a
; code segment to first assume globally that you don't know
; anything about the contents of DS, ES, and SS; and then, at
; the start of each procedure, have an ASSUME statement stating
; the conditions on entry to that procedure. Further, if you
; modify a segment register inside a procedure, place an ASSUME
; statement immediately after it is modified to reflect its new
; state.

EXTRN foonear:NEAR

; Rule #6: Only put an EXTRN statement inside a segment if you
; know for sure that the external symbol is actually defined (in
; some other file) in that segment.
;
; I will now digress to briefly explain fixups. "Fixup overflow"
; is everybody's least favorite linker error message, including
; me. Fixups on the 8086 are complicated thanks to Intel's
; segmented architecture. Basically, the 8086 uses 20-bit
; physical memory addresses (hence, the 1 Mbyte address space we
; all know and love). The chip forms a 20-bit address by taking
; the value in the appropriate segment register (this value is
; often called a paragraph number), multiplying it by 16, and
; adding in the specified offset. Mathematically, one might
; say:
;
; addr = parano*16 + offset
;
; Everyone knows the offset may range from 0 to 64K - 1. The
; problem is:
;
; addr = parano*16 + offset = (parano - 1)*16 + (offset + 16) =
; (parano - 2)*16 + (offset + 32) = ...
;
; In fact, every byte in memory may be addressed by 4096
; different combinations of paragraph number and offset. So,
; when the linker is asked to fill in the address of a symbol
; (this is what a fixup is), knowing the physical address of the
; target symbol (i.e., the symbol to which the fixup refers) is
; not sufficient. The linker needs to know which of the 4096
; possible addressing combinations is desired. So, in addition
; to the name of the target, the .OBJ requesting the fixup must
; tell the linker what value it expects to be in the segment
; register being used for the reference. This is not done by
; specifying an absolute number; it is done by specifying a
; segment name or a group name. This segment name or group name
; gives the linker a FRAME of reference (and, in fact, paragraph
; numbers are also called frame numbers, or just frames). Once
; a frame has been specified, then there can be only one
; possible offset relative to that frame that references the
; target. Note that there may not be ANY offset that will fit
; in 16 bits that references the target relative to the desired
; frame. When the linker detects such a case, it emits the
; dreaded "Fixup overflow" message.
;
; Enough digression for now. Let's examine what MASM does in
; the two EXTRN examples given. In the first, the EXTRN for
; foofar is outside all our segments. Thus, MASM does not
; assume it knows anything at all about where foofar is defined.
; So, if we call foofar, MASM will emit a fixup saying that the
; target is foofar and that it doesn't know anything about
; foofar's frame of reference. In this case, the linker is left
; to make the decision about what the frame ought to be. Here
; is where rule #3 comes in. If the PUBlic DEFinition (or
; PUBDEF) for foofar specifies that the segment containing
; foofar is in a group, then the address of that group will be
; used as the frame in the fixup; otherwise, the segment
; containing foofar will be used as the frame.
;
; In the second EXTRN example, the EXTRN is inside segment
; foocode. If we call foonear, MASM will emit a fixup record
; that tells the linker to use segment foocode as the frame of
; the fixup regardless of what the PUBDEF record for foonear
; says about foonear's frame. So, guess what? If foonear
; doesn't happen to be in foocode, there is a very good chance
; of seeing everybody's favorite error message.
;
; A small digression on segments and groups. The address of a
; segment is the address of the first byte in the segment. The
; frame of a segment is the address of the segment rounded down
; to the nearest multiple of 16 (which happens to be the nearest
; paragraph number). A group is a collection of segments that
; are to be referred to as if they were one segment. The
; address of a group is the smallest of the addresses of its
; member segments. The frame of a group is the address of the
; group rounded down to the nearest multiple of 16. It is
; important to note that the group directive does not force the
; segments of a group to be contiguous; nor does LINK.EXE do any
; explicit checking to see if all the segments in a group fit in
; 64K bytes. So, if the address of the last byte of the last
; segment in the group is 64K or more higher than the frame of
; the group, guess what? That's right, you'll probably see
; everybody's favorite ...

ASSUME ds:NOTHING, es:NOTHING, ss:NOTHING
fooproc PROC NEAR
call foofar
call foonear
mov ax,foogroup
mov ds,ax
ASSUME ds:foogroup
mov ax,offset foogroup:foo ; (1)

; Rule #7: Always use a segment or group name with the OFFSET
; operator (if the segment or group is known). We've finally
; reached the case that set me off. If you do not specify a
; group or a segment name when using the OFFSET operator, then
; MASM will always emit a fixup specifying the frame of the
; segment (assuming it knows what it is) EVEN IF THE SEGMENT IS
; A MEMBER OF A GROUP. This is unfortunate, but it is true. As
; you have pointed out,

lea ax,foo ; (2)

; does the right thing because of the "ASSUME ds:foogroup"
; statement. You may say, "Well, why doesn't MASM do the same
; thing with the OFFSET operator?" Go ahead and say it; just
; know that you are asking the wrong person.
;
; Now, since the lea always works (assuming your ASSUMEs are
; correct), and OFFSET doesn't if you forget to specify the
; group name, why would anyone ever want to use (1) rather than
; (2)? Well, (2) is one byte longer than (1). To some people,
; this matters.
;
; At any rate, I hope this helps with any questions about the
; linker generating bad code and understanding fixups on the
; 8086 in general.

mov dx,offset foogroup:disclaimer
mov ah,9
int 21h
mov ax,4C00h
int 21h
fooproc ENDP

foocode ENDS


fooseg SEGMENT
ASSUME ds:foogroup

disclaimer db 'The opinions expressed herein are my own', 0Dh, 0Ah
db 'and may not express those of my employer,', 0Dh, 0Ah
db 'Microsoft Corporation.', 0Dh, 0Ah, 0Dh, 0Ah
db 'Pete Stewart', 0Dh, 0Ah, '$'

fooseg ENDS


END fooproc
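
As an aside (not part of Pete's original message), the "addr = parano*16 +
offset" arithmetic from the fixup digression is easy to check with a few
lines of C; the program below simply counts the frame:offset pairs that
reach one arbitrary 20-bit physical address.

/* Illustrative only: the segment arithmetic from the fixup digression.
 * A 20-bit physical address is frame*16 + offset; any address high
 * enough can be formed from 4096 distinct (frame, offset) pairs as long
 * as the offset fits in 16 bits.
 */
#include <stdio.h>

int main(void)
{
    unsigned long addr = 0x12345L;     /* an arbitrary 20-bit address */
    unsigned long frame, offset, combos = 0;

    for (frame = 0; frame <= 0xFFFFL; frame++) {
        if (frame * 16 > addr)
            break;                     /* offset would go negative */
        offset = addr - frame * 16;
        if (offset <= 0xFFFFL)
            combos++;                  /* this frame:offset pair works */
    }
    printf("%05lX is reachable from %lu frame:offset pairs\n",
           addr, combos);              /* prints 4096 for this address */
    return 0;
}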


------------------------------


From: microsof!reu...@beaver.cs.washington.edu
Subject: Don't blame Linker
Date: Mon Aug 24 09:20:22 1987

As custodian of the Microsoft linker for the past nearly three
years, I can vouch for the complete accuracy of Pete Stewart's
recent remarks on segments, groups, and fixups. The editor's
remark about the linker "generating bad code" is false: the
linker does precisely what it is told.
Disclaimer: I speak for myself and not necessarily for my
employer, Microsoft Corporation.

[To correct the record I have never had any complaints with the
linker that I can trace to the linker. This information was
unavailable four years ago when the readers of INFO-IBMPC
collectively found out by experimentation that the LEA instruction
worked when the offset operator sometimes failed.

I haven't seen the problem for a long time. It must have been version 1
or 2 of MASM. We never saw a linker fixup error message; the code would just
blow up. Modules which hadn't been assembled in months would take space
shots. It caused much grief at the time. I am delighted that at last
we have the problem well documented. -wab]

------------------------------


Date: Sun, 23 Aug 87 22:38:53 EDT
From: "James B. VanBokkelen" <JB...@AI.AI.MIT.EDU>
Subject: "Ethernet Transfer Rates for Real Products"


The transfer rate you get depends a great deal on the Ethernet interface
you are using, and the caliber of the machine it is installed in. The
fastest consistent speed I have ever seen was for a memory-to-memory TCP
transfer of 1Mb between two Wyse 286 machines with Micom-Interlan NI5210
cards in them: 1.46 Mbits/sec. Of course, this was a test program, but I
wrote it using our programming libraries...

Here are some real-world figures: Obtained with our FTP.EXE program against
a MicroVax II with a Hitachi 172Mb ESDI drive and a DEQNA. The only other
load was someone running "rain" on a Telnet connection during the whole test.
The file being transferred was the Ultrix 2.0 kernel, about 577Kbytes long,
in "image" (binary) mode. All speeds are in Kbytes/sec.

                      Disk     Null device       Ram disk

System 1, get:        35K      66K               52K
          put:        42K      70K (from ram)    56K (from ram to vax disk)

System 2, get:        19K      36K               n/a
          put:        23K      23K               n/a

System 3, get:        36K      80K               n/a
          put:        50K      52K               n/a

System 1: 8 MHz 286, ST4051 disk (40Mb), Micom-Interlan NI5010 interface
(older, "dumb" interface, only two packet buffers, TCP window of 1024).
Production PC/TCP software (version 1.16). MS/DOS 3.1.

System 2: ITT "XTra" (4.77 MHz 8088), 10Mb disk, Micom-Interlan NI5210
interface (new, "semi-intelligent" interface, 8Kb of on-board packet buffer
and 82586 LAN controller). TCP window of 1024. Production PC/TCP (1.16).
ITT DOS 2.11.

System 3: 8 MHz 286, CMI 20Mb disk (originally in an IBM AT), Western Digital
8003 interface (new, "semi-intelligent" interface, 8Kb of on-board packet
buffer and Nat Semi 8390 LAN controller). Beta test PC/TCP (1.16 pl 1, with
some speed improvements not in base 1.16). TCP window of 4096. MS/DOS 3.1.

Comments: All of these figures include overhead for a fully-functioning TCP.
Presumably, someone who was only concerned with PC - PC transfers could use
a more streamlined packet format, or omit the hardware-independent checksum,
as Sun does on their UDP packets in NFS.

The XT was a bad place to look at the NI5210 (and would have been for the
WD8003, or other, similar cards). This is because they don't support
DMA, and the 8088 runs out of steam in memcpy(). The NI5010 in the AT was
being run without DMA, because that is much faster than using the AT's
crippled DMA controller. The 3Com 3C501 would have performed considerably
worse than any of these cards, had I found one in a machine, because it has
only one packet buffer, and some quirks...

In general, I expect to be able to get system 3's performance with all of
the new generation of Ethernet cards (no on-board processor, but a good
deal of buffer memory, and an intelligent LAN controller chip). One key
point about these is that in an AT, they don't miss many packets, which is
a major cause of performance bottlenecks when using older cards with large
TCP windows against machines with inferior TCP retransmit policies like
Ultrix's.

jb...@ai.ai.mit.edu
James B. VanBokkelen
FTP Software Inc.

[I still maintain that lack of non-blocking disk I/O is THE major factor
slowing down real world file transfers. The get times reflect this, but
the put times don't. I am not sure why there is such a large difference
between get and put times to disk. -wab]

------------------------------


Date: Tue, 25 Aug 87 11:56:25 +0200
From: RPARBS%FRFUPL1...@wiscvm.wisc.edu
Subject: WP for Science and the Real World


The polytechnic of LILLE (France) is using a program called MATHOR. It is a
TEXT processor, specialized in scientific text processing. It is really easy
to use and takes only about one hour to learn 90% of the functions. You can
enter text (with latin, greek, "BARRE", math ... alphabets, regular/italic/bold/
underlined/double-size characters), with the common utilities (justify, change
margins, indent, etc.), together with formulas, matrices, tables and space
reserved for figures. Everything is shown on the screen as it will appear on
paper. It works with EGA or HERCULES cards installed on PC-XT & AT or clones,
also on OLIVETTI M24 and M28 and H.P. Vectra. A large choice of printers is
available: EPSON FX, LQ800, LQ1000 or LQ1500, NEC P2 to P7, Xerox 4045 and of
course CANON LBP-8A2 or H.P. Laserjet+ laser printers. It can also take
documents from an ASCII file, manage a library of frequently used paragraphs
and formulas, and (for US people) translate a text into TEX format.

Version 2.0 (June 1987) is sold for 6950 French francs (around $1100) by
NoVedit, av. du Hoggar BP112, 91944 LES ULIS CEDEX, FRANCE. It is copy-protected
software (just a hardware key plugged into the parallel port).

I have said quite a lot about it, but only because we are very satisfied. All
our technical reports and some theses have been written with it, without any
particular problem.

/regis BOSSUT

------------------------------


Date: 25 Aug 87 06:35:21 PDT (Tuesday)
Subject: Lotus 123 Clone
From: "Paul_Norder.HENR801G"@Xerox.COM


Regarding Lotus 123 clones: check out the offerings by Mosaic Software.
They are so much like Lotus 123 that Lotus is suing (or has sued) Mosaic
for infringement. I bought my copy of "The Twin", Mosaic's 123
look-alike, for about 40 or 50 bucks from BCE liquidators. They
advertise in the Computer Shopper. (Look for a large group of pages
that have red edges.) If you want an integrated, Symphony-like clone,
try Mosaic's "Integrated 7" package, also available from BCE for about
$80. Regards --- Paul.

------------------------------


Date: Tue, 25 Aug 1987 11:35 EDT
From: Villy G Madsen <VMADSEN%UALTAVM...@wiscvm.wisc.edu>
Subject: V20 and EMS Card


I read with interest the note from the user having problems with a
v20 and an EMS card. I ran into some of the same difficulties with a
V30 running at 8 MHz and an EMS card. I finally determined that my
problem was not a V30 software compatibility problem, but a hardware
incompatibility problem. It was accessing the I/O bus too quickly on
either word reads or writes (I can't remember which). As it
happened, I couldn't get an 8086 to work with the board at 8 MHz,
but it was a lot closer to working than the V30 (which was what clued
me into the fact that it was a timing problem). With the board I'm
using, both micros work fine at 4.77 MHz. One of these days I'm going to
replace some of the 150 ns memory in the EMS board with 120 ns memory
and see if that doesn't solve my problem.  Villy

------------------------------


Date: Tue 25 Aug 87 10:22:40-PDT
From: Ted Shapin <BEC.S...@ECLA.USC.EDU>
Subject: Set Time and Date on AT
Phone: (714)961-3393; Mail:Beckman Instruments, Inc.
Mail-addr: 2500 Harbor Blvd., X-11, Fullerton CA 92634


Save and Restore AT CMOS Data

Written by Bill Marquis, Beckman Instruments, Inc. August 21, 1987
(DeSmet C).

This program will save and restore the AT's CMOS data, which
includes, but is not limited to, the configuration and the
time and date. This program will also let you change the time
and date (the DOS DATE and TIME commands do not update the CMOS clock).

[DOS 3.3 fixes this. -wab]

To save the configuration (do this before your battery dies)
C:>settd /s (will store configuration in file \\CMOSINFO.)

To restore the configuration (do this after you replace your battery)
C:>settd /r (will restore from file \\CMOSINFO.)

To set the time and date (either one is optional):
C:>settd hh:mm:ss dd-mm-yy

To display the current time and date:
C:>settd /d

[SETTD.C has been added to the info-ibmpc lending library. -wab]
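
For readers who just want the flavor of what such a program does, here is a
minimal sketch (illustrative only, not the actual SETTD.C) that dumps the
AT's 64 bytes of CMOS RAM to a file through ports 70h and 71h; restoring is
the same loop with the read replaced by a write. The inp()/outp() routines
and the output filename are assumptions made for the example.

/* Illustrative sketch, not the actual SETTD.C: save the AT's 64 bytes
 * of CMOS RAM (clock, configuration, checksums) by selecting each
 * register through port 70h and reading it through port 71h.  Restoring
 * is the reverse: read the saved file and write each byte back with
 * outp(0x71, value) after selecting the register.  Assumes Microsoft C
 * style inp()/outp(); the filename is only an example.
 */
#include <stdio.h>
#include <conio.h>              /* inp(), outp() */

static int read_cmos(int reg)
{
    outp(0x70, reg);            /* select CMOS register; bit 7 clear
                                   leaves NMI enabled */
    return inp(0x71);           /* read the selected register */
}

int main(void)
{
    FILE *fp = fopen("CMOSINFO", "wb");
    int reg;

    if (fp == NULL) {
        printf("cannot create CMOSINFO\n");
        return 1;
    }
    for (reg = 0; reg < 0x40; reg++)
        fputc(read_cmos(reg), fp);
    fclose(fp);
    printf("CMOS contents saved to CMOSINFO\n");
    return 0;
}
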
------------------------------


Date: Wed 26 Aug 87 04:46:42-PDT
From: br...@m10ux.UUCP (MHx7079 mh)
Subject: Switching from Protected to Real Mode
Organization: AT&T Bell Labs, Murray Hill

In article <13...@imagen.UUCP>, ge...@imagen.UUCP (Geoffrey Cooper) writes:
> Is there a way (I deduce from recent net messages that there is) for a
> program to switch from 286 protected mode back to real address mode,
> without rebooting the machine?

I read in one of the trade magazines (Electronic Engineering Times)
a few weeks ago that Microsoft announced that a research team of theirs
had discovered a way to do this without a hardware reset.
It involves something wacky like a double fault.
They also announced that they would be patenting this.

Microsoft really seems to have developed an attitude here.
First, they equate finding a hack with "research",
and then they expect a hack to be patentable.

This, and the fact that IBM wants to patent their micro-channel
bus specification sort of make me gag.
--

Doug Braun AT+T Bell Labs, Murray Hill, NJ
m10ux!braun 201 582-7039


------------------------------

From: jo...@well.UUCP (John A. Limpert)
Subject: Switching from Protected to Real Mode
Date: 24 Aug 87 07:25:38 GMT
Organization: Whole Earth 'Lectronic Link, Sausalito, CA


From what I have heard, they load the IDTR (interrupt descriptor
table register) with 0 and execute an INT 3 instruction. This causes
multiple faults and drops the 80286 back into real mode. I am not
sure what state this leaves you concerning segments and the prefetch
queue. I picked this info up from someone who had gone to one of
the OS/2 seminars.

------------------------------

From: tr...@sequent.UUCP (Scott Tetrick)
Subject: Switching from Protected to Real Mode
Date: 25 Aug 87 14:55:28 GMT
Organization: Sequent Computer Systems, Beaverton, OR


It is important to note that it is NOT the INT 3 that returns the 80286
to real mode, but the hardware of the AT. Whenever a double fault occurs,
the 80286 performs a SHUTDOWN bus cycle. This is detected by the hardware
of the AT, and generates a RESET to the processor. Memory contents are
still valid from before the reset. The prefetch queue is flushed on a reset.


------------------------------


Date: Tue 25 Aug 87 22:17:19-CDT
From: Ivo....@GSBADM.UCHICAGO.EDU, 324-5036 <CRSP...@GSBADM.UCHICAGO.EDU>
Subject: Brief OS Promises


These days I usually don't flame anymore, and particularly not on a
technical forum. But since these may be the (hopefully not) last days
of the best forum that ever existed on PCs, here is a brief one that is
hopefully distributable:

(1) I am sick and tired of the OS games. I bought my AT 3 years ago
and have been expecting an OS for it ever since. Remember how MS pushed
the delivery deadline back, half a year at a time, again and again? The
impending delivery of an MS-produced protected-mode OS deterred a lot of
people from stepping in themselves. What do we get now? A slow crock
of OS/2 for a price that will ensure narrow acceptance--in half a
year. MS actually expects 3.xx to continue living!

Meanwhile, the UN*X people continue offering packages for $400 and up
for a UN*X THAT RUNS MY OLD PROGRAMS. Don't software developers
understand about marginal cost? Microport, please hear me: Now that
people find out about the lack of OS/2 wonder, a price of $150
complete would be THE chance to get rid of MS DOSsss forever and
command the PC market for years to come.

Ivo (crsp...@gsbadm.uchicago)

Disclaimer: ...

[I can understand a little flame on this subject. We are all in the
same boat. We are all running out of memory and need background
tasks, networking and good mail systems. DOS is a dead horse, and we
have beaten it enough. Here at ISI we haven't made a decision as to
what to do yet. Do we look for a Unix that does DOS windows or wait
for OS/2? Do we buy more AT clones, PS/2s, PS/2 clones, or any of a
number of incompatible 386 machines?

It is a bit premature to write off OS/2. When you compare OS/2 to
UN*X, you are comparing an alpha test release intended for
educational purposes for software developers to a twenty year old
operating system. Just dumping on OS/2 isn't going to get us
anywhere. We have had productive discussions about how interrupts are
handled (or not handled) in OS/2. You mention OS/2 is "a slow crock".
Where is it slow? Give us some timings for a context switch or Disk
I/O. How many serial lines can it handle and how fast? Once a large
application gets control is it hampered by OS/2? How does OS/2 handle
networks? If answers to some of these questions are unacceptable, are
we looking at artifacts of the alpha release or fundamental design
flaws of OS/2? I haven't heard these questions asked yet much less
answered.

Here's your chance to flame and get it heard at Microsoft! They do
read the digest. Let's just keep it friendly and technical. -wab]


------------------------------


From: obroin%hslrswi.UUCP%cernvax...@jade.berkeley.edu (Niall O Broin)
Subject: Problems with Mike Higgins' Com Port Driver
Date: 26 Aug 87 09:27:09 GMT
Organization: Hasler AG, CH-3000 Berne 14, Switzerland
Lines: 49


I recently posted a request for assistance with some problems I had with
trying to implement Mike Higgins' com port driver which was recently posted by
Honzo Svasek. I finally got everything sorted out and working.

You need to make a couple of changes to get TERM.C and IOCTL.ASM working
together properly. This is because these programs were written 3 years ago,
and Microsoft C and MASM have changed since then. The changes are needed in
IOCTL.ASM:


Change the name of the code segment from @CODE to _TEXT.

Make the code segment BYTE aligned (no align type is currently specified).

Change the name of IOCTL PROC NEAR to _IOCTL PROC FAR

So to do this in simple terms :

Change the line which now reads


@CODE SEGMENT PUBLIC 'CODE'      to read

_TEXT SEGMENT BYTE PUBLIC 'CODE'


Change all other occurrences of @CODE to _TEXT.
Change all occurrences of IOCTL to _IOCTL (except in comments - doesn't matter)


Hope this is of some help to somebody. The driver package is really nice - it
gives you two interrupt-driven com ports which are properly hooked into DOS,
so you can read from and write to them from any language just like any other
DOS devices, e.g. PRN or CON.

Regards,

#\\\\\-----\\\\\ Niall O Broin
###\\\\\-----\\\\\ AXE Software Development
#####--------------- Hasler AG
#######--------------- Berne
#########\\\\\-----\\\\\ Switzerland
###########\\\\\-----\\\\\
####### ///// ///// obr...@hslrswi.UUCP
####### ///// /////
##### ///// It is better never to have
### ///// been born, but anyone that
# ///// ///// lucky won't be reading this.
///// /////

------------------------------


Date: Wed, 26-Aug-87 11:38:13 PDT
From: bcsaic!asymet!lib...@june.cs.washington.edu (Mailing list readers)
Subject: Inconsistent Modem

Try pulling out your mouse card, especially if it's a Microsoft mouse.
I've seen mice conflicting with modems and even with hard disks. If
pulling the mouse card makes the problem go away, start experimenting
with the jumper that controls which IRQ it uses.


------------------------------


Date: Wed, 26 Aug 87 20:59:23 edt
From: James Van Bokkelen <ftp!jb...@harvard.harvard.edu>
Subject: 3Comm and IP/TCP


He asks:

Is there a version of the PC TCP/IP that works with 3Com 3+
network software resident? ....

What he wants is full coexistence between the TSR redirector/LAN program
and TCP/IP utilities, sharing the network interface in real time. This
requires that a software interface exist between the two packages, such
that incoming packets are demultiplexed and handed to the appropriate
receive routine, and minimally, an interlock such that they never try to
transmit simultaneously. This is quite nice in practice, because it
means that the TCP/IP utilities can access the LAN server's disk, and any
other services continue without disruption when you use FTP or Telnet.

We know how to do this. In fact, it is relatively easy if the demultiplexing
can be done by Ethertype (true with 3Com's XNS-derived protocols). We
presently support full coexistence with several different PC LAN products:

Versions of Microsoft Networks available from BICC Data Networks.

Lifenet (from Univation or Lifeboat Systems Designer Team).

Ethernet versions of Banyan's Vines (2.1 or later)

Versions of Novell's Netware available from Univation, BICC Data
Networks (mostly U.K. & Europe) and Schneider & Koch (West Germany).

We are actively working on coexistence with another vendor's version of
Novell, but it isn't ready yet, so I won't spill the beans.

As far as I know, we are the only PC TCP/IP vendor offering this kind of
coexistence on the workstation. BICC used to offer a PC/IP which worked
with their MS/Net, but they switched over to our product. Some other
vendors presently have similar functionality with their (non-RFC-conforming)
TCP/IP NETBIOS support and an add-on LAN program like IBM's or Microsoft's
(not like 3Com's). We will do the same later this year when we release our
(RFC conforming) TCP/IP NETBIOS.

The problem is that the active co-operation of the LAN software vendor
is required (to add or document some sort of magic INT interface). I
personally have made a number of attempts to get 3Com interested in either
documenting a (hypothetical) existing packet-level interface sharing
mechanism, or adding something (either to an existing public-domain spec
some of our other customers have used, or one they develop). Nothing ever
came of it, and I haven't tried in the past 3 months. I don't know if or
how 3Com's merger with Bridge might change their level of interest. I
suppose I ought to try again.

spdcc!ftp!jb...@harvard.harvard.edu
James B. VanBokkelen
FTP Software Inc.

[All the companies involved have had a chance to see this message through
one mailing list or another. It would be nice to see if through the net there
could be some cooperation on networking standards. -wab]

------------------------------


Date: Thu, 27 Aug 87 09:22:09 ULG
From: Andre PIRARD <A-PIRARD%BLIULG1...@wiscvm.wisc.edu>
Subject: LongJmp and Interrupts


>...
>When an interrupt occurs, the interrupt handler routine that you have
>previously set up (possibly with a signal as I did below--signals are
>a way of catching interrupts like overflow and divide by zero),
>should terminate with a longjmp routine. The longjmp routine
>restores the state of your program, and you magically continue
>execution at the point just before returning from the setjmp routine
>you executed earlier, ...

I *am* confused! Do you mean your interrupt routine *really* ends
with anything other than IRET? If this is the case, it gives me shivers
down the spine. Imagine you're interrupting a DOS call in the middle of
its buffer management, which was in turn interrupted by a timer tick
in the middle of its bunch of hookups, then a keyboard interrupt,
then ... I wouldn't dare shortcut all that. It will work 99 times
out of 100 but will blow your system or crash your hard disk on the
first occasion. In fact, the only safe case is when a process
produces ("signals") the interrupt itself. Then you are sure there's
nothing between it and the interrupt handler.

I have been looking myself for a general means of causing an
interrupt to restart a process at a different predetermined execution
address. The only easy case is when it is driven by an interpreter
loop to which you can set a flag, hoping the process is not stuck
somewhere. Otherwise, the only general solution I have found is to scan the
process hardware stack for a sign of an interruption and modify the
return address. In fact, such a program-interruption feature is the
operating system's concern, and MSDOS certainly provides no way to do it.
The process should be able to ask the operating system that, when external
events occur, a routine be given control in program state and be able
either to return to the interrupted process or to restart it anew. But
restarting it anew means that any system service active at the time
must be cut short, which takes us back to the previous problem.
Either the operating system lets the pending services run down to
completion, hoping that they eventually will (they may be waiting), or
the only way to restart the previous process is to issue an OS abort
from which each pending service, and finally the original process, can
recover. But that's far beyond MSDOS in code size and execution overhead.

I wonder what solution OS/2 provides to this? So the only reasonable
answer to the original question is that a program can only test flags
(or use data) that are set by asynchronous interrupt routines.
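
To make the flags-only approach concrete, here is a small sketch (added for
illustration, not part of the original message). It is written in Turbo C
style - the interrupt keyword plus getvect()/setvect() from dos.h - and other
DOS compilers spell these things differently. The handler hooks the INT 1Ch
timer tick and does nothing but set a flag; the foreground program polls the
flag, and nothing ever longjmps out of the handler.

/* Illustrative sketch of the "handler only sets a flag" approach.
 * Turbo C style is assumed (interrupt keyword, getvect()/setvect()
 * from dos.h); other DOS compilers use different names.  The handler
 * hooks INT 1Ch, the BIOS timer-tick hook whose default handler is a
 * plain IRET, and only sets a flag.  A production version should save
 * and chain to any previous INT 1Ch handler rather than replace it.
 */
#include <stdio.h>
#include <dos.h>

#define TICK_VEC 0x1C

static void interrupt (*old_tick)();   /* saved vector, restored on exit */
static volatile int tick_seen = 0;     /* the only thing the handler touches */

static void interrupt tick_handler(void)
{
    tick_seen = 1;                     /* just record that the tick happened */
}

int main(void)
{
    long polls = 0;

    old_tick = getvect(TICK_VEC);
    setvect(TICK_VEC, tick_handler);

    while (!tick_seen)                 /* foreground code merely tests the flag */
        polls++;

    setvect(TICK_VEC, old_tick);       /* always put the old vector back */
    printf("Timer tick arrived after %ld polls.\n", polls);
    return 0;
}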

------------------------------


Date: Fri, 28 Aug 87 07:04:37 PDT
From: rohan%lock...@VLSI.JPL.NASA.GOV
Subject: My stupidity on pausing until an interrupt, and signals!


Me and my infinite stupidity have gotten me in trouble again. I've
been working so much lately with Unix environments that I started to
liken signals to interrupts... They are not. They are just what they
are... signals from one process to another... a sort of interprocess
flagging mechanism. (System-wise it is used mostly for exception
handling purposes.) A-PIRARD, whoever he is, is right: the only way
to test if an interrupt has occurred is if the interrupt handler
sets something somewhere that can be checked by your main program.
The thing it sets could be a variable, an empty interrupt vector, or
something else that is safe.

I disagree with A-PIRARD, however, on checking the system
stack, because you have no way of knowing whether what's on the stack is a
return address or data from some saved register.

I am sorry if my statements have messed up anyone out there. Now that I've
shot myself in the foot, I hope I will think a little more before doing it
again.

Please forgive me, I'm overworked,
Rick Rohan

------------------------------


Date: Thu, 27 Aug 87 14:38:25 EDT
From: "James R. McCoy (CCS-E)" <jmc...@ARDEC.ARPA>
Subject: Call for Papers Computer Simulation


The Society for Computer Simulation is an organization
committed to encouraging the use of computer simulation.
SCS sponsors several conferences including the Eastern
Simulation Conferences.

The ESC will convene in Orlando, Florida in April of 1988.
ESC comprises the following conferences:
1. Tools for the Simulationist
2. Credibility Assessment
3. Simulation Languages
4. Simulators

Of the tracks in the "Tools for the Simulationist"
conference one is tentatively titled "Mainframe to Micro
Migration for the Simulationist". Obviously, Mini to Micro
migration is implicit in the title of the track. This track
needs creative people to act as Session Chairs and of
course, it needs good papers dealing with Simulation on
Personal Computers. If you have an idea for a good paper,
please send me an abstract rapidly. (target date for
abstracts was 1 September, but I'm hoping they will let me
squeak good papers through up to mid Sept.)

Note: If you have a paper dealing with Simulation and you're
working on a mainframe or a mini, go ahead and send me the
abstract and I'll forward it to the appropriate Chairperson.

Papers should be deliverable in approximately 20 minutes and
will be published in the Proceedings.

Potential Topics include:

Continuous Simulation on a personal computer
Discrete Simulation on a personal computer
Migrating a simulation from a mainframe to a micro
Simulation languages on a micro
Spreadsheet Simulations
DBMS package simulations

Electronic Mail: <jmc...@ardec.arpa>

Surface Mail: James R. McCoy
309 Highland Ave.
Neptune, NJ
07753


------------------------------


Date: Fri 28 Aug 87 05:12:24-PDT
From: ph...@sci.UUCP (Phil Kaufman)
Subject: Microprocessor and PC History
Organization: Silicon Compilers Systems Corp. San Jose, Ca

I have been watching with interest all of the discussion on the
net regarding the history of the PC and its use of Intel
microprocessors. I have also enjoyed the various semi-religious
arguments about computer architectures and related topics.

I can't speak for IBM as to why they chose the 8088 for the first
PC, but I can give you some very informed historical information
mixed with a lot of personal biases. I was in charge of much of
the strategic planning for Intel in the late 70's and was General
Manager of the Microprocessor Operation in the early 80's.
Consequently, I do know at least a little about what went on.

The architecture of the 8088/8086 was frozen in the middle of
1976! At that time all of the available useful software for
personal computer use resided either on Apple II 6502 or on
8080/8085/Z80 under CP/M. No one but Apple really saw any future
for 6502s. It was critically important for Intel both to leverage
the software base that existed in the 8080/8085/Z80 arena and the
familiarity of designers with the prior existing software and
hardware. It was also important to extend the capabilities of
microprocessors dramatically if the underlying semiconductor
technology was to be fully exploited into large market growth.
There was absolutely no desire to build yet another nice clean
plain vanilla architecture just because it could be done. From
these considerations came the 8088/8086 and its progeny.

The 8088 was created because 8-bit systems were quite adequate
for many applications, most all peripherals were 8-bits, and
8-bit systems had a cost advantage. However, it was made
absolutely compatible with the 8086 because clearly the world was
moving towards 16 bits - 32 bits was still in the far future. (The
8088 was exactly an 8086 with the addition of a byte multiplexer.
It cost more than an 8086 to build and was sold for much less
than an 8086 in order to develop the market.)

One of the things that has always, and still does, distinguish
Intel from its competitors is that Intel builds chip sets that
make computers while others build microprocessors. This is a much
more critical distinction than might be apparent. On the day that
the 8088/8086 was first offered a complete set of chips was
available to build an entire computer. To do so required that the
8088/8086 bus architecture be made reasonably compatible with
many of the existing 8085 peripheral chips. Motorola had, at
best, a CPU chip! Anyone wanting a well integrated low end
computer, i.e. a personal computer, had to choose Intel. Only
the bigger more expensive 'workstations' could afford the lack of
integration of any other choice.

The development of the 8087 floating point processor was an
integral part of the Intel plan. This effort began in 1974 and
included the investment of hundreds of thousands of dollars on
consulting from the world's best numerical analysis people and
the development of what is now the IEEE standard. No one else
comes close even today. The availability of the 8087, even if a
particular product didn't need it, was a key factor in many
choices to use the 8088/8086.

Another thing that distinguished Intel was the focus on software
and the tools necessary to develop computers and applications.
Intel had an operating system, development systems, In Circuit
Emulators, evaluation boards, etc. No one else came close.

Intel spent enormous energy getting 8-bit software converted to
the 8088/8086. That isn't a lot of software by today's standards
but it was the majority of what was available and its existence
affected a lot of decisions. It was the 8-bit software that gave
the 8088/8086 a kick-start and left all others in the dust.
( IBM would have really used CP/M for the PC if DRI hadn't
refused to give them a good OEM deal. Microsoft was smart enough
to understand who set standards and to effectively give an
operating system to IBM and make their money off of all of the
other vendors who followed IBM. )

An aside on computer architectures: Intel deliberately elected
not to build a "micro-PDP11 or micro-VAX" as Motorola and others
did. It was felt that to do so would only be a half step towards
the real future and that a bolder more risky step should be made.
Thus, many tens of millions of dollars were invested in the 432.
Remember the 432? It was a multiprocessor, object-oriented machine,
a total departure from any prior architecture. And it was a
flop. It was a high-risk, potentially high-reward strategy that
didn't pan out.

Nice clean architectures are esthetically pleasing. No one, not
even Intel, thought that the 8088/8086 had a "nice" architecture
- just that it was good for the times. National had a much nicer
architecture than Motorola and you see where it got them.

So, when it came time to decide on a new microprocessor for a new
product people had a choice of the whole solution from Intel
(with an ugly architecture) or a microprocessor chip (with a nice
clean architecture) from several others. Which would you have
chosen to make your product successful? Which reason actually
swayed IBM I don't know. But, I don't think there was any other
real choice available.

The story went on with the 80286. Again, the focus was on solving
the whole problem and recognizing the realities in the world in
terms of both existing software and new needs. The 80286 was so
compatible that to this day few people have written any 80286
unique software, electing instead to spread their development
efforts over the 8088/8086/80186/80286 all at once by writing to
the lowest common denominator. Too bad, a lot of great software
could have been done that never will be.

Both the 80286 and the 80386 recognize the issue of memory
management. You simply can't have a lot of memory and modern
software without memory management. Only if memory management is
on chip is it both fast and standard for all computers built.
Look at the 68XXX. Every builder invented his own memory
management and no two systems are really compatible even though
they have the same CPU chip.

I think there are several lessons in all of this history and
opinion. People by and large buy solutions to problems, not
technical delights. Of the over 5 million PCs sold, I'd wager
that under 1 percent of the users know what the CPU instruction
set is. So, who really cares if it is elegant? Just us few folks
that have to write the low level software - and we don't buy many
machines. In fact, we'll write software for anything if the
result is that we can sell our software to a large installed base.

So, the PC world is dominated by Intel and it is going to stay
that way for a long time to come.

Phil Kaufman


------------------------------


Date: Fri, 21 Aug 87 17:36:29 CDT
From: "Richard Winkel" <CCRJW%UMCVMB...@wiscvm.wisc.edu>
Subject: Querying disk interleave

Does anyone know of a way to determine the interleave factor on a hard or
floppy disk? Preferably a method that doesn't require timing the disk i/o.

Thanks,
Rich Winkel
UMC Computing Services

------------------------------


Date: 23 Aug 1987 22:20-CDT
Subject: PIBTERM & ULTRA UTLS


If anyone has a copy of PIBTERM newer than the November 1985
version, would you please upload it to the SIMTEL20 MSDOS.MODEM
directory, or maybe give it to the SYSOP, so that he can do so? I've
just discovered this program, and it is truly fantastic. I'm sure
that improvements have been made since 1985, and I'm interested to
see what they are. Also, if someone has the latest version of Ultra
Utilities (version 4.0, I think), go ahead and upload that.

Frank Starr
SAC.55...@E.ISI.EDU

------------------------------


Date: Sun, 23 Aug 87 10:59 EDT
From: <MRB%PSUECL...@wiscvm.wisc.edu>
Subject: HP7470A Talking with Lotus 123


I am inquiring on behalf of a friend of mine. He is trying to get Lotus 1-2-3
to drive an HP7470A plotter via the COM1 RS-232C interface. Now, a simple
3-wire interface (with a couple of the control lines @ the PC end tied active)
works OK, but there is no handshaking & he must keep the data rate low to
avoid overrunning the buffer in the HP, not to mention other obvious drawbacks
to this quickie approach.

I would appreciate it if someone could send me the "official" set of
connections, switch settings, etc. to be used here. We have an HP7470 plotter,
too --- but it's GPIB not RS-232C, and I don't have the manual, anyhow.

Thank you very much for your help.

Email responses to MRB @ PSUECL (via BITNET, although lots of other gateways
seem to work in order to get to Penn State.
I just don't understand all that
miami!dallas!kansascity!from!here!to!there
stuff)

M. R. Baker

------------------------------


From: a...@PYTHAGORAS.BBN.COM (Anthony J. Courtemanche)
Subject: Pinout of 9 pin D-shell color video output needed
Date: 24 Aug 87 19:31:17 GMT
Organization: BBN Advanced Computers, Inc., Cambridge, MA


I am trying to use a modulator so that my television set can serve as
a color monitor for my Leading Edge D, which has both mono
and color video outputs. The mono is Hercules emulation, and
I was under the impression that the color is CGA emulation.
Both outputs are 9-pin D-shell connectors. Could anyone tell
me the pinout of the color connector? Thanks.

Anthony Courtemanche
a...@bfly-vax.bbn.com



------------------------------


Date: 24 Aug 87 20:56 +0600
From: Daniel Keizer <busu%cc.uofm.cdn%ubc....@RELAY.CS.NET>


I would like to get some information about Corvus's omninet boards and their
applicable use on the Dy-4 systems. If anyone has some information about
either, reply to me please.

Thanks.
Dan Keizer
BU...@CC.UOFM.CDN
BU...@UOFMCC.BITNET


------------------------------


Date: 24 Aug 87 21:00 +0600
From: Daniel Keizer <busu%cc.uofm.cdn%ubc....@RELAY.CS.NET>
Subject: Single Density Format

I am interested in getting Single Density format available on my pc.
I know the 765 FDC can accommodate both FM as well as MFM, but I have
not seen any docs that say MSDOS or BIOS supports it. As far as I
can see, nothing is readily implemented from a programming point of
view. The way I see it, I would have to program the chip myself
(something that I am sure will take a fair bit of time to do) and
would like to know if others have some helpful advice or tips on how
to control the chip. I know that the BIOS listings are in the PC
tech ref manual; are there any other sources?

My goal is to read Osborne Single Sided disks on my PC. Don't ask
why, I'm not too sure myself!

Thanks for all information in advance.
Dan Keizer.
BU...@CC.UOFM.CDN
BU...@UOFMCC.BITNET


------------------------------


Date: 24 Aug 87 21:04 +0600
From: Daniel Keizer <busu%cc.uofm.cdn%ubc....@RELAY.CS.NET>
Subject: 3780 RJE support for PCs

I have had a recent request for information relating to the use of an IBM PC
for 3780 RJE. I know there are a couple of vendors that I have
seen, such as AST, but are there any other points people have noticed that
might prove detrimental to using a PC for such a use?

Any info would be appreciated, especially from people who are already doing
this or who have considered this.

Dan Keizer
BU...@CC.UOFM.CDN.
BU...@UOFMCC.BITNET

------------------------------


Date: Tue, 25 Aug 87 08:38 IST
From: Chezy Gal <A45%TAUNIVM...@wiscvm.wisc.edu>
Subject: Reading data from digitizer in Turbo-Pascal

Hello Netland

I want to read data from a digitizer in a Turbo-Pascal program. The
digitized data comes in three words separated by commas, i.e.,
XXXXX,YYYYY,C <CR>.

The digitizer is attached to COM1. I've tried to read the data by using
the AUX device, but without success. It seems that AUX can read only one
character, so I've tried to read in a loop, but it didn't help either.

Any suggestions how to do it?

Thanks in advance,

Chezy Gal
A...@TAUNIVM.BITNET
Acknowledge-To: Chezy Gal <A45@TAUNIVM>
------------------------------


Date: Tue, 25 Aug 87 16:10:14 GMT
From: K573605%CZHRZU1...@wiscvm.wisc.edu
Subject: Bullet286-ii from Wave Mate

Hi

Does anybody have experience with the Bullet286-ii Board from Wave Mate
(somewhere in CA)?
This is a motherboard replacement that boosts an old
PC to AT performance: 1MB RAM, 12.5 MHz, 0 wait states. The slots (PC
style) are slowed down to 4.77 MHz, so one should not have problems with
old cards. 384KB above DOS can be used as a disk cache.
I would like to know, if somebody does have this board, how it behaves,
whether there are any compatibility problems (SW and/or HW) and how good
Wave Mate's user support is. Retail prices in the USA?
Any comments are welcome.

Andreas lang

------------------------------

Subject: Info on DBASE
Date: Tue, 25 Aug 87 10:40:51 EDT
From: cps-...@braggvax.arpa

Problem: Somehow a dBASE file with 3000+ records has been deleted without
anyone telling the system to do so. It appears nowhere in the archives
(IRWIN, HAYES). It is GONE. Even the info on a back-up disk does not appear.

Answer: PLEASE HELP.

Please call SP4 Riddle, 1-919-396-5713/8818.

------------------------------


Date: Tue 25 Aug 87 11:54:30-CDT
From: Larry Smith <CMP.L...@R20.UTEXAS.EDU>
Subject: Adding Second Hard Drive

I have an IBM PC with a full 10-meg hard drive. I've been told that
it is both possible and impossible to add a 30-meg flashcard
to the system and keep the 10-meg in there. Can somebody tell me the
truth? Thanks.


------------------------------


Date: Wed, 26 Aug 87 15:44 CST
From: <NSMCC1%UHRCC2...@wiscvm.wisc.edu>
Subject: Problems with Vaxmate


We recently purchased Vaxmate, an AT clone. It consists of the
following:

1) 1 MB of memory
2) 20 MB hard disk
3) CGA monitor

We are planning to use it for the purpose of doing word
processing. We are presently looking at three word processing
packages. They are WPSplus, Microsoft Word, and ChiWriter.
We would like to use DEC's LN03R PostScript Laser printer.

I have three problems. The first is that I cannot print
anything from DOS. This is probably a software configuration
problem. The only time I can print is when I am in MS-Windows.

The second problem is that I cannot print from Microsoft Word or
from ChiWriter. Our version of ChiWriter does not have any kind of
driver for PostScript printers. Microsoft Word does have a
PostScript driver, APPLASER.PRD, but it does not work. According to
the printer manual, any PostScript driver should work.

The third problem is deciding which word processing package I should
recommend. It must be capable of doing scientific symbols,
complex mathematical equations and various fonts. I need a package
that is easy to use and relatively user friendly.

I am open to any suggestion. Any assistance you can provide
will be greatly appreciated. Thank You...


Shah.. Hossain, Communications Tech. Cannata Research Computation Center
NSMCC1@UHRCC2 (BITNET) University of Houston
CRCC::SHAH (TEXNET) 4800 Calhoun SR1 Rm 221D
713-749-4612 (MABELL) Houston, Tx 77004


------------------------------


Date: Thu, 27 Aug 87 16:47:13 +0200
From: Karl Georg Schjetne <schjetne%vax.runit.u...@TOR.NTA.NO>
Subject: Doubledos vs. DOS 3.2.

I have been running a BBS-system (MBL) on my UNISYS HT for more than a
year. At present I am running version 3.20 under MS DOS 3.2.

My family sometimes needs the PC for other purposes. Thus I have tried
to use Doubledos to serve both my fellow hams and my family at the
same time. This worked fairly well under DOS 2.11.

Some days ago I put DOS 3.2 on the PC - DDOS does not work any more!

DDOS starts OK, but the system deadlocks immediately after startup - before
the BBS gets any opportunity to do any work!

I would certainly appreciate all possible help and suggestions from
anybody "out there" with experience with DDOS and 3.2.

Some additional information:
- My PC is very close to an IBM XT, with 640 Kbytes, 20 Mbytes, etc.
- My version of DDOS is (3.2) V.

73 de Karl Georg Schjetne, LA8GE,
Steinhaugen 29,
N-7049 TRONDHEIM
NORWAY.

------------------------------


Date: Thu, 27 Aug 87 14:51:41 EDT
From: "James R. McCoy (CCS-E)" <jmc...@ARDEC.ARPA>
Subject: Dbase Mail List Wanted


I am posting the enclosed for Dr. Fairchild. We would appreciate direct
responses, which we will summarize for the Net if the volume of response warrants it.

An additional question comes to mind just prior to posting. Is there a dBASE
mailing list that the good Dr. can join?

James R. McCoy <jmc...@ardec.arpa>
Surface: Jim McCoy
309 Highland Ave.
Neptune, NJ
07753


------------------------------

End of Info-IBMPC Digest
************************

-------
