Size limits for text files?

Jonzy

Feb 21, 1992, 5:42:00 PM

Is there or has there been a suggested maximum size for text files?
I do not recall any documentation on this subject.
-------------------------------------------------------------------------
Jonzy | postm...@cc.utah.edu
Postmaster (UUCC) | post...@utahcca.bitnet
University of Utah Computer Center | (801) 581-8810
-------------------------------------------------------------------------
President UHGA (Utah Hang Gliding Association)
-------------------------------------------------------------------------

Mark P. McCahill

Feb 21, 1992, 7:06:31 PM

In message <EC997759...@CC.UTAH.EDU> Jonzy writes:
> Is there or has there been a suggested maximum size for text files?
> I do not recall any documentation on this subject.

Some of the current gopher clients (the Mac and PC clients) like text files
to be less than 32K in length because they use a 32K buffer to hold the
text of each document displayed on screen. This is an artifact of the tools
we used to write the clients rather than a hard limit in the protocol...
It is not that bad because (for instance) when you go over 32K worth of text
on the Mac client, it tells you it can't show you all of the text on the
screen, and gives you the option of saving the ENTIRE document to a disk
file or viewing it with your favorite word processing program.
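
Here is a rough sketch in C of the buffer-and-spill behavior described
above. It is illustrative only, not the actual Mac or PC client code; the
buffer size constant and the function name are made up.

/* A sketch of the 32K display buffer idea: read the document into a fixed
 * buffer, and if it does not fit, spill the whole thing to a disk file
 * instead of displaying it.  (Illustrative only; not actual client code.) */
#include <stdio.h>
#include <unistd.h>

#define DISPLAY_MAX (32 * 1024)          /* size of the on-screen text buffer */

/* Returns 0 if the document fit in buf (*len is its length), 1 if it was
 * too big and got written to spill_path instead, -1 on error. */
int fetch_document(int sock, char *buf, size_t *len, const char *spill_path)
{
    size_t total = 0;
    ssize_t n;

    while ((n = read(sock, buf + total, DISPLAY_MAX - total)) > 0) {
        total += (size_t)n;
        if (total == DISPLAY_MAX) {              /* buffer full: go to disk */
            FILE *spill = fopen(spill_path, "w");
            char chunk[4096];

            if (spill == NULL)
                return -1;
            fwrite(buf, 1, total, spill);        /* what we already have */
            while ((n = read(sock, chunk, sizeof chunk)) > 0)
                fwrite(chunk, 1, (size_t)n, spill);
            fclose(spill);
            return 1;
        }
    }
    *len = total;
    return 0;
}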


Mark P. McCahill
gopherspace engineer
Computer and Information System
University of Minnesota
(612) 625 1300 (voice)
(612) 625 6817 (fax)
m...@boombox.micro.umn.edu

e...@cic.net

Feb 23, 1992, 2:45:26 PM

A 32K limit is not a bad design decision either. Consider that on
a SLIP connection you're not going to get more than 2-3KB/sec coming
down the wire - on a good day! - that's 10-15 sec to show the message,
way too long to wait.

It would be a little bit better if the clients started to show the
text as soon as it arrives rather than waiting for the whole
thing - do any of them do that right now?

--Ed

Mark P. McCahill

Feb 24, 1992, 11:09:26 AM

In message <920223194...@nic.cic.net> e...@cic.net writes:
> A 32K limit is not a bad design decision either. Consider that on
> a SLIP connection you're not going to get more than 2-3KB/sec coming
> down the wire - on a good day! - that's 10-15 sec to show the message,
> way too long to wait.
>

I know all about this... I have been testing the Mac and PC gopher clients
over SLIP connections and things work surprisingly well even through 2400 bps
SLIP links. Still, grabbing huge documents gets a little boring.

> It would be a little bit better if the clients started to show the
> text as soon as it arrives rather than waiting for the whole
> thing - do any of them do that right now?

I haven't seen a gopher client do this yet... but you are right, it would
feel a lot more interactive if clients put at least the first part of the
document on the screen immediately.
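
A minimal sketch in C of what showing the text as it arrives could look
like, assuming a connected socket descriptor (illustrative only; as noted
above, no current client does this):

/* Sketch of "show it as it arrives": write each chunk to the display as soon
 * as it comes off the wire instead of buffering the whole document first. */
#include <stdio.h>
#include <unistd.h>

void display_incrementally(int sock)
{
    char chunk[1024];
    ssize_t n;

    while ((n = read(sock, chunk, sizeof chunk)) > 0) {
        fwrite(chunk, 1, (size_t)n, stdout);     /* paint what we have so far */
        fflush(stdout);                          /* don't let stdio buffer it */
    }
}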

Another thing clients should do (but don't yet) is to cache information.
That is, the client ought to keep the previous list of gopher items in memory
so you don't have to fetch it again. Of course, there should also be a timer
so that if 10 minutes have passed since you last got a list of items from a
gopher, you fetch the listing again on the chance that something has changed.

Most of the time though, caching the items listed by gopher servers you have
already visited is a win because nothing has changed (i.e. a client asking a
server for information it asked for three minutes ago is a little silly).
Caching lists of gopher items would be a real win when running over a 2400
bps SLIP link, like I am right now.
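
A rough sketch in C of the kind of client-side cache with a ten-minute
timer described above; the data structure and names are invented for
illustration and are not from any existing client:

/* Keep the last listing fetched for each selector, with a timestamp, and go
 * back to the server only when the cached copy is stale. */
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CACHE_SLOTS 64
#define CACHE_TTL   (10 * 60)            /* ten minutes, in seconds */

struct cache_entry {
    char   selector[256];                /* host+selector used as the key */
    char  *listing;                      /* raw directory listing text */
    time_t fetched_at;
};

static struct cache_entry cache[CACHE_SLOTS];

/* Return the cached listing, or NULL if we should ask the server again. */
char *cache_lookup(const char *selector)
{
    time_t now = time(NULL);

    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].listing != NULL &&
            strcmp(cache[i].selector, selector) == 0 &&
            now - cache[i].fetched_at < CACHE_TTL)
            return cache[i].listing;     /* fresh enough: no network trip */
    return NULL;                         /* missing or stale: refetch */
}

/* Remember a listing we just fetched, reusing the oldest slot. */
void cache_store(const char *selector, char *listing)
{
    int oldest = 0;

    for (int i = 1; i < CACHE_SLOTS; i++)
        if (cache[i].fetched_at < cache[oldest].fetched_at)
            oldest = i;
    free(cache[oldest].listing);         /* free(NULL) is fine */
    strncpy(cache[oldest].selector, selector, sizeof cache[oldest].selector - 1);
    cache[oldest].selector[sizeof cache[oldest].selector - 1] = '\0';
    cache[oldest].listing = listing;
    cache[oldest].fetched_at = time(NULL);
}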

Mike Macgirvin

Feb 24, 1992, 12:25:58 PM

>A 32K limit is not a bad design decision either.

A 32K limit is an absolutely horrible design decision. I'm constantly
fixing programs where someone set an arbitrary cap on the size of things that
has since been exceeded. I can understand your problem though, and I think
there exists a mechanism to make this workable. That is the "range command"
which is used primarily on mail files. You could use this to suck in a
document in bite-size chunks. Please don't saddle me with your machine limits.
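
As a sketch of the bite-size-chunks idea, here is a small C program that
views a document 32K at a time. The fetch_range() function below is only a
stand-in that reads a local file, since the exact range mechanism is not
spelled out here:

#include <stdio.h>

#define CHUNK (32 * 1024)

/* Stand-in for a protocol-level range request: return up to len bytes of
 * the document starting at offset; 0 means end of document (or an error). */
static size_t fetch_range(FILE *doc, long offset, char *buf, size_t len)
{
    if (fseek(doc, offset, SEEK_SET) != 0)
        return 0;
    return fread(buf, 1, len, doc);
}

int main(int argc, char **argv)
{
    static char buf[CHUNK];
    long offset = 0;
    size_t got;
    FILE *doc = fopen(argc > 1 ? argv[1] : "document.txt", "r");

    if (doc == NULL)
        return 1;
    while ((got = fetch_range(doc, offset, buf, sizeof buf)) > 0) {
        fwrite(buf, 1, got, stdout);     /* hand this bite to the display */
        offset += (long)got;
    }
    fclose(doc);
    return 0;
}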

"mike" (Mike Macgirvin) m...@SUMEX-AIM.Stanford.EDU
Unix Server Administrator SUMEX-AIM Computer Project

Ed Estes

Feb 24, 1992, 3:43:36 PM

I agree with Mike. Technology is changing rapidly, and setting the limits at
such a size is (IMHO) not wise. Just look at computers and how they've
changed in the past 10 years. My first machine had a whopping 16K of RAM and
a tape player to load/save software. Now I've got 32MB of RAM and a 300MB
hard disk on my Mac IIfx. The costs are dropping daily. One can now buy an
Intel 14.4EX external modem for $699.00 LIST!

My point is the computer revolution has only just begun... PLEASE don't
set a maximum size for text files based on current technology - what's
state of the art today will be discontinued in another 2 years... ;-)

--Ed

---------------------------------------------------------------------------
Ed Estes | Passion isn't the sole province of those who
ede...@helix.nih.gov | write or those who paint, but belongs to those
(301) 402-1804 | who love doing what they're doing and are driven
"I bleed six colors" | to do it, no matter what.
---------------------------------------------------------------------------

e...@cic.net

Feb 24, 1992, 5:55:35 PM

Ed, as much as I'd like to believe you (bigger, faster, cheaper)
there are still some real size and speed constraints that people
need to work within. That doesn't mean that I'm going to avoid
doing things like putting voice mail messages in my gopher, just
that I want to be sensitive to the idea that it might not work
as well as I'd hope it would if the person viewing it is less
well-networked than I am.

gopher should be usable over a dialup, and indeed it should be
pretty fast while doing that. I think some thought on caching
for menus (and some way of knowing whether a menu is out of date)
will be a win. I also suspect that the gopher2ftp system could
do some caching to good advantage.

disk space and cpu are getting cheaper faster than networks are
getting cheaper. State of the art at one point was a 3 mip
machine connected to a T1 national backbone with a gigabyte of
local disk. Now the same machine or somethign close to it is
sitting in your lap but you only have a dialup phone line. Something
is going to give...

--Ed

Mark P. McCahill

Feb 24, 1992, 9:01:09 PM

In message <920224225...@nic.cic.net> e...@cic.net writes:
> gopher should be usable over a dialup, and inded it should be
> pretty fast while doing that.

Actually the current release version of the Mac gopher client is
not bad at all on an Apple Powerbook 170 through the 2400 bps internal
modem using a SLIP connection. There is a lot to be said for surfing
the internet with gopher from anywhere that you can find a phone jack...
I have been running gopher on a powerbook for the past couple weeks and I
especially like being able to search for recipes in gopher from my kitchen
while preparing dinner :-).

Note that there are no hard limits in the gopher protocol on the size of documents
returned, but some of the clients have trouble displaying everything on the
screen if the documents get big, so they save the document to disk.

Client software that might be used over a slow link ought to have a convenient
way for the user to stop a document transfer that is taking a long time... and
the new Mac client has this...
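
A sketch in C of one way a Unix-style client could make a transfer
interruptible, watching both the gopher socket and the keyboard with
select(); this is illustrative and not how the Mac client does it:

#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

/* Returns 0 when the whole document has arrived, 1 if the user cancelled,
 * -1 on error.  Assumes sock is a connected socket descriptor. */
int transfer_with_cancel(int sock, FILE *out)
{
    char buf[4096];
    ssize_t n;
    fd_set fds;

    for (;;) {
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        FD_SET(STDIN_FILENO, &fds);
        if (select(sock + 1, &fds, NULL, NULL, NULL) < 0)
            return -1;
        if (FD_ISSET(STDIN_FILENO, &fds))        /* any keypress: give up */
            return 1;
        if (FD_ISSET(sock, &fds)) {
            n = read(sock, buf, sizeof buf);
            if (n <= 0)                          /* done, or connection dropped */
                return 0;
            fwrite(buf, 1, (size_t)n, out);
        }
    }
}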

Ed Estes

Feb 25, 1992, 11:21:38 AM

>Ed, as much as I'd like to believe you (bigger, faster, cheaper)
>there are still some real size and speed constraints that people
>need to work within.
>
>I want to be sensitive to the idea that it might not work
>as well as I'd hope it would if the person viewing it is less
>well-networked than I am.

I understand your concerns. I just think that instead of limiting the
protocol, do something like Tim suggested:

> I'm all in favour of soft limits: I would certainly recommend that anyone
> writing more than 32kB of _text_ seriously think about breaking it up!

Make sure your users are aware of the current constraints of others and
suggest they adjust their postings accordingly. I sure wouldn't want to
post messages larger than 32k at THIS time. My point is that setting a low
hard limit in a protocol (based on CURRENT technology) can hurt it later when
technology evolves - and it IS evolving at a rapid rate...

net...@tellus.csc.fi

Feb 25, 1992, 11:34:20 AM

Your message dated: Mon, 24 Feb 92 10:09:26 CST

> Another thing clients should do (but don't yet) is to cache information.
> That is, the client ought to keep the previous list of gopher items in memory
> so you don't have to fetch it again. Of course, there should also be a timer
> so that if 10 minutes have passed since you last got a list of items from a
> gopher, you fetch the listing again on the chance that something has changed.
>
> Most of the time though, caching the items listed by gopher servers you have
> already visited is a win because nothing has changed (i.e. a client asking a
> server for information it asked for three minutes ago is a little silly).
> Caching lists of gopher items would be a real win when running over a 2400
> bps SLIP link, like I am right now.
>
> Mark P. McCahill

I also think that gopher should cache information. It's a pain to use gophers
in the US from Finland because the connection to the US has quite a high
round-trip time (nearly a second to umn.edu).

I've been thinking about making a gopher cache server that saves directory
lists from other servers. It isn't very hard to cache information. You just
store the directory entries you get from the server (with a time-stamp for
clearing the cache) and transform the directory entries (meaning entries of
type 1) so that you get the next levels of directories into your cache, and
pass the other entries to the client as they are.
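
A rough sketch in C of the entry-rewriting step such a cache server might
perform on each menu line (type and display string, selector, host, and
port separated by tabs). The cache host name and the way the upstream host
is folded into the selector are made-up examples:

/* For type '1' (directory) entries we point host/port at the cache server
 * and fold the real host into the selector so the cache can find it again
 * later; all other entry types are passed through untouched. */
#include <stdio.h>

#define CACHE_HOST "cache.example.fi"    /* hypothetical cache server */
#define CACHE_PORT 70

/* line: one tab-separated menu entry (no trailing CR LF, no empty fields).
 * out:  receives the rewritten entry.  Returns 0 on success. */
int rewrite_entry(const char *line, char *out, size_t outlen)
{
    char display[256], selector[256], host[256];
    int port;

    if (sscanf(line, "%255[^\t]\t%255[^\t]\t%255[^\t]\t%d",
               display, selector, host, &port) != 4)
        return -1;                       /* malformed entry */

    if (display[0] == '1')               /* a sub-directory: route via cache */
        snprintf(out, outlen, "%s\t/%s:%d%s\t%s\t%d",
                 display, host, port, selector, CACHE_HOST, CACHE_PORT);
    else                                 /* text, search, etc.: pass through */
        snprintf(out, outlen, "%s", line);
    return 0;
}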

Pekka Kytölaakso
---------------------------------------------------------------
net...@tellus.csc.fi Centre for Scientific Computing
Pekka.Ky...@csc.fi PL 40 SF-02101 Espoo FINLAND
Phone: +358 0 4571 Telefax: +358 0 4572302

e...@cic.net

Feb 25, 1992, 12:19:12 PM

> WWW caches texts

hmmm. If you were running a WWW "http" gateway, I suppose you
could do a big pile of caching - rather than have every individual
user go out to the world to fetch documents, they would go to
your relay server, which might well already have the things
they were interested in.

Such things have also been proposed for FTP servers, I guess I
would add -- you'd connect to a local caching FTP server, from
which you could 'cd' to other anonymous FTP sites; if the local
cache didn't have what you wanted it'd go off to the real place
to get it.

Fortunately gopher and WWW both seem more amenable to
hacking^H^H^H^H^H^H^H research in this regard than the
usual FTP demon.

w/r/t size - like I say, I don't want to have hard-coded limits
for things, but people doing design need to keep in mind that
if a menu pick results in a megabyte worth of text being thrown
at my client, I'm not going to be happy about it...

--Ed
