
UniVerse file size measuring tools


DonnaSW25

Jan 15, 1998

Does anyone know of a more time-efficient way to measure file sizes? We have
files that are 500MB - 1GB+, and we need a way to check for growth so that we
can plan for resizing if necessary.

Sorry to sound vague. Any help is appreciated.

Thanks Donna

Jeff A. Fitzgerald

Jan 16, 1998

Donna,

My company, Fitzgerald & Long, offers a product called FAST that
automates the file resizing process. It has great analysis tools that not
only make good recommendations but check file integrity as well. If you
are interested, email me, visit our web site at www.fitzlong.com, or call
us at (303) 755-1102. Hope this helps!

Jeff Fitzgerald
Fitzgerald & Long, Inc.

DonnaSW25 wrote in message
<19980115191...@ladder02.news.aol.com>...

MorphPSX

Jan 16, 1998

In article <19980115191...@ladder02.news.aol.com>, donn...@aol.com
(DonnaSW25) writes:

>Does anyone know of a more time-efficient way to measure file sizes? We have
>files that are 500MB - 1GB+, and we need a way to check for growth so that we
>can plan for resizing if necessary.

I only found out about this today, but if your unix system supports 'magic'
numbers with the 'file' command, the following -

$ file PICK.FILE.NAME <cr>

will return the modulo of the file, along with the separation. It's extremely
fast, as it only has to read the header of the file, where UniVerse stores this
information in a format that the 'file' command can understand. If you compare
the actual size of the file to its minimum size (modulo * separation * 512
bytes), then I guess you could work out if the file needed resizing.
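
To make that concrete, here's a rough, untested shell sketch (the exact
wording of 'file''s output varies from platform to platform, so you'd
adjust the awk parsing to match what yours prints):

# Rough sketch -- assumes 'file' reports the modulo and separation of a
# UniVerse hashed file somewhere in its output.
F=PICK.FILE.NAME

# Pull the numbers that follow the words 'modulo' and 'separation'.
set -- `file "$F" | awk '{ for (i = 1; i <= NF; i++) {
    if ($i ~ /modulo/)     printf "%d ", $(i+1)
    if ($i ~ /separation/) printf "%d ", $(i+1) } }'`
if [ $# -ne 2 ]
then
    echo "couldn't parse modulo/separation from 'file' output" >&2
    exit 1
fi
MODULO=$1 SEPARATION=$2

# Minimum size: modulo groups of (separation * 512) bytes each.
MINIMUM=`expr $MODULO \* $SEPARATION \* 512`

# Actual size on disk (5th field of 'ls -l' on most systems).
ACTUAL=`ls -l "$F" | awk '{ print $5 }'`

echo "$F: minimum $MINIMUM bytes, actual $ACTUAL bytes"
if [ "$ACTUAL" -gt "$MINIMUM" ]
then
    echo "$F is in overflow -- candidate for resizing"
fi

Since it never touches the data portion of the file, this stays fast no
matter how big the file gets.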

Charles Stevenson

Jan 16, 1998

& look in your basic ref manual for "FILEINFO" function & "STATUS"
statement for other ways of digging out such information.

I concur with what you said elsewhere about HASH.HELP not giving reliable
advice.

Moving a lot of files to type 30 is a good idea. It's either you or the
machine that has to maintain & resize these files. Type 30 spreads the
maintenance task over a wide time period - a little bit each update - vs.
scheduled downtime for file maintenance. And the machine does all that
grunt work. In other words, let it do what it does better than you, &
free your own brain to do the real thinking.

For the non-type-30 files, check out Fitzgerald & Long's "FAST" tool. I think
they've already done what you want so you can stop futzing about on this
stuff & get back to your core business.

For what it's worth,
Chuck Stevenson

Robert O. Sachs

Jan 17, 1998

Doesn't UniVerse use a 2K 'frame'?

The last UniVerse site I worked at used an automated resizing
tool/system that was set to analyze all production files once a week,
resizing any that grew. Various options allowed for other types of
resizing (shrinking in particular), but we ran a number of large
transaction files that we didn't want messed with at the wrong time of
the month. The name escapes me at the moment, but it was an excellent
tool - even included analysis for file type changes.

-R.Sachs
ros...@vantek.net
FIDOnet 1:123/315 or 1:123/315.1

Robert O. Sachs

Jan 17, 1998

"FAST" was the tool I was thinking of. It worked very well for us, and
even better when we split some extremely large databases into 'active'
and 'inactive' components.

We -did- notice an increase in thruput and reduction in access times,
even before splitting the databases, after resizing. Like all careful
users, we first had it doing nothing but recommending, and then
comparing the recommendations against our manual calculations and 'gut
feeling'. Once we got comfortable, we had it resize a subset of the
system each weekend until we had processed the whole thing. Then it was
set to review all files for growth each weekend and resize as needed.
Resizing was limited to increases only, as we had a number of
transaction files we didn't want messed with at the wrong time.
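
For anyone setting up something similar, the weekend scheduling can be as
simple as a cron entry (the script name here is hypothetical -- whatever
wrapper drives your resize tool):

# Run the resize review at 2am every Saturday.
0 2 * * 6 /usr/local/bin/weekly.resize > /tmp/resize.log 2>&1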

Overall, the system ran better after resizing than before, and we seldom
had problems with file sizes after that. As an added bonus, we also
were able to find the occasional error in some of the files, -before-
the users found them for us. <G>


-R.Sachs
ros...@vantek.net
FIDOnet 1:123/315 OR 1:123/315.1

Charles Stevenson

Jan 20, 1998

Robert O. Sachs wrote:
>
> Doesn't UniVerse use a 2K 'frame'?

It's configurable. The size is typically chosen to work well with
the unix file system. On the ones I've worked on it will grab 8K at a
pop, i.e., 4 2K frames. (Does that vary from unix to unix?) For
performance reasons you want numbers that will go evenly into 8K (512,
1K, 2K, 4K, 8K) or, theoretically, multiples of 8K (8K, 16K, 24K, ...) so
retrieval of a given 'frame' never involves 2 disk reads. For a typical
nice file, with small items, 2K is usually the no-brainer answer.
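
If you want to sanity-check a candidate separation, a quick shell
calculation along these lines does it (the 8K read size is my
assumption -- substitute whatever your system actually grabs per read):

# Does a group of this separation align evenly with an 8K disk read?
# SEPARATION is in 512-byte units.
SEPARATION=4
READSIZE=8192

GROUPSIZE=`expr $SEPARATION \* 512`

if [ `expr $READSIZE % $GROUPSIZE` -eq 0 -o `expr $GROUPSIZE % $READSIZE` -eq 0 ]
then
    echo "separation $SEPARATION: groups fit ${READSIZE}-byte reads evenly"
else
    echo "separation $SEPARATION: a group may straddle two disk reads"
fi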

Has anyone seen dramatic performance differences by tweaking it?

Chuck Stevenson
