Betamax player - revision


Robert Lewis

Jul 13, 2022, 8:26:59 PM
to Felton LUG
I just learned the cassette is VHS-C, not Betamax.
The cassette went in the camera.
It measures roughly 3-5/8" x 2-1/4" x 7/8".

Apparently my last email was wrong.
An adapter exists that these cassettes fit into so they can play in a standard VHS player.

Cheers,
Bob

Meg McRoberts

Jul 13, 2022, 10:00:19 PM
to Felton LUG
I do have one of those adapters, but I don't know where it is offhand.
So it's not really a solution if you need to get to the contents right now, but
it's a potential back-up plan...


Meg McRoberts

Sep 13, 2022, 7:53:28 PM
to Felton LUG
I have a strange problem -- I wondered if anyone else has seen anything
like it recently, and I could use some help remembering things I knew long ago...

So my Ubuntu 18 system was running slowly (uptime was above 47:00 ;-) ),
so I rebooted it.  And then I could not log in -- I'd type in my password,
press Enter, and the screen would blank like normal, then give me back the
login screen.  Caps Lock was not on; I even swapped out the keyboard in
case it was doing something strange.

So, Ctrl-Alt-F1 to a text console, where I can log in just fine.  It's been a couple
of weeks since I applied updates, so I thought it was worth doing that.  I ran
sudo apt-get update and got a bunch of errors about writes to files failing because
there was no space left on the disk.  So I ran df and was shocked to see that /dev/sda1,
where / is mounted, is 100% full and has 0 blocks available.

/tmp had three not-huge regular files and some dot files, so I deleted the regular
files but was still at 0 space -- I thought that should at least give me a few blocks.
I was going to look up the syntax for using find to locate big files -- I can't believe
I have forgotten that!

/home is on its own partition -- I'm using 12% of the space.  There are some other
little partitions, all of which show usage of 0-1%.

I confess that I do not monitor my disk usage regularly, so I don't know if / has been
gradually filling up or just suddenly got really big.

I can keep poking around, looking for something I might be able to delete or a big file
that was written recently, but I thought you guys might remember some tricks I have forgotten.

Thanks in advance!
meg

Paul Neuman

Sep 13, 2022, 8:06:13 PM
to Felton LUG
Hello Meg...

Have you thought about using a partition manager to adjust your partition sizes?
There are several freebies out there, like GParted (I haven't tried it but have
heard good things).

There are many others out there... some can cost big bux!

Hope that helps.

Kind Regards,

Paul
(Please excuse mistakes... sent from my phone)


Larry McElhiney

Sep 13, 2022, 8:12:17 PM
to Felton LUG
Hi Meg,

    sudo find . -xdev -type f -size +100M

gives all files over 100M under the current directory (staying on the same filesystem).
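A variant (standard GNU find/ls options) that also prints each file's size in
human-readable form -- run it from / to scan the whole root filesystem:

    sudo find . -xdev -type f -size +100M -exec ls -lh {} +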

Larry
AC9OX

Meg McRoberts

Sep 13, 2022, 9:06:31 PM
to Felton LUG
Thanks, Larry!  That did it -- I found a couple of things, but nothing that jumps out.
I do see an old Docker container under /var/lib, but that doesn't explain why I
have this problem.

But Bob has volunteered Robby to look at this so I am going to take advantage
of that opportunity ;-). 

meg


Larry McElhiney

Sep 13, 2022, 9:30:40 PM
to Felton LUG
Hi Meg,

Well, take the opportunity to get support from Robby before he turns professional -- he's a wizard!

Glad the command worked.  (I still use UNIX fairly regularly, as well as Linux.)

Take care,

Larry
AC9OX 

Meg McRoberts

Sep 13, 2022, 10:01:00 PM
to Felton LUG
I am so jealous that you still have a working Unix system.  It still makes me sick that
SCO never open-sourced Unix.  They did have good reasons -- they couldn't really
release it as something usable because the kernel included a bit of proprietary software
that was owned by other companies, and because they didn't want to be in the business
of maintaining it.  But I thought then -- and still think -- that they could have released that
code just for reference.  Sigh.

And yeah, Robby sounds just amazing, and he will stand on the shoulders of the giants
who are educating him now and help transmit that knowledge to future generations.  It
will be great fun to see what he does, won't it?


Larry McElhiney

Sep 13, 2022, 11:29:58 PM
to Felton LUG
Hi Meg,

Unfortunately, though I had several AT&T UNIX PCs when I lived in Santa Cruz, I no longer have UNIX hardware.

I have mostly Apple Macintosh computers at home and you can go to their “BSD-like” OS using the Terminal program and get your fill of __ix commands.

For real UNIX, I have a free account on https://sdf.org/ (also called freeshell).  You can get access with a web browser or any SSH client.

(BTW, my test to see if a UNIX system was built from the real stuff is
"cal 9 1752", which should be missing days 3-13 due to the change from the Julian
to the Gregorian calendar.)
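For reference, traditional cal output for that month should look something like
this -- September 1752 jumps straight from the 2nd to the 14th:

       September 1752
    Su Mo Tu We Th Fr Sa
           1  2 14 15 16
    17 18 19 20 21 22 23
    24 25 26 27 28 29 30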

Sorry to go off track…

Larry
AC9OX

Rick Moen

unread,
Sep 14, 2022, 9:14:06 PM9/14/22
to Felton LUG
Quoting Larry McElhiney (lmcel...@gmail.com):

> sudo find . -xdev -type f -size +100M  gives all files over 100M under the
> current directory.

Yep.  I keep this around as /usr/local/bin/largest20:


#!/usr/bin/perl -w
# largest20 -- print the 20 largest files under the given directories
# (default: the current directory).
# You can alternatively just do:
# find . -xdev -type f -print0 | xargs -r0 ls -l | sort -rn -k5 | head -20
# Sometimes also handy: du -cks * | sort -rn
use File::Find;
@ARGV = $ENV{PWD} unless @ARGV;
# Record the size of every regular file seen during the walk.
find( sub { $size{$File::Find::name} = -s if -f; }, @ARGV );
# Sort filenames by size, descending, and keep the top 20.
@sorted = sort { $size{$b} <=> $size{$a} } keys %size;
splice @sorted, 20 if @sorted > 20;
printf "%10d %s\n", $size{$_}, $_ for @sorted;

Rick Moen

Sep 14, 2022, 9:27:18 PM
to Felton LUG
Quoting 'Meg McRoberts' via Felton LUG (felto...@googlegroups.com):

[Summary: root filesystem full. You have a separate /home.]

> I confess that I do not monitor my disk usage regularly so I don't
> know if / has been gradually filling up or just suddenly got real big.

Well, _there's_ your biggest problem: you don't know whether something
(e.g., some subtree) has been growing slowly for a long time, or what.
Going forward, it's a really good idea to do "df -h" occasionally, to
keep an eye on things.
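As a concrete sketch (standard GNU coreutils flags), something like this shows
overall usage and which top-level subtree is eating the space:

    df -h                                     # usage of each mounted filesystem
    sudo du -xh --max-depth=1 / | sort -rh    # size of each top-level subtree on /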

Likewise, it's a fine idea to occasionally check the process table
(for different reasons, such as "Does my Web browser have a memory leak
and is grabbing more and more RAM over time?").
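One way to do that with procps ps, sorted by memory use:

    ps aux --sort=-%mem | head -10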

Some directory subtrees tend to grow "dynamically", e.g., /var/log/,
/var/tmp/, and /var/spool/ . Personally, I like to have those _not_ be
part of the rootfs, so that runaway files in one of the dynamic trees
(if that ever happens) cannot fill the root fs.
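A sketch of what that separation might look like in /etc/fstab -- the device
names here are hypothetical:

    /dev/sda5   /var/log    ext4   defaults   0  2
    /dev/sda6   /var/spool  ext4   defaults   0  2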

You didn't say how big your filesystems are. _Sometimes_, the admin
decided to make a key filesystem a bit too small, during partitioning,
resulting in problems later.

I also keep an eye out for files that by their nature keep growing
without anything to prune them, e.g., /home/rick/.procmail/log -- and
retrofit some simple cron job to periodically whack them down to size.
(_System_ logfiles should already be subjected to a log file _rotation_
cron job. Were that not the case, /var/log/ would soon get into
trouble.)
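A minimal sketch of such a cron job -- the filename and retention are illustrative:

    # crontab entry: every Sunday at 3:00, keep only the last 1000 lines
    0 3 * * 0  tail -n 1000 $HOME/.procmail/log > $HOME/.procmail/log.new && mv $HOME/.procmail/log.new $HOME/.procmail/log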




> I can keep poking around, looking for something I might be able to delete or a big
> file that was written recently but thought you guys might remember some tricks I
> have forgotten.
> Thanks in advance!
> meg

Jeff Liebermann

Sep 15, 2022, 1:13:14 AM
to felto...@googlegroups.com
On 9/13/2022 6:06 PM, 'Meg McRoberts' via Felton LUG wrote:
Thanks, Larry!  That did it -- I found a couple things but nothing that jumps out.
I do see an old docker. container under /var/lib but that doesn't explain why I
have this problem.

Is the root partition on an SSD drive?  I've seen odd problems on Windoze machines, where the user erased a very large file, but the TRIM function hadn't yet released the erased blocks marked for reallocation.  The symptom was that the SSD continued to act like it was out of available space until something triggered the SSD TRIM function, when it magically showed plenty of space available.  My guess(tm) is that something similar can happen in Linux.  The fix is to manually run:

    sudo fstrim / -v

https://man7.org/linux/man-pages/man8/fstrim.8.html
https://manpages.ubuntu.com/manpages/jammy/man8/fstrim.8.html

If this works, you should probably look into why the automatic TRIM function in Linux is not running or working.  On Ubuntu releases of this era, fstrim is typically run weekly from a systemd timer (fstrim.timer) rather than from cron.  See:

   systemctl status fstrim
   systemctl status fstrim.timer
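If the timer turns out to be disabled, it can be enabled and verified with:

    sudo systemctl enable --now fstrim.timer
    systemctl list-timers fstrim.timer    # confirm the next scheduled run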

I confess that I haven't seen an SSD with this problem on a Linux machine, so this is all speculation on my part.  However, I don't think it hurts to run fstrim the next time you erase big files or try to do useful work with an almost full SSD. 

If you try the fstrim command on a conventional hard disk drive, you'll get an error message something like "The discard operation is not supported".

-- 
Jeff Liebermann                 je...@cruzio.com
PO Box 272      http://www.LearnByDestroying.com
Ben Lomond CA 95005-0272
Skype: JeffLiebermann      AE6KS    831-336-2558

Jeff Liebermann

Sep 15, 2022, 1:24:34 AM
to felto...@googlegroups.com
On 9/14/2022 10:13 PM, Jeff Liebermann wrote:

Of course, I read the docs only after I post my comments.  Sigh.  I suggested:

    sudo fstrim / -v

That should be:

    sudo fstrim -av

so that all mounted filesystems that support discard are trimmed.  (fstrim does not
touch swap; discard for swap is enabled separately, e.g., via swapon --discard.)

These look useful:
https://forums.linuxmint.com/viewtopic.php?t=315625
https://forums.linuxmint.com/viewtopic.php?t=288532
