
Disable akonadi


Mike Easter

unread,
Sep 11, 2022, 3:21:21 PM9/11/22
to
I'm often accused of putting too much emphasis on DE 'efficiency' in the
form of ram used live to the desktop, but we do 'talk about'
lightweight, middleweight, and heavyweight DEs, so I'm 'justified'.

I'm a big proponent of how efficient Plasma KDE is in that regard, often
using less ram than XFCE and LXQt. A good example is KDE Neon, which
uses RAM in the low 400s (MB), i.e. less than most xfce & lxqt based distro/s.

But, a good number of KDEs use MORE than that; so I've looked for some
'wise' methods to try to reduce the kde ram used w/o hurting anything.

I've discovered getting rid of akonadi.

akonadictl stop

That is good for about 50 meg or so, and as far as I can tell I'm not
hurting anything.
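For anyone who wants to measure it on their own box, something along
these lines should show the difference (akonadictl runs as the ordinary
user, no root needed; the status subcommand is there on current Plasma,
as far as I know):

$ free -m            # note 'used' before
$ akonadictl status
$ akonadictl stop
$ free -m            # note 'used' after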

Unfortunately a useful kde resource was last updated in 2011.

> The Akonadi framework is responsible for providing applications with
> a centralized database to store, index and retrieve the user's
> personal information. This includes the user's emails, contacts,
> calendars, events, journals, alarms, notes, etc. In SC 4.4,
> KAddressBook became the first application to start using the Akonadi
> framework. In SC 4.7, KMail, KOrganizer, KJots, etc. were updated to
> use Akonadi as well. In addition, several Plasma widgets also use
> Akonadi to store and retrieve calendar events, notes, etc.

We are now in Plasma 5.2x, not the old SC (Software Compilation) 4.x era.

I did find a fresh discussion at Reddit called 'Akonadi is making me lose
my will to live.', which is somewhat useful.

eg:
> My own way to avoid this problem is simply not to use anything that
> depends on Akonadi.
>
> From my point of view, the programs that do use it are generally
> nothing like as good as the alternatives anyway. So it's a double
> win not using it.

> I avoid Akonadi like the plague. It's such a pain and it breaks super
> often for what it is. Hopefully this can be fixed someday.




--
Mike Easter

Marco Moock

unread,
Sep 11, 2022, 3:43:08 PM9/11/22
to
On Sunday, 11 September 2022, at 12:21:16, Mike Easter wrote:

> That is good for about 50 meg or so and my understanding is that I'm
> not hurting anything of which I am aware.
>
> Unfortunately a useful kde resource was last updated in 2011.

How do applications like KOrganizer behave now? Will they complain or
do they just start the service?

Mike Easter

unread,
Sep 11, 2022, 3:50:27 PM9/11/22
to
Marco Moock wrote:
> How do behave applications like KOrganizer now? Will they complain or
> do they just start the service?

My understanding is that they start the service; but I haven't tested it
yet.

--
Mike Easter

David W. Hodgins

unread,
Sep 11, 2022, 5:13:00 PM9/11/22
to
On Sun, 11 Sep 2022 15:21:16 -0400, Mike Easter <Mi...@ster.invalid> wrote:
> I've discovered getting rid of akonadi.
> akonadictl stop

$ tail -n 1 ~/.config/akonadi/akonadiserverrc
StartServer=false
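(For context, the whole file is tiny; on a typical install it looks
roughly like the below, with the flag sitting under the General section.
The section name is from my own reading, so check your actual file
rather than pasting this blindly.)

$ cat ~/.config/akonadi/akonadiserverrc
[%General]
StartServer=false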

Applications that use it will complain if you try to start them. I just
uninstall those applications too.

Regards, Dave Hodgins

Mike Easter

unread,
Sep 11, 2022, 5:33:44 PM9/11/22
to
David W. Hodgins wrote:
> Mike Easter wrote:
>> I've discovered getting rid of akonadi.
>> akonadictl stop
>
> $ tail -n 1 ~/.config/akonadi/akonadiserverrc
> StartServer=false
>
> Applications that use it will complain if you you try to start them. I just
> uninstall those applications too.
>
Good idea.

My current testbed Sparky 6.4 (over Deb stable) KDE w/o akonadi live to
desktop is /under/ 400 megs ram used. Apparently it doesn't have any
akonadi-requiring apps, though the default boot had it active (default
config StartServer=true).


--
Mike Easter

dillinger

unread,
Sep 12, 2022, 11:52:33 AM9/12/22
to
My KDE neon has no Akonadi installed; I haven't missed it yet.
Wouldn't uninstalling Akonadi remove everything that depends upon it?

Mike Easter

unread,
Sep 12, 2022, 12:09:48 PM9/12/22
to
dillinger wrote:
> My KDE neon has no Akonadi installed, I haven missed it yet.
> Wouldn't uninstalling Akonadi remove everything that depends upon it?

I think that is one of the reasons that KDE Neon's used ram at the live
desktop is so nice and low.

I don't think it is necessary to uninstall it if it is disabled. The
problem is that (generally) in the default .config, it is enabled if
anything wants it. So you have to prevent that, as DWH suggests.

I just looked at my own KDE Neon (a live persistent for dl/ing,
hashchecking, authenticating, and writing linux distro/s to Ventoy or
independent USB) and it does NOT have akonadi installed, so there is no
.config/akonadi dir.

So, I guess your point is that if I want some live KDE distro to use
less ram to the desktop, the first thing I should do is uninstall akonadi.

That may be a good path to follow rather than just disabling it. Might
as well get rid of it and everything that needs it.
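On a Debian-family KDE that should just be an apt line; something like
the following ought to pull out akonadi-server plus whatever depends on
it (package name as seen on Debian/Ubuntu; read what apt proposes to
remove before saying yes):

$ sudo apt purge akonadi-server
$ sudo apt autoremove --purge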

--
Mike Easter

Mike Easter

unread,
Sep 12, 2022, 2:07:26 PM9/12/22
to
Mike Easter wrote:
> Might as well get rid of it and everything that needs it.

Turns out there's plenty to read by people interested in uninstalling
akonadi. For a long time.

Here's an 'extension' of the idea by one such thread comment:

> I highly doubt that it's akonadi itself that is the resource hog.
> Most likely culprit is the embedded MySQL database that akonadi uses
> for backend storage. I know that it can be a real pig with all my
> e-mail accounts and such, so I've tweaked the embedded MySQL's params
> a bit to tune it better, and it works fine.

That is, he kept akonadi but tweaked the mysql to make it 'leaner'.
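For reference, that sort of tuning normally goes into Akonadi's per-user
MySQL override file; the path below is what my reading suggests, and the
value is purely illustrative, not a recommendation:

$ cat ~/.local/share/akonadi/mysql-local.conf
[mysqld]
# shrink the InnoDB buffer pool from the distro default
innodb_buffer_pool_size=16M
$ akonadictl restart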

--
Mike Easter

Mike Easter

unread,
Sep 12, 2022, 2:21:37 PM9/12/22
to
Mike Easter wrote:
> Mike Easter wrote:
>> Might as well get rid of it and everything that needs it.
>
> Turns out there's plenty to read by people interested in uninstalling
> akonadi.  For a long time.
>
KDE Neon also does not install kdepim.

Apparently at some point kde used a clock widget that required akonadi, I
guess because it had some kind of calendar. The kde neon clock is Digital
Clock 3 by Klapetek, and it has a calendar.


--
Mike Easter

bad sector

unread,
Sep 13, 2022, 6:42:21 PM9/13/22
to
A rolling stone gathers no moss

I've been wondering about akonadi for some time, kdewallet and anything
that MUST have it are gone too.

Mike Easter

unread,
Sep 13, 2022, 7:33:43 PM9/13/22
to
bad sector wrote:
> I've been wondering about akonadi for some time, kdewallet and anything
> that MUST have it are gone too.

I tried to make MX Linux KDE leaner, but it doesn't have akonadi-server
installed by default, nor kdepim, so I couldn't figure out how to get it
down.

I tried using the MX Fluxbox and installing KDE using the MX installer,
but that was worse/more. Maybe I should've used the sddm display manager
(customary for KDE) instead of lightdm when I was given the option during
the kde install.

--
Mike Easter

bad sector

unread,
Sep 23, 2022, 6:12:57 AM9/23/22
to
I can't help you. For my money THE way to do Linux is
how Slackware does it by default: you log in at CLI
level and start X *IF* and when you want to. No need
for login managers. Maybe some other distros do this
also.

Aragorn

unread,
Sep 23, 2022, 7:58:38 AM9/23/22
to
On 23.09.2022 at 10:12, bad sector scribbled:

> On Tue, 13 Sep 2022 16:33:39 -0700, Mike Easter wrote:
>
> > bad sector wrote:
> >> I've been wondering about akonadi for some time, kdewallet and
> >> anything that MUST have it are gone too.
> >
> > I tried to make MX Linux KDE leaner, but it doesn't have
> > akonadi-server installed by default, nor kdepim, so I couldn't
> > figure out how to get it down.
> >
> > I tried using the MX Fluxbox and installing KDE using the MX
> > installer, but that was worse/more. Maybe I should've used sddm
> > display (customary for KDE) instead of lightdm when I was given the
> > option during kde install.

Plasma works best with SDDM. LightDM is better suited to Xfce.

> I can't help you. For my money THE way to do Linux is
> how Slackware does it by default: you log in at cLi
> level and start X *IF* and when you want to. No need
> for login managers. Maybe some other distros do this
> also.

In a world where 99% of the commonly used applications are GUI-based,
manually starting the display server is a bit silly, and especially if
one leaves the computer running 24/7 — as I myself do (and as per
what UNIX was designed for) — and thus keeps the X server perpetually
running as well.

Slackware users tend to be on the (very) conservative side and may
still have a preference for console-based applications. So perhaps in
Slackware it makes sense to not automatically bring up the X server at
boot time. Distributions like Arch and Gentoo leave it up to the user,
because they don't have any graphical installers and you start off from
a very minimal system.

For that matter, I don't even understand why Pat V. has chosen Plasma as
the default desktop environment in Slackware, given that most Slackware
users I know prefer a no-frills window manager like fvwm and have always
passionately hated anything coming from KDE. They don't even use any
tiling-only window managers like i3 — that's more of an Arch/Gentoo
thing, purely for the geek factor of it.

--
With respect,
= Aragorn =

bad sector

unread,
Sep 23, 2022, 12:56:42 PM9/23/22
to
On Fri, 23 Sep 2022 13:58:30 +0200, Aragorn wrote:

> On 23.09.2022 at 10:12, bad sector scribbled:
>
>> On Tue, 13 Sep 2022 16:33:39 -0700, Mike Easter wrote:
>>
>> > bad sector wrote:
>> >> I've been wondering about akonadi for some time, kdewallet and
>> >> anything that MUST have it are gone too.
>> >
>> > I tried to make MX Linux KDE leaner, but it doesn't have
>> > akonadi-server installed by default, nor kdepim, so I couldn't
>> > figure out how to get it down.
>> >
>> > I tried using the MX Fluxbox and installing KDE using the MX
>> > installer, but that was worse/more. Maybe I should've used sddm
>> > display (customary for KDE) instead of lightdm when I was given the
>> > option during kde install.
>
> Plasma works best with SDDM. LightDM is better suited to Xfce.
>
>> I can't help you. For my money THE way to do Linux is
>> how Slackware does it by default: you log in at cLi
>> level and start X *IF* and when you want to. No need
>> for login managers. Maybe some other distros do this
>> also.
>
> In a world where 99% of the commonly used applications are GUI-based,
> manually starting the display server is a bit silly, and especially if
> one leaves the computer running 24/7 — as I myself do (and as per
> what UNIX was designed for) — and thus keeps the X server perpetually
> running as well.

I believe in the KISS principle: don't start what you
don't need (or what could unnecessarily cause associated
problems), no graphic nothing for booting, etc. I use
dd a LOT to back up entire 100gb partitions; never a
problem under Slackware running at runlevel 3, but in
the last year probably a dozen lockups, all followed by
dangerous journal replays, when doing it in a konsole
window under other distros.


> Distributions like Arch and Gentoo leave it up to the user,

Aaaaah, the proverbially freakin' USER, NOW we're talking! :-)

All users that *I* know are quite capable of

startx-kde or startx-xfce






J.O. Aho

unread,
Sep 23, 2022, 1:55:30 PM9/23/22
to
On 23/09/2022 18.56, bad sector wrote:

> I use
> dd a LOT to back up entire 100gb partitions, never a
> problem under Slackware running at 3 but in the last
> year probably a dozen lockups all followed by dangerous
> journal replays if doing it in a konsole window under
> other distros.

To me it sounds like you have the partition you dd mounted; since the
beginning the recommendation has been to unmount before dd.

--

//Aho

bad sector

unread,
Sep 23, 2022, 2:18:08 PM9/23/22
to
in 30+ years I have never done that once, and according
to the laws of probability if that were the case I would
have gotten some fails at runlevel 3 as well.

Aragorn

unread,
Sep 24, 2022, 1:52:13 AM9/24/22
to
On 23.09.2022 at 16:56, bad sector scribbled:

> On Fri, 23 Sep 2022 13:58:30 +0200, Aragorn wrote:
>
> > In a world where 99% of the commonly used applications are
> > GUI-based, manually starting the display server is a bit silly, and
> > especially if one leaves the computer running 24/7 — as I myself do
> > (and as per what UNIX was designed for) — and thus keeps the X
> > server perpetually running as well.
>
> I believe in the KISS principle, don't start what you
> don't need (or what could unnecessarily cause associated
> problems), no graphic nothing for booting, etc. I use
> dd a LOT to back up entire 100gb partitions, never a
> problem under Slackware running at 3 but in the last
> year probably a dozen lockups all followed by dangerous
> journal replays if doing it in a konsole window under
> other distros.

There will always be things that you should do while completely logged
out of the GUI environment, and calling up a character-mode virtual
console for that is easy, even while you're still logged into the GUI.

As an example, I run Manjaro, which is a rolling-release distribution
based upon Arch. Whenever there are updates being pushed out, I log
out of Plasma completely, so as to minimize the amount of shared
libraries being held open, and then I switch to a tty and run the
update/upgrade process from there.
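(Concretely, that amounts to something like: log out of Plasma, press
Ctrl+Alt+F3 for a text console, log in there and run

$ sudo pacman -Syu

then reboot. Key combination and command assumed from a stock
Arch-family setup.)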

After the update/upgrade one has to reboot anyway because the update
always includes a newer (patch level to the) kernel. Besides, before
rebooting, I then also always run a complete manual fstrim on the
partitions that I keep mounted read-only during normal use of the
system — i.e. /boot, /boot/efi, /usr, /usr/local and /opt — because I
have a systemd timer that automatically trims all mounted filesystems
once a week, but fstrim does not work on read-only mounted filesystems,
and when you update the system you have to remount them all read/write
anyway.
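(The manual pass itself is just one fstrim per mount point while they are
remounted read/write, e.g. "sudo fstrim -v /usr", with -v so it reports
how much was trimmed; /usr stands in here for whichever of those
partitions applies.)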

As for dd, as J.O. Aho says, you should never run a dd process on a
(read/write) mounted partition anyway, because the content of the
partition may change before you're done copying it all.

> > Distributions like Arch and Gentoo leave it up to the user,
>
> Aaaaah, the proverbially freakin' USER, NOW we're talking! :-)
>
> All users that *I* know are quite capable of
>
> startx-kde or startx-xfce

What I meant was that they leave it up to the user to decide on whether
to enable a display manager or not.

From memory, Mageia and PCLinuxOS enable a display manager by default
but allow you to disable it post-install from within a GUI control
panel — Mageia already allows that even during the installation itself,
I believe.

If you're going to log into a GUI session anyway, then I don't see why
you should have to start X manually. It's not like it would be more
secure to do so — and do note that I'm talking of having a display
manager running at boot, not of enabling autologin, which I disapprove
of, and especially so on laptops.

bad sector

unread,
Sep 24, 2022, 7:46:25 AM9/24/22
to
On Sat, 24 Sep 2022 07:52:09 +0200, Aragorn wrote:

> On 23.09.2022 at 16:56, bad sector scribbled:
>
>> On Fri, 23 Sep 2022 13:58:30 +0200, Aragorn wrote:
>>
>> > In a world where 99% of the commonly used applications are
>> > GUI-based, manually starting the display server is a bit silly, and
>> > especially if one leaves the computer running 24/7 — as I myself do
>> > (and as per what UNIX was designed for) — and thus keeps the X
>> > server perpetually running as well.
>>
>> I believe in the KISS principle, don't start what you
>> don't need (or what could unnecessarily cause associated
>> problems), no graphic nothing for booting, etc. I use
>> dd a LOT to back up entire 100gb partitions, never a
>> problem under Slackware running at 3 but in the last
>> year probably a dozen lockups all followed by dangerous
>> journal replays if doing it in a konsole window under
>> other distros.
>
> There will always be things that you should do while completely logged
> out of the GUI environment, and calling up a character-mode virtual
> console for that is easy, even while you're still logged into the GUI.

That reminds me, I think there IS an Fn key that will give you
a (runlevel 3?) terminal in Suse (maybe in other distros too).
I'll have to investigate this, though in this respect I always
preferred and still prefer the Slackware way .. it seems more 'linear'


> As an example, I run Manjaro, which is a rolling-release distribution
> based upon Arch. Whenever there are updates being pushed out, I log
> out of Plasma completely, so as to minimize the amount of shared
> libraries being held open, and then I switch to a tty and run the
> update/upgrade process from there.

I never had issues with updates other than rare segfaults
while doing them in konsole under X. But as you say it
SHOULD be a good idea to do them in a pure terminal too.


> As for dd, as J.O. Aho says, you should never run a dd process on a
> (read/write) mounted partition anyway, because the content of the
> partition may change before you're done copying it all.

I replied to that, in many decades I've NEVER run dd
with a mounted source or target partition; I have no
time or competence to start finding out why dd runs
under konsole tend to regularly end in lockups lately
(the last few years only, never before), so long as I
can do it without X I'm happy.


>> > Distributions like Arch and Gentoo leave it up to the user,
>>
>> Aaaaah, the proverbially freakin' USER, NOW we're talking! :-)
>> All users that *I* know are quite capable of
>> startx-kde or startx-xfce
>
> What I meant was that they leave it up to the user to decide on whether
> to enable a display manager or not.
>
> From memory, Mageia and PCLinuxOS enable a display manager by default
> but allow you to disable it post-install from within a GUI control
> panel — Mageia already allows that even during the installation itself,
> I believe.

All distros *should* leave everything that is administrative
in nature to the user or sysadmin, the devs' job is to
provide the options. My 2 cents. But not all of them do.


> If you're going to log into a GUI session anyway, then I don't see why
> you should have to start X manually.

I see nothing wrong with starting X manually, or with
bailing out of it to terminal manually but without
logging-out, exactly what Slackware does. Again I'm
not saying that my needs are anyone else's needs, only
that there's no such thing as THE standard user whose
needs need to be the only ones catered to.

> It's not like it would be more
> secure to do so — and do note that I'm talking of having a display
> manager running at boot, not of enabling autologin, which I disapprove
> of, and especially so on laptops.

See above, if autologin is what the user wants... Amazon
is such a game-changing success because Bezos got out of
bed one day and decided to show every little merchant how
it should always have been done: the customer is GOD.
It's not like "handle negative feedback correctly", it's
more like DON'T HAVE ANY, period. Ditto in defensive
driving which is not about not being at fault but about
not having any accidents.


--
Saturdays are UbuntuStudio days
Ubuntu 22.04.1 LTS (Jammy Jellyfish), Kernel=5.15.0-43-lowlatency
on x86_64,DM=sddm, DE=KDE, ST=x11,grub2, GPT, BIOS-boot
https://i.imgur.com/QKljk7F.png



Carlos E.R.

unread,
Sep 24, 2022, 7:56:08 AM9/24/22
to
On 2022-09-24 13:46, bad sector wrote:
> On Sat, 24 Sep 2022 07:52:09 +0200, Aragorn wrote:
>
>> On 23.09.2022 at 16:56, bad sector scribbled:
>>
>>> On Fri, 23 Sep 2022 13:58:30 +0200, Aragorn wrote:
>>>
>>>> In a world where 99% of the commonly used applications are
>>>> GUI-based, manually starting the display server is a bit silly, and
>>>> especially if one leaves the computer running 24/7 — as I myself do
>>>> (and as per what UNIX was designed for) — and thus keeps the X
>>>> server perpetually running as well.
>>>
>>> I believe in the KISS principle, don't start what you
>>> don't need (or what could unnecessarily cause associated
>>> problems), no graphic nothing for booting, etc. I use
>>> dd a LOT to back up entire 100gb partitions, never a
>>> problem under Slackware running at 3 but in the last
>>> year probably a dozen lockups all followed by dangerous
>>> journal replays if doing it in a konsole window under
>>> other distros.
>>
>> There will always be things that you should do while completely logged
>> out of the GUI environment, and calling up a character-mode virtual
>> console for that is easy, even while you're still logged into the GUI.
>
> That reminds me, I think there IS a Fn key that will give you
> a (runlevel 3?) terminal in Suse (maybe in other distros too).
> I'll have to investigate this though (in this respect) I always
> preferred and prefer the Slackware way ..it seems more 'linear'

In openSUSE, there are six function keys that give you six
character-mode virtual consoles, while you are still logged into the
GUI, or not.

It is not runlevel 3.

--
Cheers, Carlos.


bad sector

unread,
Sep 24, 2022, 9:58:46 AM9/24/22
to
What's the difference between that and using konsole then
(with X and any number of apps still running under it)?

Carlos E.R.

unread,
Sep 24, 2022, 10:16:08 AM9/24/22
to
Using a character-mode virtual console is safer for some operations,
like a "zypper dup". X may crash, but the console keeps running.

--
Cheers, Carlos.


David W. Hodgins

unread,
Sep 24, 2022, 12:57:31 PM9/24/22
to
There are fewer programs running that may update files while the dd is running.
Better yet is to use run level 1, aka emergency.target

You can switch to runlevel 1 with "telinit 1". Then reboot after the dd has
finished.

Regards, Dave Hodgins

David W. Hodgins

unread,
Sep 24, 2022, 12:57:32 PM9/24/22
to
On Sat, 24 Sep 2022 07:54:33 -0400, Carlos E.R. <robin_...@es.invalid> wrote:
> In openSUSE, there are six function keys that give you six
> character-mode virtual consoles, while you are still logged into the
> GUI, or not.
>
> It is not runlevel 3.

If you append " 3" to the command line during boot, systemd will translate
the selection of runlevel 3 to multi-user.target. With systemd, there is still
/usr/lib/systemd/system/runlevel3.target

While it's not a sysvinit run level, calling it runlevel 3 is not incorrect.
It's much less typing to just add a space and the digit 3 (or 1 for emergency
boot) rather than the target.
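(In practice that means pressing "e" at the GRUB menu, tacking " 3" onto
the end of the line that starts with "linux", and booting the edited
entry with Ctrl-x or F10; those key bindings are GRUB2 defaults, other
boot loaders differ.)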

Regards, Dave Hodgins

Aragorn

unread,
Sep 24, 2022, 10:37:18 PM9/24/22
to
On 24.09.2022 at 12:57, David W. Hodgins scribbled:

> On Sat, 24 Sep 2022 09:58:41 -0400, bad sector
> <forg...@postit.invalid.gov> wrote:
>
> > What's the difference between that and using konsole then
> > (with X and any number of apps still running under it)?
>
> There are fewer programs running that may update files while the dd
> is running. Better yet is to use run level 1, aka emergency.target

The name probably depends on the distribution, David. In Arch and
Manjaro, it's rescue.target. ;)

> You can switch to runlevel 1 with "telinit 1". Then reboot after the
> dd has finished.

Or...

$ sudo systemctl isolate rescue.target

... or whatever the target is called in the distro of choice. ;)

David W. Hodgins

unread,
Sep 24, 2022, 11:16:00 PM9/24/22
to
On Sat, 24 Sep 2022 22:37:03 -0400, Aragorn <telc...@duck.com> wrote:
> The name probably depends on the distribution, David. In Arch and
> Manjaro, it's rescue.target. ;)

From man systemd.special ...
=====
emergency.target
A special target unit that starts an emergency shell on the main console. This target does not pull in other services or mounts. It is the most minimal version of starting the system in order
to acquire an interactive shell; the only processes running are usually just the system manager (PID 1) and the shell process.

rescue.target
A special target unit that pulls in the base system (including system mounts) and spawns a rescue shell. Isolate to this target in order to administer the system in single-user mode with all
file systems mounted but with no services running, except for the most basic. Compare with emergency.target, which is much more reduced and does not provide the file systems or most basic
services. Compare with multi-user.target, this target could be seen as single-user.target.

runlevel1.target is an alias for this target unit, for compatibility with SysV.
=====

It's a lot easier to add " 1" than to add " systemd.unit=rescue.target" to the
kernel command line, or to run telinit 1 rather than systemctl isolate rescue.target.

Regards, Dave Hodgins

bad sector

unread,
Sep 25, 2022, 7:05:07 AM9/25/22
to
I'll try that, but the only case I can imagine is the one of too many
files being written to the medium concurrently with the dd image and
THAT, the system should be able to manage without a lockup. I've seen
it with all my distros except Slackware where I dd only before X and
mostly in the last 2-3 years.

What could I look at inside a locked-up system (from another booted
system) that might give me a clue about the cause. I'm pretty sure that
if I want to I can trap the thing before Christmas as I do a dd backup
about once a week.

Carlos E.R.

unread,
Sep 25, 2022, 7:48:09 AM9/25/22
to
Never happened to me since 1998. It is something you are doing, or you
have broken hardware.

>
> What couold I look at inside a locked-up system (from another booted
> system) that might give me a clue about the cause. I'm pretty sure
> that if I want to I can trap the thing before Christmas as I do a dd
> backup about once a week.
>

--
Cheers, Carlos.


bad sector

unread,
Sep 25, 2022, 12:57:59 PM9/25/22
to
all I do is

dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd status=progress

if it were a hardware problem it would affect all distros in terminal or
in X


NB. I'm assuming that the lockups happen when writing an image file for
a backup but in actual fact I can't even state that, haven't watched it
that closely seeing that when I do get locked up I just boot Slackware
instead.

How about if an overdue fsck starts in the middle of a dd (in the case
of a data file being written)?

David W. Hodgins

unread,
Sep 25, 2022, 2:50:17 PM9/25/22
to
On Sun, 25 Sep 2022 12:57:52 -0400, bad sector <forg...@invalid.postit.gov> wrote:
> all I do is
> dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd status=progress

That is going to be incredibly slow, and will appear to lock up, though if
you wait long enough it will eventually finish.

The first problem is with the command. Add the parameter bs=1M, otherwise
it defaults to bs=512. Reading/writing 512 bytes at a time means that each
block (typically 4k) gets read and written 8 times, or more if the block size
is larger. The progress will appear to be normal until the write cache is full,
after which it will appear to lock up.
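That is, the same command as before with one extra parameter, device and
path kept from the original post:

dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd bs=1M status=progress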

The second likely problem, which will have the rest of the system lock up, is
due to caching. The command will appear to run normally until the write buffers
are full, at which point the system will appear to lock up as the kernel waits
for the kworker thread to write to the device. This can be reduced by fixing
the first problem and by adding a file ...
$ cat /etc/sysctl.d/tales.conf
# Reduce applications being swapped
vm.swappiness=1
# Don't shrink the inode cache
vm.vfs_cache_pressure=50

See https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
for an explanation.

Also make sure the file system blocks are aligned with the device i/o blocks.

Regards, Dave Hodgins

bad sector

unread,
Sep 25, 2022, 7:46:48 PM9/25/22
to
When I read this I was working AvLinux which is due for a backup so I
booted Suse-Leap and just as I expected the dd op locked me up at 60%
done, even with the bs=1M argument (while I was doing other things as
well under X). I've used the argument before, it gives initially higher
speeds but eventually settles back to around 115 MB/s. When I say lock
up I mean hard. I may be able to shade windows or move them but soon all
comes to a complete halt and will never recover short of a hardware reset.

Since I won't normally run Suse-Leap until Thursday that system is there
ready to give up its secrets about why it locked up, but the forensic
work is waaaaaaay over my head :-)

I next booted Tumbleweed and, again as I expected, there came the journal
replay on the target data drive. Next I opened a terminal session as
Carlos suggested, and the repeated backup attempt completed.

Carlos E.R.

unread,
Sep 26, 2022, 5:32:07 AM9/26/22
to
On 2022-09-26 01:46, bad sector wrote:
> On 9/25/22 2:50 PM, David W. Hodgins wrote:
>> On Sun, 25 Sep 2022 12:57:52 -0400, bad sector
>> <forg...@invalid.postit.gov> wrote:
>>> all I do is
>>> dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd status=progress

I assume that /somepath/data/ is not inside /dev/sdxy


>> That is going to be incredibly slow, and will appear to lock up though if
>> you wait long enough it will eventually finish.
>>
>> The first problem is with the command. Add the parameter bs=1M,
>> otherwise it defaults to bs=512. Reading/writing 512 bytes at a
>> time means that each block (typically 4k) gets read and written 8
>> times, or more if the block size is larger. The progress will
>> appear to be normal until the write cache is full, after which it
>> will appear to lock up.
>>
>> The second likely problem, which will have the rest of the system
>> lock up is due to caching. The command will appear to run normally
>> until the write buffers are full at which point the system will
>> appear to lock up as the kernel waits for the kworker thread to
>> write to the device. This can be reduced by fixing the first
>> problem and by adding a file ...
>> $ cat /etc/sysctl.d/tales.conf
>> # Reduce applications being swapped
>> vm.swappiness=1
>> # Don't shrink the inode cache
>> vm.vfs_cache_pressure=50
>>
>> See
>> https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
>> for an explanation.
>>
>> Also make sure the file system blocks are aligned with the device i/o
>> blocks.
>
> When I read this I was working AvLinux which is due for a backup so I
> booted Suse-Leap and just as I expected the dd op locked me up at 60%
> done, even with the bs=1M argument (while I was doing other things as
> well under X). I've used the argument before, it gives initially higher
> speeds but eventually settles back to around 115 MB/s. When I say lock
> up I mean hard. I may be able to shade windows or move them but soon all
> comes to a complete halt and will never recover short of a hardware reset.


That's probably due to cache/buffer exhaustion.

You could try:

dd if=/dev/sdxy of=/somepath/data/image.dd status=progress \
bs=16M oflag=direct

--
Cheers, Carlos.


bad sector

unread,
Sep 26, 2022, 7:02:27 AM9/26/22
to
On Mon, 26 Sep 2022 11:31:38 +0200, Carlos E.R. wrote:

> On 2022-09-26 01:46, bad sector wrote:
>> On 9/25/22 2:50 PM, David W. Hodgins wrote:
>>> On Sun, 25 Sep 2022 12:57:52 -0400, bad sector
>>> <forg...@invalid.postit.gov> wrote:
>>>> all I do is
>>>> dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd status=progress
>
> I assume that /somepath/data/ is not inside /dev/sdxy

Of course not, source must be unmounted

...

>>> See
>>> https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
>>> for an explanation.
>>>
>>> Also make sure the file system blocks are aligned with the device i/o
>>> blocks.
>>
>> When I read this I was working AvLinux which is due for a backup so I
>> booted Suse-Leap and just as I expected the dd op locked me up at 60%
>> done, even with the bs=1M argument (while I was doing other things as
>> well under X). I've used the argument before, it gives initially higher
>> speeds but eventually settles back to around 115 MB/s. When I say lock
>> up I mean hard. I may be able to shade windows or move them but soon all
>> comes to a complete halt and will never recover short of a hardware reset.
>
>
> That's probably due to cache/buffer exhaustion.
>
> You could try:
>
> dd if=/dev/sdxy of=/somepath/data/image.dd status=progress \
> bs=16M oflag=direct

Thanks!

Here the bite size 16 coughed up a distinct speed bonus,
finally settling to 160 MB/s. I did it in Suse-Leap again
with LOTS of other apps loaded (Leap because it has been
the more problematic with this issue). NO LOCK this time,
time will tell!

I never used oflag before, the man page item doesn't
have a single word in it to which I could relate :-)

Also, what comes down when the last block goes through,
is it trimmed down as needed? Why not use bs=1G then?

I have 16 gb ram, is this at the root of the problem
in that it offers too much buffer?



--
Mondays are Artix days & OS'es are just plug-ins for my apps
Artix Linux , Kernel=5.19.10-artix1-1 on x86_64,
DM=Unknown, DE=KDE, ST=x11,grub2, GPT, BIOS-boot
https://i.imgur.com/fShcw31.png







Carlos E.R.

unread,
Sep 26, 2022, 7:24:08 AM9/26/22
to
Info is in info page, not man page.

>
> Also, what comes down when the last block goes through,
> is it trimed down as needed, why not use bs=1G then?

Because it reads 2 gig, then writes 1G, no concurrency.

>
> I have 16 gb ram, is this at the root of the problem
> in that it offers too much buffer?

No, the issue is that it fills the kernel cache with disk operations from
this command only, so that the disk operations of the rest of the commands
in the system are starved. The direct flag disables the write cache.

It is not a lock situation, it is a very very very slow system.

Notice that the direct oflag may make writing a bit slower, except
that the effect is dwarfed by the cache starvation.

For example, if you write a DVD to a USB stick with dd, you may notice
(using 'top') how the cache grows by the size of the image. The kernel
makes the wrong assumption that you may want to write that to another
file, so it caches everything it can as long as there is RAM.

You could try also iflag=direct and compare. Or both.
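(One way to actually watch it happen, if you are curious: leave
"watch -n1 free -h", or a plain "vmstat 1", running in another terminal
while the dd runs; the buff/cache figure climbs toward the size of the
partition being copied.)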

--
Cheers, Carlos.


bad sector

unread,
Sep 26, 2022, 8:04:24 AM9/26/22
to
On 9/26/22 07:20, Carlos E.R. wrote:

> It is not a lock situation, it is a very very very slow system.

Left ON overnight, still (not)locked in morning?

> Notice that the direct oflag may make writing to be a bit slower, except that the effect is dwarfed by the cache starvation.

hmmm, I would have thought that the system would not let any running executable hog more than say X% of ram. I vaguely remember something akin to this complaint decades ago when mention was made that dd made all other ops 'limp', kfind was accused as well. Has the old problem gotten even worse instead of being fixed?


> For example, if you write a DVD to an USB stick with dd, you may notice (using 'top') how the cache grows by the size of the image. The kernel makes the wrong assumption that you may want to write that to another file, so it caches everything it can as long as there is ram.
>
> You could try also iflag=direct and compare.

that's what I did :-)

Paul

unread,
Sep 26, 2022, 9:34:16 AM9/26/22
to
On 9/26/2022 8:04 AM, bad sector wrote:
> On 9/26/22 07:20, Carlos E.R. wrote:
>
>> It is not a lock situation, it is a very very very slow system.
>
> Left ON overnight, still (not)locked in morning?
>
>> Notice that the direct oflag may make writing to be a bit slower, except that the effect is dwarfed by the cache starvation.
>
> hmmm, I would have thought that the system would not let any running executable hog more than say X% of ram.

That's called a memory quota. On a Sparc, we would set that to 50%,
so that remote login users and the local user could share the
same machine without too many fights over resources. We would not
allow a chip simulation to take over an entire machine.

Some examples of faffing about, here.

https://unix.stackexchange.com/questions/34334/how-to-create-a-user-with-limited-ram-usage
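On a current systemd-based Linux the nearest equivalent to that kind of
quota is probably a cgroup limit, e.g. something roughly like

$ sudo systemd-run --scope -p MemoryMax=2G dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd bs=16M status=progress

(the 2G figure is an arbitrary illustration), though as noted just
below, dd's own process memory isn't really what balloons here.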

As far as I'm concerned, "dd" does not have any memory charged to it,
to speak of. It may use the System Write Cache, but the System Write
cache is a separate facility. The "dd" process is pretty simple
when you think about it. It only needs to buffer one block if it wants.

The System Write Cache is an allocated memory (when it uses RAM,
a program starting up cannot steal it back). But the size of it is
limited to some percentage of RAM. It's still a pretty dumb idea,
because there isn't much advantage to the hard drive light blinking
after the program doing the I/O has exited. The disk activity is
a nuisance, no matter when it happens. Using a cache does not
change that aspect.

As for the System Read Cache, any time a program needs memory, it is
released from the System Read Cache. So that cache does not
cost anything.

I don't see a reason for your problem. On the other hand,
I can't think of a good means to debug the problem either.
I don't think Linux has any dash display with all these
buffers displayed, along with their consumption, so you
can watch a system grind to a halt because something is
exhausted.

How is it, that I've been using "dd" forever, and
never seen a problem like this ? Obviously I haven't
adjusted enough settings :-)

Since you seem to be doing a backup of some sort,
maybe this is related to the file system type of
the destination ? Maybe the resource exhaustion is
caused by a swelling of some RAM used by that file system.
And that would be separate RAM, from the System Write cache.

Paul

Carlos E.R.

unread,
Sep 26, 2022, 11:44:07 AM9/26/22
to
On 2022-09-26 14:04, bad sector wrote:
> On 9/26/22 07:20, Carlos E.R. wrote:
>
>> It is not a lock situation, it is a very very very slow system.
>
> Left ON overnight, still (not)locked in morning?
>
>> Notice that the direct oflag may make writing to be a bit slower,
>> except that the effect is dwarfed by the cache starvation.
>
> hmmm, I would have thought that the system would not let any running
> executable hog more than say X% of ram.

And it doesn't.

This is memory used by the kernel itself to cache disk reads or writes,
not by processes.

> I vaguely remember something
> akin to this complaint decades ago when mention was made that dd made
> all other ops 'limp', kfind was accused as well. Has the old problem
> gotten even worse instead of being fixed?

I am not aware of the old problem.

>
>
>> For example, if you write a DVD to an USB stick with dd, you may
>> notice (using 'top') how the cache grows by the size of the image. The
>> kernel makes the wrong assumption that you may want to write that to
>> another file, so it caches everything it can as long as there is ram.
>>
>> You could try also iflag=direct and compare.
>
> that's what I did :-)

You tried both iflag, and oflag?

--
Cheers, Carlos.


Carlos E.R.

unread,
Sep 26, 2022, 11:48:08 AM9/26/22
to
You can see it with top.

>
> How is it, that I've been using "dd" forever, and
> never seen a problem like this ? Obviously I haven't
> adjusted enough settings :-)

I have.

If you write a large image to disk, of a size comparable to RAM, it happens.

>
> Since you seem to be doing a backup of some sort,
> maybe this is related to the file system type of
> the destination ? Maybe the resource exhaustion is
> caused by a swelling of some RAM used by that file system.
> And that would be separate RAM, from the System Write cache.
>
>    Paul

--
Cheers, Carlos.


bad sector

unread,
Sep 26, 2022, 11:58:09 AM9/26/22
to
:-)))

> Since you seem to be doing a backup of some sort,
> maybe this is related to the file system type of
> the destination ? Maybe the resource exhaustion is
> caused by a swelling of some RAM used by that file system.
> And that would be separate RAM, from the System Write cache.
>
>    Paul

I don't see it being a disk or (ext4) fs issue although in the past I have suspected innocent disks because of it. It's some sort of memory management problem, there is no disk activity at all when it starts. The mentioned kfind example is similar although I don't recall seeing kfind lock things right up. When kfind gets 'tied up in a knot' as it were there is no disk activity but if you move or shade a window you get all kinds of partial paralysis on the monitor. I wonder if there might be clues therein that are easier to find than in dd sessions.


Carlos E.R.

unread,
Sep 26, 2022, 12:56:08 PM9/26/22
to
On 2022-09-26 17:58, bad sector wrote:
> On 9/26/22 09:34, Paul wrote:
>> On 9/26/2022 8:04 AM, bad sector wrote:
>>> On 9/26/22 07:20, Carlos E.R. wrote:

..

>> How is it, that I've been using "dd" forever, and
>> never seen a problem like this ? Obviously I haven't
>> adjusted enough settings :-)
>
> :-)))
>
>> Since you seem to be doing a backup of some sort,
>> maybe this is related to the file system type of
>> the destination ? Maybe the resource exhaustion is
>> caused by a swelling of some RAM used by that file system.
>> And that would be separate RAM, from the System Write cache.
>>
>>     Paul
>
> I don't see it being a disk or (ext4) fs issue although in the past I
> have suspected innocent disks because of it. It's some sort of memory
> managment problem, there is no disk activity at all when it starts. The
> mentioned kfind example is similar although I don't recall seeing kfind
> lock things right up. When kfind gets 'tied up in a knot' as it were
> there is no disk activity but if you move or shade a window you get all
> kinds of partial paralaysis on the monitor. I wonder if there might be
> clues therein that are easier to find than in dd sessions.

It is very simple to understand, though.

The kernel has filled all available RAM for use with buffers/cache. It
needs to free that in order to start any process or read or write
anything to the hard disk. It is not locked, just saturated and very
slow. If cache/memory is freed, the dd process will continue running and
fill more, so the situation doesn't clear.

Of course, it is not that bad if you are running in text mode, far fewer
processes competing for memory and disk i/o.

I don't know if there is a global setting limiting the buffers/cache
allocated.

--
Cheers, Carlos.


Carlos E.R.

unread,
Sep 26, 2022, 1:04:08 PM9/26/22
to
On 2022-09-23 13:58, Aragorn wrote:
> On 23.09.2022 at 10:12, bad sector scribbled:
>
>> On Tue, 13 Sep 2022 16:33:39 -0700, Mike Easter wrote:
>>
>>> bad sector wrote:
>>>> I've been wondering about akonadi for some time, kdewallet and
>>>> anything that MUST have it are gone too.
>>>
>>> I tried to make MX Linux KDE leaner, but it doesn't have
>>> akonadi-server installed by default, nor kdepim, so I couldn't
>>> figure out how to get it down.
>>>
>>> I tried using the MX Fluxbox and installing KDE using the MX
>>> installer, but that was worse/more. Maybe I should've used sddm
>>> display (customary for KDE) instead of lightdm when I was given the
>>> option during kde install.
>
> Plasma works best with SDDM. LightDM is better suited to Xfce.
>
>> I can't help you. For my money THE way to do Linux is
>> how Slackware does it by default: you log in at cLi
>> level and start X *IF* and when you want to. No need
>> for login managers. Maybe some other distros do this
>> also.
>
> In a world where 99% of the commonly used applications are GUI-based,
> manually starting the display server is a bit silly, and especially if
> one leaves the computer running 24/7 — as I myself do (and as per
> what UNIX was designed for) — and thus keeps the X server perpetually
> running as well.

There are things that do not work correctly if you start X via startx or
similar.

For example, what I call the "login manager" assumes that the person
that is doing the log in is seated at the computer, and assigns him/her
permissions to use audio, dvd, usb...

--
Cheers, Carlos.


Paul

unread,
Sep 26, 2022, 3:27:25 PM9/26/22
to
Yes, but it takes "two to tango".

The root cause is a runaway behavior, a "memory leak" in old parlance,
that does not kill KFind on its own. When any application writes
at high speed, the System Write Cache requests memory, and that is
the memory pressure needed for a lockup. One process (KFind) puts pressure
on the system that is not relieved. Then, running "dd" makes the write cache
suddenly ask for a couple more GB of RAM, and then you get a lockup.

I've run "dd" on a tmpfs, without any sign of a problem. At like
4GB/sec. Never a sign of a problem, all working as it should.

Maybe it is as Dave Hodgins says, a need to adjust system behavior.

# Reduce applications being swapped
vm.swappiness=1
# Don't shrink the inode cache
vm.vfs_cache_pressure=50

Or it could be a bug in some garbage collection or the like.

In years past, I've seen mis-adjusted things that, when you "take
the pressure off them", would drain. If KFind can tip over
a system (while not doing anything), that sounds like an actual
memory leak.

We've had browsers in the last year or so, consume memory at decent
speed while just sitting there with advertising-heavy web pages loaded.
And to me, it was the compositing process the browser uses, which
broke. "Buffers were piling up" on the browser side, and the browser
was too stupid to discard compositing buffers that were no longer
temporally appropriate (it wasn't "dropping frames" like it should,
it was keeping every last stinking one of them). The rate of memory
consumption was consistent with browser compositing at 60FPS and the
display subsystem not consuming them.

Paul

David W. Hodgins

unread,
Sep 26, 2022, 3:39:24 PM9/26/22
to
On Mon, 26 Sep 2022 13:00:17 -0400, Carlos E.R. <robin_...@es.invalid> wrote:
> There are things that do not work correctly if you start X via startx or
> similar.
>
> For example, what I call the "login manager" assumes that the person
> that is doing the log in is seated at the computer, and assigns him/her
> permissions to use audio, dvd, usb...

It works correctly for me using Mageia 8. After login using mgetty,
$ loginctl list-sessions shows my id has been assigned seat0 on tty1

Then running "startx startkde" the sound works, inserting a usb drive I'm
able to mount the file systems on it, etc.

Checking the journal it shows ...
login[1656]: pam_unix(login:session): session opened for user dave by LOGIN(uid=0)

It's login/pam that initiates starting the session management, not the display
manager.

Regards, Dave Hodgins

bad sector

unread,
Sep 26, 2022, 4:50:45 PM9/26/22
to
Thank you everyone. So as a mortal lowercase
u-ser I done did:

sudo dd if=/unmounted/sourcepartition of=/datadrive/fileXY.dd bs=16M oflag=direct status=progress

and for systemd distros created file
/etc/sysctl.d/tales.conf

---------------------
# Reduce applications being swapped
vm.swappiness=1
# Don't shrink the inode cache
vm.vfs_cache_pressure=50
---------------------
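(For anyone copying that: the new file can be loaded without a reboot
with something like "sudo sysctl --system", which re-reads the drop-ins
under /etc/sysctl.d/.)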

Then, repeating the exercise again from Leap, I also
started kfind to find a file I know does not exist
on all mounted media while composing this response,
to see if I had a provisional workaround.

Speed settled to 95 MB/s then recovered to 120 when
the other progs got killed. The dd op completed
with no issues.

Should I expect tales.conf to become unnecessary by
year's end as a result of kernel/distro/app upgrades?



Carlos E.R.

unread,
Sep 26, 2022, 6:04:08 PM9/26/22
to
On 2022-09-26 22:50, bad sector wrote:
> On Mon, 26 Sep 2022 15:27:22 -0400, Paul wrote:


>
>
> Thank you everyone. So as a mortal lowercase
> u-ser I done did:
>
> sudo dd if=/unmounted/sourcepartition of=/datadrive/fileXY.dd bs=16M oflag=direct status=progress

This alone should work.


However, you are root.

--
Cheers, Carlos.


David W. Hodgins

unread,
Sep 26, 2022, 7:24:39 PM9/26/22
to
On Mon, 26 Sep 2022 16:50:38 -0400, bad sector <forg...@postit.invalid.gov> wrote:
> and for systemd distros created file
> /etc/sysctl.d/tales.conf
>
> ---------------------
> # Reduce applications being swapped
> vm.swappiness=1
> # Don't shrink the inode cache
> vm.vfs_cache_pressure=50
> ---------------------

For non systemd distributions put the lines in /etc/sysctl.conf which is from the
procps-ng package http://sourceforge.net/projects/procps-ng/ and long pre-dates
systemd.

Regards, Dave Hodgins

bad sector

unread,
Sep 26, 2022, 7:25:31 PM9/26/22
to
I have been, and continue to be, called many exotic
names that leave no doubts, but in this case
I'm not sure I understand :-)

Normally I *do* issue the dd command as root



> --
> Cheers, Carlos.


Carlos E.R.

unread,
Sep 27, 2022, 4:52:08 AM9/27/22
to
I mean that in the above command line, you are running dd as root, not
as mortal lowercase u-ser.

--
Cheers, Carlos.


Daniel65

unread,
Sep 27, 2022, 6:59:54 AM9/27/22
to
bad sector wrote on 26/9/22 9:02 pm:
> On Mon, 26 Sep 2022 11:31:38 +0200, Carlos E.R. wrote:
>
>> On 2022-09-26 01:46, bad sector wrote:
>>> On 9/25/22 2:50 PM, David W. Hodgins wrote:
>>>> On Sun, 25 Sep 2022 12:57:52 -0400, bad sector
>>>> <forg...@invalid.postit.gov> wrote:
>>>>> all I do is
>>>>> dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd status=progress
>>
>> I assume that /somepath/data/ is not inside /dev/sdxy

Umm!! In the example given by bad sector, above, isn't the
/somepath/data/ (as part of of=/somepath/data/partition-x-image.dd) the
destination location (output file) rather than the input location (input
file)??
--
Daniel

Carlos E.R.

unread,
Sep 27, 2022, 7:16:08 AM9/27/22
to
Yes.

So? :-?

--
Cheers, Carlos.


Daniel65

unread,
Sep 28, 2022, 2:09:37 AM9/28/22
to
Again, from the example given above (dd if=/dev/sdxy
of=/somepath/data/partition-x-image.dd status=progress), I'm taking the
'if=/dev/sdxy' as the primary source drive and the /somepath/data/ as
the destination location ..... but, I suppose the '/somepath/' of the
destination drive could include the '/sdxy' somewhere!
--
Daniel

Paul

unread,
Sep 28, 2022, 4:01:11 AM9/28/22
to
Let us make it more realistic looking.

dd if=/dev/sdxy of=/somepath/data/partition-x-image.dd status=progress

How about this as a command example:

sudo dd if=/dev/sda1 of=/media/mint/data/partition-sda1-image.dd status=progress

The /media/mint suggests an automount of an external drive called "data"
which was just plugged in.

And because there is some kind of System Write cache, the initial
stage of the transfer can run very fast (that's if the source is
very fast). When the System Write cache is full, the rate you fill
the System Write cache cannot be any faster than the rate that it
drains onto the mounted /media/mint/data partition.

You cannot eject /media/mint/data until the System Write cache
has drained of all the writes proposed for /media/mint/data. If you
pull out the cable while the drive LED is still lit, there
will be corruption (lost data).

There are some USB devices with no status LED easily visible,
and it is those devices which make eject/cable-pull more
dangerous. Because with the System Write cache enabled,
you can't really tell what is going on. The drive LED is
very important in situations like this.
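(A practical way to tell when the write cache has drained, for what it's
worth: run "sync" and wait for it to return, or watch the Dirty/Writeback
lines in /proc/meminfo fall back toward zero, e.g. with
"grep -E 'Dirty|Writeback' /proc/meminfo".)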

Paul

Carlos E.R.

unread,
Sep 28, 2022, 5:00:10 AM9/28/22
to
Yes, exactly.

And I asked bad sector to verify that this was not happening, and he
said it wasn't.


--
Cheers, Carlos.


bad sector

unread,
Sep 28, 2022, 6:57:00 AM9/28/22
to
but you gotta admit that the supposition _was_ a bit of a curve :-)

Like most users (I think) I like to keep my rags out of
'systems' ways which tend to be rather dynamic, especially
when you use many of them. The best way to really see it
is by comparing to a slide rule that some of us still
remember; the sight-window is the processor, the sliding
rule is the data, so I keep home-business and everything
else that differs from a stock default OS (the processor)
somewhere else.

One simple example being ~/.bashrc in which I set up the
terminal prompt to be just a yellow # sign starting at /
for userMe and the same but red for root. This file is on
a data medium and all of my 7 distro OS'es have a link in
/home/userMe/.bashrc pointing to it. UserMe of course
has identical id and group memberships across systems.
Furthermore my Slide-rule window drive has only OS'es on
it, 7*100gb with 2*100gb stby spares for experimenting
with a new OS, a common swap, BIOS boot, and unused.

At least one other user keeps house within the OS partition
so that if the data drive goes kaput (or I forget to plug
it in) I can still log in as that user to straighten things
out. The advantage of having all 100gb OS partitions is
that I can move any distro to any one of them, or even
duplicate any of them without resizing hassles. Ideally I
want to have all user files as links pointing to a common
real one but that, ALAS, still seems many parsecs ahead.
And just to stay on topic-drift I backup entire partitions
with dd which I can dd back to any OS partition, after all
*OS'es are just plugins for my favorite apps*

...and Gates hates my guts.




--
Wednesdays are Slackware days:
Slackware 15.0, Kernel=5.15.63 on x86_64,
DM=Unknown,DE=lightdm,ST=tty,grub2,GPT,
BIOS-boot,CMI8788-Oxygen.
https://i.imgur.com/946rvmL.png

Daniel65

unread,
Sep 28, 2022, 9:10:42 AM9/28/22
to
Paul wrote on 28/9/22 6:01 pm:
Thank you, Paul. .... That might require a second (or third) read to
sink in!! ;-P
--
Daniel

Carlos E.R.

unread,
Sep 28, 2022, 1:40:07 PM9/28/22
to
Well, it would be rare, but there was a possibility and it could explain
the hang you were describing :-)

If you were experiencing a similar problem in several distros, there had
to be a common cause.
:-)

--
Cheers, Carlos.


bad sector

unread,
Sep 28, 2022, 3:47:34 PM9/28/22
to
On Wed, 28 Sep 2022 19:38:45 +0200, Carlos E.R. wrote:
>
> On 2022-09-28 12:56, bad sector wrote:
>>
>> but you gotta admit that the supposition _was_ a bit of a curve :-)
>
> Well, it would be rare, but there was a possibility and it could explain
> the hang you were describing :-)
>
> If you were experiencing a similar problem in several distros, there had
> to be a common cause.

Last night with Slackware, but this time under X: 60 MB/s,
and that's using bs=16M; it locked up around 50 gb.

Carlos E.R.

unread,
Sep 28, 2022, 5:04:09 PM9/28/22
to
On 2022-09-28 21:47, bad sector wrote:
> On Wed, 28 Sep 2022 19:38:45 +0200, Carlos E.R. wrote:
>>
>> On 2022-09-28 12:56, bad sector wrote:
>>>
>>> but you gotta admit that the supposition _was_ a bit of a curve :-)
>>
>> Well, it would be rare, but there was a possibility and it could explain
>> the hang you were describing :-)
>>
>> If you were experiencing a similar problem in several distros, there had
>> to be a common cause.
>
> Lat night with Slackware but this time under X, 60 MB/s
> and that's using bs=16M, locked up around 50 gb.

You forgot oflag=direct?

--
Cheers, Carlos.


bad sector

unread,
Sep 28, 2022, 9:10:10 PM9/28/22
to
no




Paul

unread,
Sep 29, 2022, 12:37:08 AM9/29/22
to
On 9/28/2022 3:47 PM, bad sector wrote:
> On Wed, 28 Sep 2022 19:38:45 +0200, Carlos E.R. wrote:
>>
>> On 2022-09-28 12:56, bad sector wrote:
>>>
>>> but you gotta admit that the supposition _was_ a bit of a curve :-)
>>
>> Well, it would be rare, but there was a possibility and it could explain
>> the hang you were describing :-)
>>
>> If you were experiencing a similar problem in several distros, there had
>> to be a common cause.
>
> Lat night with Slackware but this time under X, 60 MB/s
> and that's using bs=16M, locked up around 50 gb.

Was the destination EXT4 or NTFS ?

On the NTFS, did you tick the box to select
legacy NTFS compression ? DONT DO THAT. :-)
The Properties box in Windows, allows you to tick that.

NTFS has two kinds of compression. With the legacy
kind, the OS stops writing at around 50-60GB of
a single solid file. And the new compression
format is only used by the OS. The new compression
method uses a reparse point, and Linux cannot parse
that (thank you Microsoft). The Paragon NTFS driver
submission might have fixed that for us, but you know
how that story went...

If compressing the output of "dd", use your favorite
in-line compressor in the dd pipe you construct. In the
past, you could slip a "gzip -c" in there, but today,
you can play with "pigz -c" and see how much faster
that runs. I don't know if there are too many
other compressors that will run on all cores for you.
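Something along these lines, with the device and output file names being
placeholders only:

dd if=/dev/sdXn bs=16M status=progress | pigz -c > /backup/sdXn.img.gz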

Paul

Carlos E.R.

unread,
Sep 29, 2022, 4:16:08 AM9/29/22
to
Then perhaps you need also iflag=direct

--
Cheers, Carlos.


Carlos E.R.

unread,
Sep 29, 2022, 4:28:07 AM9/29/22
to
On 2022-09-29 06:37, Paul wrote:
> On 9/28/2022 3:47 PM, bad sector wrote:
>> On Wed, 28 Sep 2022 19:38:45 +0200, Carlos E.R. wrote:
>>>
>>> On 2022-09-28 12:56, bad sector wrote:
>>>>
>>>> but you gotta admit that the supposition _was_ a bit of a curve :-)
>>>
>>> Well, it would be rare, but there was a possibility and it could explain
>>> the hang you were describing :-)
>>>
>>> If you were experiencing a similar problem in several distros, there had
>>> to be a common cause.
>>
>> Lat night with Slackware but this time under X, 60 MB/s
>> and that's using bs=16M, locked up around 50 gb.
>
> Was the destination EXT4 or NTFS ?
>
> On the NTFS, did you tick the box to select
> legacy NTFS compression ? DONT DO THAT. :-)
> The Properties box in Windows, allows you to tick that.
>
> NTFS has two kinds of compression. The legacy
> kind, the OS stops writing around 50-60GB of
> a single solid file. And the new compression
> format, is only used by the OS. The new compression
> method uses a reparse point, and Linux cannot parse
> that (thank you Microsoft). The Paragon NTFS driver
> submission might have fixed that for us, but you know
> how that story went...

Mmmm.

>
> If compressing the output of "dd", use your favorite
> in-line compressor in the dd pipe you construct. In the
> past, you could slip a "gzip -c" in there, but today,
> you can play with "pigz -c" and see how much faster
> that runs. I don't know if there are too many
> other compressors that will run on all cores for you.

I do, on laptop (16 gigs ram), running XFCE, openSUSE:

function DO {
echo "Doing partition $1 ($2)"
echo "copy, compress, calculate md5..."
mkfifo mdpipe
dd if=/dev/$1 status=progress bs=16M | tee mdpipe | pigz > $1.gz &
md5sum -b mdpipe | tee -a md5checksum_expanded
wait
rm mdpipe
echo "$1" >> md5checksum_expanded

echo "Verify..."
pigz --test $1.gz
echo
echo "·········"
}

echo > md5checksum_expanded
time DO nvme0n1p1 "2S"
time DO nvme0n1p2 "1S"
time DO nvme0n1p4 "8S"
time DO nvme0n1p3 "51 minutes"



--
Cheers, Carlos.


bad sector

unread,
Sep 29, 2022, 7:43:23 AM9/29/22
to
On Thu, 29 Sep 2022 00:37:01 -0400, Paul wrote:

> On 9/28/2022 3:47 PM, bad sector wrote:
>> On Wed, 28 Sep 2022 19:38:45 +0200, Carlos E.R. wrote:
>>>
>>> On 2022-09-28 12:56, bad sector wrote:
>>>>
>>>> but you gotta admit that the supposition _was_ a bit of a curve :-)
>>>
>>> Well, it would be rare, but there was a possibility and it could explain
>>> the hang you were describing :-)
>>>
>>> If you were experiencing a similar problem in several distros, there had
>>> to be a common cause.
>>
>> Lat night with Slackware but this time under X, 60 MB/s
>> and that's using bs=16M, locked up around 50 gb.
>
> Was the destination EXT4 or NTFS ?

I don't use anything that has anything to do
with microcancer. I Used to, just to run my
Me80 guitar effects under vBox but have junked
even my last w7

I didn't use anything but reiserfs UNTIL we
became informed that it was dead (not the wife
but the fs, and as far as the public was concerned)

Nowadays it's ALL ext4, but there's this new
concept of environment modification, gradual
enough to be unnoticeable, that makes linux life
less seamless to say the least; probably not
a pretty chapter in the EEE manual.

bad sector

unread,
Sep 29, 2022, 7:46:10 AM9/29/22
to
wasn't there a time when 'default' meant
the most effective M.O. least likely to
need extra switches?




Carlos E. R.

unread,
Sep 29, 2022, 7:54:57 AM9/29/22
to
Not for dd. You always have to write all your options. You are god.

--
Cheers,
Carlos E.R.

