
Care and feeding of SSDs on Slackware


Grant

unread,
Sep 18, 2014, 7:29:50 PM9/18/14
to
Hi,

Recently I transferred a Samsung 830 Series 128GB SSD from a retired
Win7pro box to a shiny new Slackware64-14.1 box.

Here I describe some of the things needed to run Linux on an SSD.

Over provisioning
``````````````````
This is where one leaves some of the SSD unallocated. The unallocated
space is used by the SSD firmware as part of its garbage collection.
It extends the life of the SSD and also adds to the speed of operation.

Samsung recommend over provisioning at 7 to 10% on small drives:
"
The SSD will naturally use any available free space to perform its
maintenance algorithms. If you have a small SSD, on the other hand,
it is recommended to set aside some OP (between 6.7 and 10% of total
drive space) to minimize the risk of accidentally filling the drive
to capacity. While filling a drive with data isn't harmful, it will
have a severe impact on performance.
" --<http://www.samsung.com/global/business/semiconductor/minisite/SSD/us/html/about/whitepaper05.html>

So I aimed for about 10% over-provisioning, by leaving a portion of
the SSD unallocated:

# fdisk -l /dev/sda

Disk /dev/sda: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x460bd378

   Device Boot      Start        End    Blocks  Id System
/dev/sda1            2048   16779263   8388608  83 Linux
/dev/sda2        16779264   33556479   8388608  83 Linux
/dev/sda3        33556480   37750783   2097152  82 Linux swap
/dev/sda4        37750784  192956415  77602816   5 Extended
/dev/sda5        37752832   46141439   4194304  83 Linux
/dev/sda6        46143488   54532095   4194304  83 Linux
/dev/sda7        54534144   62922751   4194304  83 Linux
/dev/sda8        62924800   71313407   4194304  83 Linux
/dev/sda9        71315456   73412607   1048576  83 Linux
/dev/sda10       73414656   75511807   1048576  83 Linux
/dev/sda11       75513856  117456895  20971520  83 Linux
/dev/sda12      117458944  192956415  37748736  83 Linux

Notice the end sector is about 10% less than total sectors.
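
A quick way to check how much is left unallocated, a rough sketch only
(assumes /dev/sda and, as in the listing above, no partition carries the
boot flag, so the End sector is always field 3):

# sectors on the whole device versus the highest partition end sector
TOTAL=$(blockdev --getsz /dev/sda)
LAST=$(fdisk -l /dev/sda | awk '/^\/dev\//{ if ($3 > max) max = $3 } END{ print max }')
echo "$TOTAL $LAST" | awk '{ printf "unallocated: %.1f%%\n", (1 - $2/$1) * 100 }'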

Another aspect of this is that the modern SSD is designed with the
NTFS filesystem in mind; how does it know what free space is available
on a Linux filesystem? That is why, in my opinion, we should use SSD
over provisioning on Linux.

This may also explain why some SSDs come in sizes of 60, 120, 240GB and
so on. There's also the difference between binary GiB and decimal GB
working in the SSD manufacturer's favour when setting aside that space.

SSD trim
`````````
Early on, expensive SSDs were suffering from erase/write lifetime
limits, and people decided to do something to improve that. Also, over
the product's development, increases in capacity from using smaller
cells, and from MLC (the storing of 2 bits per cell as one of
four analog voltages), have decreased the erase/write life cycle
count to as low as 3000.

This matters because SSDs can only be erased in much larger blocks (for
example, 128k or 256k), though they can be written in the 4k blocks that
naturally suit modern hard drives and operating systems.

What SSD trim does is notify the SSD firmware when a file's contents
are no longer required; the SSD can then use this information as part
of its decision to coalesce vacant blocks to build the next erase
target block.
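
Before relying on trim it's worth checking that the drive and the kernel
both see it; something like this (device name assumed, output varies):

# does the SSD advertise TRIM? hdparm lists a "Data Set Management TRIM
# supported" line for drives that do
hdparm -I /dev/sda | grep -i trim

# what the kernel exposes for the device; non-zero means discard works
cat /sys/block/sda/queue/discard_granularity
cat /sys/block/sda/queue/discard_max_bytes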

The first SSD survival tool is to specify 'noatime' in /etc/fstab for
SSD mountpoints, to remove these unnecessary inode updates (as long
as you're not running a server relying on file access times).

Next is the SSD trim command, the first trim method for Linux
filesystems was to specify the 'discard' option on SSD mount points,
in the /etc/fstab file.
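
For illustration only (I ended up not using this, see below), such an
entry would look something like:

# /etc/fstab -- mount-time trim via the 'discard' option
/dev/sda1   /   ext4   noatime,discard   0 1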

** Note that this requires use of a filesystem that can perform the
** trim command, thus the dated reiserfs3 is out, so I'm now using
** ext4 for new systems with SSDs. (btrfs is a bit too new for me)

The problem with this early method is that Linux issued trim commands
that were executed synchronously with the business of writing new
files. So this fstab method fell into disfavour because it reduced
filesystem performance.

One doesn't want SSD garbage collection in the middle of a long file
write sequence, does one?

fstrim as cron job
```````````````````
The current preferred method is to call 'fstrim' from a root cron
job to perform the SSD filesystem trim on a regular basis. I've
seen references to once per day, or once per week. I set my system
to 04:50 each day, to be done after Slackware's normal 04:40 daily
business.

An fstrim script:

# cat /usr/local/bin/ssdtrim
#!/bin/sh
#
# ssdtrim
#
# trim SSD by issuing fstrim for each SSD active mount point, call from
# root cron job running daily or weekly
#
# Copyright (c) 2014 Grant Coady http://bugsplatter.id.au GPLv2
#
# Example crontab entry
# # ssdtrim once a day at 04:50, logs to /var/log/ssdtrim.log
# 50 4 * * * /usr/local/bin/ssdtrim >> /var/log/ssdtrim.log 2>/dev/null
# #
#
# after http://wiki.ubuntuusers.de/SSD/TRIM (very vaguely now)
#
echo "$(date +%F.%T)"

while read dire rest
do
    fstrim -v $dire | gawk '
    {
        dir  = $1
        size = $2 / 2^10
        sub(/:/, "", dir)
        printf "%24s : %d K\n", dir, size
    }'

# list active SSD mountpoints in this here document
done <<-EOF
/
/home
/var
/usr/local
/srv/common
/srv/mirror
EOF

After a reboot:
# /usr/local/bin/ssdtrim
2014-09-19.08:40:36
/ : 3130460 K
/home : 933392 K
/var : 3943312 K
/usr/local : 997860 K
/srv/common : 16067404 K
/srv/mirror : 15557272 K

I put the output redirection in the crontab so that the script could
be called casually:
# /usr/local/bin/ssdtrim
2014-09-19.08:42:39
/ : 0 K
/home : 0 K
/var : 0 K
/usr/local : 0 K
/srv/common : 0 K
/srv/mirror : 0 K

Why so many partitions? Because I can? Actually I was having big problems
with data loss after transferring a working Slackware install to the SSD,
along with the change from reiserfs3 to ext4; probably finger trouble.
Solved by a new bare metal install of Slackware64-14.1 onto ext4. We have:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7.8G  4.8G  2.6G  65% /
/dev/sda5             3.9G  3.0G  691M  82% /home
/dev/sda7             3.9G   51M  3.6G   2% /var
/dev/sda9             976M  1.5M  908M   1% /usr/local
/dev/sda11             20G  4.3G   15G  23% /srv/common
/dev/sda12             36G   21G   14G  62% /srv/mirror
tmpfs                 1.9G     0  1.9G   0% /dev/shm
deltree:/home/common   16G  4.2G   12G  27% /home/common

Partitions 2, 6, 8, 10 are for a parallel install of Slackware, when
the next release comes out. I'm a firm believer in parallel OS installs
on important boxes, for fast switchover if the new OS install goes awry.

Swap is on partition 3, I don't expect the box to use much swap.

ssdview
````````
Rather than slog through the output of 'smartctl -a /dev/sda' and wonder
about that '235 Unknown_Attribute ...', I wrote a small script to
summarise the important numbers from that command:

# cat /usr/local/bin/ssdview
#!/bin/sh
#
# ssdview
#
# report Total Bytes Written for /dev/sda, and some other info
#
# Copyright (c) 2014 Grant Coady GPLv2 http://bugsplatter.id.au
#
# Attribute 235 POR Recovery Count is defined by Samsung Magician on Windows
#
smartctl -a /dev/sda | gawk '
{
    tag = $2
    gsub(/_/, " ", tag)
}
/^235/ { tag = "POR Recovery Count" }
/Power_On_Hours|Power_Cycle_Count|Wear_Leveling_Count|^235/ {
    printf "%24s : %d\n", tag, $10
}
/Total_LBAs_Written/ {
    tag = "Total Bytes Written"
    printf "%24s : %1.3f TB\n", tag, $10 * 512 / 10^12
}'
# end

An example of gawk at its best: text processing.
It produces:

# /usr/local/bin/ssdview
Power On Hours : 10651
Power Cycle Count : 623
Wear Leveling Count : 159
POR Recovery Count : 143
Total Bytes Written : 3.412 TB

I bought the SSD above about 2.5 years ago, and until a few months ago
I used to leave the Win7 box running 24/7.
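
As a rough sanity check on those numbers (just the arithmetic, nothing
more):

# average write rate over the drive's life, from the ssdview output above
awk 'BEGIN { tb = 3.412; hours = 10651
             printf "%.1f GB/day average\n", tb * 1000 / (hours / 24) }'
# prints about 7.7 GB/day -- nowhere near enough to wear out the flash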

Find the latest versions of these scripts on:
<http://bugsplatter.id.au/system/d3v.html#running_an_ssd>

Enjoy!

Comments welcome.

Grant.

John K. Herreshoff

unread,
Sep 18, 2014, 10:09:37 PM9/18/14
to
Interesting... Please, an example fstab for ssd.

John.

--
Using the Cubic at home

JohnF

unread,
Sep 19, 2014, 12:17:51 AM9/19/14
to
Grant <o...@grrr.id.au> wrote:
> Recently I transferred a Samsung 830 Series 128GB SSD from retired
> Win7pro box to a shiny new Slackware64-14.1 box.
> Here I describe some of the things needed to run Linux on an SSD.
> [...]
> The first SSD survival tool is to specify 'noatime' in /etc/fstab for
> SSD mountpoints, to remove these unnecessary inode updates (as long
> as you're not running a server relying on file access times).
> [...]

Thanks for the interesting discussion, particularly the
noatime (or relatime), e.g.,
https://wiki.archlinux.org/index.php/fstab#atime_options
which I hadn't been aware of.
Personally, I'd decided not to bother with ssd's.
My boxes all have multiple GB's of memory, more than enough
so that everything I've accessed (or likely will access
during the next 10,000 years) is cached in memory.
And once cached, the device from which those files were
read just isn't accessed any more, except for occasional
syncs/writes/etc, so its speed just isn't relevant.
But I'd overlooked atime, and have now added relatime
in fstab to all my ext partitions.
So the only remaining noticeable slowness is during boot
and the first time files are accessed after boot, e.g., the
first time I type cc. Try launching mozilla/any_browser,
then quit/exit it and re-launch it. The first time has
a small delay, the second time is pretty instantaneous.
An ssd will speed up that first time, but have no effect
on the second,third,... times.
I can see advantages for laptops, where ssd's may
be more rugged and require less power (without being
spun up and down all the time). But I don't see any
big desktop advantage, i.e., if you have a few dollars
to spend, I'd guess that maybe they're probably
better spent on some other upgrade.
--
John Forkosh ( mailto: j...@f.com where j=john and f=forkosh )

Grant

unread,
Sep 19, 2014, 1:04:22 AM9/19/14
to
On Thu, 18 Sep 2014 22:09:37 -0400, "John K. Herreshoff" <No...@not.here> wrote:

>
>Interesting... Please, an example fstab for ssd.

root@itxmini:~# cat /etc/fstab
# /etc/fstab for slackware64-14.1 on itxmini -- 2014-09-17
#
/dev/sda1 / ext4 noatime,defaults 0 0
/dev/sda3 swap swap defaults 0 0
/dev/sda5 /home ext4 noatime,defaults 0 0
/dev/sda7 /var ext4 noatime,defaults 0 0
/dev/sda9 /usr/local ext4 noatime,defaults 0 0
/dev/sda11 /srv/common ext4 noatime,defaults 0 0
/dev/sda12 /srv/mirror ext4 noatime,defaults 0 0
#
/dev/sda2 /alt ext4 noauto,noatime,defaults 0 0
/dev/sda6 /alt/home ext4 noauto,noatime,defaults 0 0
/dev/sda8 /alt/var ext4 noauto,noatime,defaults 0 0
/dev/sda10 /alt/usr/local ext4 noauto,noatime,defaults 0 0
#
devpts /dev/pts devpts gid=5,mode=620 0 0
proc /proc proc defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
#
deltree:/home/common /home/common nfs hard,intr
deltree:/home/mirror /home/mirror nfs noauto,user,hard,intr
#
pooh:/home/backup1 /home/backup1 nfs noauto,user,hard,intr
pooh:/home/backup2 /home/backup2 nfs noauto,user,hard,intr
pooh:/home/raid/a /home/backupa nfs noauto,user,hard,intr
pooh:/home/raid/b /home/backupb nfs noauto,user,hard,intr
#

Grant.
>
>John.

Grant

unread,
Sep 19, 2014, 1:14:57 AM9/19/14
to
On Fri, 19 Sep 2014 04:17:51 +0000 (UTC), JohnF <jo...@please.see.sig.for.email.com> wrote:

>Grant <o...@grrr.id.au> wrote:
>> Recently I transferred a Samsung 830 Series 128GB SSD from retired
>> Win7pro box to a shiny new Slackware64-14.1 box.
>> Here I describe some of the things needed to run Linux on an SSD.
>> [...]
>> The first SSD survival tool is to specify 'noatime' in /etc/fstab for
>> SSD mountpoints, to remove these unnecessary inode updates (as long
>> as you're not running a server relying on file access times).
>> [...]
>
>Thanks for the interesting discussion, particularly the
>noatime (or relatime), e.g.,
> https://wiki.archlinux.org/index.php/fstab#atime_options
>which I hadn't been aware of.
> Personally, I'd decided not to bother with ssd's.

My concern is for the life of the hard drive, so now I put the hard drive
into standby at bootup. Adding the SSD was more because I already
had it, not because I needed the speed.

After all, I'm replacing a box with a 500MHz CPU and 256MB memory with
a quad-core 2GHz CPU and 4GB memory.

>My boxes all have multiple GB's of memory, more than enough
>so that everything I've accessed (or likely will access
[...]
>spun up and down all the time). But I don't see any
>big desktop advantage, i.e., if you have a few dollars
>to spend, I'd guess that maybe they're probably
>better spent on some other upgrade.

Yes, although this box won't have a desktop very often, though I did
install XFCE (but not KDE) just in case.

The new box is also much faster than the old one for editing text files,
which is what I mostly do with it when I open ssh terminals to the
box.

Grant.

Grant

unread,
Sep 19, 2014, 1:33:20 AM9/19/14
to
On Fri, 19 Sep 2014 15:14:57 +1000, Grant <o...@grrr.id.au> wrote:

>
>My concern is for the life of the hard drive, now I put the hard drive
>into standby at bootup, adding the SSD was more because I already
>had it, not because I needed the speed.
>
Which I shouldn't be, considering the 80GB hard drive in deltree has
over 60,000 without errors.

Grant.

Grant

unread,
Sep 19, 2014, 1:47:49 AM9/19/14
to
On Fri, 19 Sep 2014 15:04:22 +1000, Grant <o...@grrr.id.au> wrote:

>On Thu, 18 Sep 2014 22:09:37 -0400, "John K. Herreshoff" <No...@not.here> wrote:
>
>>
>>Interesting... Please, an example fstab for ssd.
>
Oops, I used to use reiserfs3, now I need the sixth field to get fsck?

root@itxmini:~# cat /etc/fstab
# /etc/fstab for slackware64-14.1 on itxmini -- 2014-09-17
#
/dev/sda1 / ext4 noatime,defaults 0 1
/dev/sda3 swap swap defaults 0 0
/dev/sda5 /home ext4 noatime,defaults 0 2
/dev/sda7 /var ext4 noatime,defaults 0 2
/dev/sda9 /usr/local ext4 noatime,defaults 0 2
/dev/sda11 /srv/common ext4 noatime,defaults 0 2
/dev/sda12 /srv/mirror ext4 noatime,defaults 0 2

Helmut Hullen

unread,
Sep 19, 2014, 2:44:00 AM9/19/14
to
Hallo, Grant,

Du meintest am 19.09.14:

>>> The first SSD survival tool is to specify 'noatime' in /etc/fstab
>>> for SSD mountpoints, to remove these unnecessary inode updates (as
>>> long as you're not running a server relying on file access times).
>>> [...]

>> Thanks for the interesting discussion, particularly the
>> noatime (or relatime), e.g.,
>> https://wiki.archlinux.org/index.php/fstab#atime_options
>> which I hadn't been aware of.
>> Personally, I'd decided not to bother with ssd's.

> My concern is for the life of the hard drive, now I put the hard
> drive into standby at bootup, adding the SSD was more because I
> already had it, not because I needed the speed.

Just for curiosity:
What about "relatime" instead of "noatime"? Somewhere (I don't remember
where) it was recommended.

What about "discard" as an option for SSDs?

Viele Gruesse
Helmut

"Ubuntu" - an African word, meaning "Slackware is too hard for me".

JohnF

unread,
Sep 19, 2014, 3:11:14 AM9/19/14
to
deltree=HOSTNAME, and 60,000=hours(=6.8years,24x7), I take it?
Well, rotating disk mtbf's are typically 0.25-to-1.0 million hours,
but maybe only 25,000 spin-ups, and that's only if your electrical
and physical (temp,humidity,dust) environments are nominal.
But "m"=mean/average, so you should always be worried (though
I guess you've gone past the burn-in time necessary to rule out
manufacturing defects:).
Personally, my backups for the past five years have been three
Synology DS109j NAS's on my lan (you can now get ds112j's, and even
ds115j's I think), used in a grandfather-like (but not exactly) scheme.
And rsync is the backup software. That all works great, and I'm not
worried about losing more than a day's work, though that hasn't
happened ... yet (knock on wood). And for offsite backups, I've
used Western Digital passports, which I especially like because
they're usb hub powered, and really small to easily fit in a bank
safe deposit box, where I rotate six of them (three groups of
two duplicates), four drives always in the box and two at home.
And also some usb "sticks", mostly to move stuff between my home
office and clients, but occasionally for daily backup when I'm
too lazy to power up a NAS.
But I'd never trust any one spindle (or ssd or any one device).
Real estate is location, location, location;
data is backup, backup, backup.

Grant

unread,
Sep 19, 2014, 3:33:17 AM9/19/14
to
On 19 Sep 2014 08:44:00 +0200, Hel...@Hullen.de (Helmut Hullen) wrote:

>Hallo, Grant,
>
>Du meintest am 19.09.14:
>
>>>> The first SSD survival tool is to specify 'noatime' in /etc/fstab
>>>> for SSD mountpoints, to remove these unnecessary inode updates (as
>>>> long as you're not running a server relying on file access times).
>>>> [...]
>
>>> Thanks for the interesting discussion, particularly the
>>> noatime (or relatime), e.g.,
>>> https://wiki.archlinux.org/index.php/fstab#atime_options
>>> which I hadn't been aware of.
>>> Personally, I'd decided not to bother with ssd's.
>
>> My concern is for the life of the hard drive, now I put the hard
>> drive into standby at bootup, adding the SSD was more because I
>> already had it, not because I needed the speed.
>
>Just for curiosity:
>What about "relatime" instead of "noatime"? Somewhere (I don't remember
>where) it was recommended.

Yes, relatime could be used; it didn't register with me in my reading
since many sites were recommending noatime, and I didn't go further until
it was mentioned above.
>
>What about "discard" as an option for SSDs?

I discussed that in my OP: 'discard' is the earlier SSD trim technique,
which may slow down performance due to the synchronous merging of trim
requests along with data deletes or rewrites.
>
>Viele Gruesse
>Helmut
>
Still forgetting the "-- " delimiter? ;o)

Grant.
--

Grant

unread,
Sep 19, 2014, 6:39:01 AM9/19/14
to
On Fri, 19 Sep 2014 07:11:14 +0000 (UTC), JohnF <jo...@please.see.sig.for.email.com> wrote:

>Grant <o...@grrr.id.au> wrote:
>> On Fri, 19 Sep 2014 15:14:57 +1000, Grant <o...@grrr.id.au> wrote:
>>>
>>>My concern is for the life of the hard drive, now I put the hard drive
>>>into standby at bootup, adding the SSD was more because I already
>>>had it, not because I needed the speed.
>>>
>> Which I shouldn't be, considering the 80GB hard drive in deltree has
>> over 60,000 without errors. Grant.
>
>deltree=HOSTNAME, and 60,000=hours(=6.8years,24x7), I take it?

Yup!

>Well, rotating disk mtbf's are typically 0.25-to-1.0 million hours,
>but maybe only 25,000 spin-ups, and that's only if your electrical
>and physical (temp,humidity,dust) environments are nominal.

Power_On_Hours - 60653
Power_Cycle_Count - 138
Temperature_Celsius - 40
Other important numbers are zero. It's a Seagate, it has those huge
scary correctable error numbers in SMART.

>But "m"=mean/average, so you should always be worried (though
>I guess you've gone past the burn-in time necessary to rule out
>manufacturing defects:).
> Personally, my backups for the past five years have been three
>Synology DS109j NAS's on my lan (you can now get ds112j's, and even
>ds115j's I think), used in a grandfather-like (but not exactly) scheme.
>And rsync is the backup software. That all works great, and I'm not
>worried about losing more than a day's work, though that hasn't
>happened ... yet (knock on wood). And for offsite backups, I've
>used Western Digital passports, which I especially like because
>they're usb hub powered, and really small to easily fit in a bank
>safe deposit box, where I rotate six of them (three groups of
>two duplicates), four drives always in the box and two at home.
>And also some usb "sticks", mostly to move stuff between my home
>office and clients, but occasionally for daily backup when I'm
>too lazy to power up a NAS.

I have a RAID6 NAS and back up to it sometimes; once I had to email a
friend to send me a lost program file. So there's a hardlink-style backup
running as a cron job for my web pages and source code.

> But I'd never trust any one spindle (or ssd or any one device).
>Real estate is location, location, location;
>data is backup, backup, backup.

Yes, agree.

JohnF

unread,
Sep 19, 2014, 8:35:41 AM9/19/14
to
Grant <o...@grrr.id.au> wrote:
> On Fri, 19 Sep 2014, JohnF <jo...@please.see.sig.for.email.com> wrote:
>>Grant <o...@grrr.id.au> wrote:
I'm wary of raid for backup. Raid's intended as an availability
solution, not a backup solution. In particular, those pesky
one-of-a-kind controllers are pretty much impossible to replace.
If your enclosure dies, you're left with a raid disk set that's
unreadable unless you get exactly the same model controller
with exactly the same software revision. With a one-disk/non-raid
nas, you can just swap out the ext-formatted disk (synology runs
a slightly-specialized linux) and pop it in anywhere, and it's
instantly mountable/readable. I have a Thermaltake BlacX duet,
http://www.thermaltake.com/products-model.aspx?id=C_00001756
which makes that even more trivial. And I tested that idea by
actually removing one of my ds109j's disks. No problem.
Moreover, my three ds109j's together cost less than most
raid nas's. And they're obviously totally redundant.
Seems like a lot more bang for the buck for backup purposes.
For availability purposes, I'd of course go with raid,
but that requirement's a whole different story.

>> But I'd never trust any one spindle (or ssd or any one device).
>>Real estate is location, location, location;
>>data is backup, backup, backup.
> Yes, agree.

Martin

unread,
Sep 19, 2014, 2:54:51 PM9/19/14
to
On 09/19/2014 01:29 AM, Grant wrote:
> Hi,
>
> Recently I transferred a Samsung 830 Series 128GB SSD from retired
> Win7pro box to a shiny new Slackware64-14.1 box.
>
> Here I describe some of the things needed to run Linux on an SSD.

good write-up. What is worth mentioning, is how you reset the trim
information for all cells of an SSD that has been used before.

It can be done using the ATA Secure Erase feature, as described for
instance here:

https://wiki.archlinux.org/index.php/SSD_Memory_Cell_Clearing
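
Roughly it goes like this, but treat it as a sketch only and check the
wiki page first; it wipes the whole drive, and the drive must not be in
the "frozen" state:

# ATA Secure Erase outline -- DESTROYS ALL DATA on /dev/sdX (placeholder)
hdparm -I /dev/sdX | grep -i frozen
hdparm --user-master u --security-set-pass Eins /dev/sdX
hdparm --user-master u --security-erase Eins /dev/sdX
hdparm -I /dev/sdX | grep -i -A8 security   # should report "not enabled" again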

Rich

unread,
Sep 19, 2014, 3:57:32 PM9/19/14
to
JohnF <jo...@please.see.sig.for.email.com> wrote:
> > I have a RAID6 NAS, backup to it sometimes, once I had to email a
> > friend to send me a lost program file. So there's a backup hardlink
> > style running as a cron job for my web pages and source code.

> I'm wary of raid for backup. Raid's intended as an availablity
> solution, not a backup solution. In particular, those pesky
> one-of-a-kind controllers are pretty much impossible to replace.

Which is why, if one wants to do RAID, one does not use those pesky
controllers (unless one is a corporate entity with enough money to
invest in multiple backup copies of those pesky controllers).

One does RAID with a plain Linux box, and a set of plain disks,
attached to plain SATA ports, on a standard motherboard, using the
linux md tools.

Then all the parts but the disks can be replaced, and the disks can
still be read as a RAID set. Controller failure, switch sata ports.
Motherboard failure, switch motherboards. Sata cable failure, new sata
cable. But the RAID array still works (once the broken part is
replaced).
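
A minimal sketch of that (device names are placeholders, untested here):

# build a six-disk RAID6 from plain disks on plain SATA ports
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md0

# after moving the disks to any other Linux box, the array reassembles
# from the metadata stored on the member disks themselves
mdadm --assemble --scan
cat /proc/mdstat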

bad sector

unread,
Sep 19, 2014, 6:47:46 PM9/19/14
to
On 09/18/2014 07:29 PM, Grant wrote:

thanks, interesting write-up!


> SSD trim
> `````````
> Early on, expensive SSDs were suffering from erase/write lifetime
> limits, and people decided to do something to improve that. Also,
> of the product's development, increases in capacity by using smaller
> cells, and using MLC (the storing of 2 bits per cell as one of
> four analog voltages), has decreased the erase/write life cycle
> count to as low as 3000.
>
> This is because SSDs can only be erased in much larger blocks (for
> example, 128k or 256k) though they can be written in 4k blocks that
> naturally suit modern hard drives and operating systems.
>
> What SSD trim does is notify the SSD firmware when a file's contents
> is no longer required, the SSD can then use this information as part
> of its decision to coalesce vacant blocks to build the next erase
> target block.

I'm a little confused by this; progress has made things worse? 3000 seems
like a low value.





Grant

unread,
Sep 19, 2014, 6:58:14 PM9/19/14
to
On Fri, 19 Sep 2014 12:35:41 +0000 (UTC), JohnF <jo...@please.see.sig.for.email.com> wrote:

[...]
>I'm wary of raid for backup. Raid's intended as an availablity
>solution, not a backup solution. In particular, those pesky
>one-of-a-kind controllers are pretty much impossible to replace.

It's a slackware box running slack64-14.1 with 6 x consumer 1TB drives,
resilient to faults. Another backup is a 2TB drive in the same box,
this drive is put into standby on powerup.

Then there's an external 2TB drive mostly turned off hooked via USB 3.0
to the main windows box.

Now that this new computer has USB 3.0, I may hook the external drive to
it instead, and use one of the serial ports or the parallel port to control
a power relay for the external drive.

>If your enclosure dies, you're left with a raid disk set that's
>unreadable unless you get exactly the same model controller
>with exactly the same software revision.

The benefit of using Linux mdadm is that the RAID6 is not tied to the
controller; see my <http://bugsplatter.id.au/sasflash/> page for how
I repurposed a used IBM ServeRAID controller to a dumb IT mode disk
controller. I was able to transfer the six RAID member drives without
alteration to the new controller card.

After the rebuild I had a drive drop out due to what seemed to be a badly
plugged-in SATA power connector; the drive tested fine outside the
RAID6 group, and it's now a spare.

> With a one-disk/non-raid
>nas, you can just swap out the ext-formatted disk (synology runs
>a slightly-specialized linux) and pop it in anywhere, and it's
>instantly mountable/readable. I have a Thermaltake BlacX duet,
> http://www.thermaltake.com/products-model.aspx?id=C_00001756
>which makes that even more trivial. And I tested that idea by
>actually removing one of my ds109j's disks. No problem.
>Moreover, my three ds109j's together cost less than most
>raid nas's. And they're obviously totally redundant.

I built the RAID6 box a few years ago to explore the technical aspects,
after learning about RAID6 from a friend when he was planning to buy
a NAS. He bought a Qnap box which is now on its second set of hard
drives. He had the misfortune to buy some of the poor quality Seagate
1.5TB desktop drives for the first use of the Qnap. Although a few
drives failed over the next 3.5 years, he suffered no data loss.

I have three of the surviving hard drives, another two died in my
individual testing. They have about 28k hours up. One is in the
RAID6 box now.

He's a Windows person, this was his first contact with Linux OS.

>Seems like a lot more bang for the buck for backup purposes.
>For availability purposes, I'd of course go with raid,
>but that requirement's a whole different story.
>
I'm buying NAS hard drives now, two 2TB drives for my new Win7
box, and I plan to put them in the RAID box one day. The AU dollar may
be in for a slide, so I'm still buying computer bits before they go
too high.

It amazes me that hard drive prices are staying high, as a 1TB drive
still hasn't gone back below the pre-flood price of quite a few (5?)
years ago.

Grant.

Grant

unread,
Sep 19, 2014, 7:07:42 PM9/19/14
to
Thank you. I've been wondering about this aspect, since a post boot
fstrim run always produces so many blocks trimmed, and the numbers are
not going down.

I still have the 320GB hard drive all ready to boot, so I can do the
SSD clearing by swapping cables or playing with the BIOS, to boot into
the prior install of slackware. If it works...

What happened? Well, I was transferring files to the SSD via the
RAID6 box, but something went wrong, was getting data errors on the
SSD so I had to start over with a bare metal install.

But yesterday the RAID6 box failed to boot, turned out I'd wiped its
/dev/sda boot sector -- so I did have some finger trouble. Easy to
fix, boot slack install CD, mount /dev/sda and run a lilo -r ...


Box for the new computer arrived yesterday, photos here:

<http://bugsplatter.id.au/system/d3v.html#dressing_the_ga-j1900n-d3v>

Grant.

Grant

unread,
Sep 19, 2014, 8:42:27 PM9/19/14
to
Apparently some TLC (Triple Level Cell, actually 3 bits per cell) SSDs
are rated well below a minimum of 1000 erase/write cycles.

The low value for erase/write cycles brought down the cost of the
technology, but the individual cells are required to hold four analog
data levels for MLC, eight levels for TLC. This implies some very
fast A/D converters, guard bands to account for leakage, and still
they produce a mostly reliable product.

You could always go for the classic SLC (one bit per cell), with 30k
erase/write cycles, but be prepared to pay well for the privilege.
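
Rough arithmetic on what 3000 cycles means for a drive like mine,
ignoring write amplification:

awk 'BEGIN { gb = 128; cycles = 3000
             printf "%d TB of raw writes\n", gb * cycles / 1000 }'
# 384 TB -- which is why the low cycle count is less scary than it sounds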

Grant.

Grant

unread,
Sep 19, 2014, 11:50:43 PM9/19/14
to
On Fri, 19 Sep 2014 20:54:51 +0200, Martin <m...@abc.invalid> wrote:

Turned out I had to transfer the SSD to another box to do the secure
erase; the BIOS in the GA-J1900N-D3V prevented access to the SSD
security commands :-/

But it's done, and now I install slackware yet again... Also turns
out the hard drive in the new box was damaged by finger troubles the
other day, so that drive will be wiped and repurposed too. Was going
to do this anyway.

I'd rather play safe and reinstall than try another copy/goof up. New
slack install doesn't take long.

Grant.

JohnF

unread,
Sep 20, 2014, 5:39:54 AM9/20/14
to
Grant <o...@grrr.id.au> wrote:
> On Fri, 19 Sep 2014, JohnF <jo...@please.see.sig.for.email.com> wrote:
>
> [...]
>>I'm wary of raid for backup. Raid's intended as an availablity
>>solution, not a backup solution. In particular, those pesky
>>one-of-a-kind controllers are pretty much impossible to replace.
>
> It's a slackware box running slack64-14.1 with 6 x consumer 1TB drives,
> resilient to faults. Another backup is a 2TB drive in the same box,
> this drive is put into standby on powerup.
>
> Then there's an external 2TB drive mostly turned off hooked via USB 3.0
> to the main windows box.
>
> Now this new computer has USB 3.0, I may hook the external drive to
> it instead, use one of the serial ports or the parallel port to control
> a power relay for the external drive?
>
>>If your enclosure dies, you're left with a raid disk set that's
>>unreadable unless you get exactly the same model controller
>>with exactly the same software revision.
>
> The benefit of using Linux mdadm is that the RAID6 is not ties to the
> controller, see my <http://bugsplatter.id.au/sasflash/> page for how
> I repurposed a used IBM ServeRAID controller to a dumb IT mode disk
> controller. I was able to transfer the six RAID member drives without
> alteration to the new controller card.

Thanks, ditto Rich in preceding followup, for mentioning
linux mdadm tools, which I hadn't been aware of.
Pretty impressive SASflash page. Thanks for all the info.
Probably more complexity than I want to get involved with
just for backup, but I have no 24x7 availability requirement.
At eod in my soho office, I just power everything down.
The two most expensive hard drives I ever bought were
o about $950 USD for an 8-bit 40MB Plus HardCard in maybe 1986,
o about $1050 for a scsi 2.1GB quantum empire 2100s in maybe 1992.
Today I can get a 2TB sata for let's say $100.
So on a dollars/MB basis, prices have decreased by a factor of:
o half a million compared to the 1986 Plus HardCard,
o ten thousand compared to the 1992 quantum 2100s.
So that's a factor of 50 over the ~five-years between 1986-1992,
which is a lot more than your "no change" over the last five years
(though I'd say USA prices have gone down, but not by 50x).
In any case, hard for me to complain when I still remember
paying $950 USD for a 40MB drive.
And that went into a dual-floppy IBM 8088-based pc with
256KB memory that cost ~$2500 in 1984. If you tried to figure,
say, dollars/MIP/KB memory/MB disk/whatever, I wouldn't be too
surprised if you're literally approaching a trillion to one.

Henrik Carlqvist

unread,
Sep 20, 2014, 9:19:29 AM9/20/14
to
On Fri, 19 Sep 2014 20:39:01 +1000, Grant wrote:
> Power_On_Hours - 60653
> Power_Cycle_Count - 138
> Temperature_Celsius - 40
> Other important numbers are zero. It's a Seagate, it has those huge
> scary correctable error numbers in SMART.

I also have a good old seagate disk in an old machine:

smartctl -A /dev/hda | colrm 1 4 | colrm 25 83 | tail +7 | grep er
Power_On_Hours 89181
Power_Cycle_Count 95
Temperature_Celsius 39
Hardware_ECC_Recovered 208094023

Comparing our numbers I can't help but ask myself, why do I powercycle
my machine so often? :-)

(In my case the machine has been powercycled about once every 900 hours
in average which equals to almost once every month.)

regards Henrik
--
The address in the header is only to prevent spam. My real address is:
hc351(at)poolhem.se Examples of addresses which go to spammers:
root@localhost postmaster@localhost

frank.w...@gmail.com

unread,
Sep 20, 2014, 5:59:17 AM9/20/14
to
From JohnF:
> o about $950 USD for an 8-bit 40MB Plus HardCard in maybe 1986,

I bought a 20MB HD for $800 ($40/MB) at about that same
year -- the higher cost because it was external with its
own chassis. It mounted on the side of an Amiga 500.

Frank

Michael Black

unread,
Sep 20, 2014, 12:06:13 PM9/20/14
to
I have no real idea of how much hard drives cost back then, only that they
were too expensive.

I paid $500 Canadian for a controller card, one 5.25" floppy drive and a
case and power supply for it, in July of 1984. That seemed expensive
enough, but I was ready, it had been five years of saving to cassette
tapes. With a floppy drive, I could actually do something useful with the
computer. And all I got was 360K per floppy.

I didn't get a hard drive until late 1993, I was given a broken Mac Plus
that I fixed and it needed more than one floppy drive, hence the hard
drive. It was a gift that Christmas, so I don't know how much was paid
for it.

About 2006, I paid around a hundred dollars for a 160gig hard drive, which
I've yet to use to capacity. I noticed one ad yesterday had a 120gig SSD
drive for about the same price. I don't know if that's good or bad, it
shows how much such things have dropped, but why would I want to pay as
much as I did in 2006 for the same capacity?

I think I've mentioned it before, I have three 320gig SATA drives just
sitting around because I don't have any computer that has a SATA
interface. All three drives were found in set top boxes found on the
sidewalk. Nobody would toss that 20 or 30 years ago.

Michael



Rich

unread,
Sep 20, 2014, 1:09:26 PM9/20/14
to
The most expensive (total dollar input) HD I've ever bought was a 1G
IDE years ago that cost me something like $489.

And that was after waiting for and watching its price fall from a
starting point of about $1200 or so.

The most expensive per byte was probably the first 80MB drive I bought
in 1991. I don't remember the price anymore, but it had to be
somewhere in the $200's range.

JohnF

unread,
Sep 20, 2014, 11:11:36 PM9/20/14
to
Michael Black <et...@ncf.ca> wrote:
> On Sat, 20 Sep 2014, frank.w...@gmail.com wrote:
>
>> From JohnF:
>>> o about $950 USD for an 8-bit 40MB Plus HardCard in maybe 1986,
>>
>> I bought a 20MB HD for $800 ($40/MB) at about that same year -- the higher
>> cost because it was external with its own chassis. It mounted on the side of
>> an Amiga 500.
>>
> I have no real idea of how much hard drives cost back then,
> only that they were too expensive. [...] Michael

Not at all. Google "giant magnetoresistance", e.g.,
http://www.research.ibm.com/research/gmr.html
serendipitously discovered (they weren't looking for it) in 1988,
and subsequently awarded the nobel prize in physics.
At the time of discovery, physicists didn't even believe
the effect was possible.
And at that time, disk capacities (at least in pc form factors)
were thought to be capped at ~2GB. But by the early-to-mid 1990's,
gmr had broken that barrier by an order of magnitude, and today
by three orders. It's primarily gmr that's made the capacity go up
and cost down, to a mind-boggling extent.

bad sector

unread,
Sep 20, 2014, 11:20:07 PM9/20/14
to
thanks

think I'll just sit it out


Grant

unread,
Sep 21, 2014, 4:35:06 AM9/21/14
to
On 19 Sep 2014 08:44:00 +0200, Hel...@Hullen.de (Helmut Hullen) wrote:

>Hallo, Grant,
>
>Du meintest am 19.09.14:
>
>>>> The first SSD survival tool is to specify 'noatime' in /etc/fstab
>>>> for SSD mountpoints, to remove these unnecessary inode updates (as
>>>> long as you're not running a server relying on file access times).
>>>> [...]
>
>>> Thanks for the interesting discussion, particularly the
>>> noatime (or relatime), e.g.,
>>> https://wiki.archlinux.org/index.php/fstab#atime_options
>>> which I hadn't been aware of.
>>> Personally, I'd decided not to bother with ssd's.
>
>> My concern is for the life of the hard drive, now I put the hard
>> drive into standby at bootup, adding the SSD was more because I
>> already had it, not because I needed the speed.
>
>Just for curiosity:
>What about "relatime" instead of "noatime"? Somewhere (I don't remember
>where) it was recommended.

Turns out relatime was made the default in June 2009. The noatime option
is bad for compiling kernel source.
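
One way to see what a mount actually ended up with, for what it's worth:

# /proc/mounts shows the effective options; look for relatime or noatime
grep ' / ' /proc/mounts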

Goes to show I should be more careful what I read.

Grant.

Grant

unread,
Sep 21, 2014, 4:40:56 AM9/21/14
to
On Fri, 19 Sep 2014 09:29:50 +1000, Grant <o...@grrr.id.au> wrote:

>The first SSD survival tool is to specify 'noatime' in /etc/fstab for
>SSD mountpoints, to remove these unnecessary inode updates (as long
>as you're not running a server relying on file access times).

Nope, 'noatime' option breaks compiling, don't do it. Besides, since
June 2009 'relatime' has been the default. Sorry about my confusion,
there's so much conflicting information out there.

grant@itxmini:~$ cat /etc/fstab
# /etc/fstab for slackware64-14.1 on itxmini -- 2014-09-20
#
/dev/sda1 / ext4 defaults 1 1
/dev/sda3 swap swap defaults 0 0
/dev/sda5 /home ext4 defaults 1 2
/dev/sda7 /var ext4 defaults 1 2
/dev/sda9 /usr/local ext4 defaults 1 2
/dev/sda11 /srv/common ext4 defaults 1 2
/dev/sda12 /srv/mirror ext4 defaults 1 2
#
/dev/sda2 /alt ext4 noauto,defaults 0 0
/dev/sda6 /alt/home ext4 noauto,defaults 0 0
/dev/sda8 /alt/var ext4 noauto,defaults 0 0
/dev/sda10 /alt/usr/local ext4 noauto,defaults 0 0

Helmut Hullen

unread,
Sep 21, 2014, 5:19:00 AM9/21/14
to
Hallo, Grant,

Du meintest am 21.09.14:

>> Just for curiosity:
>> What about "relatime" instead of "noatime"? Somewhere (I don't
>> remember where) it was recommended.

> Turns out relatime was made the default in June 2009. The noatime
> option is bad for compiling kernel source.

> Goes to show I should be more careful what I read.

Don't worry - not only the times are changing ...

Martin

unread,
Sep 21, 2014, 9:50:41 AM9/21/14
to
On 09/21/2014 10:35 AM, Grant wrote:
> The noatime option
> is bad for compiling kernel source.

Please give an example. I've used noatime for yonks and I've never had
an issue compiling my kernels because of that.

Aragorn

unread,
Sep 21, 2014, 10:32:52 AM9/21/14
to
On Sunday 21 September 2014 15:50, Martin conveyed the following to
alt.os.linux.slackware...
Neither have I, to be honest, and I've built plenty of kernels,
including some parallel builds for separate Xen dom0 and domU kernels.

--
= Aragorn =

http://www.linuxcounter.net - registrant #223157

Rich

unread,
Sep 21, 2014, 1:46:51 PM9/21/14
to
Aragorn <thor...@telenet.be.invalid> wrote:
> On Sunday 21 September 2014 15:50, Martin conveyed the following to
> alt.os.linux.slackware...

> > On 09/21/2014 10:35 AM, Grant wrote:
> >
> >> The noatime option is bad for compiling kernel source.
> >
> > Please give an example. I've used noatime for yonks and i've never had
> > an issue compiling my kernels because of that.

> Neither have I, to be honest, and I've built plenty of kernels,
> including some parallel builds for separate Xen dom0 and domU kernels.

Make normally triggers off of the modification time value:

man make (emphasis added):

Once a suitable makefile exists, each time you change some
source files, this simple shell command:

make

suffices to perform all necessary recompilations. The make
program uses the makefile data base and the last-modification
^^^^^^^^^^^^^^^^^
times of the files to decide which of the files need to be
updated. For each of those files, it issues the commands
recorded in the data base.

So unless the kernel build system is doing something really different,
"noatime" should have no effect on kernel builds.

Where it will have an effect is traditional email programs that detect
"new mail" by the atime of a mbox file differing from when you the user
last accessed that same mbox file.

Aragorn

unread,
Sep 21, 2014, 1:56:56 PM9/21/14
to
On Sunday 21 September 2014 19:46, Rich conveyed the following to
alt.os.linux.slackware...
Indeed. mutt is an example of that. This is why relatime was
introduced, so that the atime would still be updated at least once every
24 hours if the file had been accessed in the meantime.

Grant

unread,
Sep 21, 2014, 6:15:51 PM9/21/14
to
Maybe it's because I use hardlinked source trees, something I've been
doing for ten years or more. I was seeing errors after modifying .config
and compiling a second time, so I blamed noatime, since this is the first
time I've used noatime.

What else could corrupt the compile's make action? I was doing lots of
recompiles as I adjusted .config options to track another issue.
When I deleted the source and started over with fresh source, the
compile errors started on the second recompile.

Grant.

Grant

unread,
Sep 21, 2014, 8:51:56 PM9/21/14
to
On Sun, 21 Sep 2014 16:32:52 +0200, Aragorn <thor...@telenet.be.invalid> wrote:

>On Sunday 21 September 2014 15:50, Martin conveyed the following to
>alt.os.linux.slackware...
>
>> On 09/21/2014 10:35 AM, Grant wrote:
>>
>>> The noatime option is bad for compiling kernel source.
>>
>> Please give an example. I've used noatime for yonks and i've never had
>> an issue compiling my kernels because of that.
>
>Neither have I, to be honest, and I've built plenty of kernels,
>including some parallel builds for separate Xen dom0 and domU kernels.

It was unexpected, but when I removed noatime the problem disappeared,
so the only other different thing I'm doing is the hardlinked kernel
source tree, something I've done for years, over a decade now.

Example:
grant@itxmini:~$ cd linux/
grant@itxmini:~/linux$ ls -l
total 8
drwxr-xr-x 23 grant wheel 4096 Aug 4 08:25 linux-3.16/
drwxr-xr-x 24 grant wheel 4096 Sep 22 08:28 linux-3.16.3a/
grant@itxmini:~/linux$ rm -rf linux-3.16*
grant@itxmini:~/linux$ tar xJf /home/common/linux/linux-3.16.tar.xz
grant@itxmini:~/linux$ cp -al linux-3.16 linux-3.16.3a
grant@itxmini:~/linux$ cd linux-3.16.3a
grant@itxmini:~/linux/linux-3.16.3a$ zcat /home/common/linux/patch-3.16.3.gz |patch -p1
patching file Documentation/devicetree/bindings/sound/adi,axi-spdif-tx.txt
...
patching file virt/kvm/iommu.c
grant@itxmini:~/linux/linux-3.16.3a$ cp /boot/config-3.16.3a .config
grant@itxmini:~/linux/linux-3.16.3a$ time make -j5
HOSTCC scripts/basic/fixdep
...
LD [M] net/xfrm/xfrm_ipcomp.ko

real 8m17.206s
user 22m56.460s
sys 7m43.820s

Then I run a script that does the root install kernel and modules, edit
/etc/lilo.conf and perhaps boot new kernel.
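
The install step itself is nothing special, roughly (version names are
mine, adjust to suit):

make modules_install
cp arch/x86/boot/bzImage /boot/vmlinuz-3.16.3a
cp System.map /boot/System.map-3.16.3a
cp .config /boot/config-3.16.3a
# then edit /etc/lilo.conf to add the new image and run lilo
lilo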

The problem I noticed was that after a reboot and a change to .config, the
next compile would fail. At first I thought the SSD had failed; then I
reinstalled the SSD and the same problem happened again; then I removed
the 'noatime' option and the problem went away.

I write about that here, to be told I imagine it? I have no idea what the
connection between a hardlinked source tree and compile failure with noatime
is; what I do know is that the problem went away after I removed noatime.

What is misleading is the first compile always works, and the 'make clean;
make' sequence usually works, but I'm not used to having to do it that way.

Grant.

Aragorn

unread,
Sep 22, 2014, 8:20:45 AM9/22/14
to
On Monday 22 September 2014 02:51, Grant conveyed the following to
alt.os.linux.slackware...
No, that was not what I meant to convey. I was merely hoping for some
elaboration on why you were having those problems.

> I have no idea what the connection between hardlink source tree and
> compile failure with noatime is, what I do know is the problem went
> away after I removed the noatime.

Hardlinking means that the same inodes are used as in the original
source tree, and the ctime, mtime and atime fields in the inodes will
thus be those of the files in that original source tree. So it's not
just the atime which will be different when you use hardlinks, but also
the ctime and the mtime.

> What is misleading is the first compile always works, and the 'make
> clean; make' sequence usually works, but I'm not used to having to do
> it that way.

I think that the use of a hardlinked source tree "copy" is probably what
caused the problem in the first place, possibly in combination with a
parallelized build process, which may cause timing issues. Not using
"noatime" as a mount option may then also avoid a race condition
within the storage device's command queue.

I will add the disclaimer here that I've never compiled anything using a
source tree on a SATA storage device, because I've always had SCSI and
SAS drives in the machines on which I built my kernels. SCSI/SAS uses
TCQ (tagged command queuing), whereas SATA uses NCQ (native command
queuing). They are similar approaches, but they're not quite the same
thing, and TCQ still outperforms NCQ.

Code optimization flags such as "-O3" or "-Os" - often they are
synonymous in terms of the resulting binary code, but not always - are
also known to break compilation, which is why Gentoo still recommends
compiling with "-O2". With over-optimization, the resulting binary code
doesn't quite do what it was intended to do anymore, and becomes very
sensitive to timing issues.

Martin

unread,
Sep 22, 2014, 2:03:15 PM9/22/14
to
tbh, I have no explanation for the nature of the errors that you
experienced, but from the semantics of noatime and make they are unexpected.

Just for the crack of it I did a "cp -al" on my linux source directory
and did a couple of compiles (sequentially), without noticing anything
unusual.

So from my point of view, noatime is fully rehabilitated.

Martin

Grant

unread,
Sep 22, 2014, 5:44:39 PM9/22/14
to
I've been using hardlink'd source trees for a long time; the idea of
copying 800-900MB of source tree for the cost of a set of inodes is
attractive, and faster too. One needs to make sure one's editor knows how
to break the hardlink on save; Vim does, with a .vimrc option, and patch
is hardlink-aware too.
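
It's easy to see what's going on with stat, for example:

cp -al linux-3.16 linux-3.16.3a
stat -c '%i links=%h %n' linux-3.16/Makefile linux-3.16.3a/Makefile
# same inode number, link count 2 -- no file data was copied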
>
>> What is misleading is the first compile always works, and the 'make
>> clean; make' sequence usually works, but I'm not used to having to do
>> it that way.
>
>I think that the use of a hardlinked source tree "copy" is probably what
>caused the problem in the first place, possibly in combination with a
>parallelized build process, which may cause timing issues. Not using
>"noatime" as a mount option may also possibly then omit a race condition
>within the storage device's command queue.

Hmm, the relatime default crept in five years ago, I didn't notice.
>
>I will add the disclaimer here that I've never compiled anything using a
>source tree on a SATA storage device, because I've always had SCSI and
>SAS drives in the machines on which I built my kernels. SCSI/SAS uses
>TCQ (tagged command queuing), whereas SATA uses NCQ (native command
>queuing). They are similar approaches, but they're not quite the same
>thing, and TCQ still outperforms NCQ.
>
>Code optimization flags such as -O3" or "-OS" - often they are
>synonymous in terms of the resulting binary code, but not always - are
>also known to break compilation, which is why Gentoo still recommends
>compiling with "-O2". With over-optimization, the resulting binary code
>doesn't quite do what it was intended to do anymore, and becomes very
>sensitive to timing issues.

Yes, kernel defaults are safe, and the -Os option now has a warning in
the *config program.

Thanks,
Grant.

Grant

unread,
Sep 22, 2014, 5:47:58 PM9/22/14
to
Did you change the .config between compiles? That's what triggered the
problem. As well as the 'make -j5' or some higher value (5 is num_cpus +1).
>
>So from my point of view, noatime is fully rehabilitated.

Speak for yerself ;o)

Grant.
>
>Martin

Martin

unread,
Sep 23, 2014, 2:44:58 AM9/23/14
to
On 09/22/2014 11:47 PM, Grant wrote:

> Did you change the .config between compiles? That's what triggered the
> problem. As well as the 'make -j5' or some higher value (5 is num_cpus +1).

I did both, only using -j8 (8 being the number of cpus) since BFS as a
cpu scheduler doesn't need the addition of 1 to maximize cpu utilisation.

> Speak for yerself ;o)

always do :p


Martin


Hans

unread,
Oct 11, 2014, 3:06:03 AM10/11/14
to
On 09/21/2014 10:35 AM, Grant wrote:
> On 19 Sep 2014 08:44:00 +0200, Hel...@Hullen.de (Helmut Hullen) wrote:
>
>> Hallo, Grant,
>>
>> Du meintest am 19.09.14:
>>
>>>>> The first SSD survival tool is to specify 'noatime' in /etc/fstab
>>>>> for SSD mountpoints, to remove these unnecessary inode updates (as
>>>>> long as you're not running a server relying on file access times).
>>>>> [...]
>>
>>>> Thanks for the interesting discussion, particularly the
>>>> noatime (or relatime), e.g.,
>>>> https://wiki.archlinux.org/index.php/fstab#atime_options
>>>> which I hadn't been aware of.
>>>> Personally, I'd decided not to bother with ssd's.
>>
>>> My concern is for the life of the hard drive, now I put the hard
>>> drive into standby at bootup, adding the SSD was more because I
>>> already had it, not because I needed the speed.
>>
>> Just for curiosity:
>> What about "relatime" instead of "noatime"? Somewhere (I don't remember
>> where) it was recommended.
>
> Turns out relatime was made the default in June 2009. The noatime option
> is bad for compiling kernel source.
>

noatime and relatime are about the way the last access time is
modified. That's about reading a file, which is different from the last
modified time. Make should only care about what was modified, not which
files were read since the last compile.

--
Hans