Why is it not letting me extend the partition?


Yousuf Khan
Mar 23, 2021, 11:20:58 PM

So one of my oldest SSDs finally had a bad misfire. One of its
memory cells seems to have gone bad, and it happened to be my boot
drive, so I had to restore to a new SSD from backups. That took a fair
bit of time. The new drive is twice as large as the old one, but the
restore created a partition that is the same size as the original. I
expected that, but I also expected that I should be able to extend the
partition after the restore to fill the new drive's size. However,
going into Disk Management, it doesn't allow me to fill up that entire
drive. Any idea what's going on here?

Yousuf Khan

John Doe
Mar 23, 2021, 11:48:59 PM

You mean Microsoft Disk Management? Use a real partitioning utility. I got a
free one several years ago, downloaded from Amazon, that works: Partition
Master Technician 13.0 Portable. See if it's still available. If you make
Windows backups (like everybody should), you don't even need to keep it on
your system; just don't re-install it after the next restore.

VanguardLH
Mar 24, 2021, 12:25:51 AM

There are a lot of partition manipulations that the Disk Manager in
Windows won't do. You need to use a 3rd party partition manager. There
are lots of free ones. I use Easeus Partition Master, but there are
lots of others.

You might want to investigate overprovisioning for SSDs. It prolongs
the lifespan of SSDs by giving them more room for remapping bad blocks.
SSDs are self-destructive: they have a maximum number of writes. They
will fail depending on the volume of writes you impinge on the SSD. The
SSD will likely come with a preset of 7% to 10% of its capacity to use
for overprovisioning. You can increase that. A tool might've come with
the drive, or be available from the SSD maker. However, a contiguous
span of unallocated space will increase the overprovisioning space, and
you can use a 3rd party partition manager for that, too. You could
expand the primary partition to occupy all of the unallocated space, or
you could enlarge it just shy of how much unallocated space you want to
leave to increase overprovisioning.
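
As a back-of-the-envelope sketch of that sizing (the 1 TB capacity and the percentages below are invented examples for illustration, not read from any real drive):

```shell
# Sketch only: how much unallocated space to leave for a target total
# overprovisioning level, given an assumed factory preset.
CAPACITY=1000000000000   # advertised capacity in bytes (1 TB example)
FACTORY_PCT=7            # assumed factory (static) overprovisioning
TARGET_PCT=10            # total overprovisioning we'd like
EXTRA_PCT=$((TARGET_PCT - FACTORY_PCT))
LEAVE=$((CAPACITY * EXTRA_PCT / 100))
echo "leave $((LEAVE / 1024 / 1024)) MiB unallocated"
# prints: leave 28610 MiB unallocated
```

In practice you would shrink the last partition by that amount in a partition manager and leave the tail of the drive unallocated.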

Paul
Mar 24, 2021, 2:15:02 AM

It's GPT and you need to find a utility that does a
better job of showing the partitions.

The Microsoft Reserved partition has no recognizable
file system inside, and the information I can find suggests
it is used as a space when something needs to be adjusted. It
is a tiny supply of "slack". But, it might also function as
a "blocker" when Disk Management is at work. And then, not
every utility lists it properly. Some utilities try to "hide"
things like this, and only show data partitions.

Try Linux GDisk or Linux GParted, and see if you can
spot the blocker there. The disktype utility might also work,
but the only prebuilt edition available is the Cygwin one.

disktype.exe /dev/sda

--- /dev/sda
Block device, size 2.729 TiB (3000592982016 bytes)
DOS/MBR partition map
Partition 1: 2.000 TiB (2199023255040 bytes, 4294967295 sectors from 1)
Type 0xEE (EFI GPT protective)
GPT partition map, 128 entries
Disk size 2.729 TiB (3000592982016 bytes, 5860533168 sectors)
Disk GUID EE053214-E191-B343-A670-D3A712F353DB
Partition 1: 512 MiB (536870912 bytes, 1048576 sectors from 2048)
Type EFI System (FAT) (GUID 28732AC1-1FF8-D211-BA4B-00A0C93EC93B)
Partition Name "EFI System Partition"
Partition GUID 0CF3D241-6DA1-764C-AE0F-559E55314B8C
FAT32 file system (hints score 5 of 5)
Volume size 511.0 MiB (535805952 bytes, 130812 clusters of 4 KiB)
Partition 2: 20 GiB (21474836480 bytes, 41943040 sectors from 1050624)
Type Unknown (GUID AF3DC60F-8384-7247-8E79-3D69D8477DE4)
Partition Name "MINT193"
Partition GUID 0647492B-0C78-DC4E-914C-E210AB6FF5A5
Ext3 file system
Volume name "MINT193"
UUID E96B501E-23B5-4F80-A41C-CEE6A5E1D59C (DCE, v4)
Last mounted at "/media/bullwinkle/MINT193"
Volume size 20 GiB (21474836480 bytes, 5242880 blocks of 4 KiB)
Partition 3: 16 MiB (16777216 bytes, 32768 sectors from 123930624) <=== not visible in diskmgmt.msc
Type MS Reserved (GUID 16E3C9E3-5C0B-B84D-817D-F92DF00215AE)
Partition Name "Microsoft reserved partition"
Partition GUID 0C569E59-E917-AC40-B336-E7B2527D77AD
Blank disk/medium
Partition 4: 300.4 GiB (322502360576 bytes, 629887423 sectors from 123963392)
Type Basic Data (GUID A2A0D0EB-E5B9-3344-87C0-68B6B72699C7)
Partition Name "Basic data partition" <=== actually "WIN10"
Partition GUID 65A1A4E6-4F11-7944-874A-B3A515F131DE
NTFS file system
Volume size 300.4 GiB (322502360064 bytes, 629887422 sectors)
Partition 5: 514 MiB (538968064 bytes, 1052672 sectors from 753854464)
Type Unknown (GUID A4BB94DE-D106-404D-A16A-BFD50179D6AC)
Partition Name ""
Partition GUID 99242951-459E-1144-BF88-61517A280CCA <=== recovery partition
NTFS file system
Volume size 514.0 MiB (538967552 bytes, 1052671 sectors)

HTH,
Paul


Jeff Barnett
Mar 24, 2021, 4:01:26 AM

There may be another issue. I'm thinking of Samsung over-provisioning
(or is it called something else?) where about 10% of disk free space is used
by the disk firmware to shuffle blocks in use in order to level wear. If
I wanted to change my SSD, I'd probably need to use Samsung Magician
to first undo that reservation; then I could do my partition management; then
use Samsung again to re-enable the wear leveling. I presume that more
vendors than Samsung implement such a scheme.

This is not my area of expertise and I'm generalizing from my limited
experience using a few Samsung SSDs on my systems. Perhaps someone more
knowledgeable can either pooh-pooh my observation or, if it sounds right,
flesh out what is going on.
--
Jeff Barnett

J. P. Gilliver (John)
Mar 24, 2021, 6:18:23 AM

On Tue, 23 Mar 2021 at 23:25:49, VanguardLH <V...@nguard.LH> wrote (my
responses usually follow points raised):
>Yousuf Khan <bbb...@spammenot.yahoo.com> wrote:
[]
>> drive, so I had to restore to a new SSD from backups. That took a fair
>> bit of time to restore, but the new drive is twice as large as the old
>> one, but it created a partition that is the same size as the original. I
>> expected that, but I also expected that I should be able to extend the
>> partition after the restore to fill the new drive's size. However going
>> into disk management it doesn't allow me to fill up that entire drive.
>> Any idea what's going on here?
>>
>> Yousuf Khan
>
>There are a lot of partition manipulations that the Disk Manager in
>Windows won't do. You need to use a 3rd party partition manager. There
>are lots of free ones. I use Easeus Partition Master, but there are
>lots of others.

(I use that one too. It was the first one I tried and does what I want,
so I haven't tried any others, so can't say if it's better or worse than
any. The UI is similar to the Windows one - but then maybe they all
are.)
>
>You might want to investigate overprovisioning for SSDs. It prolongs
>the lifespan of SSDs by giving them more room for remapping bad blocks.
>SSDs are self-destructive: they have a maximum number of writes. They
>will fail depending on the volume of writes you impinge on the SSD. The
>SSD will likely come with a preset of 7% to 10% of its capacity to use
>for overprovisioning. You can increase that. A tool might've come with
>the drive, or be available from the SSD maker. However, a contiguous
>span of unallocated space will increase the overprovisioning space, and
>you can use a 3rd party partition manager for that, too. You could
>expand the primary partition to occupy all of the unallocated space, or
>you could enlarge it just shy of how much unallocated space you want to
>leave to increase overprovisioning.

How does the firmware (or whatever) in the SSD _know_ how much space
you've left unallocated, if you use any partitioning utility other than
one from the SSD maker (which presumably has some way of "telling" the
firmware)?

If, after some while using an SSD, it has used up some of the slack,
because of some cells having been worn out, does the apparent total size
of the SSD - including unallocated space - appear (either in
manufacturer's own or some third-party partitioning utility) smaller
than when that utility is run on it when nearly new?

If - assuming you _can_ - you reduce the space for overprovisioning to
zero (obviously unwise), will the SSD "brick" either immediately, or
very shortly afterwards (i. e. as soon as another cell fails)?

If, once an SSD _has_ "bricked" [and is one of the ones that goes to
read-only rather than truly bricking], can you - obviously in a dock on
a different machine - change (increase) its overprovisioning allowance
and bring it back to life, at least temporarily?
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

"I'm tired of all this nonsense about beauty being only skin-deep. That's deep
enough. What do you want, an adorable pancreas?" - Jean Kerr

Paul
Mar 24, 2021, 6:39:34 AM

Wear leveling is done in the virtual to physical translation
inside the drive. Sector 1 is not stored in offset 1 of the
flash. Your data is "sprayed" all over the place in there.
If you lose the virtual to physical map inside the SSD, the
data recovery specialist will not be able to "put the
blocks back in order".
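
As a toy illustration of that virtual-to-physical map (the LBA-to-block numbers below are invented; a real flash translation layer is vastly larger and persisted inside the drive):

```shell
# Toy model of the SSD's virtual-to-physical translation: logical
# sector numbers (LBAs) point at arbitrary flash blocks, so reading
# the raw flash in physical order yields scrambled data.
lookup() {
  case "$1" in
    0) echo 17 ;;   # LBA 0 lives in physical block 17
    1) echo 3  ;;   # LBA 1 lives in physical block 3
    2) echo 42 ;;   # LBA 2 lives in physical block 42
  esac
}
for lba in 0 1 2; do
  echo "LBA $lba -> physical block $(lookup $lba)"
done
```

Lose the map, and there is no way to put "block 17, block 3, block 42" back into LBA order from the outside.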

The drive declares a capacity. It's a call in the ATA/ATAPI
protocol. The sizing was settled in a lawsuit long ago, which
penalized a company for attempting to lie about the capacity.
The capacity of a 1TB drive will be some number of
cylinders larger than 1e12 bytes. The size is an odd number,
so some CHS habits of yore continue to work. The size is not
actually a rounded number that customers would enjoy; it's
a number used to keep snotty software happy.

Any spares pool, and spares management for wear leveling,
is behind the scenes and does not influence drive operation.
The spares pool means the physical surface inside the drive,
is somewhat larger than the virtual presentation to the outside
world.

We can Secure Erase the drive. All this does, is remove
memory of what was there previously (Secure Erase being
suitable before selling on the drive).

We can TRIM a drive, and this is an opportunity for the
OS, to deliver a "hint" to the drive, as to what virtual
areas of the 1TB, are not actually in usage by the OS.
If you've removed the partition table from the drive,
then the OS during TRIM, could tell the drive that the
entire surface is unused, then all LBAs are put in the
spares, ready to be used on the next write(s). You might
be able to deliver this news from the ToolKit software,
if the GUI in the OS had no mechanism for it. (Maybe
you can do it from Diskpart, but I haven't checked.)
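
For reference, the usual TRIM entry points look like this (shown here as a dry run so nothing is actually touched; the mount point and drive letter are placeholders you would substitute):

```shell
# Dry-run sketch: print, rather than execute, the common ways to hand
# a drive the TRIM hint. fstrim is Linux (util-linux); "defrag /L"
# asks Windows to retrim a volume. /mountpoint and C: are placeholders.
for cmd in \
  "sudo fstrim -v /mountpoint" \
  "defrag C: /L"
do
  echo "would run: $cmd"
done
```
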

The SMART table gives information about Reallocations,
which are permanently spared out blocks. As the drive
gets older, the controller may mark portions of it as
unusable. But, because there is virtual to physical
translation, as long as there are sufficient blocks
to present a 1TB surface, we can't tell from the outside,
it's in trouble. However, if you have the ToolKit for
the drive installed, it can take a reading every day,
and extrapolate remaining life (using either the
number of writes to cells, or, using the reallocation
total to predict the drive is in trouble). A drive
can die before the warranty period is up, or before the
wear life has expired. SMART allows this to be tracked.

There is a "critical data" storage area, which may
receive a lot more writes than the average cell. Perhaps
it's constructed from SLC cells. If this is damaged, that
can lead to instant drive death, because the drive
has lost its spares table, its map of virtual to
physical and so on. Some drives may have sufficient wear
life, but a failure to record critical data, means they
poop out early. And maybe this isn't covered all that
well from a SMART perspective.

But generally, all corner cases ignored, you just use
SSDs in the same way you'd use an HDD. You don't need to
pamper them. The ToolKit will tell you if your pattern
is abusive, and with any luck, warn you before the drive
takes a dive. But like any device, you should have
backups for any eventuality. Regular hard drives can
die instantly, if the power (like +12V), rises above
+15V or so. So if someone tells me they have a 33TB array
and no backups, all I have to do is warn them that the
ATX PSU is a liability and could, if it chose to, ruin
the entire array (redundancy and all) in one fell swoop.

We had a server at work, providing licensed software to
500 engineers. One day at 2 PM, the controller firmware
in the RAID controller card wrote zeros across the array,
down low, wiping out some critical structure for the file
system. Instantly, 500 engineers had no software. Most went
home for the day :-) Paid, of course, costing the company a
lost-work fortune. While RAIDs are nice and all, they do
have some (rather unfortunate) common-mode failure modes.

A second RAID controller of the same model, did the same
thing to its RAID array. Nobody went home for that one,
and at least then they were thinking it was a firmware
bug in the RAID card.

Summary - No, the SSD has no excuses. It's either ready
for service, or it's not. There are no in-between
states where a partition boundary cannot move.
The ToolKit software each brand provides, will
have rudimentary extrapolation of life-remaining.
As long as some life remains, you can move
partition boundaries or do anything else involving
writes.

Paul

Chris Elvidge
Mar 24, 2021, 6:47:17 AM

Without a current layout diagram it's impossible to say what's wrong.
Is the free space into which you want to expand the partition contiguous
with the partition you want to expand? And is the partition you wish to
expand the boot partition?
See here:
https://answers.microsoft.com/en-us/windows/forum/all/how-to-expand-boot-partition/69767a28-2efb-4a13-9c7b-2462a09bf629
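
That contiguity check can be sketched mechanically; the layout below is invented (loosely modeled on a disk where a recovery partition sits between the Windows partition and the free space, the classic reason Disk Management refuses to extend):

```shell
# Sketch: given a partition list in on-disk order, is the free space
# directly after the partition we want to grow? The layout is made up
# for illustration; on a real disk you'd read it from a partition tool.
awk '$2=="WIN10"{w=$1} $2=="FREE"{f=$1} END{
  if (f == w + 1) print "contiguous: extend should work"
  else            print "not contiguous: a blocker sits in between"
}' <<'EOF'
1 EFI
2 MSR
3 WIN10
4 RECOVERY
5 FREE
EOF
```

With this layout it reports the free space as not contiguous, because the recovery partition is in the way.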

--
Chris Elvidge
England

Paul
Mar 24, 2021, 8:24:39 AM

J. P. Gilliver (John) wrote:

>
> If, after some while using an SSD, it has used up some of the slack,
> because of some cells having been worn out, does the apparent total size
> of the SSD - including unallocated space - appear (either in
> manufacturer's own or some third-party partitioning utility) smaller
> than when that utility is run on it when nearly new?

The declared size of an SSD does not change.

The declared size of an HDD does not change.

What happens under the covers, is not on display.

The reason you cannot arbitrarily move the end of a drive,
is because some structures are up there, which don't appear
in diagrams. This too is a secret.

Any time something under the covers breaks, the
storage device will say "I cannot perform my function,
therefore I will brick". That is preferable to moving
the end of the drive and damaging the backup GPT partition,
the RAID metadata, or the Dynamic Disk declaration.

Paul

Paul
Mar 24, 2021, 8:43:36 AM

One thing you can try.

Boot from your Linux LiveDVD USB stick.

Attempt to mount the partitions on the disk. Then

cat /etc/mtab

Look at the mount points. Are any "ro" for
read-only, instead of "rw" for read-write ?
It's possible to mark a storage device as
read-only, but I've not been able to find
sufficient diagrams of the details. It may
be a flag located next to the VolumeID 32 bit
number in the MBR. The partition headers may
have a similar mechanism, but I got no hints at
all there.

https://linux.die.net/man/8/hdparm

https://www.geeksforgeeks.org/hdparm-command-in-linux-with-examples/

sudo hdparm -I /dev/sda # Dump info

sudo hdparm -r0 /dev/sda # set ReadOnly flag to zero, make drive ReadWrite.
# reboot recommended, as Ripley would say.

Diskpart in Windows likely has a similar function,
but we're not sure it works. The threads I could find
were not conclusive. Otherwise I would have done a Windows one for you.

In any case, the *boot* drive, should not be the
same drive you experiment with. On Windows, maybe
C: is on /dev/sda, whereas /dev/sdb is the "broken"
drive needing modification. And a reboot maybe.
No OS need behave well when it comes to corner conditions.
F5 (refresh) doesn't work at all levels.

Paul

Yousuf Khan
Mar 24, 2021, 8:45:57 AM

Okay, I figured it out, I was just being fooled into thinking it wasn't
working. Due to the fact that the new drive was exactly twice as big as
the previous drive, I thought it was telling me that the current size
was its maximum limit, and that it couldn't add any more of the drive
space. But in actual fact it was telling me that it could add an
additional amount of space that just so happened to be exactly the same
numerically as the existing space. So I got fooled into thinking the
wrong thing. I added the additional space without problem.

On another note, the old drive now has one tiny little bad-sector
hole in it, which I'm thinking the drive can deprovision and carry on
without in the future. Is there something that can allow the drive
electronics to carry out an internal test and remove the bad sectors?

Yousuf Khan

Paul
Mar 24, 2021, 9:31:09 AM

Yousuf Khan wrote:

>
> On an alternate note, the old drive now has one tiny little bad sector
> hole in it, that I'm thinking the drive can deprovision, and carry on
> without in the future. Is there something that can allow the drive
> electronics to carry on an internal test and remove the bad sectors?
>
> Yousuf Khan

Testing burns wear life.

*******

A sector has three states (for this discussion):

1) Error free (in TLC/QLC era, highly unlikely)

2) Errors present, ECC can correct.

3) Errors present, ECC cannot correct. tiny little bad sector.

If (3) were marked with "write, but do immediate read verify",
this would allow evaluating the material in question, after
it was put in the free pool. The "questionable status" should
follow the block around, until it can be ascertained that it
is (1) or (2) again. If it showed up (3) on a retry, it should
be thrown into the old sock drawer. Any "write attempt", is
an excellent time to be checking credentials of the block.

The procedure should be similar to hard drives, economical
in nature, yet not endangering user data. To do walking-ones
or a GALPAT on the flash block, that would be seriously naughty
and pointless. You could burn out the entire block wear life, then
conclude there is nothing wrong with the block :-)

Seagate has a field on their hard drives, called "CurrentPending".
For the longest while, I took that at face value. However,
that field isn't what it appears. It only seems to increment
when the drive is in serious trouble and has run out of spares
at some level. It's unclear whether there is an "honest"
item in the SMART table, keeping track of items like (3) so
a customer can judge how bad things are.

SMART is generally not completely honest anyway. There's some info,
but they are dishonest so that users do not "cherry pick" drives,
and send back the ones that have a tiny blemish when purchased.

On hard drives, at one time it was considered to be OK for a
drive to leave the factory, with 100,000 errored sectors on it.
That's because the yields were bad, and the science could not
keep up. Now, if SMART was completely honest about your drive,
imagine how you'd freak out if you saw "100,000" in some table.
This is why the scheme is intentionally biased so drive devices
look "perfect" when they leave the factory, when we know there
is metadata inside indicating the drive is not perfect. Especially
with TLC or QLC. SSD drives do not leave the factory with
a state of (1) over 100% of the surface. There is lots of (2),
and more (2) the longer the new drive sits on the shelf. That's
why, if you want to bench a modern SSD, you should write it from
end to end first. This removes the degree of errored-ness on
the surface, before you do your read benchmark test. If the drive
was SLC or MLC, I would not be doing this... It would not need it.

With the Corsair Neutron I bought, on first test I was getting 125 to
130 MB/sec on reads. Dreadful. The performance popped up after a refresh.
I still took it back to the store for a refund the next morning, because
(maybe) the manufacturer would like some feedback on what
I think of them.

Paul

Ken Blake
Mar 24, 2021, 10:58:18 AM

It's probably because there's no free space contiguous to the partition
you want to expand. You need to use a third-party partition manager.


--
Ken

J. P. Gilliver (John)
Mar 24, 2021, 12:30:34 PM

On Wed, 24 Mar 2021 at 08:24:36, Paul <nos...@needed.invalid> wrote (my
responses usually follow points raised):
>J. P. Gilliver (John) wrote:
>
>> If, after some while using an SSD, it has used up some of the slack,
>>because of some cells having been worn out, does the apparent total
>>size of the SSD - including unallocated space - appear (either in
>>manufacturer's own or some third-party partitioning utility) smaller
>>than when that utility is run on it when nearly new?
>
>The declared size of an SSD does not change.
>
>The declared size of an HDD does not change.
>
>What happens under the covers, is not on display.

That's what I thought.
>
>The reason you cannot arbitrarily move the end of a drive,
>is because some structures are up there, which don't appear
>in diagrams. This too is a secret.
>
>Any time something under the covers breaks, the
>storage device will say "I cannot perform my function,
>therefore I will brick". That is preferable to moving
>the end of the drive and damaging the backup GPT partition,
>the RAID metadata, or the Dynamic Disk declaration.
>
> Paul

So how come our colleague is telling us we can change the amount of
"overprovisioning", even using one of many partition managers _other_
than one made by the SSD manufacturer? How does the drive firmware (or
whatever) _know_ that we've given it more to play with?
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

It's no good pointing out facts.
- John Samuel (@Puddle575 on Twitter), 2020-3-7

Paul
Mar 24, 2021, 12:44:21 PM

J. P. Gilliver (John) wrote:

> So how come our colleague is telling us we can change the amount of
> "overprovisioning", even using one of many partition managers _other_
> than one made by the SSD manufacturer? How does the drive firmware (or
> whatever) _know_ that we've given it more to play with?

Once you've set the size of the device, it's
not a good idea to change it. That's all I can
tell you.

If you don't want to *use* the whole device, that's your business.
I've set up SSDs this way before. As you write C: and materials
"recirculate" as part of wear leveling, the virtually unused
portion continues to float in the free pool, offering more
opportunities for wear leveling or consolidation. You don't
have to do anything. You could make a D: partition, keep it empty,
issue a "TRIM" command, to leave no uncertainty as to what your
intention is. Then delete D: once the "signaling" step is complete.

+-----+-----------------+--------------------+
| MBR | C: NTFS | <unallocated> |
+-----+-----------------+--------------------+
\__ This much extra__/
in free pool

Paul

VanguardLH
Mar 24, 2021, 7:15:52 PM

"J. P. Gilliver (John)" <G6...@255soft.uk> wrote:

> How does the firmware (or whatever) in the SSD _know_ how much space
> you've left unallocated, if you use any partitioning utility other
> than one from the SSD maker (which presumably has some way of
> "telling" the firmware)?

Changing the amount of unallocated space on the SSD is how the tools
from the SSD makers work, too. You can use their tool, or you can use a
partitioning tool.

> If, after some while using an SSD, it has used up some of the slack,
> because of some cells having been worn out, does the apparent total
> size of the SSD - including unallocated space - appear (either in
> manufacturer's own or some third-party partitioning utility) smaller
> than when that utility is run on it when nearly new?

The amount of overprovisioning space set at the factory is never
available for you to change. If they set 7% space for overprovisioning,
you'll never be able to allocate that space to any partition. That
space is not visible, fixed, and set at the factory. For example, they
might sell a 128GB SSD, but usable capacity is only 100GB. This is the
static overprovisioning set at the factory.

From the usable capacity of the drive, unallocated space is used for
dynamic overprovisioning. Typically you find that you cannot use all
unallocated space for a partition. There's some that cannot be
partitioned; however, by making partition(s) smaller, there is more
unallocated space available for use by dynamic overprovisioning. It's
dynamic because it changes with the amount of write delta (stored data
changes). The unallocated space is a reserve. Not all of it may get
used.

Individual cells don't get remapped. Blocks of cells get remapped. If
you were to reduce the OP using unallocated space, the previously marked
bad blocks would have to get re-remapped to blocks within the partition.
Those bad blocks are still marked as bad, so remapping has to be
elsewhere. Might you lose information in the blocks in the dynamic OP
space when you reduce it? That I don't know. Partition managers don't
know about how the content of unallocated space is used.

The SSD makers are so terse as to be sometimes unusably vague in their
responses. Samsung said "Over Provisioning can only be performed on the
last accessible partition." What does that mean? Unallocated space
must be located after the last partition? Well, although by accident,
that's how I (and Samsung Magician) have done it. The SSD shows up with
1 partition consuming all usable capacity, and I or Samsung Magician
ended up shrinking the partition to make room for unallocated space at
the end. However, SSD makers seem to be alchemists or witches: once
they decide on their magic brew of ingredients, they keep it a secret.

I have increased OP using Samsung Magician, and decreased it, too. All
that it did was change the size of the unallocated space by shrinking or
enlarging the last partition, so the unallocated space change was after
the last partition. When shrinking the unallocated space, it was not
apparent in Samsung Magician whether any bad cell blocks that got remapped
to unallocated space got re-remapped into the static OP space,
which would reduce endurance. Since the firmware had marked a block as
bad, it still gets remapped into static or dynamic OP. If unallocated
space were reduced to zero (no dynamic OP), static OP gets used for the
remappings. However, I haven't found anything that discusses what
happens to remappings in dynamic OP when the unallocated space is shrunk.
Samsung Magician's OP adjustment looks to be nothing more than a limited
partition manager to shrink or enlarge the last partition, which is the
same you could do using a partition manager. I suspect any remap
targets in the dynamic OP do not get written into the static OP, so you
could end up with data corruption. A bad block got mapped into dynamic
OP, you reduced the size of dynamic OP which means some of those
mappings there are gone, and they are not written into static OP. Maybe
Samsung's Magician is smart enough to remap the dynamic OP remaps into
static OP, but I don't see that happening, though it could keep that
invisible to the user. Only if I had a huge number of remappings stored
in dynamic OP and then shrunk the unallocated space might I see the
extra time spent to copy those remappings into static OP when compared
to using a partition tool to just enlarge the last partition.

Since the information doesn't seem available, I err on the side of
caution: I only reduce dynamic OP immediately after enlarging it should
I decide the extra OP consumed a bit more than I want to lose in
capacity in the last partition. Once I set dynamic OP and have used the
computer for a while, I don't reduce dynamic OP. I have yet to find out
what happens to the remappings in dynamic OP when it is reduced. If I
later need more space in the partition, I get a bigger drive, clone to
it, and decide on dynamic OP at that time. With a bigger drive, I
probably will reduce the percentage of dynamic OP since it would be a
huge waste of space. For a drive clone, the static or dynamic
remappings from the old drive aren't copied to the new drive. The new
drive will have its own independent remappings, and the reads during the
clone are going to copy from the remaps on the old drive into the
new drive's partition(s). Old remappings vaporize during the copy to a
different drive.

Unless reducing the dynamic OP size (unallocated space) is done very
early after creating it to reduce the chance of new remappings happening
between defining the unallocated space and then reducing its size, I
would be leery of reducing unallocated space on an SSD after lots of use
for a long time. Cells will go bad in SSDs, which is why remapping is
needed. I don't see any tools that, when dynamic OP gets reduced, move
the remappings stored there into static OP.
You can decide not to use dynamic OP at all, and hope the
factory-set static OP works okay for you for however long you own the
SSD. You can decide to sacrifice some capacity to define dynamic OP,
but I would recommend only creating it, perhaps later enlarging it, but
not to shrink it. I just can't find info on what happens to the remaps
in dynamic OP when it is shrunk.

Overprovisioning, whether fixed (static, set by factory) or dynamic
(unallocated space within the usable space after static OP) always
reduces capacity of the drive. The reward is reduced write
amplification, increased performance (but not better than factory-time
performance), and endurance. You trade some of one for the other. It's
like insurance: the more you buy, the less money you have now, but you
hope you won't be spending a lot more later.

> If - assuming you _can_ - you reduce the space for overprovisioning to
> zero (obviously unwise), will the SSD "brick" either immediately, or
> very shortly afterwards (i. e. as soon as another cell fails)?

Since the cell block is still marked as bad, it still needs to get
remapped. With no dynamic OP, static OP gets used. If you create
dynamic OP (unallocated space) where some remaps could get stored, what
happens to the remaps there when you shrink the dynamic OP? Sure, the
bad blocks are still marked bad, so future writes will remap the bad
block into static OP, but what happened to the data in the remaps in dynamic
OP when it went away? Don't know. I don't see any SSD tool or
partition manager that will write the remaps from dynamic OP into static OP
before reducing dynamic OP. After defining dynamic OP, reducing it
could cause data loss.

If you just must reduce dynamic OP because you need that unallocated
space to get allocated into a partition, your real need is a bigger
drive. When you clone (copy) the old SSD to a new SSD, none of the
remaps in the old SSD carry to the new SSD. When you get the new SSD,
you could change the size (percentage) of unallocated space to change
the size of dynamic OP, but I would do that immediately after the clone
(or restore from backup image). I'd want to reduce the unallocated
space on the new bigger SSD as soon as possible, and might even use a
bootable partition manager to do that before the OS loads the first
time. I cannot find what happens to the remaps in dynamic OP when it
gets reduced.

> If, once an SSD _has_ "bricked" [and is one of the ones that goes to
> read-only rather than truly bricking], can you - obviously in a dock on
> a different machine - change (increase) its overprovisioning allowance
> and bring it back to life, at least temporarily?

Never tested that. Usually I replace drives before they run out of free
space (within a partition) with bigger drives, or I figure out how to
move data off the old drive to make for more free space. If I had an
SSD that catastrophically failed into read-only mode, I'd get a new (and
probably bigger) SSD and clone from old to new, then discard the old.

Besides my desire to up capacity with a new drive when an old drive gets
over around 80% full, and if I don't want to move files off of it to get
back a huge chunk to become free space, I know SSDs are
self-destructive, so I expect them to fail unless I replace them beforehand.
From my readings, and although they only give a 1-year warranty, most
SSD makers seem to plan on an MTBF of 10 years, but that's under a write
volume "typical" of consumer use (they have some spec that simulates
typical write volume, but I've not seen those docs). Under business or
server use, MTBF is expected to be much lower. I doubt that I would
keep any SSD for more than 5 years in my personal computers. I up the
dynamic OP to add insurance, because I size drives far beyond expected
usage. Doubling is usually my minimum upsize scale.

I wouldn't plan on getting my SSD anywhere near its maximum write cycle
count that would read-only brick it. SMART does not report the number
of write cycles, but Samsung's Magician tool does. It must request info
from firmware that is not part of the SMART table. My current 1 TB NVMe
m.2 SSD is about 25% full after a year's use of my latest build.
Consumption won't change as much in the future (i.e., it pretty much
flattened after a few months), but if it gets to 80% would then be when
I consider getting another matching NVMe m.2 SSD, or replace the old 1
TB one with 2TB, or larger, and cloning would erase all those old remaps
in the old drive (the new drive won't have those). Based on my past
experience and usage, I expect my current build to last another 7 years
until the itch gets too unbearable to do a new build. 20% got used
for dynamic OP just as insurance to get an 8-year lifespan, but I doubt
I will ever get close to bricking the SSD.

I could probably just use the 10% minimum for static OP, but I'm willing
to spend some capacity as insurance. More than for endurance, I added
dynamic OP to keep up the performance of the SSD. After a year, or
more, of use, lots of users have reported their SSDs don't perform like
when new. The NVMe m.2 SSD is a 5 times faster (sequential, and more
than 4 times for random) for both reads and writes than my old SATA SSD
drive, and I don't want to lose that joy of speed that I felt at the
start.

I might be getting older and slower, but not something I want for my
computer hardware as it ages.

VanguardLH
Mar 24, 2021, 8:00:06 PM

"J. P. Gilliver (John)" <G6...@255soft.uk> wrote:

> So how come our colleague is telling us we can change the amount of
> "overprovisioning", even using one of many partition managers _other_
> than one made by the SSD manufacturer? How does the drive firmware
> (or whatever) _know_ that we've given it more to play with?

Static OP: What the factory defines. Fixed. The OS, software, and you
have no access. Not part of usable space.

Dynamic OP: You define unallocated space on the drive. You can shrink a
partition to make more unallocated space, or expand a
partition to make less unallocated space (but might cause
data loss for remaps stored within the dynamic OP). (*)

(*) I've not found info on what happens to remaps stored in the dynamic
OP when the unallocated space is reduced (and the reduction covers
the sectors for the remaps).
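
The two layers stack, which a quick bit of arithmetic makes concrete (all figures are invented examples; the 128 GB drive exposing 100 GB echoes the example earlier in the thread):

```shell
# Sketch: usable space after both overprovisioning layers. A "128 GB"
# drive with 28 GB of static (factory) OP exposes 100 GB to the OS;
# leaving 10 GB unallocated as dynamic OP leaves 90 GB partitionable.
# All figures are examples, not read from any real drive.
RAW_GB=128        # physical flash
STATIC_OP_GB=28   # factory reserve, never visible to the OS
DYNAMIC_OP_GB=10  # unallocated space you choose to leave
EXPOSED=$((RAW_GB - STATIC_OP_GB))
PARTITIONABLE=$((EXPOSED - DYNAMIC_OP_GB))
echo "exposed: ${EXPOSED} GB, partitionable: ${PARTITIONABLE} GB"
# prints: exposed: 100 GB, partitionable: 90 GB
```
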