All serious full disk encryption schemes I have looked into use a static password for authentication. For example, TrueCrypt supports two-factor authentication with keyfiles, but not for system partitions. It's possible to use a Yubikey in static mode as a second factor with TrueCrypt's full-disk mode, but in both cases the second factor is really just a part of the static password that the user chooses not to memorize.
Clearly full-disk encryption requires authenticating users before the OS boots, so interactive challenge-response protocols involving a remote host won't work. But I don't see any insurmountable obstacles to implementing a secure, pre-boot one-time password mechanism.
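To make this concrete, one shape such a pre-boot mechanism could take is a counter-based HOTP scheme (RFC 4226): no network connection is needed at unlock time, only a shared secret and counter held by both the token and the pre-boot environment (or sealed in a TPM). The following is a hypothetical sketch of the OTP computation itself, not any existing FDE product's implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (HMAC-SHA-1, dynamic truncation)."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The pre-boot loader and the token would advance the counter in lockstep.
# The secret shown here is the RFC 4226 Appendix D test value, not a real key.
print(hotp(b"12345678901234567890", counter=0))
```

Note that the OTP could only gate access to the key material; the disk encryption key itself must still be derivable from something stored on the machine or the token, which is part of why a second factor in FDE tends to collapse into a static secret.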
Why is support for strong multi-factor authentication not more common in full disk schemes? Are there any viable implementations? Are static passwords considered good enough because an adversary capable of defeating them in a pre-boot context is probably also capable of recovering the encryption key (not authentication key) after any form of authentication, regardless of how many factors?
I expect the reason you mostly see static passwords with FDE products is a business decision. It is a little easier to develop a system that supports multiple types of static token than one that supports multiple types of challenge/response, and since two-factor is typically positioned as an optional extra by FDE vendors, having it support multiple brands of token is good for sales.
While BitLocker has a recovery key capability that is not multifactor, proper key management (rotation - e.g., through Microsoft BitLocker Administration and Monitoring - MBAM) can effectively render this point moot.
Azure Disk Storage Server-Side Encryption (also referred to as encryption-at-rest or Azure Storage encryption) is always enabled and automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters. When configured with a Disk Encryption Set (DES), it supports customer-managed keys as well. It doesn't encrypt temp disks or disk caches. For full details, see Server-side encryption of Azure Disk Storage.
Encryption at host is a Virtual Machine option that enhances Azure Disk Storage Server-Side Encryption to ensure that all temp disks and disk caches are encrypted at rest and flow encrypted to the Storage clusters. For full details, see Encryption at host - End-to-end encryption for your VM data.
Azure Disk Encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the DM-Crypt feature of Linux or the BitLocker feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets, with the option to encrypt with a key encryption key (KEK). For full details, see Azure Disk Encryption for Linux VMs or Azure Disk Encryption for Windows VMs.
Confidential disk encryption binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state are always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently this is only available for the OS disk. Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see DCasv5 and ECasv5 series confidential VMs.
Encryption is part of a layered approach to security and should be used with other recommendations to secure Virtual Machines and their disks. For full details, see Security recommendations for virtual machines in Azure and Restrict import/export access to managed disks.
Persistent Disk performance scales with the size of the disk and with the number of vCPUs on your VM instance. Choose from the range of disk performance options that fit your business goals, and only pay for the storage you use.
Automatically encrypt your data before it travels outside of your instance to Persistent Disk storage. Each Persistent Disk remains encrypted with system-defined keys or with customer-supplied keys. Google distributes Persistent Disk data across multiple physical disks for redundancy and durability. When a disk is deleted, the keys are discarded, rendering the data irretrievable.
Protect your data with cross-zone synchronous replication, cross-region asynchronous replication, disk snapshots, and disk clones to ensure that data is recoverable when and where you need it. Replicating data to multiple points of presence gives your workload higher resilience and allows you to implement a multi-zone or multi-region business continuity strategy.
If you write a 1 to a clean disk and then overwrite that 1 with a zero, the resulting "zero" will be slightly less "zero" than a zero written to untouched space on a clean disk, and less "zero" still than a zero written over another zero. This residual bias is the basis of the data-remanence argument for multiple wipe passes.
For your porn stash that you don't want Mom to find: delete the file, then overwrite all free space with zeros (sdelete will do the job). To my knowledge, no one has EVER demonstrated the ability to recover data from a normal hard disk after an overwrite of any kind. The theoretical possibility is there, but no one has shown it can be done. Even if it can be, it will be monstrously expensive and slow, and probably can't recover all the desired data.
If it's worth millions of dollars, or if people are going to die if the info is revealed, take the drive apart and sand-blast the magnetic media off of the disk surface (don't forget proper air filtering - some of that stuff might be nasty). Congratulations - the data can't be recovered. If you happen to have access to a foundry that does aluminum, you could always toss the platters into the next batch (the platters are often aluminum with magnetic oxide coatings). By melting the platters, you again free up the magnetic particles and let them float around. As a bonus, aluminum is usually melted in electric arc furnaces, which will surely play hob with the magnetic fields even before they slag the platters down.
Agree with the above answer; it is mostly paranoia. If you are a home user, then a single-pass low-level format will do the trick. There are many theories about how effective multiple wipes are (some even go as far as recommending 35 wipes!), but generally a one-pass wipe is good enough. Destroying the disk by bending it, breaking the platters (using a hammer), or drilling holes through it is a good way to safeguard your personal data, but it depends on whether you want to use the disk again. Also, if you are disposing of your old machine by resale, without a disk you may see up to a 40% reduction in value (depending on the machine).
Due to Privacy laws these days, organisations are paranoid about having their information leak into the open, since they can face litigation and fines. That is contributing to the sensitivity around disk wiping standards.
For modern hard drives, one pass is sufficient to destroy their data. Doing anything more (from 2 passes all the way up to the mythical 35 passes) is an urban legend and gives a wasteful false sense of security. I have not seen any evidence of data being recovered after a single-pass wipe. See this article for more detail: -explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/
Up to Yosemite I regularly used Disk Utility with Erase > Security Options... > "writes a single pass of zeros over the entire disk" as a simple method to check that a disk is fully safe before installing a new system or backup on it.
What other options are there to really, fully erase a disk, whatever its technology, the purpose being to verify that one can write everywhere on the disk without any bad surprises later?
You don't need to do a secure erase of an SSD because a standard erase is already more than enough to secure your data. The reason you needed multiple passes, or even the DoD 7-pass secure erase, was that with traditional hard drives (HDDs) the data was stored on magnetic platters, which left a residual magnetic imprint even when wiped. This is how COTS (Commercial Off the Shelf) utilities like Disk Drill are able to reconstruct a drive. This is not the case with an SSD; nothing is magnetized.
We can also create redundancy with the drives themselves; this would be RAID, or a Redundant Array of Independent Disks. The redundancy in RAID comes from spreading data across multiple drives within a single array, so that some or even all of the data is stored more than once. That way, if you lose one of those physical drives, the missing data can be rebuilt from the pieces stored on the other drives in the array.
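As a toy illustration of how an array can rebuild a lost drive, here is a parity-based (RAID 5 style) sketch in Python. Real controllers stripe data and rotate parity across the drives; this only shows the XOR arithmetic that makes reconstruction possible:

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte strings together (RAID parity arithmetic)."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# Three "drives" hold data stripes; a fourth holds their XOR parity.
stripe_a, stripe_b, stripe_c = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(stripe_a, stripe_b, stripe_c)

# Simulate losing drive B: XOR the surviving stripes with the parity
# and the missing stripe falls out.
rebuilt_b = xor_blocks(stripe_a, stripe_c, parity)
print(rebuilt_b == stripe_b)
```

The same property holds no matter which single stripe is lost, which is why a RAID 5 array survives any one drive failure.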
This might be something like /dev/sdb or /dev/hdb (but not like /dev/sdb1, that's a partition). You can use sudo fdisk -l to list all connected storage devices, and find your external hard drive there.
This will overwrite the whole disk with zeros and is considerably faster than generating gigabytes of random data. Like all the other tools this won't take care of blocks that were mapped out for whatever reason (write errors, reserved, etc.), but it's highly unlikely your buyer will have the tools and the knowledge to recover anything from those blocks.
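The overwrite-then-verify idea can be demonstrated against a scratch file standing in for the device (pointing this at a real /dev/sdb would of course destroy the disk's contents); a minimal Python sketch:

```python
import os
import tempfile

BLOCK_SIZE = 1024 * 1024  # 1 MiB scratch region standing in for the device

def zero_wipe(path: str, size: int) -> bool:
    """Overwrite `size` bytes of `path` with zeros in place, then verify."""
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
    with open(path, "rb") as f:
        return f.read(size) == b"\x00" * size

# A temp file filled with random "old data" plays the role of the disk.
fd, scratch = tempfile.mkstemp()
os.close(fd)
with open(scratch, "wb") as f:
    f.write(os.urandom(BLOCK_SIZE))

print(zero_wipe(scratch, BLOCK_SIZE))
os.remove(scratch)
```

The read-back step is the part worth keeping: it confirms every sector you can address was actually rewritten, which is exactly what it cannot confirm for blocks the drive has remapped out of the addressable space.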
Darik's Boot and Nuke (commonly known as DBAN) [...] is designed to securely erase a hard disk until data is permanently removed and no longer recoverable, which is achieved by overwriting the data with random numbers generated by Mersenne twister or ISAAC (a PRNG). The Gutmann method, Quick Erase, DoD Short (3 passes), and DOD 5220.22-M (7 passes) are also included as options to handle Data remanence.
DBAN can be booted from a floppy disk, CD, DVD, or USB flash drive and it is based on Linux. It supports PATA (IDE), SCSI and SATA hard drives. DBAN can be configured to automatically wipe every hard disk that it sees on a system, making it very useful for unattended data destruction scenarios. DBAN exists for Intel x86 and PowerPC systems.