Process for Secure Wipe of SSD Drives

Eric B
Secure Wipe SSD Drive

I've seen a lot of things on the web and searched here. I just want something succinct and usable.

I've stored up about three dozen SSD's and need to securely wipe them.

Does this plan sound good?  Additions?

Take one of the drives, initialize and format it
Activate Bitlocker on the drive
Fill up drive with files from \windows\system32\*.exe
Dupe this drive over the remaining drives of that same size to overwrite all sectors (hopefully)
          (This way individual sectors won't still have any data)
Use Parted Magic or Crucial's SSD software Erase Drive to low level initialize the disk
Dr. Klahn, Principal Software Engineer

There is no point in putting data on the drives before doing a security erase.  Anything on the drive will be overwritten, possibly multiple times, when a security erase is done.  It also encourages plaintext leakage (see attachment below.)

There are two ways to wipe drives, one by writing data over them and one by using the Security Erase feature built into the drives.  When overwriting data the best approach is to write random data several times, as in the DOD erasure standard.  However, it takes time to overwrite a drive several times, and every time an SSD is written to its remaining lifetime is diminished.

If the application does not require military-level erasure, but will accept something that cannot be recovered except by a determined government agency, then look into the ATA Secure Erase command instead.

[attachment below from recent correspondence on this topic]

This page gives perhaps the best explanation I've seen:

Note that (see the section on hdparm) in some cases it is
possible to erase a drive exceedingly quickly, in just a
couple of minutes.  This is done by making the drive forget
its encryption key.

Once the key is forgotten and it has generated a new one,
there is no way to get the data off that disk except by
brute-force attempts to crack the key.  This is not
practical within a human lifetime.  However, a determined
attack by a professional cryptographer could crack the key
in less time if it were known ahead of time that two disk
blocks had the same contents due to "plaintext leakage."

Whether this method of erasure meets your client requirements
is something you would have to determine.  If your contract
states "DOD standard / enhanced overwrite erasure" then only
overwriting to DOD standard would fill the bill.

Also note that hdparm can read out the drive's estimate of
how long it takes to do a security erase.

Here is another page on how to do a security erase using
linux hdparm:

Here is a link to the UCSD bootable download to do security erase:

Personally I would use a bootable linux CD and execute the hdparm
commands to (a) find out if the drive supports secure erase and (b)
find out how long it will take before actually (c) erasing the
drive, but you pays your money and you takes your choice.
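That (a)/(b)/(c) hdparm sequence can be sketched roughly as follows; `/dev/sdX` and the password `p` are placeholders, and the commands assume a live Linux environment where the drive is not "frozen" by the BIOS:

```shell
# sketch of the (a)/(b)/(c) sequence; /dev/sdX and password "p" are placeholders
secure_erase() {
    dev="$1"
    # (a) check that the drive supports Security Erase and is not "frozen"
    #     (the Security section of the -I output shows both)
    hdparm -I "$dev" | grep -i -A8 '^Security:'
    # (b) the same section reports the drive's own time estimate,
    #     e.g. "2min for SECURITY ERASE UNIT"
    # (c) set a throwaway password, then issue the erase
    hdparm --user-master u --security-set-pass p "$dev"
    hdparm --user-master u --security-erase p "$dev"
}

# secure_erase /dev/sdX   # uncomment only once the device name is verified
```

The erase call is left commented out because it is destructive and device names vary.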
Eric B, IT specialist


My question is specifically for solid state drives.

It appears your answer is for standard drives.
Dr. Klahn, Principal Software Engineer

In this respect SSDs and hard drives are identical and can be treated identically.

The best way to make sure SSDs are secure is to encrypt them all with BitLocker from the start.  When you're done with a drive, just delete the partition and you no longer have to worry about it.

If you hadn't already encrypted with BitLocker, then you would need to do multiple wipes because SSD drives aren't guaranteed to be erased fully with a single pass.

The quickest way is to smash them to bits, if you don't want them fully reused.  You need to make sure the actual silicon chips are destroyed and not just the plastic packaging.
Eric BIT specialist


OK, so you are saying my method should be good? Fill the drive, activate bitlocker, then erase the partition?

I don't understand what you're saying about multiple wipes.  With magnetic drives there was some shadowing of the old data which could be retrieved by extreme methods, but I didn't think this was true with solid state.

The only issue I could imagine would be that we cannot dictate which sector each write goes to, so it is challenging to know that we have overwritten every sector.  Therefore I decided just to fill it all the way up.

way to go. but do not do this with files or you'll have problems with reserved space and shadow copies from the filesystem structure.

the simplest is using dd : write all zeroes, then random bytes over the whole thing.

truthfully, recovering data is much harder than it seems, so simply zeroing everything should be more than enough. if you have reason to believe a resourceful organisation may want that data, then i would assume money is not an issue, so open the drive and drill through or melt the memory chips
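A minimal sketch of that dd approach, wrapped in a function; the device argument is a placeholder and both passes are destructive, so the call at the bottom is left commented out:

```shell
# zero pass, then a random pass, over an entire block device (e.g. /dev/sdc)
wipe_disk() {
    dev="$1"
    # size in bytes: blockdev for a real device, stat as a fallback
    size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")
    mb=$(( size / 1048576 ))
    # pass 1: all zeroes
    dd if=/dev/zero of="$dev" bs=1M count="$mb" conv=fsync 2>/dev/null
    # pass 2: random bytes over the whole thing
    dd if=/dev/urandom of="$dev" bs=1M count="$mb" conv=fsync 2>/dev/null
}

# wipe_disk /dev/sdc   # uncomment only once the device name is verified
```

On a raw device dd would also stop on its own at end-of-device; computing the size just makes the sketch explicit.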
Top Expert 2013

i have read that formatting makes them unrecoverable, but did not test it yet
Not sure where you may have read that, but that's wrong.

With Trim commands and garbage collection, a single pass may not wipe all the data correctly.  You can use the manufacturer utilities to more properly erase the SSD.  Here are some links to get you started.
Intel Solid State Toolbox
Corsair SSD Toolbox
SanDisk SSD Toolbox
Samsung Magician Software
OCZ Toolbox

Really, the best thing to do is to encrypt the SSD to start with.  Then your garbage-collected blocks will be encrypted.  You just have to delete the partition and you won't have to worry about the data being recovered.
It's probably worth doing a cost-benefit analysis of destruction/replacement vs. time spent erasing.

With the low cost of consumer-quality drives, replacement is obviously the way to go, as time is too expensive.
With enterprise-quality drives it may still be viable to erase and reuse.
for pattern in grg elm khgyzse ; do yes "$pattern" | dd of=/dev/sdc bs=1M ; done

the above performs a 3-pass, variable-pattern-length overwrite of the whole drive (yes repeats each pattern until dd hits the end of the device; next to military grade except for the pattern lengths and textual content) in a one-liner, and should run within minutes on an SSD.

i'm unsure buying a new drive is more "cost-effective" even in terms of time consumption.

let's be reasonable : most companies are easy enough to hack that an attacker has more interesting targets than drives that were erased "more" securely. expensive firewalls, blindly encrypting everything and using state-of-the-art, mostly uselessly complex techniques for erasing drives is little more than a childish view of security and a way to make people feel safe while ignoring the obvious.
Eric B, IT specialist
From what I've read, multiple pass wipes can destroy an SSD.
I will just stick with my process.
the above was merely an answer to the above post which suggested to throw the drive out for economical reasons.
which i believe is totally crazy both economically and ecologically.

low-end drives will supposedly die after about 10k writes.
the cheapest ones are often defective enough that some cells die after a few hundred,
which they compensate for by having a number of spare cells.
you won't kill the drive, but you will reduce its lifetime significantly if you do this often.

zeroing is your best bet if you do not need military grade.
filling the drive up is more or less equivalent to a single-pass wipe ( including the reduced lifetime ), except that you'll miss some parts of the drive and probably leave part of the data easily recoverable.
My Comment advocated a Cost Benefit analysis, and of course one of the valid options is to consider replacing a drive if the cost of re-use (including time spent wiping the drive) is higher than the cost of replacement (including device destruction).

Consumer-quality drives probably fall into the replacement category, especially as wiping may reduce their life significantly.

Enterprise-quality drives probably fall into the reuse category.

YMMV - i was merely pointing out that there is a cost to someone's time, and replacement may be a viable option, and may become more so as the cost of Enterprise drives continues to decline.
From what I've read, multiple pass wipes can destroy an SSD.
You'd need a lot of passes to destroy a SSD.

To save time, the trick to getting multiple-pass wipes onto the SSD would be to fill it to 90%-95% with 0s, then do your multiple passes on the remaining 5%-10% to get trim to swap out the "unused" extra cells.  You don't really need that many passes.  Then repeat the fill with 1s, then random data.
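That fill-then-hammer idea could be sketched roughly like this; the 95% split, the pass count, and the device name are all placeholders, so the call is left commented out:

```shell
# bulk-fill most of the device with zeroes, then repeatedly overwrite the
# tail so trim/garbage collection cycles the spare cells through
partial_wipe() {
    dev="$1"
    size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")
    total_mb=$(( size / 1048576 ))
    fill_mb=$(( total_mb * 95 / 100 ))      # ~95% bulk fill
    dd if=/dev/zero of="$dev" bs=1M count="$fill_mb" conv=notrunc 2>/dev/null
    # hammer the remaining ~5% a few times with random data
    for pass in 1 2 3; do
        dd if=/dev/urandom of="$dev" bs=1M seek="$fill_mb" \
           count=$(( total_mb - fill_mb )) conv=notrunc 2>/dev/null
    done
}

# partial_wipe /dev/sdX   # placeholder device name
```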

Start encrypting your SSD and you don't have to worry about wiping.  Just delete the partition and start over.

the above was merely an answer to the above post which suggested to throw the drive out for economical reasons.
which i believe is totally crazy both economically and ecologically.

Nobody is throwing out new drives.  Most people just reuse them until the actual cost of using them and their age makes it economically unfeasible to keep.

Typically, 3-5 years is how long you might keep a spinning disk around, depending on the capacity you purchased.  By that time, you need to replace the disk due to age.  Hopefully any data is backed up.  Even with an SSD, you would want to copy that data off to another SSD by 3-5 years.  If you're keeping a disk drive longer than 5 years, you're playing Russian roulette with that data.  There's a steady decline in the number of total drives that survive each year.

You don't keep the old 5-1/4" 2GB disk drive around for 20 years, even if you could spin it up.  They wouldn't be economical to run.  That data fit into a USB stick even just 10 years ago.
agreed ( mostly ) :

you change drives because they survived long enough to exceed their usefulness, and the speed, size and power consumption of your whole 20-disk raid array makes running it pointless compared to a couple of usb keys or more recent drives.

you change drives because the surrounding hardware is falling apart and there is little point in using them in newer cases, given the previous will soon be true ( this is by far the most frequent scenario. i'd say after around 10 years rather than 3-4 if you buy adequate drives for your workload )

you do not drill through a drive and grind it in your kitchen sink because you are too lazy to write a single one-liner or are afraid to reduce its lifetime by most likely a few weeks or possibly months depending on the workload. a single-pass zero wipe typically represents a few minutes of the lifetime of a drive in a home computer or windows server.

@gerald : no hard feelings, and i do believe it is worth pointing out such options in some cases. i just do not believe this situation justifies it in any way. give the drive to your kid and he'll happily copy a bunch of films and series to share with his school friends, resulting in a multiple-pass wipe within days.

actually, since moore's law does not apply to disks any more, many vendors are adding planned obsolescence. that would also justify changing drives early or even preventively, given that the obsolescence occurs after a number of spins, so a bunch of drives bought at the same time will die roughly at the same time : last a few years, die within weeks. blame netapp and seagate on that one.
Top Expert 2013

here they use Parted Magic for it - nearly free :
i'd say after around 10 years rather than 3-4 if you buy adequate drives for your workload

10 years only works for a backup disk that you're not using daily.  If you're heavily using a system, you should expect much shorter lifetimes, although newer devices last longer than they used to.

In the previous decade, any server disks that were running 24/7 would have a very high probability of failure if you turned off the server and left it off for more than a day.  While it can run longer than 3-5 years, any sysadmin that did not plan for replacements within that time frame is incompetent.  I'm not saying cheap management would approve or allow a replacement, just that the sysadmin should have planned for it and given them reasons why they should replace it.  The onus then goes to management to decide.
i won't fight over this, but i have set up many platforms that prove otherwise. hitachi disks are by far my best experience so far, and i have experience with lots of very heavily used disk arrays.

My belief is you should set up platforms so that either the likely lifespan of your disks, including a reasonable number of spares, is around ten years; or disk sizes seldom matter; or you have future plans that involve reusing older drives.
By the time you reach the 5 year mark, it frequently becomes more cost effective to get a new larger capacity system to replace the current system instead of constantly purchasing and swapping out replacement disks and parts.  You are likely not paying for a service contract and the equipment is getting older.  Newer systems tend to have better performance or capabilities or features.  You'd also be using less electricity.

While I have dealt with systems older than 5 years, it's a constant struggle and a waste of time finding replacements and dealing with old software needed to manage the hardware.  You should also replace it so you don't have issues with being hacked.  If your cost-benefit analysis neglects the time spent on managing the system, then you're not doing it correctly.

Sure, you can run disk systems all the way to 10 years, but it's not an ideal way to maintain reliability and it doesn't actually cost less.  When I've run systems to 10 years it's usually through attrition, with newer systems replacing old ones as they break down, and usually done in a rotational cycle, with anything over 5 years being used as working spares and extra backups rather than heavy production.  You'd generally replace 20%-25% of your systems each year.  That's partly how they last 10 years: half of them had already failed and I'd been pulling the "good" disks for replacements to keep the less-worn systems going.  If that's not your experience, then your systems were not that heavily used, or your management was too cheap to consider the productivity of their employees.
Eric B, IT specialist


How about if I take a 500GB SSD out of a server and want to use it in a workstation or laptop?  Or donate it to a school?  Or recycle it?

Here's the process I'm planning to use

Boot Parted Magic

Attach SSD via USB

Launch Erase Disk
Block wiping - write zeroes  (uses UNIX dd command)
Took 5 minutes to test on a 60GB Intel SSD via USB 3.0, with odd pauses during the process

Then Erase Disk
Secure erase - ATA devices (voltage spike to wipe all)
If you're reusing it for the current work, then you really don't need to fully erase it.  If you're donating it, then one or two passes of 0s should be enough.  No need for secure erase unless you kept important data on it.

If you had important data on it, then I would reuse it internally for a bit, but this time have bitlocker encryption enabled.
Eric B, IT specialist


Why would I need multiple dd passes on an SSD?  In case I missed a block?

From what I understood, old-school hard drives had magnetic residue/shadows which could still be read after being overwritten only once or twice.

Further, after I write all the zeros, from what I understand the secure erase ATA command wipes the disk and the buffer.
one pass is more than enough and i hardly believe a school would bother trying to grab data from the write buffer. this is all overkill. just zero the drive with dd, and be done with it.

i don't think SSDs have such things as magnetic remanence, but who knows. you may use random bytes for better efficiency. actually, even with spinning drives, there are lots of theories and even specific erase patterns that are supposed to maximise the efficiency of the overwrite... but very few demonstrations of actual retrieval of anything usable after even a basic single-pass zeroing. and i consider 10 bytes straight as something usable. that's barely a word, shorter than your credit card number or bitcoins.

@serialband : interesting point of view. i've seen cases matching what you describe, but i've also had machines with over 10 years of uptime, with obviously zero hardware intervention. there is no good or bad duration. it just depends on what you use your machines for. and also some machines tend to break faster than others, especially when they are overused and poorly racked.
I've had machines last more than 10 years, but not the entire original batch.
