

Securely erasing files on AS/400

We are going through a PCI audit, and the auditors are asking us to ensure that when we delete any file containing cardholder data (card number, name, expiration date, etc.), it is deleted in such a way that the operating system or a utility cannot recover it. Basically, we need to delete the file and have its contents overwritten multiple times on our AS/400, like srm (secure remove) on UNIX, shred on Linux, or PGP Shredder on Windows.

How could I accomplish this on our AS/400, both in a standard library and on the IFS?

Here is the PCI-DSS 2.0 section that we are trying to comply with:

9.10.2 Render cardholder data on electronic media unrecoverable so that cardholder data cannot be reconstructed.

Verify that cardholder data on electronic media is rendered unrecoverable via a secure wipe program in accordance with industry-accepted standards for secure deletion, or otherwise physically destroying the media (for example, degaussing).
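For stream files on the IFS (which presents a POSIX-like filesystem), a shred-style multi-pass overwrite can be sketched as below. This is a hedged illustration, not an IBM-supplied tool, and the function name is my own; note that on a single-level-store system with RAID or journaling, an in-place overwrite at the API level does not guarantee the same physical blocks are rewritten.

```python
import os
import secrets

def shred_file(path, passes=3, chunk=64 * 1024):
    """Overwrite a file in place with random data, then remove it.

    A shred-style sketch for IFS stream files. Like UNIX shred, it
    assumes the filesystem rewrites blocks in place, which journaling
    or storage virtualization may defeat.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push each pass to disk before the next
    os.remove(path)
```

For database files in a library, there is no direct equivalent of this call; the answers below discuss the record-level and media-level options.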
Asked by: SamSchulman
2 Solutions
 
Gary Patterson (VP Technology / Senior Consultant) commented:
Just so we are on the same page - this part of section 9 deals with decommissioning media that is no longer needed:

9.10 Destroy media containing cardholder data when it is no longer needed for business or legal reasons as follows:

9.10.1 Cross-cut shred, incinerate, or pulp hardcopy materials

9.10.2 Purge, degauss, shred, or otherwise destroy electronic media so that cardholder data cannot be reconstructed

http://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (Page 54)

So this section applies, for example, to disk units that are removed from the system due to maintenance or upgrades, not to "live" systems.

IBM offers a "disk sanitize" tool that is adequate for this purpose:

http://www-01.ibm.com/support/docview.wss?uid=nas8N1014286

Of course, this only works with disk units that are still functional. Nonfunctional disks need to be physically destroyed to remain in compliance.

Decommissioned backup tapes need to be securely erased, too. The only practical way to do this in any volume is with a degausser.

Destroy optical media as well; larger shredders can handle optical disks.

- Gary Patterson
 
daveslater commented:
Also remember that the System i stores data in a completely different way from other systems: data is scattered across multiple disks, so one disk without the full array is useless.
When we delete records from a credit card file, we use a two-phase approach:
1. Read the record, then update its details with *HIVAL.
2. Physically delete the record.
This takes a bit longer, but we only delete about 200 card records per day, so the overhead is not an issue. Any undelete utility can then only pick up *HIVAL, not the actual data.
Dave
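The two-phase overwrite-then-delete pattern above can be sketched in SQL terms. The table and column names here are hypothetical, and sqlite3 stands in for DB2 purely so the sketch is runnable; on IBM i the same two statements would run against the physical file via embedded SQL or RUNSQLSTM.

```python
import sqlite3

def two_phase_delete(conn, card_id):
    """Overwrite a credit-card row with filler, then delete it.

    Mirrors the overwrite-then-delete approach described above.
    """
    cur = conn.cursor()
    # Phase 1: replace the sensitive fields with filler (the DB2
    # equivalent of updating with *HIVAL or a chosen pattern).
    cur.execute(
        "UPDATE cards SET cc_number = '9999999999999999', "
        "holder_name = 'XXXXXXXXXX', exp_date = '9999' WHERE id = ?",
        (card_id,),
    )
    # Phase 2: physically delete the now-sanitized record.
    cur.execute("DELETE FROM cards WHERE id = ?", (card_id,))
    conn.commit()
```

The point of phase 1 is that the deleted-record slot on disk then holds only the filler pattern, not the original cardholder data.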
 
Gary Patterson (VP Technology / Senior Consultant) commented:
@daveslater - A couple of comments:

1) Even with single-level-store architecture and block-level RAID (including all of the most common levels: 0, 1, 5, 6, 10), full blocks of contiguous data still get written out to physical disk units. So it is certainly possible to recover full rows, groups of rows, small IFS files, or chunks of larger files from a single decommissioned IBM i / iSeries / AS/400 drive (or from a SAN connected to one) out of a RAID set that uses any of the block-striping RAID methods.

RAID 2 and 3 use bit-level and byte-level striping, respectively. To recover usable data from these sets, all but one disk unit in the set is required.

The IBM i and Midrange External Storage Redbook has some good diagrams:

http://www.redbooks.ibm.com/abstracts/SG247668.html?Open

2) Row-level data destruction doesn't apply to PCI DSS 2.0 9.10.2 compliance, which was the question, so I'm going a little off-topic here; apologies in advance.

Our DB2 row-level data destruction process is similar to yours, except we don't overwrite with ones (*HIVAL). Instead, we use one of two selectable patterns: a randomly generated overwrite pattern, or the triple-pass method (zeros, then ones, then a random pattern).

In real-world situations, anything more than a simple overwrite is, in my opinion, overkill (and a potential performance nightmare), but security people (and yes, that's one of the hats I wear) often write "conservative" specifications, since reducing risk is the name of the security game. So by enabling both a single pass of random data and the triple-pass method, you've got options that cover most of the "soft" data destruction requirements you are likely to be hit with.
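The two selectable patterns described above can be sketched as a small generator; the function name and interface are my own, not part of any IBM tooling:

```python
import secrets

def overwrite_passes(length, method="random"):
    """Yield the byte patterns for one sanitization run.

    'random' is a single pass of random bytes; 'triple' is the
    zeros / ones / random sequence described above.
    """
    if method == "random":
        yield secrets.token_bytes(length)
    elif method == "triple":
        yield b"\x00" * length             # pass 1: all zeros
        yield b"\xff" * length             # pass 2: all ones (binary *HIVAL-style)
        yield secrets.token_bytes(length)  # pass 3: random data
    else:
        raise ValueError(f"unknown method: {method}")
```

Each yielded buffer would be written over the target field or file region in turn, with a flush between passes.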

Here's an interesting paper on data recovery from overwritten disks; it changed my mind about the need for complex, multi-pass drive-wiping procedures.

http://privazer.com/overwriting_hard_drive_data.The_great_controversy.pdf

- Gary Patterson
Question has a verified solution.
