GETTING READY FOR PCI DSS 3.0 AND BEYOND: A NEW FOCUS ON TESTING

To get a sense of where the PCI Data Security Standard (DSS) is heading, it helps to take a look beyond the actual language in the requirements. In August, PCI published a DSS 3.0 best practices document that provided additional context for the 12 DSS requirements and their almost 300 sub-controls. It’s well worth looking at. The key point is that PCI compliance is not a project you do once a year just for the official assessments.

The best practice is for DSS compliance to be a continual process: the controls should be well-integrated into daily IT operations and they should be monitored.

Hold that thought.

Clear and Present Dangers

One criticism of DSS is that it doesn’t take into account real-world threats. There’s some truth to this, though the standard has addressed the most common threats at least since version 2.0: the injection-style attacks we’ve written about.

In Requirement 6, “develop and maintain secure systems and applications,” there are sub-controls devoted to SQL and OS injection (6.5.1), buffer overflows (6.5.2), cross-site scripting (6.5.7), and cryptographic storage vulnerabilities (6.5.3)—think Pass the Hash. By my count, they’ve covered all the major bases—with one exception, which I’ll get to below.
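To make 6.5.1 concrete, here’s a minimal Python sketch (using the standard library’s sqlite3 module and an invented customers table, purely for illustration) of the difference between building a query through string concatenation and using a parameterized query, the standard defense against SQL injection.

    import sqlite3

    # Illustrative only: the table and column names are invented for this sketch.
    def find_customer(conn, email):
        # Unsafe: string concatenation lets attacker-supplied input rewrite the query, e.g.
        #   conn.execute("SELECT id, name FROM customers WHERE email = '" + email + "'")
        # Safe: a parameterized query keeps the input as data, never as SQL.
        cur = conn.execute(
            "SELECT id, name FROM customers WHERE email = ?",
            (email,),
        )
        return cur.fetchone()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
        conn.execute("INSERT INTO customers VALUES (1, 'Alice', 'alice@example.com')")
        print(find_customer(conn, "alice@example.com"))  # (1, 'Alice')
        print(find_customer(conn, "' OR '1'='1"))        # None: the injection string is treated as plain data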

The deeper problems are that these checks aren’t done on a regular basis, as part of “business as usual,” and that the official standard is not clear about what constitutes an adequate sample size for testing.

While it’s a PCI best practice to perform automated vulnerability scanning and try to cover every port, file, URL, and so on, that may not be practical in many scenarios, especially for large enterprises. Those companies will then have to adopt a more selective testing regimen.
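To show what “selective” can look like in practice, here’s a minimal sketch that probes only a short list of services assumed to be in PCI scope rather than sweeping every port on every host. The target address and port list are placeholders, not recommendations, and a real program would use a proper vulnerability scanner rather than raw socket probes.

    import socket

    # Placeholder target and a short list of services assumed to be in PCI scope.
    TARGET = "127.0.0.1"
    PORTS_IN_SCOPE = [22, 80, 443, 1433, 3306]  # SSH, HTTP, HTTPS, MSSQL, MySQL

    def scan(host, ports, timeout=0.5):
        """Return the subset of ports that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print(f"Open ports on {TARGET}: {scan(TARGET, PORTS_IN_SCOPE)}")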

If you can’t test it all, then what constitutes an adequate sample?

This question is taken up in some detail in the PCI best practices. The answer they give is that the “samples must be sufficiently large to provide assurance that controls are implemented as expected.” Fair enough.

The other criterion that’s supposed to inform the sampling decision is the organization’s own risk profile.

Content at Risk

In other words, companies are supposed to know where cardholder data is located at all times, minimize what’s stored if possible, and make sure it’s protected. This information should then guide IT in deciding which apps and systems to focus testing efforts on.
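One way to turn that guidance into a concrete selection, sketched below with an invented asset inventory and made-up risk scores, is to weight a random draw so that higher-risk systems are more likely to land in the test sample, while keeping the sample large enough to provide the assurance the best practices call for.

    import random

    # Hypothetical inventory: system name -> risk score (both invented for illustration).
    ASSETS = {
        "payment-gateway": 9,
        "pos-terminal-fleet": 8,
        "customer-portal": 6,
        "hr-intranet": 2,
        "marketing-site": 1,
    }

    def risk_weighted_sample(assets, k, seed=None):
        """Draw k distinct systems, with selection probability proportional to risk."""
        rng = random.Random(seed)
        names = list(assets)
        weights = [assets[name] for name in names]
        chosen = set()
        while len(chosen) < min(k, len(names)):
            chosen.add(rng.choices(names, weights=weights, k=1)[0])
        return sorted(chosen)

    if __name__ == "__main__":
        print(risk_weighted_sample(ASSETS, k=3, seed=42))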

Not only should testing be performed more frequently; according to PCI, it’s also critical to have a current inventory of the data that’s potentially hackable (let’s call it data at risk) and of the users who have access to it.
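A discovery pass for data at risk often starts with pattern matching for primary account numbers (PANs). The sketch below uses a simple digit-run regex plus the Luhn checksum to cut down on false positives; a production scanner would also need to handle digit separators, different file formats, and tuning.

    import re

    # Candidate PANs: unbroken runs of 13-16 digits (separators are ignored in this sketch).
    PAN_PATTERN = re.compile(r"\b\d{13,16}\b")

    def luhn_valid(number):
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(d) for d in number][::-1]
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 1:  # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_candidate_pans(text):
        """Return substrings that look like card numbers and pass the Luhn check."""
        return [m for m in PAN_PATTERN.findall(text) if luhn_valid(m)]

    if __name__ == "__main__":
        sample = "order 20240101, card 4111111111111111, phone 5551234567"
        print(find_candidate_pans(sample))  # ['4111111111111111'] (a well-known test number)

Pairing that kind of scan with a report of which users can reach the flagged locations gets you most of the way to the inventory PCI is asking for.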

For Metadata Era readers, this is basically the Varonis “know your data” mantra. It becomes even more important because of a new attack vector that has not (yet) been directly addressed by PCI DSS: phishing and social engineering, which have been implicated in at least one of the major retail incidents in the last year.

Unlike the older style of injection attacks that targeted web and other back-end servers, phishing now opens the potential entry points to include every user’s desktop or laptop.

Effectively, any employee receiving an email, whether an intern or the CEO, is at risk. Phishing obviously increases the chances of hackers getting inside, and it therefore raises the stakes for knowing and monitoring your data at all times, not just once a year.


