With 20 years of experience, Sherri Davidoff has seen many changes in the cybersecurity industry.
"When I first started, if somebody broke into a computer, it was called an information security incident -- we didn't call it cybersecurity back then," Davidoff said.
After getting her start on MIT's Incident Response Team in 2000, she first heard the term "data breach" in 2005. At the time, she was creating incident response policies for Children's Hospital in Boston, working with local organizations and hospitals to figure out how to deal with newfangled things, such as computer security incidents and HIPAA compliance.
Nearly two decades later, Davidoff used her experience to write Data Breaches: Crisis and Opportunity in an effort to help everyone -- from IT staff and network managers to cyber insurance companies, attorneys, forensics teams and incident response teams -- mitigate data breach risks and tackle the never-ending task of keeping networked systems safe and sound.
"When you look across different breaches, there are ways that you can respond that will help to maintain trust with all your key stakeholders and will help you clean things up and investigate in a fairly efficient manner," Davidoff said. "In writing the book, I was hoping to analyze all these cases and identify common threats and common takeaways that we can all use to protect ourselves in our organizations."
Here, Davidoff offers further insight into data breach risk factors and how to best learn from the past to prepare for the future.
In the book, you mention five data breach risk factors. Can you give a rundown of them?
Sherri Davidoff: There are five factors that increase the risk of a data breach. Number one is access -- the risk of a data breach increases with the number of people who have access to the data and the number of ways there are to access it.
Next, the risk of a breach increases with the amount of time that data is retained. Many organizations are retaining data indefinitely, and that places them at a very elevated risk of a data breach.
Number three, the risk of a breach increases with the number of copies of the data that exist -- so, proliferation. Many organizations have multiple copies of the same data -- sometimes, dozens of copies. Every time you send an email, you have a copy of the data on your computer, you send it, and there's a copy of the data on the server, and the recipient might download a copy of the data.
The fourth risk factor is liquidity -- how easy it is to transfer your data from one place to another and to process it. We have been making a huge push in our modern world to make data more and more liquid so that we can use it. We are data-driven organizations, and that also puts us at great risk of a breach. When you look at Anthem, for example -- how hackers may have stolen 80 million Social Security numbers -- that would not be possible if they were all printed on paper one at a time. The fact that there are all these teeny tiny snippets of electronic information and they're loaded into a database makes it possible to steal huge volumes of information.
The final data breach risk factor is value: The risk of the breach increases with the value of the information because that gives criminals incentive to target it. The emergence of the dark web is something that I go into in the book -- the technologies that underlie it, the economy that is created and the effect that that has on the risk of data breaches occurring. As the dark web has flourished and as dark net markets have emerged, so too has the risk of data breaches increased.
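The five factors above could be combined into a rough, comparable risk score per data store. The rating scale, the additive scoring and the example figures below are purely illustrative assumptions for this sketch -- they are not a methodology from Davidoff's book:

```python
from dataclasses import dataclass


@dataclass
class DataStore:
    """One body of stored data, rated on the five breach risk factors.

    Each factor is a rough 1 (low) to 5 (high) rating. The scale and
    scoring here are hypothetical, for illustration only.
    """
    access: int     # how many people and paths can reach the data
    retention: int  # how long the data is kept (indefinite = 5)
    copies: int     # proliferation: how many copies exist
    liquidity: int  # how easily the data moves and is processed
    value: int      # how valuable the data is to criminals

    def risk_score(self) -> int:
        # Each factor independently increases breach risk, so a simple
        # sum yields a comparable 5-25 score per data store.
        return (self.access + self.retention + self.copies
                + self.liquidity + self.value)


# Compare two hypothetical stores: a widely accessed, long-retained
# customer database versus a tightly held, short-lived log archive.
crm = DataStore(access=5, retention=5, copies=4, liquidity=5, value=5)
logs = DataStore(access=2, retention=2, copies=1, liquidity=2, value=1)
print(crm.risk_score())   # 24
print(logs.risk_score())  # 8
```

Even a crude score like this makes the interview's point concrete: cutting retention or reducing copies lowers the total, which is why indefinite retention and data proliferation put organizations at elevated risk.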
What are some of the common threads across numerous data breaches you've seen?
Davidoff: I analyzed dozens of different breaches, if not hundreds, and came up with a data breach response model called the DRAMA model. The idea is that, to develop your response, you have to, as an organization, realize that a breach occurred. That's not the same thing as detection because, a lot of times, a low-level IT person will detect it, but the organization as a whole will not recognize that.
Then, when there is a crisis, you have to act. That's one of the biggest mistakes organizations make -- they know there's a problem, they know there's a breach and their attorneys tell them, 'Shh, don't say anything.' Or they just wait. You saw that, for example, in the Equifax breach, where the board wasn't even notified until nearly three weeks after the executive team knew. Once you know that there's a breach, you have to act quickly, then you have to maintain your response over, often, years because you might get involved in litigation or investigations.
Sherri Davidoff, CEO, LMG Security
And then, finally, the last A is that you have to adapt. I like to say that every crisis is an opportunity. And a data breach is no exception. This is probably the biggest takeaway of the book: Data breaches are crises. In the past, as an industry, we have treated them like incidents -- a totally different thing.
My hypothesis going into writing the book was that a data breach is a crisis and it has to be treated that way. I researched crisis management models and found they fit very well. The researcher that I found was most useful, Steven Fink, maintains that you have to manage the crisis itself, as well as the perception of that crisis. That's where, a lot of times, organizations forget to respond. They might take care of changing everybody's passwords but forget about the PR [public relations] statement that needs to go with it.
In the book, I go through some big train-wreck cases. I dissect cases you've seen in the news and figure out exactly where they went off the rails and why, as well as what you can do to protect your company and ensure its response goes more smoothly.
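The DRAMA phases Davidoff walks through above could be captured as a simple ordered enumeration. The phase names are taken from the interview (develop, realize, act, maintain, adapt); the ordering helper is a sketch, not an implementation from the book:

```python
from enum import Enum
from typing import Optional


class DramaPhase(Enum):
    """Phases of the DRAMA breach-response model, as described in the
    interview. Descriptions paraphrase Davidoff's explanation."""
    DEVELOP = 1   # develop your response as an organization
    REALIZE = 2   # recognize, organization-wide, that a breach occurred
    ACT = 3       # respond quickly rather than waiting or staying silent
    MAINTAIN = 4  # sustain the response, often for years of litigation
    ADAPT = 5     # treat the crisis as an opportunity to improve


def next_phase(phase: DramaPhase) -> Optional[DramaPhase]:
    """Return the phase that follows, or None after ADAPT."""
    members = list(DramaPhase)
    i = members.index(phase)
    return members[i + 1] if i + 1 < len(members) else None


print([p.name for p in DramaPhase])
# ['DEVELOP', 'REALIZE', 'ACT', 'MAINTAIN', 'ADAPT']
```

The enum makes the interview's key distinction explicit: detection by a low-level IT person is not REALIZE -- the model's second phase only completes when the organization as a whole recognizes the breach.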
Another topic you mention in your book is dark breaches. What are these unreported breaches?
Davidoff: In my almost 20 years in cybersecurity, I can count on one hand the number of cases I've handled that have hit the news. That tells me that, for every case you see in the news, there are probably 99 others that never got reported.
In order for a breach to get reported, it first has to be detected -- often, it doesn't even get detected. Then, it has to be recognized as a breach. Then, the executive team at that organization has to choose to disclose it, or maybe they get outed by a third party. All three of those things have to happen before it even gets into the news cycle. Everything you read about in the news is a skewed sample. There are reasons these breaches are being published. The ones that don't get published are still just as important and can often impact organizations in different ways.
Why aren't all breaches reported? And, better yet, should they be?
Davidoff: A good thought exercise is to compare payment card breaches to, say, health information breaches. With payment card breaches, criminals are going to take that information and use it as quickly as they can in most cases. Typically, third parties, like banks or card brands, can identify a common point of purchase and tell when a merchant has been hacked. That's very different than if, for example, health information got breached -- there may not be anything that allows you to trace that back to a source. You might not be able to say, 'Oh, this came from X hospital,' because that information is not specific to that organization. It's harder for a third party to determine when a healthcare organization has been breached as opposed to a merchant.
Nowadays, we see a lot of healthcare breaches reported in the news, and that is partly because there are strict regulations about what counts as a breach in healthcare, and if you don't report a breach within a certain period of time, you can get fined. Under HIPAA, unauthorized access is presumed to be a breach unless you have evidence to demonstrate otherwise. That presumption of a breach is different than what you see with most other types of data. If Social Security numbers are stolen from a financial institution, for example, you might not have that same presumption that it's a breach unless you can prove otherwise.
Will healthcare-like regulations ever expand across other industries and sectors?
Davidoff: Right now, there are laws in all 50 states, and most data breach notification laws emerged from one California law [California Security Breach Information Act] that came out in 2003 designed to mitigate risks associated with identity theft. For example, if Social Security numbers got stolen, you would have to notify people whose Social Security numbers are exposed. Nowadays, that same law doesn't make as much sense because, frankly, everybody's Social Security numbers have already been stolen. I mean, if you just look at the number of breaches, it's statistically highly likely that your Social Security number is out there.
What we really need are different kinds of regulations. Right now, unfortunately, it's almost like when your immune system attacks itself -- we have this huge reaction to Social Security numbers getting stolen, and people have to do notifications. But is that really the biggest risk? I worry a lot more about emails or health information that gets stolen -- they're not always covered under state laws. And what about the information collected by third-party apps -- your social media login and passwords, things like that? Not all information is equally protected, and the protections we have in place are not necessarily in line with our modern needs.