Fred Cohen first gained recognition for his groundbreaking research on computer viruses as a graduate student at the University of Southern California's School of Engineering. In addition to his pioneering work on information assurance, Cohen has led research on the use of deception for information protection, an area in which he holds several patents.
Today, Cohen is a globally recognized expert in strategic security and information protection. As the CEO of Management Analytics, a security and risk management consultancy he founded, Cohen provides advisory and litigation services; he is also an expert witness and digital forensics examiner and author of numerous books and articles, including recent work on artificial intelligence and machine learning at TechVision Research, where he serves as a principal consulting analyst.
Cohen holds a doctorate in electrical engineering from the University of Southern California, a master's degree in information sciences from the University of Pittsburgh and a bachelor of science from Carnegie Mellon University.
A longtime entrepreneur and angel investor, Cohen is the president of the Pebble Beach chapter of the Keiretsu Forum. He leads several efforts to promote business development, including pre-seed Can Do Funds, Angel to Exit and the Monterey Incubator. Marcus Ranum caught up with Cohen to talk about strategic security, AI and machine learning and the true meaning of information protection.
You've stated that most of what we're doing in computer security is tactics. So I'm wondering what you think strategic security looks like.
Fred Cohen: There are a lot of elements of strategy. For one thing, architectural-level protections have been largely ignored. When you're talking about strategy, you need to first understand what you're trying to accomplish and then help business to succeed. You start out with, 'What does the business do with the function you perform?' Then you ask, 'What would cause the organization harm?' What if information wasn't available or didn't have integrity; [it] wasn't reliable or authentic -- if it wasn't kept confidential and you couldn't control its use? What if you weren't transparent about it or how it came to be and couldn't track it to its origin? There are these various properties we talk about as information protection, and those properties should affect the business.
From that, you develop governance. Governance has to identify the duties of the organization and the people in the organization -- you have legal duties, regulatory intervention and who's in charge. If you're working for a city government, is the mayor in charge or the city council? How do you handle internal decisions, how important are the audit results and so forth? These are structural decisions -- strategic security decisions -- about how you define what the duties are.
Then you do risk management. Risk management turns the duty to protect into decisions about what to protect and how well. Lastly, you have to manage it -- managing the people and processes. All of this structure forms the strategic context of what you're trying to achieve.
At a tactical level, you do the operational things: You decide you're going to have an intrusion detection system to detect certain types of known attacks [and so on]. But when you do that, you have a reason for it -- and the reason lies in the strategy that says 'this is important' and 'this would be cost-effective and reasonable.'
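Cohen's chain of reasoning -- protection properties, potential harms, risk decisions, tactical controls -- can be sketched in code. This is purely illustrative; the asset names, harm ratings and control tiers below are invented, not taken from the interview.

```python
# Illustrative sketch of the strategy-to-tactics chain Cohen describes:
# protection properties -> business harm -> risk decisions -> controls.
# All assets, ratings and control names here are hypothetical.

PROPERTIES = ["availability", "integrity", "confidentiality",
              "use control", "transparency", "provenance"]

def assess_harm(asset, failed_property):
    """Placeholder: estimate the business harm if a property fails for an asset."""
    harms = {("customer records", "confidentiality"): "high",
             ("billing system", "availability"): "high",
             ("public website", "confidentiality"): "low"}
    return harms.get((asset, failed_property), "medium")

def risk_decisions(assets):
    """Turn the duty to protect into decisions about what to protect and how well."""
    decisions = []
    for asset in assets:
        for prop in PROPERTIES:
            harm = assess_harm(asset, prop)
            if harm == "high":
                decisions.append((asset, prop, "strong controls"))
            elif harm == "medium":
                decisions.append((asset, prop, "standard controls"))
    return decisions

for asset, prop, control in risk_decisions(["customer records", "billing system"]):
    print(f"{asset}: protect {prop} with {control}")
```

The point of the sketch is the ordering: tactical controls appear only at the end, justified by the harm assessment above them, which is the reason-for-the-control relationship Cohen describes.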
That's a fantastic sketch of enterprise-level security. What does that kind of strategic security analysis look like at a national level? How does a nation do a cybersecurity strategy?
Cohen: When you talk about cyber and security -- the two words that make up cybersecurity -- a lot of people use those terms without knowing what they mean.
OK, you got me there.
Cohen: The first thing is to define what it is you're trying to achieve. Security means the feeling of safety -- when I looked it up in the dictionary back in 1980. I work in information protection, with protection being 'keeping people from harm.' The bits are going to be fine -- you can't harm a one or a zero; the harm comes as a consequence of changes in the external environment. I parse cybersecurity as 'keeping people safe from the effects of cybernetic control systems.'
Recognize, first off, that these cybernetic systems take risks in return for rewards. So, it may be risky to have all your data in your phone, but on the other hand you can call 911 when you're not in your house. You might be in a boat a half mile from shore and still be able to use your cell phone -- that's a pretty good reward for the risk. We have all these capabilities, and the net effect is that the world is a much better place.
The risk and reward go hand in hand. When you talk about the negative potential effects, you need to balance them against the positive effects. Your strategy as a nation-state should focus on the benefits of cybernetics and then assuring that those benefits are realized. The security part of a nation's cybersecurity strategy is making sure it works.
What do you think of the rush to outsource management of technical resources? Do you think that's a good thing overall?
Cohen: When outsourcing started to pop up, my first reaction was, 'Oh, that's bad! That's terrible! You're trusting people you don't know and can't trust.'
The most damage is done by trusted insiders, which raises the question of what constitutes an insider: An insider is a person with access. It doesn't matter whether someone is my employee or my consultant or my friend or my family member. If they're behaving in a trustworthy fashion, then it's fine. Who pays them or where they work has less to do with the outcome.
How do you establish trust? That's much more complicated, and that's what it all comes down to. One interesting body of research, carried out over the years by a part of the U.S. government, concerns the basis for clearing people: What do you check? They look at whether you're possibly compromised with a lie on your form -- they don't care if you used drugs 20 years ago; they care if you lie about using drugs. They've spent a lifetime understanding the basis for trust or lack of trust. They're also looking at turning behavior -- when people change loyalty -- trying to understand how to detect a change in trustworthiness based on behavior and to detect it in time to mitigate the potentially serious consequences. There is a lot of research into this, and none of it says people are less trustworthy if they are outsourced.
How do you see the current push for artificial intelligence and machine learning playing out? There's a lot of marketing devoted to it.
Cohen: I'm a guy who says, 'Aye, aye, aye!' at AI. I was in graduate school at the University of Pittsburgh, and our first assignment in the AI class was to write an essay on the difference between artificial intelligence and natural stupidity. The problem is that natural stupidity is still better than artificial intelligence at this time. What they call artificial intelligence is really algorithms and analytic processes that take historical information or algorithmic history and try to produce different outcomes based on different inputs. In that sense, it's no different than the human mind. The problem is that the so-called intelligence of it never matches what we consider 'intelligence.'
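Cohen's characterization -- algorithms that consult historical information to produce outcomes for new inputs -- can be made concrete with a toy example. This nearest-neighbor sketch is an assumption-laden illustration of that idea, not anything Cohen endorses; the data and labels are invented.

```python
# Illustrative sketch: an "AI" in Cohen's sense is an algorithm that maps
# a new input to an outcome by consulting historical data. The history
# below (login attempts per minute, later judged benign or malicious)
# is entirely made up.

def nearest_neighbor(history, x):
    """Return the outcome recorded for the historical input closest to x."""
    closest_input, outcome = min(history, key=lambda pair: abs(pair[0] - x))
    return outcome

history = [(2, "benign"), (5, "benign"), (90, "malicious"), (120, "malicious")]

print(nearest_neighbor(history, 4))    # prints "benign"
print(nearest_neighbor(history, 100))  # prints "malicious"
```

Whether the label is "neural network," "machine learning" or "AI," the underlying operation is of this shape: historical inputs and outcomes, plus a rule for generalizing to new inputs.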
I'd say they are approaches for encoding knowledge; it boils down to how good a job you do of teaching the thing.
Cohen: The term knowledge is problematic as well. Trying to keep it to intelligence -- which is also problematic -- it comes down to what utility does that computer bring? If you're trying to have a computer that does a better job than people at making surfaces smooth, and the broad category of methods by which it does that are what we might call AI -- learned behavior or expert systems or whatever -- if it does a better job of making smooth wood, then it's a good idea for that application. What AI lacks is what we'll call general intelligence -- that is, the ability, based on novel situations and novel information, to make decisions that people might reasonably agree are better rather than worse.
People aren't always all that intelligent. But people are able to learn things that are pretty complex and take into account the context of the world around us. We have hardware that translates colors and shapes into a representation of some sort; attempts to do that with computers haven't been as successful, although certainly there is a lot of progress. They can find faces or objects -- that's something we see AI programs doing better and better. It's probably still a long way off before we get to purely intelligent behavior.
The general thrust of the marketing of AI in computer security is: 'You don't need to have as many people worrying about strategic security, because we'll have machine learning watching over it for you.'
Cohen: People are easily fooled, as are neural networks, as are computers. Whether or not you can fool it is a big question. Whether you can fool all the people all the time is another, bigger one. The advertising in the '80s was about neural networks, and it didn't work, so they changed the terminology. And they're still marketing different terms today.
I like the top-down way you seem to use for approaching problems; it's very structured. Where did you learn that trick?
Cohen: Giving definitions at the beginning is how science works.
Worked for Socrates…
Cohen: Right. It's a way of making sure we've both done our background work and we are talking about the same thing.
Yes, it's a function call. When I invoked Socrates, you immediately understood I was talking about dialectical methods.
Cohen: I was educated as an electrical engineer, but I've studied philosophy of science and other matters. My intent in life has been to be knowledgeable, well-educated and thoughtful. You should be looking at your assumptions and derive from that whatever you want. When people ask me, 'How do you attack a system?' I reply, 'Start with the assumptions and see what happens if you violate them.'
Understanding the nature of how things are put together and how people put ideas together -- that's fundamental to having an analytic approach.