Social engineering attacks use an attack vector that never goes out of style. Phishing and pretexting have consistently been high on the list of attacks in the annual Verizon Data Breach Investigations Report, or DBIR, and credentials stolen via phishing are a common starting point for malicious actors.
Despite the prevalence of social engineering -- sometimes known as human manipulation -- behind phishing attacks, fake news, election influence and corporate espionage, it can be difficult to fix the root problem: the willingness of humans to fall for manipulation.
In an interview at RSA Conference 2018, Rachel Tobac, CEO of San Francisco-based SocialProof Security and chair of the board for the nonprofit Women in Security and Privacy, discussed how enterprises can train employees more effectively and how individuals can train themselves to spot social engineering attacks.
What does SocialProof Security do to train people on their social engineering attack risk?
Rachel Tobac: We look up everything we can find about them online through OSINT -- open source intelligence. We keep a record of that, scrub all that information and then report it back to you, so you know where you're leaking information on Instagram, different websites, Twitter, Facebook, etc.
And so we'll do that OSINT assessment, the risk assessment. We also do social engineering awareness training, and we also do vishing -- voice phishing, which is actual hacking. We'll call you on the phone and see how much information we can get.
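The self-audit side of an OSINT assessment can be approximated at small scale. Below is a minimal sketch -- the keyword categories, sample posts and function names are illustrative assumptions, not SocialProof Security's actual methodology -- of scanning your own exported post text for phrases an attacker could use to build rapport or authenticate as you:

```python
import re

# Hypothetical risk categories: phrases that leak details an attacker could
# use for rapport-building or to answer knowledge-based security questions.
RISKY_PATTERNS = {
    "workplace": re.compile(r"\b(my (office|desk|workstation)|badge)\b", re.I),
    "travel": re.compile(r"\b(vacation|airport|boarding pass)\b", re.I),
    "personal": re.compile(r"\b(birthday|my (dog|cat|mom)|hometown)\b", re.I),
}

def audit_posts(posts):
    """Return {post_index: [matched categories]} for posts that look risky."""
    findings = {}
    for i, text in enumerate(posts):
        hits = [name for name, pat in RISKY_PATTERNS.items() if pat.search(text)]
        if hits:
            findings[i] = hits
    return findings

# Illustrative sample posts, not real data.
posts = [
    "Loving the new espresso machine at my office!",
    "Counting down to vacation in Jamaica next week.",
    "Great weather today.",
]
print(audit_posts(posts))  # → {0: ['workplace'], 1: ['travel']}
```

A real assessment is far broader -- photos, metadata, cross-site correlation -- but even a keyword pass like this shows how mechanically public posts can be mined.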
And from what you've seen, social engineering attacks are not going away?
Tobac: No, it's not going away.
Some experts have mentioned that there can be a generational gap -- that older people tend to be more susceptible to social engineering attacks. Is that something you've seen?
Tobac: We do know there are differences in the way different generations respond. We know that older generations respond more to urgency-based threats. So, 'You have to do this now, or you're going to be fired' -- that type of thing.
Whereas younger people are more susceptible to reward-based emotional appeals. For instance, 'Click here for a free lunch,' or, 'Click here for a free trip to Coachella.' So, there are different methods we would use along generational lines, but we haven't seen one generation that doesn't fall for it.
All generations have gaps. That's how I would probably rephrase it.
How do we improve literacy around social engineering so that people can recognize attempted attacks?
Tobac: I think the really important thing is to make it very simple for people. They don't want to hear a bunch of fear, uncertainty and doubt. And they don't want to hear a bunch of technical stuff, because the average person is not an infosec professional. They just want to live their lives and post stuff on Instagram.
So, if we can keep it extremely simple, skip the unnecessary details and just do small 15-minute updates every quarter, people are going to have a much stronger security posture than with a once-a-year training that gets a little technical, a little deep in the details, and isn't personal to them. The more personal you can make it, and the more consistent it is, the better your chance of keeping people on top of it.
Consistent, short and frequent ...
Tobac: And personal. It's got to be personal. If we just say, 'This is how so-and-so fell for something,' they're like, 'Well, that would never happen to me.'
But if it's, 'Let me look at your Instagram. OK, this is what I would say to you. This is how I would hack you.' They're like, 'Oh, I get it. I need to understand that I shouldn't let you authenticate with stuff that's public information.'
So, should enterprises be doing more training that shows where employee weaknesses are?
Tobac: I would say the best thing that an enterprise company can do is create a positive reporting culture, because a lot of times, people feel really embarrassed when they do 'the wrong thing,' right? They clicked on a link, maybe they gave out information online [or] maybe they did a little data leakage on Twitter; they're a little awkward or uncomfortable about it.
We want to make it so that people feel comfortable coming forward, and we reward those people. We'll say, 'Hey, we know that felt awkward. Thank you. Here's a gift card.' Something like that, right? 'You're protecting our company's data. You're protecting our money. Thank you for doing that.'
Because if we know about it and we know that you clicked it, we can quarantine your device, [and] we can fix it. But if we don't know about it, that's a bigger problem.
So, first, positive reporting culture -- really important. And then, second, you got to have that awareness training, and it's got to be frequent. It's got to be personal. It doesn't have to be long.
Part of the 2018 DBIR results was the finding that if you clicked once, you're more likely to do it again. Do you encounter that?
Tobac: You're never going to get a click-through rate to zero.
It's not going to solve all your problems, but what it will do is it will create a security culture that allows people to come forward and be positively reinforced for telling you, 'Hey, I think I might have screwed up.' And that is what will change the game for you.
If you can get people to a place where they're comfortable talking about these things -- phishing, data leakage -- and they don't feel scared by them, you're going to change your risk. You're going to lower it.
Part of the messaging before RSA Conference this year was the idea of human manipulation. It feels like semantics, but does it point to the fact that social engineering attacks have much wider implications than just phishing?
Tobac: Oh, yeah. Oh, for sure.
Something that you'll see on Twitter a lot -- which I think is really funny -- is that the phrase 'social engineering,' a lot of times, is used not in the context of infosec; it's used in the context of politics. If you just search social engineering [on Twitter], it's all just politics-related social engineering discussion. We use words in so many different ways in different fields, so you can never argue about which words mean what, right?
Human manipulation is what somebody who isn't in infosec would use to describe social engineering, which is the way we describe it in infosec. So, we're all talking about the same thing. It's just that some of us have more data than others to talk about it.
But the way that they talk about it in politics is the dissemination of false information, right? And so this is way more far-reaching than just a phishing email or a vishing call or an on-site attack. It also has to do with how our politics are run and how we release information and how people interpret it.
Philip Tully, principal data scientist for ZeroFOX, built a machine learning script that automated the process of a spear phishing attack by pulling in a user's Twitter data and crafting a personalized message and link. He claimed a 35% click-through rate, which is far higher than phishing attacks usually garner. What do you think of this?
Tobac: I've been doing different interviews at RSAC, and the big question is, where is social engineering going? What's the future of social engineering? And I keep saying, 'Watch out for social engineering using AI,' because think about all the stuff that we do. When I do my OSINT assessments, it takes me 100 hours, but I could train a computer to do it in 30 seconds.
I absolutely believe we're moving into the age of social engineering using AI. I think you're probably going to see AI scraping everything it can find about a target, developing the best pretext for that target and then just hitting them on all fronts.
If social engineering attacks are that much more targeted -- and, theoretically, more believable -- how do you train the people to even be aware of that?
Tobac: There are a couple of things that you can do. Machine learning is still going to use what it can find on you, so think about what you post online. If you have a bunch of pictures of you in front of your workstation, and I can see all of the software that you have running, AI is going to see that, too. So, that doesn't change. Don't take pictures in front of your workstation. That doesn't change whether it's a human attacking you or AI. There are things that you can do to make sure that your data leakage is low.
And then in terms of your interests, that's something where you have to be politely paranoid. Something that I always see is someone might say, 'Hey, I know that you're interested in this. Check out my book. Here's my link.' And I'll say, 'That sounds great. You can send me a picture of your book or the name of your book. I'm going to go look that up myself.'
That might offend people, but that's just what I do. If you tweet at me a link, I'm not going to look at it. It's never going to happen. I appreciate you trying, but it's never going to happen.
So, be politely paranoid, and be aware of your data leakage, like always. AI can only find what's there.
Being aware of what you post online sounds easy in a general sense, but being aware of details that could be used in a social engineering attack would be far more difficult, right?
Tobac: Basically, just think if you've ever posted anything online, somebody can use that to try and build a rapport and authenticate with you on that piece of information. So, if you've posted about, 'I love Panera bagels,' or you post about, 'My favorite place to vacation is Jamaica,' then you need to be aware if somebody tries to talk to you about that.
You have to kind of think, 'Did I ever post about something like that? Do I let people know that about me?' And if that is the case, [be] politely paranoid.
Tobac: I like to call it politely paranoid because I don't feel like skepticism helps people. I feel like being skeptical might make you cynical, and you might not be able to do the things you want to do. And people don't always want to be skeptical about their friends. Do you know what I mean?
So, rather than not trusting somebody, I would just say, 'It's not your fault.' Saying 'I'm skeptical of you' puts it on the other person. I'm just politely paranoid in general. That's me. So, if you're talking to me, it has nothing to do with you. I'm not skeptical of you. I'm just politely paranoid. Words matter.
I think people don't want to offend people. That's their biggest fear. People are going to hate me if I ask to talk to them in another way so that I can verify who they are. But, instead, it's just saying, 'That's on me. I'm politely paranoid. Sorry about that. Let's talk in a different way.'
How can people look out for accounts impersonating people that you do know?
Tobac: That's tough. You have to, again, be politely paranoid. If something looks a little off -- say, we always talk on Twitter, and then, one day, your name has slightly changed, and it's actually a second version of you on Twitter -- I would need to do that cross-referencing. If I had your phone number, I'd probably give that number a call, or I'd reach out over email or some other channel I already knew. And if I can't authenticate with a different method, I just can't trust it.
So, real-world two-factor authentication.
Tobac: Real-world two-factor. Don't trust. Still verify at least three times, pretty much.
It's awkward, right? Because I'll have friends talk to me on Twitter, and then something about their page looks different. And they're like, 'Hey, let me text you,' and I'm like, 'I'm not going to give you my phone number.' And they're like, 'We've been talking for years. Like, what? You're not going to give me your phone number?' I'm like, 'No, I'm so sorry. You have all my other contact information. If you want to reach out to me there, you can. I can call your number that I know, but I'm not going to write it down right now.' And they're like, 'Screw you,' but that's just how it is, you know?
Things can be compromised, too. Somebody could compromise your account in an attempt to target me or somebody else that you know. So, it's unfortunate. It makes talking on the internet awkward sometimes, but that's OK.
Talking on the internet is already kind of awkward.
Tobac: It's already pretty awkward, yeah.