Social Engineering

How could Twitter have stopped the attack? (Part 2)

Posted on 2020-07-22 by Matt Strahan in Business Security , Social Engineering


Last week Twitter suffered a successful social engineering attack that pushed through a Bitcoin scam. The scam netted about $120k for the scammers, but it caused huge damage to Twitter's brand as news of the attack spread around the world.

Although we don’t have any hidden information about the Twitter hack that’s not already public, I thought it would be fun to look at the kinds of security controls that would help stop this kind of attack.

Yesterday we looked at all the multi-X controls. Today we’ll be looking at other strategies that can help mitigate the compromise.

Continue reading

How could Twitter have stopped the attack? (Part 1)

Posted on 2020-07-21 by Matt Strahan in Business Security , Social Engineering


Last week Twitter suffered a successful social engineering attack that pushed through a Bitcoin scam. The scam netted about $120k for the scammers, but it caused huge damage to Twitter's brand as news of the attack spread around the world.

Even with the greatest anti-phishing and anti-malware security stack, social engineering attacks are extremely difficult to stop. In our social engineering exercises we may call a 5% response rate a good result, but for many organisations even a single response is a catastrophic scenario.

Many guides on social engineering talk about user training and “users being the weakest link”. While security awareness is important, social engineers are smart: it’s almost impossible to tell the difference between what is real and what isn’t. Why are we blaming users when they’re being put in an impossible situation?

Continue reading

The value of experience (or "don't fire the person that got phished")

Posted on 2020-01-15 by Matt Strahan in Social Engineering


When performing social engineering attacks, physical intrusion attacks, or red team exercises, we have to be particularly careful. At all times we have to be aware that we’re not dealing with emotionless systems here, but with real people who are often just trying to do their jobs. What’s more, the people on the other end can feel misled, manipulated, and betrayed. Perhaps the hardest challenge in designing an effective user awareness programme is getting the desired outcome of increased security when you’re dealing with real people: people who have emotions and potentially unpredictable behaviours.

With systems, making things more secure generally means fixing bugs, tightening configuration, or implementing more controls. Usually once you’re done you can look back and say “yep, things are more secure”.

People are more complex than that. Actions you take can backfire and make things worse. I’d like to talk about perhaps the most extreme of these actions: disciplinary action, including firing the person who got phished.

Continue reading