Observe, orient, decide and act. These are the components of a decision cycle, developed by U.S. Air Force Col. John Boyd, called the OODA loop. In a dogfight, the pilots who can observe the situation, orient themselves, make a decision and act first are the ones who fly home safely.
The OODA loop is not the exclusive purview of fighter pilots; it can be applied to any situation in which we face adversaries.
Consider this situation: a storm knocked the power out and you decide to break into my house. When you start to break in, the OODA cycle starts, and you are immediately at a disadvantage. You are not familiar with my house, so you stumble around trying to both observe and orient yourself.
Unfortunately, you knock over a lamp and step on noisy toys. I hear the noises and can instantly tell where you are and know all of my options. While you are still trying to observe and orient, I'm making decisions and acting. Those in familiar surroundings have the OODA loop's advantage.
Similar to the situation above, we should also use the advantage of the OODA loop in our networks, but we don't. The majority of organizations discover that they have been compromised via third-party notification. Understanding why is critical to developing better strategies to protect our computing environments.
Simply put, our networks are not noisy. We use firewalls, intrusion detection and prevention systems, data loss prevention software, endpoint security and application whitelisting, all of which attackers can evade. We configure our systems to log suspicious activity, but most logs are reviewed infrequently, and separating the important information from the unimportant can be challenging.
Careful attackers stand a real chance of moving throughout our networks without generating actionable noise, as our networks lack the lamps and toys that give away attackers in the real world. We talk about detective controls, but the majority of our defensive focus is on patching, hardening, firewalling and other efforts to keep attackers out.
It is often said that defenders need to be right 100% of the time, while attackers only need to be right once. We need to change the game and take back the OODA advantage.
Applying the OODA loop to information security
While we can't actually place noisy toys or tippy lamps on our networks, we can create obstacles for attackers to bump into. Networks can be broken up into business-focused virtual LANs (VLANs) with access control lists between VLANs to allow all legitimate traffic through while blocking and logging everything else.
If an attacker manages to compromise a host on the network, they will likely scan for other targets, tripping inter-VLAN block rules and generating alerts. We can then start making decisions and acting while the attacker is still observing the network.
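The deny logs those inter-VLAN block rules produce can be mined for exactly this scanning behavior. The sketch below is a minimal illustration, not a product feature; it assumes a hypothetical log format of `timestamp src_ip dst_ip action` and flags any source whose denied traffic touches an unusually large number of distinct destinations.

```python
from collections import defaultdict

def detect_scanners(log_lines, threshold=10):
    """Flag source IPs whose denied traffic touches many distinct
    destinations -- a common signature of internal network scanning.

    Assumes a hypothetical log format: "timestamp src_ip dst_ip action".
    """
    denied_targets = defaultdict(set)
    for line in log_lines:
        parts = line.split()
        if len(parts) == 4 and parts[3] == "DENY":
            _, src, dst, _ = parts
            denied_targets[src].add(dst)
    return {src for src, dsts in denied_targets.items()
            if len(dsts) >= threshold}

# One host probing a dozen addresses in another VLAN trips the
# threshold; a single stray denial from another host does not.
logs = [f"t{i} 10.1.1.50 10.2.0.{i} DENY" for i in range(12)]
logs += ["t99 10.1.1.7 10.2.0.5 DENY"]
print(detect_scanners(logs))  # {'10.1.1.50'}
```

In practice this kind of correlation would run in a SIEM rather than a script, but the decision logic -- many distinct denied destinations from one source in a short window -- is the same.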
Web applications and other common targets can be difficult to secure, and attackers use automated web spiders to map out sites; this is where WebLabyrinth comes in. WebLabyrinth is a free tool that creates a fake webpage containing multiple fake links. Following any of those links produces another fake page with more fake links, trapping web crawlers in an infinite loop, crashing most tools and frustrating attackers. If access to the fake pages generates an alert, we gain the OODA advantage.
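The core idea can be sketched in a few lines. This is not WebLabyrinth's actual code, just an illustrative stand-in: each request deterministically yields a page of links to further fake pages, so a crawler never runs out of paths to follow and the server never has to store any state.

```python
import hashlib

def labyrinth_page(path, links_per_page=5):
    """Generate a fake HTML page whose links lead to more fake pages.

    Each child path is derived from a hash of the current path, so
    the 'site' is infinite but deterministic.
    """
    anchors = []
    for i in range(links_per_page):
        token = hashlib.sha256(f"{path}/{i}".encode()).hexdigest()[:12]
        anchors.append(f'<a href="/{token}.html">page {token}</a>')
    return "<html><body>" + "".join(anchors) + "</body></html>"

# A crawler starting anywhere finds five links, each of which yields
# five more, and so on forever.
page = labyrinth_page("/index.html")
print(page.count("<a href"))  # 5
```

A real deployment would hang such a generator behind a catch-all route in a web server, with every hit to a fake page raising an alert -- the noisy obstacle the article describes.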
Placing resources on the network that appear interesting but have no actual business purpose can provide similar benefits. These resources may include user accounts, database records, open ports, IP addresses, hosts or even vulnerabilities. Creating an environment that meets all of the attacker's expectations, but that in actuality pushes the attacker to behave the way the defender wants them to, is an incredibly powerful way to gain the OODA advantage.
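One simple form of such a resource is a decoy account that no legitimate user or process should ever touch. The sketch below uses hypothetical account names and a hypothetical event format; the point is only that, because the accounts have no business purpose, any authentication attempt against one is a high-confidence alert.

```python
# Hypothetical decoy accounts seeded into the directory: plausible
# names with no business purpose, so any use of them is hostile.
DECOY_ACCOUNTS = {"svc_backup_old", "jsmith_admin", "sql_report"}

def decoy_alerts(auth_events):
    """Return an alert for every authentication attempt against a
    decoy account.

    Assumes each event is a (username, source_ip) tuple pulled from
    the authentication log.
    """
    return [f"ALERT: decoy account '{user}' used from {src}"
            for user, src in auth_events
            if user in DECOY_ACCOUNTS]

events = [("alice", "10.1.1.20"),
          ("svc_backup_old", "10.1.1.50"),  # attacker trying a decoy
          ("bob", "10.1.1.21")]
for alert in decoy_alerts(events):
    print(alert)
```

Unlike the scan detector, this check needs no threshold or window: a single touch of a decoy resource is enough to hand the defender the OODA advantage.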
In The Art of War, Sun Tzu stated, "All war is based on deception ... When we are near, we must make the enemy think we are far away ... When far away, we must make them believe we are near."
From this, we learn to design our cyber deception campaign to achieve specific goals. If we want to collect information about the attacker, we can intentionally make certain highly monitored resources appear weaker or of greater value.
A few thousand years after The Art of War, deception was used to great success when the Allied powers implemented a plan called Operation Bodyguard to convince the Axis powers that the D-Day invasion would cross the English Channel at Calais, France. For this to work, it had to be completely believable. Any flaw in the deception could have tipped off the Axis and potentially disrupted the entire D-Day invasion.
Things are similar in the world of cyber, as the best deception campaigns appear realistic. This is critical because once an attacker discovers the deception, they can change their behavior in unexpected ways. Subtle, planned deception is often preferable to obvious and noisy tactics.
Furthermore, cyber deception creates an environment that appears real and normal to the attacker, but that includes noisy obstacles. The attacker may evade some or even most of these obstacles, but they only need to bump into one and we gain the OODA advantage. Done well, this flips the old maxim: to avoid detection, the attacker has to be right 100% of the time, and defenders need to be right just once. It's finally time to take back the advantage.
To learn more about using cyber deception to give defenders the OODA advantage, visit Kevin at SANS Minneapolis 2018 in June, where he will deliver the keynote address on this topic. In early 2019, look for a new SANS course on cyber deception.
About the author: Kevin Fiscus is a principal instructor with the SANS Institute and the founder of, and lead consultant for, Cyber Defense Advisors Inc. At Cyber Defense Advisors, he performs security and risk assessments, vulnerability and penetration testing, security program design, cyber deception campaign design, policy development, and security awareness with a focus on serving the needs of small and midsize organizations. Kevin has over 28 years of IT experience and has focused exclusively on information security for the past 17.