List of InfoSec Cognitive Biases

Wed, Apr 15, 2020 7-minute read

The mind is an incredibly complex organ. While all of us attempt to be mostly logical and rational in our day-to-day thought processes and decision making, we are hampered by an enormous number of cognitive biases. Cognitive biases are specific natural tendencies of human thought that often result in irrational decision making, and there are hundreds of them. Everybody has them and is impacted by them – it is only through awareness that you can take steps to counteract them.

One of my favourite examples is Loss Aversion. Imagine a game that costs $100 to enter. Most folks would decline to play this game if they had a 49% chance of losing their money and a 51% chance of doubling it. A purely rational decision maker would play this game as often as they could, because the expected payoff is positive – a quick calculation follows below.
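To make the numbers concrete, here is a minimal back-of-the-envelope sketch; the $100 stake and the 49% / 51% odds come straight from the example above:

```python
# Expected value of the $100 game described above:
# a 51% chance of doubling your money (net +$100) and a 49% chance of losing it (net -$100).
stake = 100
p_win, p_lose = 0.51, 0.49

expected_value = p_win * stake + p_lose * (-stake)
print(f"Expected value per play: ${expected_value:.2f}")  # roughly $2.00 in the player's favour
```

On average each play is worth about $2, so the rational move is to keep playing; loss aversion is what makes most of us walk away anyway.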

The key realization is that being aware of biases helps you limit how much impact they have on your decision making.

InfoSec Cognitive Biases

Like every other avenue of human thought, the Information Security community is impacted by cognitive biases. While many traditional cognitive biases apply directly to the Information Security context, there are plenty that are unique to our space and are worthy of additional awareness.

This post started as a thread on Twitter, and with the participation of several folks has become quite a useful list of thinking patterns to be careful about when making decisions in the realm of Information Security. Thank you to @passingthehash, @gdbassett, @kjstillabower, @joonatankauppi, @marshray, and @mrjbaker for your contributions!

One important point is that a cognitive bias is completely different from being factually incorrect. A cognitive bias represents a flawed mode of thinking, not a flawed thought. For example, a website that disallows special characters in a user name is a decision, not a bias. While a cognitive bias may have been involved in arriving at a factually incorrect decision, the decision itself is not the bias.

Do you have any biases you find common or unique to the security industry? Please comment below and I’ll add them!

Absolutism bias

Description: The tendency to undervalue mitigations that offer less-than-ideal security but still significantly reduce risk and harm
Example: Criticizing users or applications that use SMS-based two-factor authentication, despite the alternative often being no two-factor authentication at all.

Actor bias

Description: The tendency to include actor / operator intent in evaluating the security of a system
Example: Designing cryptographic master keys that allow “the good people” to decrypt private data of “the bad people”.

Anchoring bias

Description: The tendency to let early facts you learn in a security investigation overly influence your decision-making process
Example: Dismissing an attack as “just drive-by ransomware”, missing attackers that use ransomware to burn infrastructure after a much more damaging intrusion.

Authority bias

Description: The tendency to overvalue the opinions that an expert in one domain has about an unrelated domain
Example: Computer Security experts discussing geopolitical events.

Availability bias

Description: The tendency to focus on applications or systems that are recent, nearby, or under active development
Example: Doing deep security analysis of a new buildout at corporate head offices, while systems at an acquired branch office go unpatched.

Bandwagon bias

Description: The tendency to assign excessive merit to a behaviour or technology because others have adopted it, or because it has historically been done that way
Example: Websites that prevent copy + paste of passwords, even though this makes them difficult to use with password managers.

Burner bias

Description: The tendency to overestimate one’s ambient risk when at industry events, or to adopt some security practices only at those events
Example: Only using a VPN or being suspicious of ATMs while at Black Hat / DEF CON.

Capability bias

Description: The tendency to overvalue the defensive impact of mitigating a published attack when viewed in the context of an adversary that can adapt
Example: Blocking PowerShell on a server, while still allowing arbitrary unsigned executables.

Commutative bias

Description: The tendency to undervalue the likelihood of an attack that only requires the linking of two or more highly-likely events
Example: Thinking that an internal system is highly protected, despite everybody in the company having access to it - and phishing campaigns industry-wide having nearly a 100% success rate. A quick illustration of the arithmetic follows below.
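As a rough sketch of why chaining “likely” events doesn’t make an attack unlikely (the specific probabilities below are illustrative assumptions, not figures from the post):

```python
# If a phishing campaign succeeds against at least one employee 95% of the time,
# and 99% of employees can reach the "protected" internal system, then the chained
# probability of an attacker reaching that system remains very high.
p_phish_success = 0.95  # illustrative assumption
p_has_access = 0.99     # illustrative assumption

p_attack_path = p_phish_success * p_has_access
print(f"Probability of the combined attack path: {p_attack_path:.0%}")  # ~94%
```

Linking two near-certain events still yields a near-certain outcome, which is exactly what this bias tends to discount.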

Domain bias

Description: The tendency to focus on risks and solutions closely related to one’s domain of expertise, rather than all risks to a system
Example: Cryptographic experts adding hardware security modules to an architecture, despite pressing application and network security weaknesses.

Endorsement bias

Description: The tendency to place trust in systems or mechanisms whose only barrier to entry is the ability to pay
Example: Making security decisions based on “signed code”, despite code signing certificates being available to anybody for $85.

Environment bias

Description: The tendency to undervalue risks that emerge when a system is analyzed against minor changes to its threat model
Example: Useful “find my phone” applications that become weapons in the context of domestic abuse.

Fatalism bias

Description: The tendency to think of a system as only compromised or not, without investing in post-breach processes and controls
Example: Threat modeling sessions that include the phrase, “well if they got in there, it’s game over.”

Headline bias

Description: The tendency to use the summary / headline of an event to understand risk, rather than working to understand mitigating conditions
Example: Mocking Linux for the CVE-2019-14287 “SUDO Backdoor”, despite most articles properly explaining the rare and nonstandard configuration that would lead to this being a security vulnerability.

High-profile bias

Description: The tendency to prioritize high-profile events in the media, rather than risks associated with the target environment
Example: Rushing to address CPU side-channel attacks, despite a large fleet of unpatched servers.

Hyperfocus bias

Description: The tendency to inconsistently evaluate the security of an application based on its unique capabilities
Example: Criticizing an application for a flaw in a security feature that no comparable application even implements.

Impact bias

Description: The tendency to require working proof of a weakness (or of its impact) in a system before sufficiently accounting for its risk
Example: An unmitigated SQL injection bug that doesn’t get fixed until you demonstrate the extraction of data.

Measurability bias

Description: The tendency to place inappropriate weight on the security of a system based on analysis of a measurable security property without regard to context
Example: Criticizing (or applauding) the cryptographic cipher strength used in a system, even when that use has no confidentiality or integrity impact.

More-is-better bias

Description: The tendency to believe that a measurable security setting continues to provide return on investment as that control is pushed further in the “more secure” direction
Example: Recognizing that never-expiring passwords might be a risk, so aggressively pursuing shorter and shorter password expiration durations.

Motivation bias

Description: The tendency to undervalue the risk to a system due to perceived lack of motivation of attackers to target that system
Example: Acknowledging a vulnerability yet dismissing the impact because attackers wouldn’t be interested - despite the existence of threat groups that scan the entire internet daily to compromise anything they find exposed.

Novelty bias

Description: The tendency to focus on mitigating the novel aspects of an attack, rather than the root causes and more core defensive mitigations
Example: Focusing on unique command-and-control mechanisms leveraged by an actor, rather than mitigating how they got access in the first place.

Obscurity bias

Description: The tendency to overvalue the security benefit of keeping implementation details secret
Example: Requiring security pen testers to engage in “black box” audits of applications, rather than providing access to source code.

Popularity bias

Description: The tendency to inconsistently evaluate the security of an application based on its popularity
Example: Criticizing a popular application for a security weakness that all comparable applications also exhibit.

Publicity bias

Description: The tendency to overestimate the soundness of a decision until subject to broader scrutiny
Example: Deciding to not fix a security issue, yet reversing that decision as management or the public learns about the risk.

Selection bias

Description: The tendency to make absolute security judgments based on a non-statistical observation of outcomes
Example: Evaluating the security of an application based on the number of CVEs reported on it without accounting for popularity or amount of focus given by security researchers.