Minimizing Cognitive Bias in Cybersecurity

Availability bias is the human tendency to treat examples that come readily to mind as more representative than they actually are, which can hamper critical thinking and, as a result, the validity of our decisions.

Guest Op-Ed by Margaret Cunningham, Principal Research Scientist, Forcepoint

It is 7am, and ten minutes into the morning you have already dismissed the first round of notifications littering your screen.

Respond, dismiss, delete, save, and keep going.

It is impossible to escape the endless stream of decisions required to get through the day.

While some people try to minimize unnecessary decision-making (Steve Jobs’ daily uniform being a famous example), escaping it entirely is exceptionally difficult.

For instance, research shows that the average person faces upwards of 200 decisions about food alone every single day.

Choosing an outfit and deciding on an entrée are not life-changing decisions, but the mental energy to make these selections comes from the same finite reservoir as the energy needed to make much larger, more important decisions.

The human brain has many different goals and purposes, one of which is to conserve energy. One way brains conserve energy is to use mental shortcuts or heuristics.

These rules of thumb allow people to do many complex tasks, but when we take a closer look at how they impact our decisions, we find that taking shortcuts can lead to cognitive bias and reasoning errors.

The impact of cognitive biases knows no boundaries, and cybersecurity decisions at all levels of an organization are shaped by bias.

Let’s look at three common cognitive biases, explore how they impact different areas of cybersecurity decision making, and finally identify strategies for mitigating the negative impact of cognitive bias.

Priorities, people, and purchases

Building a strategy to protect against cyber threats requires understanding and prioritizing efforts to address existing or potential threats.

Availability bias impacts what agency leaders and cybersecurity experts perceive as high priority threats. The more often someone encounters specific information, the more readily accessible it is in their memory.

If the news is full of privacy breaches carried out by foreign adversaries, that type of threat will be top-of-mind, which may drive leaders to overestimate the likelihood of being targeted with such an attack.

In reality, reports seen on the news may not even apply to their industry or may be extreme outliers (hence their newsworthiness). Still, availability bias may lead them to home in on potential outside threats, perhaps at the expense of more urgent internal ones.

Another challenge for cybersecurity professionals is identifying user characteristics that pose the greatest risk to an organization’s information system.

Grouping people together based on specific characteristics or attributes can be both convenient and effective, but it also introduces the risk of representativeness bias.

Representativeness bias occurs when we erroneously group people (or other things) together based on qualities that are considered normal or typical for that group.

For instance, if you made the statement, “older people are riskier users because they are less technologically savvy than their younger counterparts,” you would likely observe affirmative nods from around the room.

However, when we take a closer look at the numbers in current research, we find that younger people are actually far more likely to share passwords and they often reuse the same ones across domains.

If sharing a streaming-service login ultimately means sharing banking or corporate credentials because the same password is reused everywhere, the younger user is far riskier than the older one.

Fear, uncertainty, and doubt (FUD) also impact decision making, and as an industry, cybersecurity thrives on using negative language that highlights risk or loss in marketing materials.

This works because the way information is presented, or framed, shapes our purchasing decisions. The framing effect is a cognitive bias in which the wording of a choice influences the decision.

When options are framed as gains, people gravitate toward positive, “sure thing” outcomes; when the same options are framed as potential losses, they become more willing to choose riskier, or more expensive, options. In Tversky and Kahneman’s classic experiments, identical outcomes drew opposite choices depending on whether they were described in terms of lives saved or lives lost.

The outcome of the framing effect in the cybersecurity industry is that decision-makers may choose overkill solutions that address specific, low-probability risks.
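To make the arithmetic concrete, here is a toy comparison, with all figures invented purely for illustration, of a certain spend on an all-in-one product versus the expected loss of going without it:

```python
# Toy figures, invented for illustration; not drawn from any real product.
sure_cost = 100_000        # certain annual cost of the "overkill" solution
breach_probability = 0.05  # assumed yearly chance of the specific incident
breach_cost = 1_500_000    # assumed cost if that incident actually occurs

# Expected loss without the product: 0.05 * 1,500,000 = 75,000
expected_loss = breach_probability * breach_cost
print(expected_loss < sure_cost)  # True: the gamble is cheaper in expectation

# Yet a loss-framed pitch ("you could lose $1.5M!") makes the certain spend
# feel like the safe choice, nudging buyers toward the pricier option.
```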

While all-or-nothing security may seem like a sure thing, and a way to avoid risks, bloated solutions can negatively impact employees’ ability to actually do their jobs.

People are at their most resilient and creative when faced with barriers or security friction, and imaginative workarounds to poorly chosen security solutions may end up being riskier than the original perceived threat.

Minimizing bias

These biases are only a small sample of how the cybersecurity industry is shaped by human decision making.

To address the impact of cognitive bias, we must focus on understanding people and how they make decisions at both the individual and organizational levels within the cybersecurity industry.

This means raising awareness of common cognitive biases across agencies and within our security teams to better identify situations where critical decisions are susceptible to the negative impact of mental shortcuts.

Beyond awareness, analytics can also help remedy some cognitive biases and their accompanying concerns.

For instance, instead of relying on anecdotal or stereotypical assumptions that certain user groups are riskier than others, behavioral analytics can build a data-driven understanding of human behavior and risk, as the sketch below illustrates.
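As a minimal sketch of what that can look like, assuming hypothetical login-count data and a simple z-score metric (nothing here reflects any vendor’s actual model), risk can be scored against a user’s own baseline rather than against a demographic stereotype:

```python
from statistics import mean, stdev

def risk_score(baseline: list[float], today: float) -> float:
    """Z-score of today's activity against this user's own history.
    Larger absolute values indicate more anomalous behavior."""
    mu, sigma = mean(baseline), stdev(baseline)
    return 0.0 if sigma == 0 else (today - mu) / sigma

# A user who normally authenticates ~10 times a day suddenly does so 40 times.
history = [9.0, 11.0, 10.0, 12.0, 8.0, 10.0, 11.0]
print(round(risk_score(history, 40), 1))  # ~22.2, anomalous enough to review
```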

The ability to apply security controls without relying on broad, inflexible rules (whether for a specific group or for an entire agency) also counters the overkill solutions that can seem appealing due to availability bias and framing effects.
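A hypothetical continuation of the earlier sketch shows how controls could tighten gradually with an individual’s observed score instead of through one blanket rule for everyone (tier names and cutoffs are invented for illustration):

```python
def controls_for(score: float) -> list[str]:
    """Map an individual's anomaly score to graduated controls rather than
    applying a single all-or-nothing rule to every user."""
    if score < 1.0:   # close to the user's own baseline: stay out of the way
        return ["standard access"]
    if score < 2.5:   # moderately anomalous: add friction, not a lockout
        return ["standard access", "step-up MFA", "session logging"]
    # highly anomalous: restrict this user, not the entire organization
    return ["read-only access", "step-up MFA", "alert the security team"]

print(controls_for(0.4))   # ['standard access']
print(controls_for(22.2))  # ['read-only access', 'step-up MFA', 'alert the security team']
```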

The bottom line is that cognitive biases shape our cybersecurity decisions from the keyboard to the boardroom, and these decisions ultimately determine the effectiveness of our cybersecurity solutions.

By improving our understanding of biases, it becomes easier to identify and mitigate the impact of flawed reasoning and decision-making conventions. More tangibly, understanding user behavior at the individual level can also help minimize the degree to which cognitive biases drive an agency’s security posture.


About the Author


Dr. Margaret Cunningham is a Principal Research Scientist for Human Behavior within Forcepoint X-Labs.

Previously, Margaret provided Human Systems Integration support for the Department of Homeland Security Science & Technology Directorate and the U.S. Coast Guard, as well as in the healthcare domain.

Her areas of expertise include technology development and acquisition, technology integration, operational testing and assessment, and threat modeling.

In her current role at Forcepoint, she serves as the behavioral science subject matter expert in an interdisciplinary security team driving the development of scalable human-centric security solutions.