Cognitive Biases in Hacking

Introduction

Recently, while reading about cognitive biases, it occurred to me that I was guilty of many of them in my own hacking process. In this post I will describe a few of the common biases that may creep into your thought process, with examples for each. Note that some of these biases are arguably subtypes of confirmation bias - I have included them separately anyway, as it makes them easier to illustrate. Anyway, here we go!


Anchoring Bias

Anchoring bias is an overreliance on the first piece of information you receive, which then becomes the reference point against which everything else is judged. You may have started out with a particular methodology as a hacker, and now refuse to expand on or deviate from your tried-and-tested method - an example of anchoring bias preventing you from evolving your skillset. Likewise, just because you see that a bounty paid $100,000, you should not assume that the vulnerability was necessarily complex. The first piece of information (the bounty amount) may mislead you into believing that the rest of the information is similarly extreme.


Availability Heuristic

The availability heuristic describes behaviour where individuals lean too heavily on the information that is most readily available to them. When you approach a new program, you may conduct some initial research into vulnerabilities that have already been identified. However, this can lead to an overemphasis on existing classes of issues. This is very visible on TikTok's bug bounty program, where the availability heuristic has caused a proliferation of XSS findings. Of course, this can be an efficient way to approach a target, but you should be aware of the risk of missing other, more severe issues.


Bandwagon Effect

The bandwagon effect describes behaviour where an individual conforms to the actions of a group. This is particularly obvious in the way vulnerability classes fall in and out of fashion. Older vulnerability classes still exist in plenty of modern software, but few people hunt for them because they are no longer "in fashion". You may find yourself hunting for newer, trendier issues like web cache deception rather than older yet still prevalent ones such as insecure deserialization.
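
To make the point concrete, here is the classic shape of an insecure deserialization bug - a minimal Python sketch using pickle. The vulnerable pattern looks different in every language and framework, and the function and class names here are purely illustrative:

    import pickle
    import os

    # The classic vulnerable pattern: deserializing attacker-controlled bytes.
    def load_session(raw_cookie: bytes):
        # Vulnerable: pickle.loads on untrusted input allows code execution.
        return pickle.loads(raw_cookie)

    # Why it is dangerous: pickle invokes __reduce__ when unpickling, so a
    # payload can make it call any function with attacker-chosen arguments.
    class Payload:
        def __reduce__(self):
            return (os.system, ("id",))

    malicious_cookie = pickle.dumps(Payload())
    # load_session(malicious_cookie) would execute `id` on the server.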


Confirmation Bias

This is probably the most common bias in the hacking process. As a refresher:

Confirmation bias is the tendency to seek out information that confirms one’s existing beliefs or hypotheses.

You can see this bias taking shape when you are too lazy to test some endpoints for access control issues, just because all of the endpoints you have tested so far turned out to be secure. You are assuming, without evidence, that the ones you have yet to test are secured as well. This assumption applies far beyond access control, and it is probably the bias I am most guilty of. Even as recently as last week, I was able to uncover several serious issues with pomme, precisely because most people assumed that the program was already well-tested and secure.
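
One antidote is to let a script, rather than your assumptions, decide which endpoints have actually been tested. A minimal sketch in Python - the base URL, endpoint list, and token are all hypothetical placeholders:

    import requests

    # Hypothetical base URL and endpoint list - substitute your own recon output.
    BASE_URL = "https://target.example.com"
    ENDPOINTS = ["/api/users/1", "/api/orders/1", "/api/admin/settings"]

    # A session authenticated as a low-privilege user (the token is a placeholder).
    HEADERS = {"Authorization": "Bearer LOW_PRIV_TOKEN"}

    for path in ENDPOINTS:
        resp = requests.get(BASE_URL + path, headers=HEADERS, timeout=10)
        # Anything other than 401/403 deserves a manual look - never assume an
        # endpoint is secure just because its neighbours were.
        if resp.status_code not in (401, 403):
            print(f"[?] {path} returned {resp.status_code} - review manually")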


Gambler’s Fallacy

The gambler's fallacy is the tendency to assume that past outcomes influence future outcomes in a random event, even when there is no statistical link between the two. An endpoint that has been secure for the past five years is not guaranteed to be secure tomorrow - someone may push some bad code at any time.
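
In practice, this is an argument for periodically re-testing "known secure" functionality. A minimal re-check sketch, assuming a hypothetical endpoint and an unauthenticated probe as the original test:

    import requests

    # Hypothetical endpoint that was confirmed secure during a previous test.
    URL = "https://target.example.com/api/export"

    # Re-run the original unauthenticated probe; last year's result says
    # nothing about today's deployment.
    resp = requests.get(URL, timeout=10)
    if resp.status_code == 200:
        print("[!] Endpoint now responds without auth - possible regression")
    else:
        print(f"[-] Still locked down ({resp.status_code})")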

Edit: pmnh made a great point that this fallacy can also apply in the other direction - you should not assume that you will have a great month of findings next month just because you had three good months in a row. This, of course, treats bounties as a "random event" - which they are not entirely, as there is skill involved - but chance is definitely a factor, so the idea is worth entertaining.


Hindsight Bias

People tend to overestimate their own abilities when looking back at events that have already occurred. For example, you may read a bug report and think "I could have found this easily", when in reality it may have required a significant amount of skill to find. Luck always plays a part, of course, but everything looks easier in hindsight.


Sunk Cost Fallacy

The sunk cost fallacy is the tendency to keep investing in something because of the time or effort already spent, rather than because of its expected future payoff. The previous biases might seem to encourage blind persistence; this one is the counterweight. You may spend five hours testing something, find nothing, and still convince yourself that a bug is just around the corner - leading you to waste another five hours on a secure feature. Once you have tested a feature thoroughly enough that you are satisfied you cannot provoke any interesting behaviour from it, move on. That said, there is often application-familiarity value in deep-diving, so the decision to move on should be made deliberately.


Conclusion

To conclude, it is important to examine your own behaviour for cognitive biases, as they may be holding you back on the road to becoming a better hacker. I hope this post was insightful or helpful in some way.
