Where we start by observing that neither our senses nor our interpretation machinery are designed to capture reality as it is, that we are instead designed to make “useful” mistakes, and where we end up realising that scientists are stupid (using a strict Cipolla definition of stupidity).

My starting point is the least expected place: a public monograph available on the CIA website (!?). The full citation is: Heuer Jr., R. J. (1999). Psychology of Intelligence Analysis. Lulu.com.

Of all possible sources, I’m using this because it has a nice, plain-English description and discussion of “Cognitive Bias”: it explains intuitively why cognitive bias happens, and why it is a very big problem when one wants to understand reality as it truly is (as CIA analysts should). Here is the crucial conclusion:

Cognitive biases are similar to optical illusions in that the error remains compelling even when one is fully aware of its nature. Awareness of the bias, by itself, does not produce a more accurate perception. Cognitive biases, therefore, are exceedingly difficult to overcome.

This further extends the concern explained in my previous post: there I concluded that pure thought can easily lead one to make systematic, undetected mistakes, and decided that I ought to try to minimise this danger. The CIA monograph tells me that cognitive bias is “exceedingly difficult to overcome”, even when you are fully aware of it. Ouch, this is not going to be easy. I will look at the “CIA approved” catalogue of cognitive biases in a future post, but for now I’m more interested in exploring why they exist. As a shorthand, I’ll provisionally conclude that the evidence tells us that we don’t see things as they are, but apply useful heuristics instead.

This isn’t surprising: at some level we all know it, and it is easily established empirically. However, for reasons that I’ll start discussing below, it is also somewhat ignored by classic thinking in both the evolutionary and economics fields. Artem Kaznatcheev and colleagues over at EGG have been exploring this sort of “blindness” for quite a while. I could link to many posts, but will start with the one that helped clarify my own thoughts; it’s entitled “Interface theory of perception can overcome the rationality fetish”. If you find my own explorations somewhat amusing, I would strongly encourage you to explore EGG in depth (especially if maths doesn’t scare you).

Anyway, I’m indebted to Artem for many reasons (I also wish to note how the whole EGG team splendidly exemplifies the power of open science), and I especially like the post linked above because it uses the “rationality fetish” expression; you’ll see why as I proceed. More to the point, Artem’s essay shows how error (intended here as a faulty/misleading/incorrect representation of reality) creeps into two domains: straightforward perception and cognitive representations.

The first bit actually refers to Hoffman’s Interface Theory of Perception [see Hoffman, D. D. (2009). The interface theory of perception. In: Dickinson, S., Tarr, M., Leonardis, A., & Schiele, B. (Eds.), Object categorization: Computer and human vision perspectives. Cambridge University Press, Cambridge.]: the way I understand it, the basic idea is that organisms perceive the world in a way that maximises fitness, and that usually implies hiding the complex aspects of reality and building representations that are useful, not truthful. Hoffman uses the term “interface” in the sense of the “user interface” of computer software. In the case of software (and I know a thing or two about this!) the things you see on the screen are designed to allow you to do your thing in the easiest possible way, and it is fair to say that the function of the interface is to hide the complexity under the hood. This is almost always true for software, but I’m not entirely convinced that it applies to the evolution of perception; to know why, please refer to my comment on Artem’s post here. Anyway, for the purpose of this post, I don’t need to discuss my perplexities in detail; it is enough to note that it’s quite reasonable to say that “the function of perception is to maximise fitness, not to model reality faithfully, so that systematic mistakes and approximations are the norm, not the exception”.
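
To give a feel for how “useful beats truthful” can happen, here is a minimal toy simulation. It is my own construction, not one of Hoffman’s actual models, and it rests on two assumptions of mine: fitness is a non-monotonic function of a resource (too little or too much is bad), and the “truthful” agent follows the simple heuristic of preferring more of the resource. The “interface” agent perceives nothing but a fitness-tuned signal, yet it systematically wins:

```python
import random

def fitness(x):
    # Non-monotonic fitness: both scarcity and excess of the
    # resource are bad; the sweet spot is at x = 0.5.
    return max(0.0, 1.0 - abs(x - 0.5) * 4)

def pick_truth(a, b):
    # "Veridical" agent: sees the true quantities and prefers more.
    return a if a > b else b

def pick_interface(a, b):
    # "Interface" agent: perceives only a fitness-tuned signal,
    # with no access to the underlying true quantities.
    return a if fitness(a) > fitness(b) else b

truth_score = interface_score = 0.0
for _ in range(100_000):
    a, b = random.random(), random.random()  # two options on offer
    truth_score += fitness(pick_truth(a, b))
    interface_score += fitness(pick_interface(a, b))

print("truth-seeing agent:", round(truth_score))
print("interface agent:   ", round(interface_score))  # reliably higher
```

The interface agent knows strictly less about reality, but what it does perceive is tuned to what matters for fitness, so selection would favour it.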

Interestingly, the same applies to indirect interpretations of reality: by this I mean the understanding of complex interactions, or our internal representation of phenomena that can be modelled successfully using derived concepts that are not direct representations of physical reality (the typical example is how we “understand” people by interpreting their intentions). Once again, Artem comes to my rescue, with another post: “Evolving useful delusions to promote cooperation”. The title says it all, right? There is plenty of maths in Artem’s article, so I’ll try to summarise it in plain English (corrections are welcome!).

Classic game theory revolves around a basic principle: given a certain set of mathematical rules that define the interaction between two or more players, one can usually work out the optimal strategy for each player. In other words, by modelling somewhat simplified versions of real interactions (in Artem’s case, using a model that describes the possible variations of the classic Prisoners’ dilemma game), maths can show us the supposedly “rational” behaviours for the different players. It’s a powerful approach, and a frequently misused one as well.
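
To make the “rational” baseline concrete, here is a minimal sketch of the one-shot Prisoners’ dilemma logic. The payoff values are the conventional textbook ones (temptation 5, reward 3, punishment 1, sucker 0), my choice for illustration rather than the values used in Artem’s model: whatever the opponent does, defecting pays more, so defection is the mathematically “rational” (dominant) strategy.

```python
# One-shot Prisoners' dilemma with conventional textbook payoffs.
# Each entry maps (my move, opponent's move) -> (my payoff, their payoff).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs temptation
    ("defect",    "cooperate"): (5, 0),  # temptation vs sucker's payoff
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

# Whatever the opponent plays, defecting yields the higher payoff,
# which is what makes defection the dominant ("rational") strategy.
for opponent in ("cooperate", "defect"):
    c = PAYOFF[("cooperate", opponent)][0]
    d = PAYOFF[("defect", opponent)][0]
    print(f"vs {opponent}: cooperate={c}, defect={d} -> defect is better: {d > c}")
```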

What is important about Artem’s effort (with Marcel Montrey and Thomas Shultz) is that they produced a game model that allows actors to maintain and modify their own, idiosyncratic representation of reality (in this case, each player acts based on their own evaluation of the actual rules that govern the game, and their experience at each round shapes and updates that evaluation), and they have shown how and when the players in an arena will develop representations that do not match the real rules at all. Crucially, what may happen in certain circumstances is that all players will develop towards a representation of the game that favours collaboration, even when the “best” mathematically defined strategy would be to always “cheat”.
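
To make the mechanism tangible, here is a deliberately crude sketch in the same spirit; it is not Artem, Marcel and Thomas’s actual model. My assumptions: each agent carries its own subjective copy of the four Prisoners’ dilemma payoffs, plays whatever its subjective game recommends, earns objective fitness from the true payoffs, and reproduces in proportion to that fitness, with some probability r of being paired with a copy of itself (a crude stand-in for the structured interactions of the real model). With enough assortment, agents whose beliefs misrepresent the game as one where cooperation wins end up taking over:

```python
import random

# Objective Prisoners' dilemma payoffs (same textbook values as above):
# R = reward, S = sucker, T = temptation, P = punishment.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def decide(belief):
    """Play the subjectively dominant strategy; if neither action
    dominates in the agent's head, best-respond to an assumed
    50/50 opponent."""
    bR, bS, bT, bP = belief
    if bR > bT and bS > bP:          # cooperation looks dominant to me
        return "C"
    if bT > bR and bP > bS:          # defection looks dominant to me
        return "D"
    return "C" if (bR + bS) > (bT + bP) else "D"

def true_payoff(me, other):
    """Objective payoff to `me`, regardless of what anyone believes."""
    return {("C", "C"): R, ("C", "D"): S,
            ("D", "C"): T, ("D", "D"): P}[(me, other)]

def evolve(pop_size=200, generations=300, r=0.9, noise=0.3):
    # Everyone starts with an accurate representation of the game.
    pop = [[R, S, T, P] for _ in range(pop_size)]
    for _ in range(generations):
        fitness = []
        for belief in pop:
            # With probability r I meet a copy of myself (assortment),
            # otherwise a random member of the population.
            partner = belief if random.random() < r else random.choice(pop)
            fitness.append(true_payoff(decide(belief), decide(partner)))
        # Fitness-proportional reproduction, mutating the *beliefs*.
        pop = [[x + random.gauss(0, noise)
                for x in random.choices(pop, weights=fitness)[0]]
               for _ in range(pop_size)]
    return sum(decide(b) == "C" for b in pop) / pop_size

print("fraction cooperating with assortment (r=0.9):", evolve(r=0.9))
print("fraction cooperating without it      (r=0.0):", evolve(r=0.0))
```

The point is not the specific numbers: it is that selection acts on the fitness consequences of a belief, not on its accuracy, so a “delusion” that makes cooperation look dominant can be the winning representation.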

In my interpretation, both domains (perception and representation, or Interface Theory and Artem’s game-theory simulation) are examples of how natural selection produces systems that maximise fitness: to do so, they represent the features of reality that happen to be useful, obscuring both what is real but irrelevant and what is real but counter-productive. The representation is also a classic case of a self-sustaining cognitive attractor: when agents drift towards cooperation, the game changes so that the best behaviour is to keep the magic alive and keep cooperating. The players find themselves in a world that really is different from what the original rules would suggest to a naive, “a priori” observer. Artem’s model is, in other words, a mathematical representation of the mechanism that allows self-sustaining cognitive attractors to shape our experiences.

Perhaps unsurprisingly, both theoretical efforts require us to re-evaluate what we consider to be an error. In the case of a Prisoners’ dilemma arena where most players drift toward cooperation even though the payoff rules indicate that the rational strategy is to defect, agents are actually maximising their fitness (or profit), and they do so because they evolved a misrepresentation of the actual rules. Is this an error? If your aim is to find out what the rules are, then yes, it is. But the players are designed to maximise their fitness; their representations of what happens are instrumental, not the final goal. Under these circumstances, taking the “rational” stance (note the scare-quotes: rational here means the fetishistic, mathematically derived “best strategy”) and always defecting actually minimises fitness. It means that avoiding the “rational” choice, given the actual purpose, is not an error at all.
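
A quick back-of-the-envelope check, using the toy numbers from my own sketch above (my assumptions, not Artem’s): with assortment r = 0.9, a “deluded” cooperator in a mostly cooperative arena expects roughly 0.9 × 3 + 0.1 × 3 = 3 per round, while a lone “rational” defector expects 0.9 × 1 + 0.1 × 5 = 1.4, because it mostly meets copies of itself and collects the mutual-defection payoff. The mathematically “optimal” strategy is, in this arena, the losing one.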

This is a cute thought, and something most of us will find easy to recognise at some deep, intuitive level: the artificial arena settings devised in Artem’s experiment provide a distilled case where acting “rationally” is the stupid option. Once again, note the scare-quotes around “rationally”: they are there to mean the “mathematically calculated optimal strategy”, or what looks like it when one doesn’t account for the full complexity of the arena. Also, “stupid” here follows Cipolla’s definition in a literal way: acting “rationally” is stupid because it does not produce the desired effect.

The corollary observation is that scientists who insist on this interpretation of “rationality”, and keep considering the wrong strategy “rational”, are actually being stupid (in Cipolla’s sense). They get fooled by maths, and fail to see that the lovely equations they use to calculate the optimal strategy are just plainly wrong: wrong because they don’t show the optimal strategy, and they fail to do so because they don’t capture the full complexity of the arena. Specifically, they fail to represent its evolutionary potential. In this specific case, the internal representations that the players develop are simply less wrong than the standard mathematical solution, and in a deeper way: they represent the situation on the field, not the actual rules of the game, and they faithfully model how the game actually develops.

For me, the implications are overwhelming, and I will have to spend quite some time thinking/writing about them. To finish off this post, I’ll list a few implications, and hopefully discuss them in the future:

- Mathematical models are tricky: applying objectively true mathematical theory to reality is a risky business. If one doesn’t capture/represent the relevant aspects of reality (in the game-theory case, the standard approach does not represent the ability of players to learn from experience [Update, following the kind correction in the comments] consider instead the fact that all players have their own pre-existing bias, and note that the standard approach does not account for indirect effects across multiple individuals and across generations [/Update]), the whole effort will look as rigorous as it gets, but will still be wrong and/or meaningless. This is a dangerous kind of error, and I suspect it’s quite common in the science business.
- The “frame problem” (I will explain what it is in a dedicated post) may be a meaningless issue. We don’t have a mysterious ability to understand how to interpret reality correctly: we are in the business of extracting information that allows us to function, and that doesn’t require building faithful representations at all.
- I can’t trust my own perceptions, and neither can I trust my own most profound “understandings”. They may well be useful in my everyday life, but that doesn’t mean they are faithful representations of what is really out there.
- My own assumption, that “understanding reality as faithfully as possible is a good thing to do”, may be wrong after all. All of the above seems to indicate that it is unnecessary and sometimes counter-productive.

Once more, the outlook is pretty bleak! But hey, no one said it would be easy…