Bounded rationality

When individuals do pay attention, it is not always the case that “individuals know their needs and desires best” (Miller, 2020). Behavioral scientists such as Richard Thaler and Cass Sunstein in the book “Nudge” (Thaler, Sunstein, 2009), and Daniel Kahneman and Amos Tversky in their work on Prospect Theory (Kahneman, Tversky, 1979), provide a more realistic picture of how real decision making operates. Human rationality as a market participant operates under limited conditions and is more accurately labeled “bounded rationality”. Psychologist Daniel Kahneman presents an overview of the findings of behavioral economics through a biological picture of cognition and its biases in the book “Thinking Fast and Slow” (Kahneman, 2011). The same biases are explored and linked to their causes and origins in neuroanatomy, endocrinology and social feedback interactions over different timescales in Robert Sapolsky’s “Behave” (Sapolsky, 2017). This section summarizes, from these sources, the deviations from rational decision making in human cognition due to systematic bias, competing decision systems and discounted risk perception.

Cognitive bias

Cognitive bias here is defined as any systematic error of judgement from the expected normal or rational response. Biases in their numerous forms share similar causes: the limits of information processing in the human brain, and the difference between the subjective reality experienced inside the individual’s brain and the outside reality. The consequences of biases can impose costs on the individual, on others or on both. Biases are diverse - logical fallacies, discrimination against others, comparisons, selective memory and attention, and attunements of optimism or pessimism keyed to unrelated markers. Some biases can be thought of as imprecise, shortcut heuristics that work well in practice most of the time but carry a high residual error. Others appear to be maladaptive artifacts of the biological reality of cognition. Decisions in a self-contained biological system are conditioned on a subjective local reality, and biases are a reminder that this local reality is only a crude approximation of the real global one. This subjective reality bias takes several forms, such as priming and framing.

Priming and framing

Priming and framing can alter how an individual makes a decision by shifting how information is presented. Priming is the tendency of information from an exposure just before a decision to leak into that decision. Anchoring is a type of priming where exposure to a number influences the range of options considered in preparing a response. An example of the anchoring effect is the difference in responses to the question “how old do you think I am?” when, just before the question, the person was presented with an unrelated fact: “this year Marco Rubio turns 49” or “this year Justin Bieber turns 26”. The responses are likely to be higher in the former case and lower in the latter, even though the facts have nothing to do with the actual question. Framing is a variation in how a proposal is worded or presented that alters the level of support for it, as in the following two questions.

  1. Would you accept a gamble that offers a 10% chance to win $95 and a 90% chance to lose $5?

  2. Would you pay $5 to participate in a lottery that offers a 10% chance to win $100 and a 90% chance to win nothing? (Kahneman, 2011, Thinking Fast and Slow)

Most respondents prefer the second framing, even though the two proposals are mathematically identical.
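
That the two framings describe the same gamble can be checked directly. The sketch below (the `expected_value` helper is ours, not from the source) computes the expected value of each framing from its probability-payoff pairs.

```python
# Both framings of Kahneman's example describe the same gamble:
# the outcomes and probabilities are identical, only the wording differs.

def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Framing 1: a gamble with a 10% chance to win $95 and a 90% chance to lose $5.
gamble = [(0.10, 95), (0.90, -5)]

# Framing 2: pay $5 up front for a 10% chance at $100, 90% chance at nothing.
# Net payoffs: 10% chance of 100 - 5 = 95, 90% chance of 0 - 5 = -5.
lottery = [(0.10, 100 - 5), (0.90, 0 - 5)]

print(expected_value(gamble))   # identical expected values, $5 each
print(expected_value(lottery))
```

The rational response would therefore be the same to both questions; the observed difference in uptake is purely an effect of the framing.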

Cognitive bias is not an exception; it is ubiquitous, and careful reflection on one’s past decisions is likely to confirm that no one is immune. Bias is not uniform, and some biases can be mitigated or even eliminated with greater awareness and attention. This deliberate cognitive effort, however, covers only a fraction of the totality of daily decisions - the “tip of the iceberg” - while many decisions fly under the radar, governed by an autopilot system. The interaction of these two systems has implications for the frequency and consequence of bias, and its accumulated lifetime consequences.

Two decision makers - the fast and the slow system

Kahneman presents decision making as the result of an interaction between two systems in the brain - the “fast” and the “slow” system. The fast and slow dichotomy is popular in neuroscience, and Sapolsky adopts the same framework (Sapolsky, 2017). Each system operates with its own neurological anatomy, adapted to different needs and circumstances, with its own advantages and disadvantages.

Fast - freeze, flight, fight

The fast system is the body’s alarm system. It is impulsive, imprecise and composed of primitive regions in the midbrain and hindbrain that are shared with reptiles and fish. The fast system is always on, operating continuously and subconsciously, nudging and guiding where attention is focused and in some cases triggering reflex actions. The fast system is also in tune with the emotional center, the amygdala, and has direct reach to nonverbal expressions that can be difficult to suppress; in some cases extremely fast micro-expressions leak signals of the fast system out of reach of the slow system (Sapolsky, 2017; Kahneman, 2011).

Slow - the executive center

The slow system is associated with conscious attention and the prefrontal cortex, which distinguishes humans and primates from other mammals. This system is associated with fine motor control, impulse control, and deliberation, whether number crunching or social calculus. The two systems operate together: the fast system alerts the slow system, focuses its attention and suggests options to weigh and deliberate on. The slow system then acts as an executive center that interrupts and decides between competing suggestions. The slow system is responsible for the impulse control to say “no”. Its vulnerability is that it is limited by attention: it functions like a cursor and can only process one item at a time, based on where attention is focused. Short-term memory provides relatively little capacity - the processing RAM of the slow system - and it is comparatively energy intensive, with noticeable responsiveness to blood glucose levels (Sapolsky, 2017; Kahneman, 2011).

The handicap of inequality - inhibited slow thinking

Both systems are vulnerable to cognitive biases, although the slow system can generalize and overcome a new bias on the fly so long as it has the appropriate awareness. The slow system can mobilize its impulse control mechanism to detect and interrupt biases it is aware of. The fast system can also overcome bias over time, through learning by repetition and practice. Stress, whether acute or chronic, can inhibit the function of the slow system, as can fatigue. In one study reported by Sapolsky, judges’ sentences showed a systematic bias based on the time since their last meal: the sentences were harsher and made fewer of the nuanced considerations associated with use of the slow system (Sapolsky, 2017). Throughout the book, Sapolsky makes the case that socioeconomic inequality can result in a systematic disadvantage throughout life, both in the way individuals are treated by others in a social environment and in the under-performance of their own decision making, through a range of mechanisms involving the short-term and long-term effects of stress on the brain (Sapolsky, 2017). One area of cognitive bias that can have long-term economic consequences for the individual, and can distort the function of markets, is risk perception bias.

Risk perception

Selecting the rational choice is easiest when the outcomes of the two cases are certain, but once probabilities are assigned a new form of bias emerges. Risk perception is a pervasive and unambiguous distortion of decision making from expected rational behavior. Daniel Kahneman and Amos Tversky documented the dynamics of human decision making under risk and developed a predictive model for its deviation from rationality under the name Prospect Theory (Kahneman, Tversky, 1979). One of the main insights is that subjective weights on probabilistic risks do not scale linearly with the actual probabilities. Instead, a perception function converts actual risk probabilities into subjective probabilities by applying a discount.

The discount follows a logarithmic scale rather than a linear one, in a manner similar to how other signals, such as light and sound, are processed. Other discount biases are applied as well - one example, “loss aversion”, is illustrated in Figure C.3, where the discount adds a bias that gives a systematically higher weight to avoiding a loss than to securing a gain (Kahneman, Tversky, 1979). Other examples of discounting involve sense of control, familiarity, smooth versus erratic changes, and present versus future.
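
These two distortions can be sketched with the functional forms Kahneman and Tversky later estimated for Prospect Theory. The parameter values below (alpha = 0.88, lambda = 2.25, gamma = 0.61) are the published median estimates from their 1992 cumulative version and are used here only as an illustration, not as part of this section’s sources.

```python
# Sketch of Prospect Theory's two components:
# a value function that is concave for gains and steeper for losses
# (loss aversion), and a probability-weighting function that
# overweights small probabilities and underweights large ones.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss of size x."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Subjective decision weight for an objective probability p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a $100 loss weighs more than a $100 gain.
print(value(100), value(-100))

# A 1% chance is overweighted; a 90% chance is underweighted.
print(weight(0.01), weight(0.90))
```

Running the sketch shows the asymmetry directly: the magnitude of `value(-100)` is more than double `value(100)`, and `weight(0.01)` comes out several times larger than 0.01 while `weight(0.90)` falls well below 0.90.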

Situations where one has a higher sense of control, such as fatalities while driving, receive a lower discount than uncontrolled risks like shark attacks or lightning strikes. Similarly, risks with erratic patterns receive a larger discount than smooth ones: earthquakes and natural disasters, occurring in unpredictable erratic patterns over time, receive a greater discount than smooth gradual changes such as erosion or resource depletion. General familiarity has an effect similar to sense of control: familiar risks like drowning receive a lower discount than less familiar topics like radiation and chemical exposure. Risks occurring in the future receive an extra discount relative to the present, even after accounting for the range of uncertainty. The effect is as though everything in the future is systematically devalued compared to the present; the further into the future, the larger the discount (Kahneman, 2011). Risk perception biases drive suboptimal choices across a wide range of risk-based decisions such as diet, exercise, retirement savings and investment (Kahneman, 2011; Thaler, Sunstein, 2009). The problem of individuals not making good choices for themselves is a challenge, because behavioral compliance works best when individuals feel a sense of agency and do not feel that their choices are being restrained.
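
The extra devaluation of the future described above is commonly modeled in behavioral economics as hyperbolic discounting, in contrast to the constant-rate exponential discounting of a fully rational agent. The sketch below compares the two; the discount parameters `r` and `k` are made up for illustration and are not from the cited sources.

```python
# Present bias: hyperbolic discounting is steep near the present
# and flat far out, so preferences can reverse with delay, while
# exponential discounting applies a constant rate per period.

def exponential(value, delay_days, r=0.0005):
    """Classical rational discounting at a constant daily rate."""
    return value / (1 + r) ** delay_days

def hyperbolic(value, delay_days, k=0.5):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay_days)

# Choice now: $100 today vs $110 tomorrow.
today = (hyperbolic(100, 0), hyperbolic(110, 1))

# Same pair shifted a year out: $100 in 365 days vs $110 in 366 days.
later = (hyperbolic(100, 365), hyperbolic(110, 366))

print(today)  # the immediate $100 wins over the delayed $110...
print(later)  # ...but with both options delayed, the larger $110 wins
```

The preference reversal in the output is the signature of present bias: the same one-day wait that is decisive today becomes negligible a year out, which is one mechanism behind under-saving for retirement.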

Nudging behavior with choice architecture

One strategy identified to resolve the dilemma of autonomy for imperfect decision makers is known as “choice architecture”, presented by economist Richard Thaler and legal scholar Cass Sunstein in the book “Nudge” (Thaler, Sunstein, 2009). The architecture is the set of choices individuals are faced with, in particular the default settings and whether they are opt-in or opt-out. Getting “in the ballpark” can be a problem when some individuals are stuck in locally optimal but globally sub-optimal situations in terms of their life circumstances and perceived choices. Choice architecture moves them closer to the ballpark that others face, so that they are positioned with the best possible chance of good outcomes. It also limits the range of downside risk in the case of routine, expected errors of oversight or inattention.

The Singapore government is familiar with choice architecture. An example of this strategy was applied to organ transplants. The problem of organ transplants is a clear example of Hardin’s tragedy of the commons applied to public goods. Ostrom’s analysis of overcoming the tragedy of the commons covered common pool resources, but not open systems where there is no practical means of excluding individuals from enjoying the benefits or bearing the costs - public goods (positive externalities) or public bads (negative externalities) (Ostrom, 1990). The individual cost-benefit prospects of voluntary participation in organ transplants share a common dilemma with civic participation and voting: everyone benefits when everyone else participates, but any individual who withdraws their participation still receives most of the benefit from everyone else, so no one has a particularly compelling self-interested motivation to voluntarily opt in. A simple workaround for this dilemma is to shift from an opt-in to an opt-out choice, so that in one simple move the incentives are reversed from a market failure to a market success. The Human Organ Transplant Act (HOTA) legally enables hospitals to remove the kidneys, heart and liver on death, and applies to all Singapore Citizens and Permanent Residents over the age of 21. Citizens are free, however, to opt out of the scheme by filling out a simple form (Singapore MOH, 2020).

Choice architecture raises the questions of who should be the architect, and what would guide their decisions on the defaults, the opt-ins and the opt-outs. The liberalization model side-steps these questions by deferring decisions to individuals, but it does not avoid them completely. Its default yardstick of success, stated implicitly or explicitly, is per capita income - GDP. While the criticisms of GDP are numerous, it is simple and logically consistent with the capitalist model of individual self-interested profit maximization. If humans are not so singularly preoccupied with private profit maximization, how would prosperity be measured in a meaningful way? How could prosperity be applied universally for a social species that judges its satisfaction based on local subjective experiences?
