Choosing an AI model: the Monty Hall decision-making paradox

Choosing an AI model today is not a neutral decision: a decision-making paradox reminiscent of the Monty Hall problem

For many businesses, choosing a generative AI model seems like a choice between equivalent options. Similar demos, comparable benchmarks, and converging technological promises. Yet behind this apparent symmetry lies a more complex dynamic: the information landscape has already changed, and with it the prospects for strategic success.

The analogy with the famous ‘Monty Hall problem’ helps us to understand the nature of risk. In this well-known probabilistic paradox, once a door has been opened and a negative outcome revealed, intuition suggests that the probabilities of the remaining options are equal. In reality, they are not. The information introduced alters the probabilistic structure of the system (Selvin, 1975; Rosenhouse, 2009). The paradox arises not from mathematics, but from the inability to correctly update beliefs in the light of new information (Tversky & Kahneman, 1974; Kahneman, 2011).
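The paradox is easy to verify empirically. The following sketch simulates the classic game: the player picks a door, the host (who knows where the prize is) opens a losing door, and the player either stays or switches. Switching wins roughly two thirds of the time, precisely because the host's action injects information into the system.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the only remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Estimate the probability of winning under a fixed strategy."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
```

Running `win_rate(True)` converges to roughly 0.667 and `win_rate(False)` to roughly 0.333, confirming that the two remaining doors are not equivalent once the host has acted.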

The same dynamic is evident today in many decisions regarding the adoption of AI models.

The first choice and the illusion of neutrality

Initial decisions regarding AI are often made with limited information. Brand visibility, market timing, the speed of experimentation and technological reputation all influence initial preferences. This initial choice, however, does not become more certain simply because time passes or because certain alternatives are ruled out.

As projects progress, new constraints come to light: security requirements, data residency, integration complexity, cost predictability, latency, auditability, and the supplier’s roadmap. Each constraint narrows down the options. What remains is not a neutral list of candidates, but the result of an asymmetric, informed filtering process (McKinsey & Company, 2023).

The crucial point is that this filtering alters the long-term probabilities of success, even though the decision-making framework appears unchanged.

Bayesian reasoning and AI strategy

The Monty Hall problem is a classic example of conditional probability. When an option is ruled out on the basis of information that was not initially available, the probability distribution shifts. It does not ‘reset’.

In the context of enterprise AI, the role of the ‘open door’ is played by the internal selection process. If dozens of models prove incompatible with compliance, infrastructure or governance requirements, the probability of success is not evenly distributed across the remaining options. It becomes concentrated.

The risk is treating the final shortlist as a 50/50 decision. In reality, one of the options may be the result of a process of progressive validation, whilst the other may represent an initial choice that has never really been reassessed.
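The same logic can be expressed as a small Bayesian update. In this illustrative sketch (the model names and filter pass-rates are hypothetical, chosen only to make the point), a uniform prior over four candidates is conditioned on how likely each model is to survive the organisation's compliance, residency and latency reviews. The surviving shortlist is anything but 50/50.

```python
def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Posterior ∝ prior × likelihood, renormalised to sum to 1."""
    unnormalised = {m: prior[m] * likelihood[m] for m in prior}
    total = sum(unnormalised.values())
    return {m: v / total for m, v in unnormalised.items()}

# Uniform prior: before any filtering, the four candidates look interchangeable.
prior = {m: 0.25 for m in ("model_a", "model_b", "model_c", "model_d")}

# Hypothetical likelihoods: probability each model clears the internal filters
# (compliance, data residency, integration, latency).
passed_filters = {"model_a": 0.9, "model_b": 0.2, "model_c": 0.05, "model_d": 0.05}

posterior = bayes_update(prior, passed_filters)
# model_a now carries most of the posterior mass, even though a shortlist
# of "model_a vs model_b" superficially looks like a coin flip.
```

With these illustrative numbers, `model_a` ends up with a posterior of 0.75 against roughly 0.17 for `model_b`: the filtering process, like the host's open door, has already concentrated the probability of success.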

Lack of transparency and information asymmetry

The complexity is compounded by another factor: much of the filtering is invisible. Decisions regarding training datasets, reinforcement objectives, architectural trade-offs, security constraints and ecosystem dependencies are rarely transparent at the interface level (Bommasani et al., 2021).

Decision-makers sense that something has changed, but struggle to articulate the asymmetry. This lack of clarity explains the oscillation between overconfidence and indecision observed in many organisations. Some leaders assume that the options are interchangeable; others perceive an implicit risk but are unable to justify it internally (Dietvorst et al., 2015; Burton et al., 2020).

Both reactions stem from the same mechanism: a failure to update probabilities in light of new information.

Strategic implications for businesses and boards

The point is not to choose the ‘best model’ in an absolute sense. It is to recognise that sticking to one’s initial decision is not a neutral act. It means assuming that the assumptions made under conditions of initial uncertainty remain valid despite the emergence of constraints and additional information.

In a context where models are evolving rapidly and governance frameworks are struggling to keep pace (Gartner, 2024), this inertia can result in technological lock-in or exposure to unforeseen risks.

For boards of directors and C-level executives, decision-making processes become crucial. The selection of an AI model should include:

– structured periodic reviews
– clarification of initial assumptions
– mapping of constraints that have emerged
– analysis of ecosystem dependencies

The aim is not to change models with opportunistic frequency, but to verify whether the probabilities of strategic success have already shifted.

The greatest risk is not change, but failing to keep up to date

The lesson of the Monty Hall problem is not about a mathematical puzzle. It is about the discipline of updating one’s beliefs. Rational decisions require us to revise our beliefs when the information available changes, even if our intuition suggests that things remain the same.

In the context of AI, the risk is not switching between models too frequently. It is failing to realise that the competitive, regulatory and infrastructural landscape has already altered the odds.

The sense of unease that many executives feel when choosing a model is not due to technological confusion. It is a rational indication that the system has become conditional. And in a conditional environment, failing to update one's beliefs amounts to taking an implicit risk.

References

Agostini, M. (2024–2025). Articles on AI agents and modular architectures. Medium.
Bommasani, R., et al. (2021). On the opportunities and risks of foundation models. Stanford University.
Burton, J. W., Stein, M. K., & Jensen, T. B. (2020). Algorithm aversion. European Journal of Information Systems, 29(3), 220–239.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion. Journal of Experimental Psychology: General, 144(1), 114–126.
Gartner. (2024). Hype cycle for artificial intelligence.
Kahneman, D. (2011). Thinking, fast and slow.
McKinsey & Company. (2023). The state of AI in 2023.
Rosenhouse, J. (2009). The Monty Hall problem.
Selvin, S. (1975). A problem in probability. The American Statistician, 29(1), 67.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty. Science, 185(4157), 1124–1131.
