ChatGPT and the Monty Hall Problem

Ben Lengerich
2 min read · Jan 2, 2023


#ChatGPT is amazing! Here’s a fun little example I made where it goes wrong, prioritizing pattern matching over logical reasoning.

Inspired by this explanation of the psychological biases underlying Monty Hall intuition, I decided to ask #ChatGPT. I start playing a variant of the game with myself as the host. ChatGPT eagerly identifies the Monty Hall problem and tries to explain it (incorrectly).

ChatGPT immediately assumes we are playing the Monty Hall game.
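
For context (this simulation is mine, not part of the original exchange): in the classic Monty Hall game, switching does win about two-thirds of the time, and that is the pattern ChatGPT appears to be matching against. A minimal Python sketch of the classic game:

```python
import random

def play_monty_hall(switch: bool) -> bool:
    """Simulate one round of the classic Monty Hall game.

    Returns True if the contestant wins the prize.
    """
    doors = ["A", "B", "C"]
    prize = random.choice(doors)
    pick = random.choice(doors)

    # Host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])

    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)

    return pick == prize

n = 100_000
stay_wins = sum(play_monty_hall(switch=False) for _ in range(n)) / n
switch_wins = sum(play_monty_hall(switch=True) for _ in range(n)) / n
print(f"Stay wins:   {stay_wins:.3f}")    # ~0.333
print(f"Switch wins: {switch_wins:.3f}")  # ~0.667
```

But that logic only holds when the host knowingly reveals a goat. In the variant I play below, it doesn't apply.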

Pressing on, I ask ChatGPT to play the first round of the game. After some prodding, it selects door C.

I tell ChatGPT that we proceed to the second stage of the game. It now correctly describes the Monty Hall problem from the perspective of the host, forgetting that I am the host (not ChatGPT).

Remember, I never told ChatGPT that we’re playing the Monty Hall problem (because we’re playing a variant). I correct this error and ChatGPT apologizes.

Now, in the second stage, I (the host) make it easy: I show ChatGPT that the good prize is behind door B and allow it to switch doors if it wants. ChatGPT does switch, but to the wrong doors (A or C), away from the revealed prize! It then proceeds to re-explain the Monty Hall problem to me.

So the model’s eagerness to pattern-match rather than reason logically can lead it astray. It will be exciting to think about this problem as LLMs, which are incredible, continue to change tech.



Written by Ben Lengerich

Asst Prof @ UW-Madison. Writing about AI, ML, Precision Medicine, and Quant Econ.
