How I use Generative AI to Think Better
Better Thinking Through Play
The enduring appeal of Sherlock Holmes meets the deep satisfaction of Sudoku? Welcome to Murdle1. Created by G.T. Karber, these fun and addictive puzzles are reminiscent of the game Clue. You get three suspects, three weapons, three locations, and one victim. Your job, given a set of clues, is to use deductive reasoning to figure out who done it.
This morning as I was playing the game, I got “stumped.” Actually, a common cognitive bias jumped up and bit me in the backside, and like with so many biases, I was blissfully, frustratingly unaware of it. I’ll give you the puzzle (and the solution in a footnote), but the real reason I share this story is not to steal someone else’s cool story-driven logic game, but to demonstrate how I used AI to identify my struggle and come up with an approach to “think” better.
Here’s one of the problems with “rational” thinking: when you argue with yourself, you always win. We are not naturally the brilliant thinkers we “think” we are, because the mind operates to reduce cognitive dissonance. What is cognitive dissonance? Two things, actually. The most common definition is the psychological one: the mental discomfort you experience when you hold two or more contradictory beliefs, especially about yourself. For example, “I’m smart” and “I can’t figure this out” contradict each other. The other thing cognitive dissonance represents is ambiguity, and biologically this means your brain is racing to resolve the confusion and find a pattern match. Which is it? Are you smart, or are you dumb? The brain is organized into complementary, competing networks, and those networks are expensive to operate. At only about 2% of the body’s mass, the brain nevertheless consumes about 20% of your daily energy budget. That is insane. So one thing these networks do is try to find a winner. When they compare two options, the “winning” choice literally puts the competing signal-processing neurons “to sleep,” suppressing them to avoid wasting energy. We have our answer; quit wasting resources trying to decide.
This, for example, is why, when you’re asked to think of a famous actor’s name and you get it wrong, you can’t think of any other similar actors, even though they’re “on the tip of your tongue.” You do know them, but you can’t access them, because the “winning answer” suppressed the other options to save mental energy. Why decide something and then keep fighting about it? If a healthy brain is anything, it is not a dysfunctional company or family. It doesn’t make a decision and then resist it with every dying breath. It picks an answer, commits to it, and runs full steam ahead as if the other options never existed. Energy management. It’s a thing. Lisa Feldman Barrett calls this “managing your body budget”; it’s also called allostasis2.
So what does this have to do with cognitive dissonance? The brain does not want to waste energy arguing with itself. It wants to reach a decision quickly and save its energy for the really important tasks… like trying to decide whether you should buy that bag of Cheetos at the airport newsstand. (Don’t do it. Don’t!)
In practical terms, what this amounts to is that your brain is almost always looking for ways to save energy and think faster by taking shortcuts. Daniel Kahneman, building on decades of work with Amos Tversky, popularized the idea that we operate under two thinking systems, one fast and one slow. He called them System 1 and System 2.
Puzzles often require us to engage our System 2 thinking, but we naturally resist this because it’s expensive, energetically and attentionally. And this is where generative AI comes in. When you get stuck, you can ask AI to give you a nudge. And the more you practice logical thinking, the less expensive it becomes, and the more inclined you are to use it.
So here is how I used AI, not to solve the problem for me, but to help me develop the skills to become a better problem solver myself. Let’s start with the puzzle.
The Puzzle
The Rules:
- Each person has exactly one weapon; no two people share a weapon.
- Each person is in exactly one location; no two people share a location.
Suspects:
- Professor Blue
- Lady Yellow
- Mister Green
Locations:
- Garden
- Hall
- Kitchen
Weapons:
- Fork
- Candlestick
- Vase
Clues:
- Mr. Green was suspicious of the person who brought the candlestick
- Professor Blue brought a fork
- The candlestick was not in the kitchen
- Mr. Green was in the kitchen or Professor Blue was in the garden
- The victim was found in the hall
The Questions:
Who is the murderer? What weapon did they use? Where did the murder take place?
Who Dunnit?
Working my way through this puzzle, it seemed pretty straightforward at first, and it may be extremely easy for you, but I got stuck on the fourth clue: Mr. Green was in the kitchen, or Professor Blue was in the garden. For some reason, I just kept dithering over the idea that there were two equally valid choices. I knew I was missing something, and strangely, I knew that what I was missing had to do with the “or” statement. If this puzzle does not frustrate or confuse you, I am genuinely sorry, because you will miss out on the illuminating moment, and you might even think using AI to think better is a complete waste of time because you already think just fine, thank you very much. I ask that you bear with me, though, because there will likely come a point when you get stumped, and this process might help you solve a real problem, not just a phony one made up for entertainment.

You see, we humans are not naturally great rational thinkers solo3. In small groups, we’re amazing, but individually? We are biased to be cognitively lazy because, in theory, it’s more energetically efficient. The brain is more interested in generating feelings of certainty through pattern matching than in being “correct.”
So here’s what I did: I used basic logic to create constraints for the main clues:
- Translate to constraints fast.
  - “Suspicious of candlestick-bearer” ⇒ Green ≠ Candlestick
  - “Professor Blue brought a fork” ⇒ Blue = Fork
  - “Candlestick not in kitchen” ⇒ person-with-candlestick ≠ Kitchen
- Use uniqueness to cascade. Once Blue = Fork, the remaining weapons split; Green ≠ Candlestick ⇒ Green = Vase; Yellow = Candlestick.
- Propagate location bans. Yellow (Candlestick) ≠ Kitchen. Keep a running “cannot be” list.
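For the programmers in the room, here is what that translate-and-cascade bookkeeping might look like in Python. This is purely my own sketch; the names and data structures are mine, not anything from the Murdle book:

```python
# A rough sketch of "translate to constraints, then cascade" in Python.
# The encoding (names, dicts) is my own illustration, not from the book.

suspects = ["Blue", "Yellow", "Green"]
weapons = ["Fork", "Candlestick", "Vase"]

weapon_of = {"Blue": "Fork"}              # clue: Professor Blue brought a fork
cannot_have = {"Green": {"Candlestick"}}  # clue: Green suspected the candlestick-bearer

# Uniqueness cascade: each weapon belongs to exactly one person, so keep
# forcing assignments until nothing changes.
progress = True
while progress:
    progress = False
    for s in [s for s in suspects if s not in weapon_of]:
        remaining = [w for w in weapons if w not in weapon_of.values()]
        options = [w for w in remaining if w not in cannot_have.get(s, set())]
        if len(options) == 1:             # only one weapon left for this suspect
            weapon_of[s] = options[0]
            progress = True

# Location ban: the candlestick was not in the kitchen, so its holder joins
# the running "cannot be" list for the kitchen.
holder = next(s for s, w in weapon_of.items() if w == "Candlestick")
cannot_be_in = {holder: {"Kitchen"}}

print(weapon_of)     # {'Blue': 'Fork', 'Green': 'Vase', 'Yellow': 'Candlestick'}
print(cannot_be_in)  # {'Yellow': {'Kitchen'}}
```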
And then I got stuck. I looked at clue four and thought, but… but… which is it? I could not intuit the answer. So I turned to AI.
Who Knows Who Dunnit?
When I get a problem like this, I will often run it by two AIs to see how they think differently. So I took the puzzle (just as I gave it to you) and pasted it into Claude Sonnet 4.5 and ChatGPT o1, their best thinking/reasoning models, and asked them not only to solve the problem, but to help me understand the process by which they would solve it.
Ask AI to help teach you how to solve a problem
I knew I was missing some key insight into the “or” statement, and interestingly, each model helped, but they came up with different paths to a solution.
Claude got it wrong, but the way it got it wrong showed me what I was missing. It evaluated one half of the “or” statement and surfaced a contradiction: if Professor Blue was in the garden, then Lady Yellow would have to be in the kitchen, which she couldn’t be. That unstated contradiction locked the rest of the puzzle into place, but Claude assumed both conditions of the “or” statement could be true (which is what I had done). An inclusive “or” means both things can be true; an exclusive “or” means only one can. The main insight for me, though, was that when faced with an “or” statement, I need to fully evaluate each state and see whether it produces a contradiction or violates a constraint. I needed to know what I was looking for, and what I was looking for was a constraint violation.
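If the inclusive/exclusive distinction feels fuzzy, the entire difference fits in a few lines of Python (for booleans, `or` is inclusive, and `!=` behaves as an exclusive or):

```python
# Inclusive "or" is true when either or both sides are true;
# exclusive "or" is true when exactly one side is true.
for a in (False, True):
    for b in (False, True):
        print(a, b, "inclusive:", a or b, "exclusive:", a != b)
```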
The correct answer is… (well, see below4).
- Who:
- What:
- Where:
ChatGPT got it right out of the gate, but it went one step further, pointing out that the distinction between inclusive and exclusive did not matter; only one condition needed to be true. The method it recommended to me is called disjunctive elimination, or proof by cases; basically, you eliminate branches. The “or” means you need at least one branch to be true, but it also implies that only one of them might be, and you can (and in this case I did) eliminate the other. It doesn’t have to be either/or; you just need to find out which branches (or cases) can be true and which ones cannot because a constraint rules them out. Assuming the “or” was inclusive, or assuming it was exclusive, would just be another fallacy; it doesn’t really matter. The bottom line is that you need to evaluate each branch, and there are only two here, so why the big deal? Because we are biased to make decisions based upon what we can see, NOT what we can’t. I could see that each character could be in one location or the other (something I could visualize in my mind), but I needed to look for who couldn’t be in a location. I was searching for absence, and that is not what our System 1 mental mode is built to do. Looking for missing information is a System 2 function, as Kahneman and Tversky showed.
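Here is a rough sketch of what that branch-by-branch evaluation can look like in code. Again, the encoding is my own, and I am only modeling the location clues plus the kitchen ban carried over from the weapon cascade; the point is the shape of the technique, not a full solver:

```python
# Disjunctive elimination ("proof by cases"): assume each branch of the
# "or" in turn, then check every assignment against the "cannot be" list.
from itertools import permutations

suspects = ["Blue", "Yellow", "Green"]
locations = ["Garden", "Hall", "Kitchen"]
cannot_be_in = {"Yellow": {"Kitchen"}}  # carried over from the weapon cascade

def violates(locs):
    """True if any suspect is somewhere the 'cannot be' list forbids."""
    return any(locs[s] in cannot_be_in.get(s, set()) for s in suspects)

# Clue four is a disjunction: Green in the kitchen OR Blue in the garden.
branches = {"Green in Kitchen": ("Green", "Kitchen"),
            "Blue in Garden": ("Blue", "Garden")}

for name, (who, where) in branches.items():
    survivors = []
    for perm in permutations(locations):
        locs = dict(zip(suspects, perm))
        if locs[who] == where and not violates(locs):
            survivors.append(locs)
    # A branch with no surviving assignments is a contradiction and
    # can be eliminated; whatever remains constrains the answer.
    print(name, "->", survivors if survivors else "contradiction: eliminate")
```

The value here isn’t the printout; it’s that writing the check forces you to evaluate each branch for the violations you cannot “see,” which is exactly the System 2 move I kept resisting.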
The Takeaway
My takeaway is this: when you’re stumped on a puzzle, finding your own cognitive blind spots is a hard problem. And finding another person who can get up to speed on your problem, at whatever weird hour of the day you’re working on it, can be just as difficult. Using AI to help me work through a problem has, in my experience, been more valuable than having it actually solve the problem for me. Why? Because now I can enjoy the rest of the puzzles in my book. The goal is to be entertained and challenged, not to have the answers.
Early in my video game career, I got the cheat codes for a game called Deus Ex, a brilliant, challenging shooter made by Warren Spector, a genius game developer. I had played it for at least 10 hours, working my way through its dangerous cyberpunk landscape. But once I switched on god mode (unlimited ammo, a character who could not die), the challenge was gone, the game became boring and uninteresting, and I stopped playing altogether. Making it “easy” ruined the experience for me. I’m sure there’s a parenting lesson in there somewhere about safetyism, or some such thing, but the bottom line is that I try to use AI to help me deal with my challenges, not deal with them for me (at least the enjoyable ones). And even when it makes a mistake, as Claude did, I get value from finding its cognitive error and talking it through. I now have a new toolkit for enjoying the rest of the book, and I’m sure later puzzles will present new challenges that bring me back to this process.
The process is: help me figure this out (not figure it out for me).
Because a game with no challenge is not fun at all.
Endnotes
1. These are awesome, fun games, and I highly recommend the book as well as the website murdle.com ↩︎
2. Allostasis is a cutting-edge concept. You might have heard of homeostasis, but that idea is being replaced by the more dynamic concept of allostasis, which means stability through change. The brain and body don’t sit around waiting for things to happen so they can react; they constantly try to predict what will happen next, then make adjustments as sensors (internal and external) provide feedback to the prediction model. ↩︎
3. According to Kahneman and Frederick (2002; Frederick, 2005), cognitive reflection (CR) refers to the individual ability or disposition to stop the first impulsive response that our mind offers and to activate the reflective mechanisms that allow us to find an answer, make a decision, or carry out a specific behavior in a more thoughtful way. This is hard to do alone; when other people are present, they are more likely to challenge an initial gut reaction. ↩︎
4. Professor Blue with the fork in the hall. Lady Yellow with the candlestick in the garden. Mr. Green with the vase in the kitchen. ↩︎