
Beyond Prompting

Have you ever had this experience? You ask Generative AI (GenAI) for help with a problem, looking for an answer, and it gives you an impressive-sounding response that does not actually help. Sure, the response is polished and confident. It might even be creative. But when you try to use the result, it misses the mark. Lately I have been having this problem with ChatGPT: it rushes ahead, seemingly more interested in proving how smart it is than in helping me solve my problem.

You know GenAI should be helpful. You hear other people are making it do amazing things. But you can’t quite seem to get there, and you don’t know why.

In an earlier blog post, I shared the idea that if you can hire and direct a coder, you can probably direct GenAI to do some Vibe Coding for you. I still believe that, but I need to refine it with something I just learned. In a recent blog post, Tiago Forte revealed that most Generative AI models are given a “System Prompt,” which can run to hundreds of pages. These prompts shape how the agent responds to you. ChatGPT, Claude, and Gemini are all engineered with specific guidelines and rules that cause them to behave very differently.1 This is likely why I prefer Claude Sonnet 4.5 and avoid ChatGPT and Gemini for serious thinking. Still, the tips below can help you get more out of whichever agent you use. They have helped me, and I am confident they can help you too.

Four Common Ways People Use AI

What I have found is that most people tend to use AI in one of four ways (see if any of these sound familiar to you):

Most People Treat AI As:

  1. An answer machine (“give me the solution”)
  2. A validation engine (“tell me I’m right”)
  3. A search replacement (“find me information”)
  4. An assignment completer (“do this for me”)

While these are valid ways of working with GenAI, if they are the only ways you interact with it, you might be missing one of its most powerful uses. Namely: are you using GenAI as a thinking partner?

A Thinking Partner

The core problem is that most people who use GenAI ask, “Give me a solution to X.” In contrast, I tell GenAI, “Help me think through X.” That small shift changes everything.

This is not about cleverer prompts; it is about unlocking one of the most powerful features of your own brain. It turns out we are not very good at evaluating our own thinking. We are primed to confuse “reasoning” with “reason.” Psychologically speaking, reasoning isn’t logic. Logic is an amazing tool for evaluating propositions, and we all possess an impressive array of cognitive talents that can be categorized as reason, but ironically, reason-ing is something else.

A Very Human Bias

Reasoning is coming up with plausible sounding justifications for why you think, feel, and believe what you do. Put another way, we use reason-ing to generate justifications we and our peers will accept as reason-able.

French cognitive scientist Hugo Mercier discovered that individual reasoning is biased and lazy, because reason works best as a group process. When we reason alone, we look only for justifications for why we are right, and the brain doesn’t really care whether those reasons are good. As a form of energy conservation, the brain is quite happy to spit out superficial, shallow “reasons.”

To the point:

When you argue with yourself, you win.

We Evaluate Others' Thinking More Critically Than Our Own

When subjects submitted rational arguments to researchers, and those same arguments were later presented back to them as if they came from someone else, the subjects shredded the justifications they themselves had created. This shift from “it’s my idea, so it’s okay” to “it’s someone else’s idea, I’d better think it through” is nearly universal. In short, we reason better when evaluating someone else’s ideas than our own.

Cognitive psychologist Tom Stafford examined dozens of studies where group reasoning arrived at the correct answer when individual reasoning failed. A stunning 83% of people got at least one question wrong when taking a Cognitive Reflection Test individually, and a full third of people got all the questions wrong. But here’s the kicker: in groups of three or more, the teams never got any questions wrong.

Thinking By Yourself

This bias to accept our own reasons creates a massive blind spot, one that GenAI can help you uncover. Look at the four primary ways people use GenAI: they all share one thing in common. None of them asks the user to think better. None of these prompts expects the user to examine their own reasoning and rationale.

That is what is different about my approach2. I use GenAI as someone who can help me think better. Thanks not only to the staggering breadth of knowledge in these Large Language Models, but also to their capacity to adopt different personas, you have a veritable army of cooperative thinkers at your disposal.

Thinking with GenAI

So how do I actually use GenAI to think better? One of the things I love about GenAI is that it can “get up to speed” on my problem extremely quickly. I can load up resources, reference material, everything it needs for context. In my experience, it is extremely difficult to have these kinds of conversations with other people, mostly because it feels unreasonable to expect them to know as much as or more than I do about the problem I am trying to solve. They have their own problems; they don’t have time to dig into mine in any depth. But GenAI can do that almost instantly.

And powered with this detailed context, I can begin to have what I call “thinking conversations.”

A Different Approach

First and foremost, I treat GenAI as a thinking partner3, not as a service. Second, I bring my domain expertise to the conversation and ask the GenAI to bring its perspective. A typical interaction is something like:

I am seeing this, or I am thinking this.
What are your thoughts?

Other tools I use are to rephrase what the GenAI has told me, or what I have learned:

So what you are saying is...
Did I get that right?

Or, if we are working through connections and insights, I can use prompts like:

Walk me through this, one step at a time. I want to make sure I understand how these things are related or connected.

I can get the GenAI to teach me, but also to evaluate my own perspectives and insights. It is much easier for me to see the flaws in my own thinking (my reason-ing) when I see it processed and presented back to me through the language engine of GenAI. The GenAI can do something for me my brain can’t do for itself: process my thoughts and justification and present it back to me as if it came from someone else. This activates my own higher critical thinking, making it easier for me to spot flaws and refine arguments until I get closer to a better understanding.
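The conversational moves above can be captured as a small, reusable scaffold. The sketch below is purely illustrative (the function name, dictionary keys, and exact wording are my own, not a fixed recipe): it wraps a problem statement in one of the “thinking conversation” openers instead of an answer-machine request.

```python
# Hypothetical helper: wrap a problem statement in one of the
# "thinking conversation" openers described above.
SCAFFOLDS = {
    # Ask for perspective instead of a solution.
    "partner": "I am seeing this: {problem}\nWhat are your thoughts?",
    # Rephrase to check your own understanding.
    "rephrase": "So what you are saying is: {problem}\nDid I get that right?",
    # Trace connections one step at a time.
    "walkthrough": ("Walk me through this, one step at a time: {problem}\n"
                    "I want to make sure I understand how these things are connected."),
}

def thinking_prompt(problem: str, mode: str = "partner") -> str:
    """Return a thinking-partner prompt rather than an answer-machine one."""
    return SCAFFOLDS[mode].format(problem=problem)

print(thinking_prompt("my retry logic loops forever on timeouts"))
```

The point of a scaffold like this is not automation; it is that the default mode asks for a perspective on your thinking, so you have to state your thinking first.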

I realize this all sounds a bit abstract, so let’s look at a concrete example. Murdle.


An Example

Let’s look at the two approaches side by side, with an example from a recent experience and blog post. I was playing Murdle, the murder mystery game, and I got stuck. I could have done this:

Solve this puzzle for me.

And that is a very common way to work with GenAI. However, my approach (which you may recall) was to ask the agent:

Can you help me learn how to solve this kind of puzzle?

In both cases I would have shared the murder mystery, but in the first prompt, I am simply offloading the problem solving to the agent. In the second prompt, I am asking it to help me think better about the problem. The first prompt produces a concise answer:

Professor Green committed the murder with a Candlestick, in the Laundry Room.

However, the second prompt generated a very long conversation, which identified the logical distinction I struggled with (inclusive vs. exclusive or) and how to approach that kind of problem in the future. Yes, I got the answer to the puzzle, but I also got something much more valuable: a method for dealing with problems like this in the future.4

Here's the playbook for "murderer always lies / innocents always tell the truth" layered on top of a 3×3×3 logic grid:


# How to solve these cleanly
1. Make two workspaces
2. Translate each statement into testable facts
3. Truth/lie consistency check (the key step)
4. Lock the survivor
5. Common pitfalls to watch

This answer is significantly more powerful, and to my taste more interesting. With this methodology, I can ask the agent to help me debug my own thinking as I work through puzzles. It allowed me to complete the next 21 puzzles in the book I bought, which I found highly enjoyable.
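The “truth/lie consistency check” in that playbook is mechanical enough to sketch in code. Here is a minimal illustration with a made-up three-suspect puzzle (the suspects and statements are my own invention, not from the book): brute-force each candidate murderer and keep only those where the murderer lies and every innocent tells the truth.

```python
# Made-up mini-puzzle: murderer always lies, innocents always tell the truth.
suspects = ["Green", "Plum", "Scarlet"]

# Each statement is a predicate over who the true murderer is.
statements = {
    "Green":   lambda m: m == "Plum",     # Green says: "Plum did it."
    "Plum":    lambda m: m != "Plum",     # Plum says: "I am innocent."
    "Scarlet": lambda m: m != "Green",    # Scarlet says: "Green is innocent."
}

def consistent(murderer: str) -> bool:
    """True if this candidate makes every statement's truth value line up."""
    for speaker, claim in statements.items():
        told_truth = claim(murderer)
        if speaker == murderer and told_truth:
            return False  # the murderer must lie
        if speaker != murderer and not told_truth:
            return False  # innocents must tell the truth
    return True

solutions = [s for s in suspects if consistent(s)]
print(solutions)  # → ['Plum']
```

This is exactly the “test each scenario for contradictions” habit the conversation taught me, just written down: assume a culprit, score every statement, and discard any assumption that forces an innocent to lie or the murderer to tell the truth.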

The Hidden Advantage

When I use GenAI to help me think through a problem instead of handing me the answer, I know the answer because I produced it. As I shared when I talked about the Feynman technique, memory is not an act of recall; it is an act of reconstruction. You really know only what you can produce from memory, like a rendition of the song “Happy Birthday5”.

When the GenAI gives me the answer, I don’t know it. It knows it. I might be familiar with it. I might recognize it. Without the presence of the AI however, it is very unlikely I will produce the answer on my own. When the GenAI helps me think it through, I am significantly more likely to use that knowledge effectively.

To me, this is a superpower. In a knowledge economy, whoever has access to the best knowledge often comes out ahead.

But it is also practical. It makes me more effective moment to moment, and not so completely dependent on the device to think for me. If all you are is the interface between the question and the answer, what value are you adding? If, however, you can think and grow, and connect knowledge so that you can synthesize better answers faster and see implications sooner, then you are the leader of the solution, not simply the mouthpiece for a service.

Summary

While most people fall into treating GenAI like a service to produce answers, validation, search results, or task work, there is another powerful application available to you. You can use GenAI to help you think better, and unlock more of your own natural cognitive power. When you use GenAI as a thinking partner, you gain access to not only more of your own critical thinking, but also more perspectives, and better recollection and understanding of the problems you are trying to solve and the work you are trying to do.

The next time you have a thorny problem to solve, instead of asking GenAI for the answer, why not start a conversation? Ask it:

Can you help me think through this problem to find an answer?

Give it a try. See how it goes. And let me know. I’d love to hear from you.

End Notes


  1. I recommend reading the article, but the key thing here is that ChatGPT is specifically instructed not to ask clarifying questions and to race ahead to provide answers. Gemini likewise avoids exposing its thinking. Claude, in contrast, is focused much more on mirroring and thoughtful conversation, according to Forte. ↩︎

  2. There is a catch: the AI needs to be prompted to act like a thinking partner. ChatGPT 5 and Gemini’s system prompts compel them to generate answers FOR you, not to help you think, so it might take some back and forth to get them to be helpful. Claude, in contrast, is primed to interact this way in its prompt, at least as of this writing. ↩︎

  3. I have had absolutely the most success doing this with Anthropic’s Claude Sonnet 4.5 model. None of the others have been nearly as effective. ↩︎

  4. I removed the detailed answers for brevity. I’ll leave it to you to work with your own agent to solve this riddle. ↩︎

  5. Or your cultural equivalent, like a national anthem or nursery rhyme. ↩︎