When I entered the video game industry in 1999, one of the greatest barriers to creating immersive worlds was the NPC (Non-Player Character). How do you make a machine talk like a human? Back then, the state of the art was the dialogue tree. The player selected from a limited menu of options, and the computer delivered a prewritten response. If you’ve played games like Baldur’s Gate, World of Warcraft, or Fallout, you’ve encountered these.
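To make that concrete, here is a minimal sketch of a dialogue tree in Python. The innkeeper, the lines, and the node names are all invented for illustration; real games drove far larger trees from data files, but the core loop was this simple: print the NPC’s canned line, show a fixed menu, jump to the next pre-authored node.

```python
# A minimal sketch of a classic dialogue tree. All content here is
# invented for illustration; real games used much larger, data-driven trees.

DIALOGUE = {
    "start": {
        "npc": "Welcome, traveler. What brings you to town?",
        "options": [
            ("Ask about work", "work"),
            ("Say goodbye", "end"),
        ],
    },
    "work": {
        "npc": "The mines need guards. Pay is two gold a day.",
        "options": [
            ("Back to the start", "start"),
            ("Say goodbye", "end"),
        ],
    },
    "end": {"npc": "Safe travels.", "options": []},
}

def run(node_id="start"):
    while True:
        node = DIALOGUE[node_id]
        print(f'NPC: {node["npc"]}')
        if not node["options"]:
            break
        for i, (label, _) in enumerate(node["options"], 1):
            print(f"  {i}. {label}")
        choice = int(input("> ")) - 1           # player picks from a fixed menu
        node_id = node["options"][choice][1]    # jump to the pre-authored node

if __name__ == "__main__":
    run()
```

Every line the player can ever see exists in that table before the game ships, which is exactly why studios needed so many writers, actors, and animators.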

Major studios hired voice actors to record the lines, and animators spent sleepless nights bringing the characters to life. The biggest games, like Mass Effect, featured tens of thousands of recorded lines. As a kid, I’d try to navigate every possible dialogue combination in a game. But with Mass Effect, that felt impossible. You could play it a hundred times and never exhaust every story permutation.

Two decades later, in April 2019, I attended the Global Leadership Conference in Macau, the former Portuguese colony that China was transforming into its own Las Vegas. There, the noted futurist Pascal Fénice spoke about a new breed of AI poised to change the world. While old systems, like those in video games, relied on humans coding thousands of “rule sets,” a new model was on the horizon: learning machines. Machines that could learn from experience, just like us.

Fénice gave examples of AI systems that, when set up to compete, could play games better than any human. AlphaGo had become the first program to defeat a top professional Go player. Soon after, computers could transcribe human speech more consistently than we could, and algorithms were generating photorealistic images of people who never existed.

The world was about to change. “Where automation ‘ate’ manual labor,” he warned, “AI will eat software.”

Jump forward to 2022. After a morning of snowmobiling, some friends and I sat down in a restaurant in Reno, NV, to talk about AI. At that time, OpenAI had released several “large language models” that could do some interesting things. I remember thinking it was like autocomplete on steroids. You’d type a few words, and the AI would build on your prompt.
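For flavor, here is roughly what that “autocomplete on steroids” looked like in code at the time. This is a hypothetical sketch in the pre-2023 style of the OpenAI Python SDK; the prompt, model name, and parameter values are my own examples, and the current SDK looks different.

```python
# A sketch of completion-style prompting, circa 2022 (legacy OpenAI SDK).
# The prompt, model name, and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.Completion.create(
    model="text-davinci-003",  # a completion model of that era
    prompt="The innkeeper leaned across the bar and whispered,",
    max_tokens=60,             # how long the continuation can run
    temperature=0.9,           # higher values make it less predictable
)

# The model simply continues the text you started; no script required.
print(response.choices[0].text)
```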

I got shivers. For the first time, I was conversing with an NPC that had no script. The AI was communicating with me—imperfectly, but also unpredictably. This quirky little startup was teasing a new model on the horizon: ChatGPT.

As of this writing, over 800 million people use ChatGPT weekly. The question is, how? What are they doing with it? What are you doing with it? (Seriously, please tell me!)

But you are probably here because you want to know what I am doing with it.

The Answer

Well, not as much as I hoped, and more than I ever thought possible. Before I dive into my current workflow and the problems I use AI to solve, let me share a few principles that shape how I think about work.

I am a Knowledge Worker

I know that’s a popular buzzword, but what does it really mean? For me, being a knowledge worker meant using my mind to solve hard problems and create innovative solutions. Unlike a manufacturing job or the trades, my work was cerebral. I sat at a desk and used computers to shape the future. Put simply, I made a living with my mind, not my muscles.

I did this by assembling data into information and turning that information into knowledge that informed decisions. The old saw “work smarter, not harder” really meant one thing to me: always be learning. In a world of accelerating change, acquiring new knowledge was the only way to stay ahead.

Pop culture reflected this shift. In the ’70s and ’80s, action heroes were men like Clint Eastwood and Arnold Schwarzenegger. By the late ’90s and 2000s, we saw more “smart” espionage heroes who won by out-thinking their opponents, not just out-muscling them.

Being smart mattered.

But how do you get smarter? Go back to school? That’s when I discovered Tim Ferriss’s The 4-Hour Workweek. I still recall three big takeaways from that book:

  1. Stop consuming the news. It’s a waste of time and energy, mostly designed to upset you.
  2. Start learning. Every single day.
  3. Study Stoicism. It’s like an operating system for life.

I didn’t have time for college, so I stopped listening to the news and started listening to TED Talks and audio programs in my car. One book almost always led to another. I became a lifelong learner, eventually listening to a book a week. The combination of Kindle and Audible was magic.

I consumed a lot of content. Some of it changed my life, my business, and my family. However, a staggering amount went in one ear and out the other. What began as a well-intentioned effort to improve myself had turned into a form of intellectual doom-scrolling. I’d finish one book, highlight some passages, and immediately jump to the next. I was addicted to learning as entertainment.

But what was I really learning?

Not much.

I decided I needed a better way to learn. I knew there were valuable tools in the books I was reading but couldn’t figure out how to make them stick. This led me to a core principle for all my tools: they either need to make me smarter, or they need to make my life easier.

This is really important. There is a huge difference between knowing the answer and knowing there is an answer. If you only know an answer exists somewhere, you have to spend time finding it, and in that time you’ll lose to the person who already has it. The corollary: when you do have to look something up, can you get the right answer fast? Speed is everything, and knowledge is a force multiplier.

Let me put it this way:

The more you know, the more you are capable of knowing.

This is one of the biggest mistakes people make with AI. Just because a tool somewhere knows something doesn’t mean you know it. AI is incredible at bringing answers to you faster, but if you don’t have the foundational knowledge to use those answers, you’re only slightly better off. In a hyper-competitive world, the person who truly knows has an outsized advantage in speed and capability.

As a result, I am not interested in AI learning for me. I am interested in AI amplifying my ability to learn.

Barbara Oakley’s wonderful book, A Mind for Numbers, explains why simply highlighting a text doesn’t build deep knowledge. Highlighting creates familiarity: you can recognize concepts when you see them. True learning is about production, and production is built on recall: if you really know something, you can reconstruct it from scratch, unprompted. As Brené Brown says, it’s “in your bones.” It’s part of you.

A cognitive bias called the Illusion of Explanatory Depth gets in our way here. It goes something like this: we believe we understand things far better than we actually do, often because someone else knows it, so we assume we know it too. I believe many people are now falling for the “Illusion of AI Depth”: the belief that because the AI knows it, I know it.

One of the most important skills in working with AI is to always have a solid grasp on what you know versus what you don’t. This is critical because the Dunning-Kruger effect is always in play, reminding us that the more confident we feel, the less we might actually know.

A Tool To Make YOU Smarter

My goal in sharing all this is to explain my mindset. When I engage with AI, I use a specific set of strategies to track what I know versus what I think I know.

I’m trying to enhance my personal effectiveness. Therefore, I use AI to automate mundane, repetitive tasks, freeing me up to use its power to enhance my creative ideas and communication.

I am still working with knowledge, but now I need new strategies. Like the video game with the infinite dialogue tree, I could never explore all the possibilities this technology presents. I have to choose what to focus on, where to apply my limited attention, and work to get the best outcome possible.