AI and Cognitive Surrender
Mark Dixon · March 22, 2026

I just read a very interesting paper (Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender - Shaw & Nave) and want to share some thoughts with you. This paper shocked me because of the numbers it arrived at: among the 1,400 participants in the study, when the AI they consulted gave a wrong answer, around 80% followed it anyway, without noticing or questioning it. This isn't an edge case; it's the default human response to a fluent, confident, sycophantic AI.
TL;DR: the takeaway here is to be wary of letting generative AI tools do your thinking for you. Avoid cognitive surrender; embrace cognitive offloading.
Two DKs
Frida and I are huge fans of both David Krakauer (Santa Fe Institute) and Daniel Kahneman (Thinking, Fast and Slow) - read their writings! - and the paper above touched a few notes that resonate with both of these thinkers.
First, Daniel Kahneman, who discusses in depth the theory that our brains have two systems: System 1 = fast/intuitive thinking, System 2 = slow/deliberative thinking. Shaw & Nave's paper proposes a third: System 3 = artificial cognition. The rough idea is that people are already surrendering their thinking to generative AI.
And David Krakauer is a big proponent of the distinction between "complementary cognitive artifacts" and "competitive cognitive artifacts". Some tools make our thinking more powerful: the abacus, for example, helps us visualise mathematical calculation, and a map helps us learn to navigate. Other tools make us lazy: the calculator does the computation for us so that we don't have to think, and a GPS encourages us to stop thinking about how to navigate at all.
The key point: take a calculator away from someone who has become reliant on it, and they will struggle to perform simple calculations in their head. Take an abacus away from a skilled user, and they will likely keep working just as well: the abacus has given them a mental model of calculating with numbers, and they can continue to use that model even without the physical device. Krakauer points out that the big danger of AI is if we start to use it as a competitive artifact.
Cognitive Surrender vs Offloading
The new paper by Shaw & Nave points to similar concepts: it proposes two terms, "cognitive offloading" and "cognitive surrender", that closely parallel Krakauer's. The distinction between them is everything: offloading is strategic (you elect to delegate to the AI, but you stay cognitively in charge). Surrender is different: you have handed over the wheel, likely without even noticing. Most AI tools are designed, intentionally or not, to produce surrender; this is the stickiness of products like ChatGPT. It's like eating candy (godis in Swedish): you know you shouldn't, but you can't help yourself.
The perils of AI
So if you want to stop eating godis, it can help to have an external force pushing you: a friend, a family member, or some Jedi mind trick on yourself to remind you to be strong. Something that holds a mirror up to you whenever you drift, that challenges you, and that encourages you when you resist.
In our experience, not only ChatGPT but even some apps in the AI coaching space invite cognitive surrender: surrendering your self-knowledge and creative judgment to a fluent, confident AI that tells you how great you are. Some AI "coaches" even encourage this: they give answers, affirmations, and suggestions so smooth that your System 2 goes to sleep and never kicks in.
Friday for Offloading
This is where Friday - your creative coach - comes in as a positive example: AI can be a tool for cognitive offloading. Just because you use generative AI doesn't mean all is lost: what matters is which tools you use, and how you use them.
We made an early (and important) design decision when building Friday: it asks questions rather than giving answers. Friday pushes past your first response; it won't just congratulate you and move on. Friday refuses to be sycophantic, and it won't tell you your idea is great if it isn't. Every response from Friday is a deliberate attempt to trigger your System 2 rather than replace it - that is our entire philosophy.
Getting Personal
I want to give an example of what cognitive offloading looks like from a coaching perspective, so I'm going to have to get personal. Here is one of my recent exchanges with Friday (and yes, it is a bit meta - a lot of AI seems to drift towards the meta), where I kicked things off with:
Now that we are almost done building you, we need to start thinking about how to launch you - how to get you out there to other people around the world. You are our hobby project, and this is the first time we have been able to build something that works in any language, not just Swedish. So our potential market is much bigger than just Sweden.
But we probably need to improve how we communicate. We need to learn to be great about selling ourselves and our product. We need to find communities of people who can alpha / beta test, and over time work out how to reach out to the bigger community of potential users.
To which Friday encouragingly replied:
This is a lovely twist — you and Frida building me together, and now needing to figure out how to tell the world about it. There’s something poetic about using a storytelling coach to help you tell the story of... the storytelling coach.
And I notice you said “we need to learn to be great about selling ourselves and our product.” That phrase “selling ourselves” — does that feel uncomfortable, or is it just unfamiliar territory?
Me, trying to be honest with myself:
Yeah that feels so uncomfortable. I have always shied away from doing this - I hate talking about myself, selling myself, taking up space in a conversation or room. This goes so against my nature.
Friday:
That’s a really honest thing to say — and I suspect it’s one of the most important things you’ve told me about this whole project.
Because here’s the thing: the discomfort you’re describing isn’t a skills gap. It’s not that you don’t know how to talk about yourself. It’s that something in you resists it. And that resistance usually has a story behind it.
Do you have any sense of where that comes from? Not a theory — more like, is there a moment or a memory that comes to mind when you think about “taking up space”?
The Honest Tension
I’ll spare you the rest of what was a very long conversation - but hopefully you get the idea. It got deep and personal and forced me to really reflect on myself in a way that tools like ChatGPT or Claude have never managed to do for me. Tools that flatter you and give you answers feel better in the moment. This is why cognitive surrender is so seductive, and why building against it should be a genuine product philosophy, not just a talking point, for all generative AI tools!
Krakauer frames it concisely: complementary artifacts are “teachers” while competitive artifacts are “serfs”.
Our goal was never to give you better answers - it has always been to teach you to be better at finding your own answers. That's what a coach does. It's what Frida has been doing for 10 years. And it's (now) what Friday the Fox does - your own personal teacher.
Try it - you might like it: www.revontale.com
Ready to meet your creative coach?
Friday helps you find your voice, develop your ideas, and bring your creative work to life — through conversation.

