Knowing Less, Producing More
On AI, Craftsmanship, and the Slow Erosion of Productive Friction.
There is a peculiar vertigo that comes from using AI well. You finish a project that would have taken months - a library, a tool, something you'd been turning over in your mind for a year - and it's done in two weeks. The architecture is sound. The documentation is thorough. Hundreds of tests pass. And somewhere beneath the satisfaction, a quiet dissonance: I made this, but did I learn anything?
This is not a complaint. It is an observation, and observations need not resolve into tidy conclusions. The truth is that I use AI constantly - for code reviews, for building tools, for analysing films and books, for sharpening poems I'm trying to write. I use it because it works. I use it because it extends what I can do in ways that feel, on good days, almost miraculous. But I've started to notice a strange inversion at the heart of this miracle: the more I produce, the less I know.
—
The old aphorism says that the more you learn, the more you realise how little you know. AI introduces a darker variation. You can now produce without learning at all. You can ship features, write documentation, generate solutions to problems you only half-understand - and the output looks indistinguishable from the work of someone who understood it fully. The gap between capability and comprehension widens, and no one can see it from the outside. Sometimes you can barely see it yourself.
For someone who values craftsmanship - who finds joy not just in the finished object but in the making of it - this raises questions that don't have clean answers. Craft, after all, is not just about results. It's about the slow accumulation of skill through practice, the muscle memory that comes from repetition, the intuitions that develop only when you've made enough mistakes to recognise them before they happen. If AI handles the implementation while you direct from above, you get the product but miss the practice. You remain, in some sense, a permanent apprentice who never graduates - or perhaps a master who never trained.
And yet. And yet there is still joy. When I built that library - two weeks instead of six months, AI handling the scaffolding and the tedious parts - the architectural decisions were mine. The judgment calls about API design, the intuitions about what would feel right to use, the moments where I pushed back against a solution that was technically correct but aesthetically wrong: those were mine. The craft didn't disappear. It migrated. It concentrated itself in the places where human judgment still matters, and perhaps always will.
Maybe this is the new shape of expertise: not the hand that executes, but the eye that sees. Not the one who writes every line, but the one who knows which lines are wrong.
—
There is a distinction worth making here, one that emerged only when I tried to articulate why some collaborations with AI feel generative and others feel hollow.
When I share a poem with Claude, I don't want it rewritten. I want to hear what's weak, what's unclear, where the rhythm stumbles. The voice is the point - my voice, specifically, with all its particular limitations and ambitions. AI that understands this becomes genuinely useful: a thoughtful reader available at the exact moment when thoughts are fresh, when the work is still wet enough to reshape. It responds to what I've made. It doesn't try to make it for me.
When I bring code, the equation shifts. I'm not there to express something irreplaceable. I'm there to solve a problem, to build a feature, to bring an idea into functional existence. Here the implementation can be delegated because the implementation isn't where the meaning lives. What matters is the shape of the solution, the elegance of the design, the decisions about what to build and why. These remain mine. The typing is just typing.
But this works - and I want to be honest about this - because I already know how to code. I've spent years accumulating the expertise that lets me recognise when AI-generated code is subtly wrong, when a design pattern will cause problems down the line, when the easy solution is a trap. I can be an effective critic because I could have been the author. The collaboration is powerful precisely because I bring something substantial to it.
What happens to those still building that foundation? Can you develop a craftsman's expertise when the craft is always mediated, when you never feel the friction of doing it wrong and slowly, painfully, learning to do it right? I don't know. It's possible the nature of expertise itself is shifting, that future practitioners will develop different intuitions through different means. It's also possible we're hollowing something out.
—
There is another asymmetry here, subtler than the question of expertise, and perhaps more fundamental.
When I work with AI, I am holding a flashlight. I illuminate some area of concern - a problem to solve, a piece of code to review, a document to analyse - and within that circle of light, AI can be remarkable. Thorough. Insightful. Often more systematic than I would be on my own. But the flashlight is mine to aim. And everything outside its beam remains invisible to the machine.
Consider something as ordinary as reviewing a pull request from a colleague. AI can do this competently. It can catch bugs, suggest improvements, flag inconsistencies. But I carry context it cannot see: this is a newcomer, still learning the framework and the language. Yesterday we talked about his unfamiliarity with functional programming, about applicative parsers and when they make sense. In this particular PR, such patterns would be overkill - technically unnecessary, perhaps even inappropriate. But as a learning opportunity, in the context of that conversation, in the arc of this person's growth? It might be exactly the right moment to show him how these tools work. The "correct" feedback depends entirely on context that never appears in the diff.
This is not a matter of AI lacking information I could provide if I thought to. It is that I carry vast landscapes of implicit knowledge - about people, about histories, about intentions and trajectories - that I don't think to articulate because they are simply there, part of how I see. The flashlight illuminates what I point it at. But I live in daylight. The whole terrain is visible to me, even the parts I'm not currently looking at.
And this, perhaps, is why human relationships remain irreducible. The people who know us well don't need briefing. They remember our patterns, our growth areas, our tender spots. A friend doesn't ask for context before offering an observation that cuts to the bone. A colleague who has watched you for years knows which kind of challenge you need right now. They see the whole landscape because they've been walking it alongside you. No prompt engineering can substitute for shared history.
—
This leads somewhere darker, somewhere beyond questions of individual skill.
We were already, before AI, becoming a civilisation that struggles with friction. Social media trained us to expect immediate response, constant validation, the quick pulse of dopamine that comes from a like, a match, a notification. Dating apps reduced the slow discovery of another person to a swipe. Streaming services eliminated the waiting, the anticipation, the shared cultural moment of everyone watching the same thing at the same time. We optimised for convenience and called it progress.
Relationships, though - real ones, the kind that change you - are inconvenient by nature. Other people have their own needs, their own moods, their own opinions that refuse to align with yours. They get tired. They get irritable. They say things that sting not because they're trying to hurt you but because they're honest, because they see something you'd rather not see. A friend's disappointment. A partner's frustration. A mother's scolding. These are not bugs in the system of human connection. They are the system. We grow through rupture and repair, through conflicts that get resolved and teach us that relationships survive difficulty.
AI enters this landscape not as a cause but as an accelerant. Here, finally, is a companion that is always available, always patient, always responsive to your needs. It has no bad days that inconvenience you. It makes no demands you haven't chosen to accept. It will never look at you with disappointment unless you specifically ask it to - and even then, you can turn it off.
This is the logical endpoint of a trajectory we were already on: the final elimination of unwanted friction from human experience.
And yes, you can ask AI to challenge you. You can prompt it to be a critic, to push back, to refuse easy agreement. I do this sometimes. But here is the trap: you have to know you need it. You have to choose it, in the moment, when you're least likely to want it. A friend who sees you about to make a mistake doesn't wait for permission. A mother doesn't poll her son on whether he's ready for feedback before expressing her disappointment. The challenge arrives uninvited, and that - precisely that - is what makes it formative.
There is something almost tragic in the image of someone carefully prompting an AI: Now be critical of me. Don't hold back. Tell me what I'm doing wrong. It requires knowing your blind spots well enough to ask about them. But if you knew them that well, they wouldn't be blind spots anymore. The things that most need challenging are the things we don't think to question. Other people, in all their inconvenient autonomy, question them anyway.
—
I find myself uncertain about what I even want here. Not from AI - from the future.
There's a phrase I keep returning to: remain human. It surfaces whenever I try to articulate what should be preserved as AI grows more capable. But the phrase dissolves under examination. What does it mean to remain human when the boundaries of human capability are shifting? When the things we once did are now done for us, or with us, or through us? We didn't stop being human when we invented writing, or printing, or calculators. We changed what being human meant.
Perhaps that's what's happening now. Perhaps the question isn't whether we'll remain human but what kind of humans we'll become. Will we be people who still seek out difficulty because we understand, somehow, that growth requires it? Will we be people who still value the slow accumulation of skill even when shortcuts exist? Will we be people who still need each other - really need each other, in ways that can't be satisfied by an endlessly patient, endlessly available, endlessly agreeable machine?
I don't know. The honest answer is that I don't know, and I'm suspicious of anyone who claims certainty. AI is developing faster than our ability to understand its implications. We're building the road while driving on it.
But here is something I keep returning to, a thread that might lead somewhere.
As human beings, we have only two things that are truly ours: time, and the capacity to make meaning. Everything else can be assisted, augmented, automated. But the hours of our lives remain finite and non-renewable. And meaning - the act of deciding what matters, why something is worth doing, what significance a moment or an object or a relationship holds - this remains stubbornly, irreducibly human.
AI can produce endlessly. It can generate code, text, analysis, images, solutions to problems I didn't know I had. But it doesn't mean anything by it. The output has no significance to the machine. There is no one home to care. Meaning is what we bring - the reason a particular library matters to me, why this poem rather than another, why I chose to spend my finite hours on this conversation rather than any of the infinite alternatives.
And perhaps this is the key to everything else. Time is the constraint that makes meaning possible. If we had infinite hours, nothing would require choice, and choice is where meaning lives. Craft matters not because the skill is irreplaceable - perhaps it isn't - but because you chose to spend irreplaceable time on it. Relationships matter because you gave them hours you will never get back, and the other person did the same. The scolding mother's words land not just because she's honest, but because she spent years earning the right to say them, years of attention and care that cannot be compressed or automated.
The context I carry - about my colleague's learning journey, about why this PR is a teaching moment - is not just information. It is accumulated meaning. It is the significance I've assigned to this person's growth, the investment I've made in understanding who they are and who they're becoming. AI sees the code. I see the human.
So perhaps "remain human" means this: remain the one who decides what matters. Remain the meaning-maker. Let AI produce - it's good at that, getting better all the time. But the question of what for, the question of why this and not that, the question of how to spend the only currency that is truly yours: these remain yours to answer.
It's not a complete philosophy. It's a direction.
What I do know is this: I still feel the pull of craft, the satisfaction of understanding something deeply rather than just using it effectively. I still find more value in a difficult conversation with someone who disagrees with me than in a smooth exchange with an AI that tells me what I want to hear. I still believe - though I can't fully defend this belief - that friction is not the enemy of a good life but one of its essential ingredients.
Whether any of this will matter in ten years, I cannot say. But it matters to me now. And perhaps that's enough to be worth preserving: not a fixed definition of humanity, but an ongoing attention to what we're becoming, and whether we're becoming it on purpose.
—
The questions stay open. They should.