Danny Wilkins
2 min read • 24 June 2025
Every project has its own little knot to untangle. On this one, it was the interplay between tools. Specifically, how we brought together React, Next.js, and Web Components in a way that worked for the team, the product, and the long-term codebase.
On paper, the stack looked familiar. A Next.js app with a React frontend is something I’ve built plenty of times. But this project also needed to use a shared UI library built with Web Components. That choice made perfect sense, as it keeps the product consistent with other parts of the wider platform. Architecturally, though, it introduced an interesting tension.
React and Web Components aren’t fundamentally incompatible, but they don’t speak the same language natively: React thinks in props, state, and synthetic events, while custom elements expect attributes, DOM properties, and the custom events they dispatch themselves.
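To make that concrete, here’s a minimal sketch of the two idioms. The `<ui-toggle>` element, its `checked` property, and its `toggle-changed` event are hypothetical stand-ins for the shared library, not its real API:

```ts
// Hypothetical custom element standing in for the shared library:
// assume <ui-toggle> exposes a `checked` property and dispatches a
// "toggle-changed" CustomEvent when the user flips it.

// The Web Component idiom: imperative DOM properties and event listeners.
const toggle = document.createElement('ui-toggle') as HTMLElement & {
  checked?: boolean;
};
toggle.checked = true;
toggle.addEventListener('toggle-changed', (event) => {
  const detail = (event as CustomEvent<{ checked: boolean }>).detail;
  console.log('toggle is now', detail.checked);
});
document.body.appendChild(toggle);

// The React idiom would be declarative props and callbacks, something like
//   <Toggle checked={checked} onChange={setChecked} />
// and before React 19, JSX props on a custom element are written out as
// string attributes, so the two models need a bridge (see the wrapper
// sketch further down).
```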
What made it more subtle still was how rarely this combination comes up in public codebases. There isn’t much of a trail to follow. So when I first reached for AI tooling to help scaffold out components, there wasn’t much it could draw on. Codegen models tend to perform best when there are thousands of prior examples in the training data. Here, there weren’t.
That doesn’t mean AI was useless, just that it needed a bit of a warm-up. I had to do the first few sections manually, working out how to bridge the gap between React’s way of thinking and the idioms of the component library. Once I’d established a working pattern that felt robust and readable, that’s when AI could step in and help replicate the structure.
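For illustration, here’s roughly the shape that pattern took: a thin React wrapper that sets rich data as DOM properties and translates the element’s custom events back into ordinary callbacks. The `<ui-date-picker>` tag, its `value` property, and its `value-changed` event are hypothetical, so treat this as a sketch of the approach rather than the library’s actual API:

```tsx
import React, { useEffect, useRef } from 'react';

// Teach TypeScript about the custom tag (React 18-style global JSX types).
declare global {
  namespace JSX {
    interface IntrinsicElements {
      'ui-date-picker': React.DetailedHTMLProps<
        React.HTMLAttributes<HTMLElement>,
        HTMLElement
      >;
    }
  }
}

type DatePickerProps = {
  value: string;
  onChange: (next: string) => void;
};

// Thin React wrapper around a hypothetical <ui-date-picker> element.
export function DatePicker({ value, onChange }: DatePickerProps) {
  const ref = useRef<HTMLElement & { value?: string }>(null);

  // Pass rich data as a DOM property rather than a stringified attribute.
  useEffect(() => {
    if (ref.current) ref.current.value = value;
  }, [value]);

  // Translate the element's CustomEvent back into a plain React callback.
  useEffect(() => {
    const el = ref.current;
    if (!el) return;
    const handler = (event: Event) =>
      onChange((event as CustomEvent<{ value: string }>).detail.value);
    el.addEventListener('value-changed', handler);
    return () => el.removeEventListener('value-changed', handler);
  }, [onChange]);

  return <ui-date-picker ref={ref} />;
}
```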
And that, in a way, was the interesting part: solving the problem wasn’t about brute-forcing a workaround or finding a hidden fix. It was about shaping the conditions for success by clarifying the pattern, making the architectural decisions early, and giving both the humans and the machines something they could reliably build on.
Now, I could point the AI at that example and say: “Like this. Same layout, same structure, just swap out the questions.” Suddenly, it could keep up. It wasn’t magic. It still needed supervision. But it saved me hours of repetitive markup and logic. I could focus on edge cases and validation while it cranked out the boilerplate.
That shift made me think differently about how we use AI in development. We like to talk about it as a co-pilot, but I think it’s closer to having a junior dev on the team. One who’s eager, fast, and totally reliant on good examples. If you give them something solid to work from, they’ll move quickly. But hand them an ambiguous brief, and you’ll spend more time cleaning up than if you did it yourself.
What’s interesting is how this changes the skillset. I’ve always thought of coding as the process of thinking. You don’t always know what you’re building until you start typing. You make discoveries by doing. But working with AI flips that. It demands clarity up front. You need to describe the solution before you’ve even built it.
So now I find myself working in two different modes. When I’ve got a clear picture in my head, when I know the pattern, the shape, and the logic, I’ll craft a prompt and let the AI take a first pass. I’ll be specific: “Use this hook, follow this naming convention, structure the error handling like this.” It’s precise, and the results are usually usable straight away.
But when the feature’s vague or the architecture’s still forming, I’ll go manual. That’s when I need the keyboard, the editor, the freedom to think messily. To figure it out as I go.
I’ve come to think of this balancing act as a kind of soft skill. Not just technical experience, but judgment. Knowing when to delegate and when to dive in. When to trust the AI and when to override it. And knowing that, sometimes, writing the code yourself isn’t slower. It’s just how we think.
That’s not something I would’ve said a year ago. But now? It’s part of my rhythm.
👉 AI Native Software Development
This is an AI-generated summary of a conversation with Dootrix Software Engineer Danny Wilkins.