Most developers spend their first wishes on the obvious: fancy auto-complete, quick bug fixes, simple Q&A, and instant feedback. They fill immediate needs and provide quick bursts of productivity. But when the dust settles, these types of improvements don’t fundamentally change the developer. True transformation requires a more deliberate engagement with the tools.
Forging a deeper partnership with LLMs can unlock previously inaccessible workflows. LLMs can do far more than grant surface-level wishes: their real magic lies in helping us reshape how we reason about problems. In this post, I’ll walk through several ways I’ve moved past using them for simple code generation and instead improved how I think, design, and build.
Injecting Intuition
Getting familiar with a new codebase is challenging for any developer. With luck, well-thought-out naming and structure guide you like a trail of breadcrumbs. More often, though, it’s a maze: sprawling design, inconsistent organization, and unfamiliar patterns make it difficult to follow. Even after you grasp the high-level structure, projects of reasonable complexity tend to contain layers of indirection that make tracing execution paths tedious.
The key to becoming productive in a new codebase is arriving at a reasonable mental model and building intuition about how to find things, where to add new contributions, and what patterns to reuse. LLMs have made this tiresome task far more tractable, enabling quick and efficient onboarding.
To start getting oriented, I prompt the LLM to draft a compact Markdown outline of the codebase with references to actual files and functions. I immediately review what it produces, follow each link, verify details, and correct any hallucinations. Auditing the document early and preventing the LLM from churning out uninterrupted content keeps it focused and catches small drifts before they can compound. If the project lacks a clear README, I have the LLM generate setup and testing instructions, then validate and fold them into the document.
For the specific task at hand, I ask the LLM to record how component X works, describe primary user interfaces, and map out essential data models. I repeatedly ask follow-up questions and push the LLM to revise and condense content. I only need to reference this document a few times before I am able to effectively reason about the codebase. Like a good wish, the tangible document itself is less valuable than the intuition gained through the process of making it.
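To make this concrete, the living document that comes out of this process might start as something like the sketch below. The file paths, component names, and commands are hypothetical placeholders, not a prescription:

```markdown
# Codebase Notes (living document)

## Layout
- `api/` — HTTP handlers; entry point in `api/server.py`
- `core/` — domain logic; start with `core/models.py`

## How component X works
- Request flow: `api/routes.py` → `core/service.py` → `store/db.py`

## Setup & testing
- `make dev` to bootstrap; `make test` to run the suite

## Open questions
- Why do `core/` and `store/` both define a `User` type?
```

The "Open questions" section doubles as a queue of follow-up prompts for the LLM, and each answered question gets folded back into the document.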
Camouflaged Coding
Once I’ve mapped out the terrain, the next challenge is moving around silently. A hallmark of maintainable code is PRs that blend seamlessly into the codebase: following existing patterns and reusing utilities keeps cognitive load low for other developers. Yet uncovering these patterns is challenging without extensive prior experience in the codebase. Luckily, this is an LLM superpower.
When I’m unsure how something is done, I ask the LLM to find examples, surface code references, and suggest how to adapt my solution so it blends in. For example, I’ve recently sought help with handling soft deletions, testing temporal workflows, and managing API endpoints. This approach helps with both tightly scoped, tactical changes and more general stylistic choices like naming conventions and file organization.
The most up-to-date recommendations can be surprisingly difficult to tease out. Codebases continuously evolve, and standards are often mid-transition. This leaves behind coexisting patterns where it’s not clear which to prefer. Commit timestamps help, but not everyone sticks to the latest conventions. When confronted with competing standards, LLMs help me determine which appears most frequently in recent commits.
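When I want that comparison to be reproducible, the heuristic is simple enough to script. Here is a minimal sketch, assuming you have already extracted (commit date, pattern) pairs from `git log` output; the pattern names and the 180-day window are invented for illustration:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def preferred_pattern(occurrences, window_days=180):
    """Pick the pattern used most often in recent commits.

    occurrences: list of (commit_datetime, pattern_name) tuples,
    e.g. harvested from `git log` with a machine-readable date format.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    # Count only occurrences newer than the cutoff.
    recent = Counter(p for when, p in occurrences if when >= cutoff)
    if not recent:
        # Nothing recent: fall back to overall frequency.
        recent = Counter(p for _, p in occurrences)
    return recent.most_common(1)[0][0]
```

This is roughly the judgment call I ask the LLM to make; scripting it is only worthwhile when the same question keeps coming up.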
In greenfield projects, I find myself babysitting LLMs. However, when pattern matching against existing code, LLMs tend to deliver excellent first-pass solutions. Beyond saving time, this technique accelerates and reinforces the process of building intuition for a new codebase.
Armchair Architect
When beginning a new project, I bring in an LLM to help construct a plan. This might involve system design, data models, or milestones. I often start with a diagram to clarify my own understanding and establish the right level of abstraction. Rather than diving into solutions, the goal is to scope out achievable sub-problems and clear interfaces. As with onboarding to a new codebase, I have the LLM iterate on a living Markdown document.
The first draft is frequently flawed, with missing pieces and misplaced priorities. Articulating structured requirements and relevant context is hard, and I rarely know everything upfront. Trying to craft the perfect prompt is a recipe for procrastination, but starting with something and refining it later accelerates discovery. Through this back and forth, we converge on a concrete plan.
The LLM inevitably overproduces, adding superfluous sections. I ruthlessly pare down the document to the essentials. Ideally, there is not much prose at all: just diagrams, interface skeletons, tables, and concise bullet points. Just as an LLM can freely offer critique, so too can it eagerly receive it. I call out inconsistencies and unnecessary complexity to help the LLM distill the document into a cohesive, practical plan. While the LLM often has many theoretical strategies, grounding them in the specific project at hand takes coaxing.
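After pruning, the plan might be little more than the skeleton below. The project and interface names are invented for illustration:

```markdown
# Plan: batch ingest pipeline

## Interfaces
- `fetch(source) -> RawBatch`
- `transform(RawBatch) -> Records`
- `load(Records) -> Report`

## Milestones
| # | Deliverable             | Depends on |
|---|-------------------------|------------|
| 1 | Data model + migrations | –          |
| 2 | Fetch + transform       | 1          |
| 3 | Load + retries          | 2          |

## Open risks
- Upstream rate limits; backfill strategy TBD
```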
This planning document is valuable in its own right, but it’s also an anchor keeping the LLM moored during implementation. Instead of constantly reminding the LLM about the requirements, I simply include the blueprint as context. Time spent upfront drafting the plan yields higher-quality, more stable frameworks later.
Phone a Friend
No man – or machine – is an island. Lately, I’ve had my LLMs call in backup. Even as LLMs collectively improve, their capabilities and specializations vary. I choose different models for tasks aligned with each LLM’s strengths. In a standard chat interface, there is a dialogue between human and machine: collaborating to produce something better than either would alone. The natural next step is to invite another voice, or model, into the committee.
Asking one LLM to review another brings diversity in reasoning. More voices result in a higher chance of spotting subtle issues or discovering clean solutions. You can do this manually by copy-pasting output between chats, but dedicated CLI tools tighten the loop. I ask the first LLM to seek critiques from the second, then have them review and revise together.
Sometimes an LLM is too eager and dives straight into applying changes. I start by telling it not to implement anything yet and to share the proposed plan with me first. Keeping the model in the planning phase prevents it from making surface-level fixes without consulting its counterpart.
Since ambiguous requirements are inevitable, I also instruct the LLM to forward any clarifying questions to me. Thinking models struggle when trying to infer exactly what a general request means. This can result in irrelevant responses, or a long series of solutions with different assumptions. Encouraging clarifications produces more relevant and less verbose plans.
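Put together, the whole loop is only a few calls. In the sketch below, `ask` is a hypothetical stand-in for whatever model client or CLI you actually use (here it just echoes, so the control flow is runnable); the instructions baked into the prompts mirror the guardrails above: plan only, and forward clarifying questions:

```python
def ask(model, prompt):
    """Hypothetical stand-in for a real LLM call (API client or CLI tool).

    It simply echoes the model name and prompt so the loop below runs.
    """
    return f"[{model}] response to: {prompt[:40]}..."

def committee_review(task, drafter="model-a", reviewer="model-b", rounds=2):
    """Draft-critique-revise loop between two models.

    The drafter is told up front not to implement anything and to
    surface clarifying questions instead of guessing.
    """
    plan = ask(drafter, f"Propose a plan for: {task}. Do NOT implement yet. "
                        "Forward any clarifying questions to me.")
    for _ in range(rounds):
        critique = ask(reviewer, f"Critique this plan:\n{plan}")
        plan = ask(drafter, f"Revise the plan to address:\n{critique}")
    return plan
```

In practice the human stays in the loop at every round, reading the critiques and answering the forwarded questions before letting the revision proceed.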
This process is slow, so I reserve the technique for tricky situations. When I’m not an effective judge for a particular problem, this helps give me context to form a clearer opinion and exposes trade-offs more explicitly. And when a solution feels clunky, the debating models generate fresh alternatives or confirm the soundness of the initial approach.
Compacting Context
Though I reach for them less often than the basics covered in Part I, these four techniques have helped me grow as a developer. They raise the bar for the quality of what I build and change how I think about building. Onboarding to new codebases has become a breeze with the help of LLMs to build intuition and emulate recommended patterns. Robust solutions are more accessible through collaborative design and refinement via committee.
Yet, for all the benefits of LLMs, using these tools effectively means knowing when to reach for them and when not to. Even genies have limits, and in the final post, I’ll explore where the magic runs thin.