Recently I've been thinking about what it means to be good at this job. In the 14 years I've worked in technology, I've mentored a handful of people and observed many more grow and succeed. Most were smart, knowledgeable, and (sufficiently) experienced for their role. But when it came to actually being effective – delivering consistently, navigating complexity, sustaining momentum – there was a broad range.
I've been trying to pinpoint some of the less obvious qualities that correlate to effectiveness at the job. These traits aren’t innate – they're skills that can be taught, practiced, and refined. Focusing on them directly, outside of general mentorship, can be one of the highest-leverage ways to grow as a developer and to help others grow.
Leverage natural curiosity
A bright new grad recently reflected that, although he was delivering results, he didn’t feel like he was building lasting knowledge. His focus was on completing tasks efficiently – resolving blockers quickly, shipping code, moving forward. When he encountered an issue, he’d find the first viable workaround. Often, this meant copying a solution from elsewhere in the codebase or applying the top Stack Overflow answer. These fixes worked in the moment and allowed him to maintain momentum, but they didn’t deepen his understanding of the systems he was working with.
To his credit, he recognized this. He saw that while he was progressing in his day-to-day tasks, he wasn’t learning much about how things actually worked. He wasn’t building a holistic picture of the systems or the broader technical context around his work.
This pattern is common, especially for early-career developers in fast-paced environments. And in urgent situations – production outages, critical deadlines – taking the fastest path is appropriate. But when this approach becomes the default, it can quietly undermine long-term growth. It prioritizes output over comprehension, speed over depth.
The alternative is to adopt a mindset of curiosity, and to treat each obstacle as a learning opportunity. When you solve a problem with a snippet from Stack Overflow, take a few extra minutes to understand the unfamiliar concepts mentioned in the post. When editing code, take time to examine the surrounding logic – why certain parameters exist, how they interact with the rest of the system. Develop the habit of tracing a problem back to its source, not just resolving the symptom.
The cartography analogy
A useful analogy is that of a cartographer. At the beginning of a career or a new position, your mental map of the system is blank. Each task you’re assigned is a destination: get here. You have two ways to approach that journey.
Option 1 is to head directly toward the target. When you encounter a barrier – a complex code path, a failing test, a mysterious dependency – you do just enough to get past it and keep moving. You arrive, but with little understanding of the landscape you crossed.
Option 2 is to slow down and map the terrain as you go. When you see a barrier, you take time to understand what it is, where it comes from, and how it connects to other parts of the system. You document what you learn. You may even build tools or abstractions to make future crossings easier. It’s slower in the short term – but over time, it pays off. You begin to anticipate obstacles. You navigate with confidence. And importantly, the knowledge you accumulate becomes shareable. You help others avoid the same pitfalls and take more efficient routes.
Option 1 emphasizes delivery. Option 2 builds systems knowledge, organizational context, and reusable understanding. Striking the right balance between the two is essential.
For most developers, a reasonable default is a 50/50 split between execution mode and exploration mode. You could probably go as far as 70/30 in either direction. Personally, even after nearly a decade at my company, I still lean toward exploration. I spend more of my time in a mode of curiosity than a mode of execution: understanding systems, following leads, documenting knowledge, and preparing for future complexity. That investment helps me to resolve issues that span multiple components, to identify subtle risks that lie beneath the surface, and to guide others through unfamiliar systems.
Most early-career developers focus far too much on immediate delivery. If that’s you, here's a heuristic: for every unit of work you deliver, pair it with a unit of exploration. When you add a method to an API, read and digest the rest of the API. When you find a bug, read the version control history and understand how it got there. When you're improving the performance of a system, learn how the upstream and downstream systems work, and map out their interactions with your system.
Effective developers have a natural curiosity and wield it with deliberation. They draw the map as they go. They invest intentionally into building a body of understanding, and the investment pays dividends down the line.
Learn to learn
Don't consume knowledge, build mental models.
When faced with something you don’t understand, one approach is to start reading the code and docs and gradually build a picture from the ground up. A more effective approach is to first construct a mental model of how the system could or should work, then learn the system by cross-referencing reality against your imagined version. Each time your model is contradicted or clarified, refine it. By the end, your model should closely match the actual system.
Everyone knows it’s easier to remember things you created yourself than things you merely consumed. Somehow, the ideas you generate yourself seem to lodge deeper in memory. This technique is a pedagogical hack: it engages the “origination” part of the brain during learning. Building the initial mental model is a creative act, and so is refining it in light of new information. Instead of passively absorbing facts, you’re actively shaping and adjusting a construct in your mind.
A useful side effect of this method is that, over time, it lets you absorb new systems and ideas very quickly. As you gain experience and internalise the idioms of your domain, your initial models will more often match the reality from the outset. Learning becomes an exercise in spotting the handful of differences. It’s like already having most of the jigsaw puzzle in your head – learning a new system becomes a matter of dropping in a few missing pieces, and maybe correcting a couple.
You’ll find yourself thinking, “OK, yep – this module handles query logic, this class manages the transaction log, and here’s the sequence of transactions – yep, makes sense.”
This technique works for understanding software systems, but it also applies to anything intellectual: mathematics, organisational structures, the stock market.
Use the Feynman technique to learn brand-new things.
What do you do if you want to learn something, but you don't even have the building blocks yet to build an initial mental model? Use the Feynman technique.
The Feynman technique, named after physicist Richard Feynman, is in my opinion the most effective technique for "bottom-up" learning. It essentially puts into practice the notion that "the best way to understand something is to teach it".
I think the Feynman technique is effective for precisely the same reason as the mental model-building technique: by "teaching" something to yourself, you transform a mental consumption activity into a mental construction activity.
The link above gives a far better outline of the technique than I ever could. All I’ll add is this: I spent six years completing a part-time pure mathematics degree while working a demanding full-time job, and I couldn’t have done it without the Feynman technique.
Diagnose effectively
Much of our job consists of diagnosing problems: debugging new code, tracking down the causes of user errors, figuring out why the organisation isn’t operating as effectively as it could. You almost certainly spend more time diagnosing problems than writing code.
Effective, fast diagnosis is a skill worth mastering. Here are some techniques.
Get the experiment time down.
Once in a blue moon, you’ll pinpoint a bug just by reading code. Most of the time, though, diagnosis involves repetition and interaction: adding print statements or breakpoints, interacting with the system in a REPL, running a test suite. Each of these is an experiment.
The highest-leverage move is to get the experiment time down before you start. You’ll probably need to rerun the experiment 5 or 10 or 20 times. Each minute you shave off pays for itself. Fast iteration maintains momentum, keeps the problem details in working memory, and helps you stay focused on the goal.
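One common way to do this is to make sure the expensive parts of the setup only run once, so each iteration exercises only the code under suspicion. A rough sketch, with hypothetical helper names:

```python
# Hypothetical sketch: pay the expensive setup cost once, so each debugging
# iteration only runs the code under suspicion. load_fixture_data() and
# build_test_database() are stand-ins for whatever slow setup your experiment needs.
import pickle
from pathlib import Path

CACHE = Path("/tmp/debug_fixture.pkl")

def get_fixture():
    if CACHE.exists():
        return pickle.loads(CACHE.read_bytes())          # later runs: near-instant
    data = build_test_database(load_fixture_data())      # first run: slow
    CACHE.write_bytes(pickle.dumps(data))
    return data
```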
Narrow the search space thoughtfully.
At the start of diagnosis, you usually have no idea where the problem lies. Then, through a mix of thinking and testing, you gradually narrow the possibilities – to a system, to a module, to a block of code. Eventually, the bug is just sitting there in front of you.
This narrowing can be done naively or thoughtfully. Aim for the latter. The goal of each step is to rule out as much of the search space as possible, regardless of the experiment outcome. I recommend a kind of generalised binary search: if you know the bug is somewhere in a system, mentally partition it in two, and design a test that will tell you which half it’s in. Then repeat on that half.
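In code form, that generalised binary search might look something like the sketch below, where the "stages" could be commits, pipeline steps, or configuration changes, and check_passes is a hypothetical predicate you'd write for the system at hand:

```python
# A sketch of generalised binary search over an ordered list of candidate
# "stages" – these could be commits, pipeline steps, or config changes.
# check_passes(stage) is a hypothetical predicate: True if the system is still
# healthy up to and including that stage. We assume the final stage is bad.
def bisect_first_bad(stages, check_passes):
    lo, hi = 0, len(stages) - 1   # invariant: the first bad stage lies in stages[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if check_passes(stages[mid]):
            lo = mid + 1          # everything up to mid is good – look later
        else:
            hi = mid              # mid is already bad – the culprit is at or before mid
    return stages[lo]             # the earliest stage at which the check fails
```

This is the same idea that git bisect automates over commit history.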
Each time you rerun the experiment, limit it to the smallest space that definitely contains the problem. If you’ve narrowed it to a subcomponent, run the next test just on that. This keeps your focus sharp and keeps experiment time down. Don’t rerun your whole app end-to-end to debug a bug in email address validation.
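For that email validation example, the focused experiment might be nothing more than a tiny test file run on its own. A hypothetical sketch:

```python
# Hypothetical sketch: a tiny test file targeting just the suspect component,
# instead of an end-to-end run of the whole app. validate_email is the
# (hypothetical) function under suspicion.
from myapp.validation import validate_email

def test_plus_addressing_is_accepted():
    # The smallest input that reproduces the reported failure.
    assert validate_email("user+tag@example.com")

def test_missing_domain_is_rejected():
    assert not validate_email("user@")
```

Running just that file (for example with pytest) keeps each iteration down to a second or two.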
Think about the edges between systems
In most organisations, some of the most impactful work exists in the "edges" between systems. What do I mean by this?
In a technological organisation, you and your team will usually have some remit in the form of one or more systems. If you work on the database team, this remit is the database server and client. If you work on the ML team, the remit is the ML infrastructure. If you work on a research team, the remit might be the research tooling.
Systems in an organisation can be modelled as a graph. The systems themselves are the vertices, and the edges represent interactions or interfaces between systems.
Most of the time, you'll be focusing on these systems themselves. You'll consider what features they need, where their current features fall short, what you can make faster, and so on. Organisational gravity pulls your attention to the vertices of this graph; the vertices are probably what your team is named after, and therefore what you think you should be working on. You could spend your whole career working on the vertices.
My thesis is that, in most organisations, some of the most impactful work exists in the edges between systems. Some reasons I believe this:
- Teams orient themselves around systems, so a disproportionate amount of organisational attention goes toward the vertices. Relatively little attention is paid to the interactions between systems.
- In any reasonably interconnected organisation, the graph has more edges than vertices, so any given edge is more likely to be under-attended and under-explored.
- Each vertex evolves relatively independently of its adjacent vertices. A team builds a new feature in their system, but those working on adjacent systems don't notice the new feature, or notice it but don't have the time or inclination to leverage it.
- Thinking about edges gives you a higher-level view of what's going on. You're not thinking about one system, but about multiple systems and their interactions.
Let me be more concrete. In my organisation, if I consider any particular system in isolation – say, a database – it looks reasonable. Its features seem sensible. I can't find any low-hanging fruit for optimisation. Obvious bugs have been ironed out. It is essentially a self-contained, self-consistent design.
But then I pick an adjacent system that acts as a client for the database. When I consider that system in isolation, it also looks fine. But when I look at how the client uses the database, I often immediately notice some gaping issues. Maybe the client's query pattern doesn't make sense for the database's revamped implementation. Maybe the client is redundantly caching query results that the database is already caching. Maybe the client is missing out on a recent optimisation that was implemented within the database.
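To make the redundant-caching case concrete, here's a toy sketch (all names hypothetical) of client code that looks perfectly sensible in isolation but duplicates work the database now does itself:

```python
# Toy illustration (all names hypothetical): a client-side cache that looked
# sensible in isolation, but now duplicates caching the database does itself.
class ReportClient:
    def __init__(self, db):
        self.db = db
        self._cache = {}   # added years ago, back when these queries were slow

    def load_report(self, report_id):
        if report_id not in self._cache:
            # The database layer has since gained its own result cache, so this
            # client-side cache mostly adds memory pressure and staleness bugs –
            # something you only notice by looking at both systems together.
            self._cache[report_id] = self.db.query(
                "SELECT * FROM reports WHERE id = ?", (report_id,)
            )
        return self._cache[report_id]
```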
In my experience, almost every time I delve into the edge between two systems, I find glaring problems. And these problems often seem more severe than the problems within any particular system. I'd go as far as to say that, in sufficiently complex webs of interconnected systems, inefficiencies at interaction points tend to grow to dominate inefficiencies within any particular system.
This is such an easy way to have impact within an organisation. I encourage everyone around me to spend more time thinking about the edges.
I also suspect the same principle can apply to other disciplines of knowledge work. For example, in mathematics research, many significant results come from techniques in one area being effectively applied to an entirely disparate area. In this way, focusing on an "edge" between two areas yields results.
Move fast and break (some) things
As a culture, we've moved past "move fast and break things". But Zuckerberg's infamous words hold a degree of truth.
Any time you break something, whether it be a silly CI pipeline or your live production system, it has a cost – usually measurable in terms of money, person-hours wasted, or people woken up in the middle of the night. I make these claims:
- the cost of breaking something in a given system is often hard to predict. Calibrating this sense takes time and experience with the system;
- as a result, many people (especially in their early career) err on the side of significantly over-estimating the cost of breakages, and place disproportionate weight on the perceived emotional consequences ("people will be angry with me");
- this aversion to potentially breaking things can slow down one's progression, in terms of learning and delivery.
Here are some disorganised thoughts:
- Breaking things can cost you trust. You will lose more trust if you break something in a predictable way because you didn't follow a known process, e.g. you made a change without following the checklist. You won't lose much trust if you break something in a novel way, e.g. you were tinkering with a new system. You can even gain trust by breaking something in a really interesting way, because it demonstrates curiosity.
- If something breaks repeatedly in the same way, it's a process problem rather than a people problem, and you can regain the lost trust (and more!) by fixing the process.
- If you break something, immediately take responsibility and fix it. Generally, people care about breakages inasmuch as dealing with those breakages costs them time and energy (or money, of course). If you break something and fix it yourself, and it doesn't have a significant monetary impact, people won't mind.
- Breaking something fragile, then making it robust in the process of fixing it, is usually more valuable than not breaking it in the first place.
- The cost of a breakage is roughly proportional to how long it goes unnoticed. The worst breakages are those where something breaks in a subtle way and quietly costs the business money over a long time.
- The newer you are to a team or system, the more leeway is extended to you to break things. Take the opportunity of being new to experiment. If you break things, it's probably a process problem more than a people problem, and improving your team's processes will help others down the line.
Of course, all of this depends on the culture of your team and workplace. So take the temperature of that before you do anything else.
But why must anything break at all? As with many things in life, the strategy that optimises for success on a long time horizon may differ from the strategy on a short horizon. Let's take "success" to mean that you finish the work you're assigned, and you grow your knowledge and experience in an area.
Some newer developers err on the side of being too careful, stepping gingerly through their work. The reality is that most successful projects reach their endpoints with a bunch of zig-zagging and experimentation along the way. At the outset of a project, one of your goals should be to discover the unknown unknowns. There will be things that don't work – you're better off finding them early by breaking things, rather than theorising really hard about how to do things as safely as possible.
On this subject, one of the most valuable approaches you can take when building something new is to invest time upfront in a safety net that will contain the damage of future breakages. For example, when building a new system from scratch, design a robust staging setup at the very start, so that you can flail around wildly and break things in the test system without affecting prod data.
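What that safety net looks like depends on the system, but even a crude environment guard goes a long way. A minimal sketch, assuming a small config module you control (names and URLs hypothetical):

```python
# Minimal sketch (hypothetical names and URLs): default every experiment to
# staging, and make touching production require deliberate intent.
import os

DATABASE_URLS = {
    "staging": "postgres://staging-db.internal/app",     # safe to break
    "production": "postgres://prod-db.internal/app",     # guarded
}

def get_database_url():
    env = os.environ.get("APP_ENV", "staging")            # the default is the safe option
    if env == "production" and os.environ.get("ALLOW_PROD") != "1":
        raise RuntimeError("refusing to touch production without ALLOW_PROD=1")
    return DATABASE_URLS[env]
```

The point isn't this particular mechanism – it's that the default path should be the safe one.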
In terms of your own learning, becoming proficient in a new system or area is like learning to ride a bike. If you're terrified of ever falling over, it will take you a long time to get good. You learn the boundaries of a system by overstepping them.
As a final note: lessons that you learn through personal failure will stick with you much more than those you learn by reading the warning label on the bottle.
Do "Do things properly" properly
We have finite time and too many things we want to do. After building features, fighting fires, deploying releases, and running interviews, we’re left with a narrow sliver of time and attention to invest in technical excellence. We can’t do everything perfectly. So we triage. Prioritising deliberately and aggressively is one of the highest-leverage things a developer can do.
An effective developer avoids aesthetic or “vibe-based” judgments when making these prioritisation calls. Instead, they assess trade-offs in terms of concrete benefits and risks.
When examined this way, some “obviously necessary” proposals often turn out to have marginal value. And some seemingly harmless hacks reveal themselves to be more damaging than they first appear.
Some engineers fall into the trap of doing everything “the right way.” In my view, very few things need to be done 100% right. In fact, an insistence on perfection can erode the trust placed in you – it can make it seem like you prioritise technical indulgence over the needs of the business. A better approach is to build goodwill by working efficiently and pragmatically most of the time, and then cash in that goodwill to do things properly when it really matters.
Prioritisation often involves talking to others. In these conversations, it’s important to speak in concrete, factual terms – not in emotional appeals. Developers are flawed and fickle, but we respond well to structured, objective reasoning. Leadership may tune out vague complaints about technical debt, but they understand structural risk.
Some resources I'd point to on framing these trade-offs:
- Ben Rady’s CRUFT model (“complexity, risk, use, feedback, team”), which he covers on his blog and in a Two's Complement podcast episode.
- An earlier episode of the same podcast with Rady and Matt Godbolt, where they discuss other frameworks for framing technical debt.
- A piece I wrote recently on this blog, about modelling liabilities in software. The section on scale and longevity is especially relevant here – an effective developer understands that the severity of an imperfection depends on both its blast radius and how deeply embedded it is in the system.
Know how to use the computer
Being an effective developer isn’t just about writing good code. It’s about moving fluidly through your tools. Typing speed, keyboard shortcuts, terminal fluency, and precise navigation aren’t just nice-to-haves. They directly impact your ability to get work done. Do not underestimate this!
The common rebuttal is that “typing speed doesn’t matter – most developers only write a few lines of code per day.” That’s true. Raw typing output isn’t the bottleneck. But most of a developer’s time isn’t spent in flow-state, writing production-ready code. It’s spent interacting with the computer in a thousand small ways: jumping between files, searching through unfamiliar code, inspecting logs, testing hypotheses, debugging, running commands, pulling down changes, writing documentation, managing version control, sending messages, and searching the web. These tasks are interleaved with actual coding – and how efficiently you move through them has a real impact on how quickly and effectively you work.
If you're skeptical, spend some time pairing with someone who lacks computer fluency. A slow typist, someone unfamiliar with keyboard shortcuts, someone who struggles to navigate the terminal or doesn’t know basic command-line tools. You’ll notice that their pace drags – each action takes longer, context-switching is more cumbersome, errors take more time to recover from, and momentum is harder to sustain. The inefficiency adds up.
Most junior developers I meet are too slow. Not because they lack talent, but because they haven’t practiced intentionally. Learn to type properly. Learn the keyboard shortcuts. Learn the suite of tools at your disposal. An intern I mentored told me years later that this advice fundamentally changed how he worked for the better – and that now he’s the one frustrated when others fall behind.
Fluency compounds. When the friction between thought and action is low, you can think more clearly, move more freely, and work more effectively.