There is a recurring pattern in software engineering: a team adopts an exciting new framework, spends months fighting its rough edges, and eventually realizes they could have shipped the product faster on tools they already knew. The new technology was not bad. It was simply new, and newness has costs that are easy to underestimate.
Dan McKinley articulated this idea clearly in his widely read essay "Choose Boring Technology." His core argument is that every team has a limited capacity for absorbing new, unproven tools, and spending that capacity on infrastructure that does not differentiate your product is a strategic mistake. Years later, the essay remains one of the most useful frameworks for making technology choices, precisely because the pressure to chase novelty has only intensified.
The Innovation Token Concept
McKinley proposes a mental model: imagine your team starts each project with a small, fixed number of innovation tokens. Each time you adopt a technology that is new to your team, or new to the industry, you spend a token. You are not buying capability. You are buying uncertainty: unknown failure modes, missing documentation, hiring difficulty, and integration surprises that your team will need to absorb.
The insight is not that you should never spend tokens. It is that you should spend them deliberately, in places where the new technology creates genuine competitive advantage. If your product’s core differentiator is real-time data processing at unusual scale, spending a token on a specialized streaming database might be worth it. Spending another token on a novel build system and a third on an untested deployment platform is not. You have burned your budget on problems that have perfectly adequate boring solutions, leaving no room for the risks that actually matter.
Most teams do not fail because they chose the wrong technology for their hardest problem. They fail because they chose novel technology for five problems simultaneously and could not absorb the cumulative uncertainty.
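As a toy illustration of the token accounting (the budget size, tool names, and tags below are invented for the example, not taken from the essay), the bookkeeping fits in a few lines:

```python
# Toy sketch of an innovation-token budget. Each stack choice is tagged
# "boring" (well-understood by the team) or "novel"; the count of novel
# choices is checked against a small, fixed budget.

TOKEN_BUDGET = 2  # assumed: a small, fixed number per project

# Hypothetical stack: one novel choice is the differentiator, two are not.
stack = {
    "database": ("PostgreSQL", "boring"),
    "cache": ("Redis", "boring"),
    "streaming": ("NewStreamDB", "novel"),    # core differentiator
    "build_system": ("ShinyBuild", "novel"),  # boring option exists
    "deploy": ("UntestedPaaS", "novel"),      # boring option exists
}

novel = [name for name, (tool, kind) in stack.items() if kind == "novel"]
if len(novel) > TOKEN_BUDGET:
    print(f"Over budget: {len(novel)} novel choices ({', '.join(novel)}) "
          f"against a budget of {TOKEN_BUDGET}.")
```

The point of the exercise is not the arithmetic, which is trivial, but that writing the list down forces the team to say out loud which novel choices are differentiators and which are not.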
What “Boring” Actually Means
Calling technology “boring” is not a criticism. It is a compliment about operational maturity. Boring technology has a specific set of characteristics that make it valuable.
Known failure modes. PostgreSQL has been in production for decades. Its edge cases, performance cliffs, and operational quirks are extensively documented. When something goes wrong at 2 AM, there is a Stack Overflow answer, a blog post, or a colleague who has seen it before. The same is true for Redis, Nginx, Django, Rails, and Express. These tools still surprise people occasionally, but the surprise surface is small.
Deep documentation. Boring tools have years of accumulated tutorials, reference documentation, and real-world war stories. You are not the first person to attempt what you are attempting. This matters enormously when onboarding new team members or debugging production issues under time pressure.
Hiring pools. There are many experienced PostgreSQL administrators, many Rails developers, many people who know how to operate a Node.js service in production. When you build on boring technology, you can hire for proven skills rather than hoping to find someone who happens to have used the same niche tool you chose eighteen months ago.
Stable interfaces. Boring tools tend to have stable APIs. A Django application written three years ago still works. An Express middleware from 2021 probably still does what it did then. This stability is not glamorous, but it means your team spends time building product features instead of chasing framework migrations.
None of this means boring technology is stagnant. PostgreSQL ships significant improvements in every major release. Node.js 22 LTS brought meaningful performance gains. TypeScript continues to evolve its type system. These tools are actively maintained and improved. They are boring in the best sense: you can rely on them without worrying about what will break next.
The Hidden Costs of Novel Technology
When a team evaluates a new tool, they typically assess its features, performance benchmarks, and developer experience during a short proof-of-concept. What they rarely assess is the cost of running that tool in production over years with a full team.
Sparse Documentation and Small Communities
New frameworks have thin documentation. The official docs cover the happy path, and everything else requires reading source code, filing GitHub issues, or hoping someone in a Discord channel has encountered the same problem. This tax is paid by every developer on your team, on every task, for as long as you use the tool.
Breaking Changes
Immature tools break their APIs. This is natural and often necessary as the tool finds its footing, but it means your team periodically stops feature work to update call sites, rewrite integrations, and retest everything. The JavaScript ecosystem is particularly prone to this. Over the past few years, teams have navigated significant migrations across build tools, meta-frameworks, and even runtimes. Each migration was individually justifiable. Cumulatively, they represent enormous engineering time that did not produce user-facing value. This is exactly how technical debt compounds silently — each novel tool adds a thin layer of maintenance burden that grows over time.
Undiscovered Edge Cases
Every technology has edge cases that only emerge at production scale and over production timelines. With PostgreSQL, most of these edge cases were found years ago and are well-documented. With a database released last year, you and your team are the ones finding them. You are doing unpaid QA for the project, and you are doing it on your production systems with your customers’ data.
Hiring and Onboarding Friction
If your stack includes three tools that a typical candidate has never used, your hiring pipeline narrows and your onboarding time expands. This cost is invisible in a technology evaluation spreadsheet but very real in quarterly planning when the team is short-staffed and new hires are not yet productive.
When to Spend an Innovation Token
Boring technology is a default, not an absolute rule. There are legitimate reasons to adopt something new.
The boring option genuinely cannot do the job. If you need sub-millisecond response times on graph traversals, PostgreSQL with recursive CTEs may not cut it. If you need to serve a million concurrent WebSocket connections from a single process, you may need a runtime optimized for that workload. The key word is “genuinely.” Many teams convince themselves the boring option is insufficient without actually benchmarking it. PostgreSQL, Redis, and a well-configured application server handle far more than most teams assume.
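Benchmarking that claim is cheap. The sketch below times a recursive-CTE traversal over a synthetic 10,000-edge graph, using Python's built-in sqlite3 as a stand-in (SQLite and PostgreSQL share the WITH RECURSIVE syntax); the schema and graph shape are assumptions for illustration:

```python
# Time a recursive-CTE reachability query before concluding that the
# boring SQL option "cannot do the job". In-memory sqlite3 stands in for
# PostgreSQL; the graph is a synthetic 10,000-edge chain.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(i, i + 1) for i in range(10_000)])
conn.execute("CREATE INDEX idx_src ON edges (src)")

start = time.perf_counter()
(reachable,) = conn.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 0
        UNION
        SELECT e.dst FROM edges e JOIN reach r ON e.src = r.node
    )
    SELECT COUNT(*) FROM reach
""").fetchone()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Reached {reachable} nodes in {elapsed_ms:.1f} ms")
```

If the measured number meets your requirement, you have just saved a token; if it misses by an order of magnitude on data shaped like yours, you now have evidence rather than a hunch.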
The new tool is your product’s differentiator. If your competitive advantage depends on capabilities that a novel tool provides and a boring tool does not, spending the token is justified. This is spending your innovation budget where it creates value that customers see and pay for.
Your team has deep, production-level expertise. If your lead engineer spent three years operating the new tool at a previous company and knows its failure modes intimately, the risk calculus changes. The tool is novel to your company, but it is boring to the person responsible for running it. This is meaningfully different from a team evaluating a tool through blog posts and conference talks.
You have explicitly budgeted for the cost. Adopting a new tool is not free. If you have allocated time for the team to learn it, built buffer into your timeline for unexpected integration work, and accepted that some debugging sessions will be longer than they would be with familiar tools, you are making an informed bet rather than an optimistic one.
A Practical Decision Framework
Before adopting a new technology, work through these questions with your team. Write down the answers. The act of articulating them often reveals assumptions that do not survive scrutiny.
Can we hire for this?
Look at job postings and candidate pools. If you need to hire two more engineers in the next year, how many candidates will have meaningful experience with this tool? If the answer is very few, you are committing to extensive training and a prolonged ramp-up period for every future hire.
What is the bus factor?
If the one person on the team who understands the new tool leaves, what happens? If the answer is “we would be in serious trouble,” you have a fragility problem that no amount of documentation fully solves.
What does the migration path look like?
Every technology choice is eventually replaced. How hard will it be to move away from this tool if it does not work out, if the project is abandoned by its maintainers, or if something better emerges in three years? Tools that use standard protocols, common data formats, and well-defined interfaces are easier to migrate away from than those with proprietary lock-in.
What is the actual problem we are solving?
Be specific. “Our current tool is slow” is not specific enough. “Our current tool adds 400ms of latency to the checkout flow, and we have profiled it to confirm the bottleneck is in X” is specific enough to evaluate whether a new tool actually solves the problem. Many technology migrations are driven by vague dissatisfaction rather than identified problems.
Have we benchmarked the boring option?
Before concluding that you need a new tool, verify that the existing one cannot be tuned, reconfigured, or used differently to meet the requirement. Adding an index, enabling connection pooling, or restructuring a query often eliminates the performance gap that motivated the search for something new. This benchmarking discipline applies equally to architectural decisions — the boring monolith often handles far more than teams assume.
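A minimal sketch of that discipline, again using Python's built-in sqlite3 as a stand-in for whatever boring database you already run (the table, row counts, and query are illustrative):

```python
# Measure the same lookup before and after adding an index, instead of
# assuming the boring database is the bottleneck. sqlite3 stands in for
# a production database; sizes are illustrative.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 1000) for i in range(200_000)])

def lookup_time() -> float:
    start = time.perf_counter()
    for _ in range(50):  # repeat the query to get a measurable duration
        conn.execute(
            "SELECT COUNT(*) FROM orders WHERE customer_id = ?", (42,)
        ).fetchone()
    return time.perf_counter() - start

before = lookup_time()  # full table scan on every query
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
after = lookup_time()   # index lookup on every query
print(f"scan: {before * 1000:.1f} ms, indexed: {after * 1000:.1f} ms")
```

On most machines the indexed version is faster by orders of magnitude, which is exactly the kind of result that dissolves a migration proposal in one afternoon.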
Boring Compounds
The strongest argument for boring technology is not any single decision but the cumulative effect over time. A team that consistently chooses well-understood tools ships features faster, debugs production issues faster, onboards new members faster, and spends less time on framework migrations. These advantages compound quarter over quarter.
The team running PostgreSQL, Redis, and a mainstream application framework does not make for an exciting conference talk. But they shipped last week, they will ship next week, and when something breaks at 3 AM, they already know where to look. That reliability, sustained over years, is a genuine competitive advantage, and it comes from the deliberate, unglamorous decision to choose boring technology wherever it is good enough.
