The Napkin Plan
As Donald Knuth famously put it, “Premature optimization is the root of all evil”.
The root of premature optimization is actually worry about the future: the worry that your code, architecture, and plan won’t adapt over time.
The concern that you won’t be able to scale up your tech as demand increases. Anxiety over what happens if you go viral, if demand spikes 2X or 10X in a short period of time. What if your abstractions don’t hold up?
Premature optimization and too many layers of abstractions are defense mechanisms we use to protect ourselves from these unknowns.
The solution is another overused expression:
“Expect the best, plan for the worst.”
That’s the often-quoted fragment of Denis Waitley’s line, which in full is: “Expect the best, plan for the worst, and prepare to be surprised”.
The emphasis here should be on plan. You should have a rough plan of what to do when you run out of Redis connections. A plan for how you will handle a sudden influx of new users. What tech or technique you will move to when you run out of road on your current path.
But for God’s sake, don’t build it.
Sketch it out. Discuss it a bit and poke holes in it. Spend a couple of hours on it during your early architecture design sessions with your team. Revisit and revise the plan a couple of times a year.
Benefits of a Napkin Plan
The main benefit for me is that it exercises the part of my brain that wants to jump right to the optimizations. “Oooh, if we use this technique and that data structure we can grow to 1000X our expected size and still have sub-second response times…” is the kind of stuff that gets me up in the morning.
Gaming out what we would do lets me enjoy the thought experiment portion without the project-killing slog of actually implementing it far too early.
The team can also feel safe that there is a plan for what to do if we ever get there. We know the rough playbook; we just have to implement it slightly ahead of running out of road.
Statistically speaking, you’ll never need it, but you have it if you do.
Most importantly, however, we can build toward the plan without implementing the whole damn thing.
We can wrap our use of SlowAssDB with a light abstraction so it’s easier to swap in NewHotnessQL when the time comes. We can see that slightly adjusting our data model makes step three of the plan much more manageable with no real extra work today, and do it now. We can opt to use UUIDs instead of integer IDs in case we really do end up with more than 2 billion rows.
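To make that concrete, here is a minimal sketch of what a “light abstraction” plus UUID keys might look like in Python. SlowAssDB and NewHotnessQL are the tongue-in-cheek placeholder names from above; the Order, OrderStore, and place_order names are hypothetical, and an in-memory dict stands in for a real database client so the sketch runs as-is.

```python
import uuid
from dataclasses import dataclass, field
from typing import Dict, Optional, Protocol


@dataclass
class Order:
    # UUID primary key instead of an integer ID, so we never hit the
    # ~2 billion row ceiling of a signed 32-bit auto-increment column.
    customer: str
    total_cents: int
    id: str = field(default_factory=lambda: str(uuid.uuid4()))


class OrderStore(Protocol):
    """The only storage surface the rest of the app is allowed to touch."""

    def save(self, order: Order) -> None: ...
    def get(self, order_id: str) -> Optional[Order]: ...


class SlowAssDBOrderStore:
    """Today's implementation. A dict stands in for the real SlowAssDB
    client so this example is self-contained."""

    def __init__(self) -> None:
        self._rows: Dict[str, Order] = {}

    def save(self, order: Order) -> None:
        self._rows[order.id] = order

    def get(self, order_id: str) -> Optional[Order]:
        return self._rows.get(order_id)


# When step three of the napkin plan arrives, a NewHotnessQLOrderStore
# implementing the same two methods drops in; callers never change.
def place_order(store: OrderStore, customer: str, total_cents: int) -> str:
    order = Order(customer=customer, total_cents=total_cents)
    store.save(order)
    return order.id


if __name__ == "__main__":
    store = SlowAssDBOrderStore()
    order_id = place_order(store, "Ada", 4999)
    print(store.get(order_id))
```

The point is not the specific classes; it is that the abstraction stays thin (two methods, one module) so it costs almost nothing now while keeping the future swap cheap.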
Give it a try on your next project, and prepare to be surprised.
Frank Wiles
Founder of REVSYS and former President of the Django Software Foundation. Expert in building, scaling, and maintaining complex web applications.