Evidence Before Design: A Debugging Discipline
The twenty minutes you spend pulling data before writing a fix are not lost time. They are the only reason the fix will be right. Most wrong solutions are built confidently on top of unverified assumptions — and the data was always one query away.

You are about to write a fix. Stop.
Before you write a single line of the fix, I want you to answer one question: what did the data say? Not what you assume the data says. Not what the data probably says based on how the system should behave. What did it actually say, when you last pulled it, this afternoon, against the real rows in the database?
If you cannot answer that question, you are not debugging. You are guessing with extra steps.
The Anti-Pattern
The workflow most engineers fall into under time pressure looks like this:
1. Symptom appears.
2. Form a hypothesis about the cause.
3. Write a fix based on the hypothesis.
4. Deploy the fix.
5. Symptom persists or changes shape.
6. Form a new hypothesis.
7. Return to step 3.
This loop feels productive because every cycle produces code. Commits accumulate. Branches get created. CI runs. The motion is reassuring. What is missing from the loop is the step where you verify whether the hypothesis is actually true. That step is not in the loop because it feels slow — pulling the data, running the query, inspecting the distribution, reading rows — and the loop rewards motion, not correctness.
The loop eats afternoons. Sometimes it eats weeks. And when it finally terminates, it terminates with a patch that coincidentally happens to resolve the symptom, on top of a bug that the team never actually understood.
The Discipline
The correct workflow is boring and almost mechanical:
1. Symptom appears.
2. Identify the data that would confirm or falsify the most obvious hypothesis.
3. Pull that data.
4. Inspect it. Let it overturn your hypothesis if it needs to.
5. Form a hypothesis that is consistent with what you actually saw.
6. Write the fix.
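The steps above can be mechanized as a confirm-or-falsify check: state the hypothesis as a predicate over the pulled rows, then let the support rate decide. This is a minimal sketch; the rows, the predicate, and the `check_hypothesis` helper are all hypothetical illustrations, not part of any real system described here.

```python
def check_hypothesis(rows, predicate, min_support=0.9):
    """Return (verdict, support) for a hypothesis expressed as a row predicate.

    The hypothesis survives only if the predicate holds on at least
    min_support of the pulled rows; otherwise the data overturns it.
    """
    if not rows:
        return ("no-data", 0.0)
    support = sum(1 for r in rows if predicate(r)) / len(rows)
    return ("confirmed" if support >= min_support else "falsified", support)

# Hypothetical pulled data: four component scores per recent signal.
rows = [(0.4, 0.2, 0.3, 0.0)] * 24 + [(0.5, 0.1, 0.2, 0.6)]

# Hypothesis to test: "every component contributes something nonzero."
verdict, support = check_hypothesis(rows, lambda r: all(c > 0 for c in r))
print(verdict, support)  # falsified 0.04
```

The point of the shape is that the verdict comes out of the data, not out of the engineer: you commit to the predicate before you look, and the support rate does the arguing.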
Step 3 is the step most engineers skip. The reasoning is always the same: "I know what it's going to show." You do not know what it is going to show. You have a guess. The whole point of step 3 is that guesses about live data distributions are wrong often enough that it is cheaper to check than to assume.
The Moment I Learned This
I was speccing a fix to the InDecision scoring engine. The bot had been running for weeks and had not placed a single trade. My hypothesis was that the scoring rubric's weights were misaligned — maybe one component was overweighted, maybe another was drowning out the rest. The fix I was sketching involved rebalancing the weights across all four scoring components, running a new backtest, and deploying the rebalanced engine.
Knox stopped me mid-sentence.
"Pull the 64 signals first."
I did not want to. I had a plausible theory, I had a path, and the SQL query felt like a detour that would cost twenty minutes and produce exactly the confirmation I already expected. I argued for maybe thirty seconds. Then I wrote the query.
The result was not confirmation. The result was that one of the four scoring components was returning zero on 96.2% of all signals. The rebalance I had been about to write would have redistributed weights across four components, one of which was effectively dead. I would have shipped a fix that looked correct, passed review, deployed cleanly, and still failed — because the underlying bug was not "weights misbalanced" but "component silently contributing nothing." The twenty minutes of SQL saved an entire afternoon of building the wrong thing.
That is the hinge moment. That is why the discipline matters. Not because it feels virtuous. Because it is arithmetic: the expected cost of a wrong fix vastly exceeds the cost of a data pull.
Why We Skip It
Every experienced engineer I know has fallen into this anti-pattern. We do not skip data pulls because we are careless. We skip them for three reasons, and it is worth naming each one so you can catch yourself in the act:
Impatience. You can feel the fix forming. It is right there. The SQL query feels like a speed bump between you and the commit. This feeling is a lie. The commit you are about to write is only valuable if it fixes the real bug. If it fixes a hypothetical bug, it is negative value — code debt you cannot undo without a revert.
Ego. You think you already know. You have context the data cannot give you. The more senior you are, the more dangerous this is, because your track record of being right most of the time makes you trust the instinct on the cases where you are wrong. The data does not care about your track record.
Theater. Writing a fix looks like productive engineering. Running a SELECT statement looks like dragging your feet. Managers and teammates subconsciously reward the former over the latter. If your environment punishes evidence-gathering, fix the environment or leave.
The Rule
Before any fix to any scoring, ranking, classification, or diagnostic system, pull the stored inputs for recent outputs. Look at the distribution. Let the distribution overturn your hypothesis if it wants to. The hypothesis you held before you pulled the data is wrong until the data says otherwise.
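The rule can be sketched as a zero-rate check over the stored inputs. Everything here is a hypothetical stand-in: the `signal_scores` table, its column names, and the in-memory SQLite database substituting for a production store. The query is the whole idea: one aggregate per component, and a component whose zero-rate approaches 1.0 is dead regardless of its weight.

```python
import sqlite3

def zero_rate_by_component(conn, table, components):
    """Fraction of stored rows where each scoring component is zero.

    Column and table names are interpolated, so they must come from a
    trusted, hard-coded list, never from user input.
    """
    rates = {}
    for col in components:
        cur = conn.execute(
            f"SELECT AVG(CASE WHEN {col} = 0 THEN 1.0 ELSE 0.0 END) FROM {table}"
        )
        rates[col] = cur.fetchone()[0]
    return rates

# Demo against an in-memory table standing in for stored signal inputs.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE signal_scores "
    "(momentum REAL, volume REAL, trend REAL, sentiment REAL)"
)
rows = [(0.4, 0.2, 0.3, 0.0)] * 24 + [(0.5, 0.1, 0.2, 0.6)]
conn.executemany("INSERT INTO signal_scores VALUES (?, ?, ?, ?)", rows)

rates = zero_rate_by_component(
    conn, "signal_scores", ["momentum", "volume", "trend", "sentiment"]
)
print(rates)  # one component is zero on 96% of rows: effectively dead
```

Five lines of SQL per component, and the distribution either confirms the hypothesis or kills it before any weights get touched.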
This rule is cheap to enforce. Most of the time you pull the data, it confirms what you suspected, and you move on five minutes later. Maybe one time in five, it tells you something you did not know, and that one time is where the entire payoff lives. You do not get to know in advance which query is the important one. That is why you run all of them.
The counter-argument I hear most often is "we don't have time." We do not have time to write the wrong fix three times in a row either, and that is what happens when you skip this step at scale. The twenty minutes you spend pulling data before writing a fix are not lost time. They are the only reason the fix will be right.
Short Version
- Data before design. Every time. No exceptions.
- Your hypothesis is a guess until a query confirms it.
- The twenty minutes of SQL saves the afternoon of rework.
- Impatience, ego, and theater are the three reasons you will want to skip this. Name them out loud and do it anyway.
- The most valuable query you can run today is the one you already know how to write and have not bothered to run.
Write the query first. Design the fix second. That is the discipline. Every time I have obeyed it, the fix was right. Every time I have ignored it, I lost an afternoon to a theory the data would have killed in thirty seconds.
The same rule applies across every scoring system in the broader Jeremy Knox engineering stack. It is not a convention; it is a hard prerequisite before any fix to any production scoring, classification, or ranking logic ships.