The Quality System That Catches What Developers Miss

teckollab.com
23 MAR 2026
5 min read
London, UK

Part 4 of 8  —  Building StepZero.eco

Part 3: Phasing Is Not Cutting Corners: How We Scoped StepZero.eco Into Shippable Releases
Part 5: Build or Buy? When We Replaced a Third-Party Service With Our Own Engine

Part 4 of Building StepZero.eco: how we built a sustainability platform from discovery to launch.

The short version

The most common bug we found during StepZero.eco development was not broken code. It was perfectly written features that were never actually connected to the rest of the product. Tests passed. Code review passed. But a real user could never reach the feature because nobody checked the full path from database to screen. We built a quality system with one core principle: the person who writes the code never certifies it as complete. Three roles, strict separation, a layer-by-layer checklist, and a 14-item quality gate that has to pass before anyone can use the word "Done." Since adopting it, none of these silent failures have reached production.

When a developer writes code and then marks their own ticket as "Done," they are checking their own homework. That is a structural problem, not a character flaw. The person closest to the code is the least likely to spot what is missing, because they already know how it is supposed to work. They test the happy path instinctively, confirming what they built rather than probing for what they forgot, and that gap between "it works when I test it" and "it works when a real user encounters it" is where the most damaging bugs live.

The dangerous bugs are not the ones that throw errors. They are the ones that pass every test but never actually work in the real product. Features that look finished but are never properly connected to the rest of the application. Screens that render beautifully but never receive real data. Forms that accept input but never actually save it. These are the failures that erode user trust quietly, because the user does not see an error message telling them something went wrong. They just experience a product that does not quite work, and they stop trusting it without being able to articulate why.

We have seen this pattern repeat across years of building products for startups. So when we built StepZero.eco, we designed a quality system where the person who writes the code is never the person who signs it off.

Three roles, strict separation

The system works because of a simple structural decision: building, verifying, and coordinating are three separate responsibilities, and no one performs more than one.

  • The builder writes the feature, following a structured checklist that traces every connection from the database through to the screen. When they finish, they set the status to "Pending Verification." They never set it to "Done." That word is not theirs to use, and that distinction matters more than it might seem, because it changes the psychology of completion.
  • The verifier is a completely separate reviewer who did not write the code. They walk through every layer of the feature, confirming that each connection actually exists and works. They are the only ones who can mark work as "Done."
  • The coordinator manages sequencing and dependencies across the project. The coordinator never writes code and never verifies. They keep the work moving in the right order and ensure that the boundary between building and verifying is never blurred by deadline pressure or convenience.
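The handoff rule above can be sketched as a status guard. This is an illustrative sketch, not our actual tooling: the ticket shape and function names are assumptions, but the roles and the "Pending Verification" / "Done" statuses are the ones described above.

```typescript
// Sketch of the status rule: only a verifier may mark work "Done",
// and never work they built themselves. Names are illustrative.
type Role = "builder" | "verifier" | "coordinator";
type Status = "In Progress" | "Pending Verification" | "Done";

interface Ticket {
  id: string;
  status: Status;
  builtBy: string; // who wrote the code
}

interface Actor {
  name: string;
  role: Role;
}

function setStatus(ticket: Ticket, actor: Actor, next: Status): Ticket {
  if (next === "Done") {
    // Builders hand off; they never certify their own work.
    if (actor.role !== "verifier") {
      throw new Error(`Only a verifier can mark ${ticket.id} as Done`);
    }
    if (actor.name === ticket.builtBy) {
      throw new Error(`${actor.name} built ${ticket.id} and cannot verify it`);
    }
  }
  return { ...ticket, status: next };
}
```

The point of encoding it, rather than trusting convention, is that the rule holds even under deadline pressure: the tool refuses, so nobody has to.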

What the verifier actually checks

We built a structured checklist that maps every layer of the system, from the database through to the user interface. Every feature must be traceable through all layers. If any layer is missing, the feature does not ship.

Think of it like checking the plumbing in a building. You do not just test if the tap turns on. You check the mains connection, the pipes behind the walls, the valves, the joints, and then the tap. If any piece in the chain is missing, water does not flow, even if the tap handle looks perfect and turns smoothly.

For software, that means verifying the data storage, the connections between the front and back of the application, the input validation, the visual presentation, and every state the user might see: loading, error, empty, and success. The checklist also covers visual consistency with the design system and accessibility requirements. None of this is left to the verifier's judgement about what seems important. The checklist is explicit and exhaustive, which means the verification is repeatable and consistent regardless of who is doing it.
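The layer-by-layer trace can be sketched as a checklist where shipping requires every layer to be confirmed. A minimal sketch: the layer names come from the list above, but the exact structure of our real checklist is richer than this.

```typescript
// Sketch of the trace: a feature ships only if every layer from
// storage to screen has been confirmed. Layer names follow the text.
const LAYERS = [
  "data storage",
  "front-back connection",
  "input validation",
  "visual presentation",
  "loading state",
  "error state",
  "empty state",
  "success state",
] as const;

type Layer = (typeof LAYERS)[number];

// Returns every layer the verifier has not yet confirmed.
function missingLayers(verified: Set<Layer>): Layer[] {
  return LAYERS.filter((layer) => !verified.has(layer));
}

function canShip(verified: Set<Layer>): boolean {
  return missingLayers(verified).length === 0;
}
```

Because the check is a flat list rather than a judgement call, two different verifiers confirming the same feature will walk the same path.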

What it catches

  • Features that look finished but are not connected. This was the single most common failure we found. A developer builds a perfectly good feature, writes tests for it, the tests pass — but they forget to connect it to the rest of the application. In a traditional code review, the reviewer sees well-written, well-tested work and moves on. Our verifier catches this every time, because the checklist requires confirming the feature is reachable by a real user following a real path through the product.
  • Errors that disappear silently. Sometimes code handles errors by doing nothing. The error gets caught, no one is told, and the feature appears to work normally while quietly failing in the background. Our checks flag any error handling that does not actually inform the user or log the problem, because silent failure is worse than visible failure in every case.
  • Visual inconsistency. A slightly different colour or spacing that looks fine in isolation but feels "off" next to other pages. Users notice this, even if they cannot explain why. Our verification enforces consistency with the design system so every screen feels like it belongs to the same product.
  • Actions that fail silently. A user clicks "Save" and nothing happens. No confirmation, no error message. The request failed, the screen did not react, and the user assumes their work was saved when it was not. Our verifier checks that every user action has explicit feedback, whether it succeeds or fails.
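The last two failure modes share one fix: every user action must end in explicit feedback, and errors must be logged rather than swallowed. A minimal sketch, assuming a hypothetical `notify` callback that renders a toast or inline message:

```typescript
// Sketch of "no silent failure": this wrapper cannot complete
// without telling the user something. The notify callback and
// message strings are illustrative.
type Feedback = { kind: "success" | "error"; message: string };

function saveWithFeedback(
  save: () => void,
  notify: (feedback: Feedback) => void,
): void {
  try {
    save();
    notify({ kind: "success", message: "Saved" });
  } catch (err) {
    // An empty catch block is the silent-failure anti-pattern.
    // Here the error is both logged and surfaced to the user.
    console.error("save failed", err);
    notify({ kind: "error", message: "Could not save. Please try again." });
  }
}
```

The verifier's check is structural, not stylistic: trace each action handler and confirm both branches reach the user.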

The Done Gate

Beyond the layer-by-layer checks, every piece of work must pass a 14-item quality gate before it can be marked complete. This includes zero compiler errors, clean code standards, test coverage for new logic, accessibility checks on interactive elements, correct loading and error states, and confirmation that no temporary debug code remains.

Pass all 14 items or the work stays open. There are no exceptions, and there is no process for skipping items when the deadline is close. The gate exists precisely for those moments when the pressure to ship is highest, because that is when quality is most likely to slip.
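The gate logic itself is deliberately simple: every item passes or the work stays open, with no override path. A sketch using a subset of the items named above; the real gate has 14.

```typescript
// Sketch of the Done Gate: all-or-nothing, with the failing items
// reported so the builder knows exactly what is left. The items
// shown are a subset named in the article.
interface GateItem {
  name: string;
  passed: boolean;
}

function gateResult(items: GateItem[]): { done: boolean; failing: string[] } {
  const failing = items.filter((item) => !item.passed).map((item) => item.name);
  return { done: failing.length === 0, failing };
}
```

Reporting the failing items by name matters: the gate is a worklist for the builder, not just a verdict.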

The results

Since adopting this system on StepZero.eco, none of these common failure modes have reached production. No disconnected features. No silent errors. No save buttons that quietly fail.

These kinds of bugs do not make headlines. They do not cause dramatic outages or data breaches. They just quietly erode trust, one frustrated user at a time, and the frustration is often silent too, because users who encounter these problems tend to leave rather than complain. We decided early on that was not something we were willing to accept.

What this really comes down to

You do not need to adopt our exact system. The specific checklist, the 14-item gate, the role names: those are implementation details that work for us and might need adapting for your context. But the underlying principle is worth sitting with: the person who writes the code should never be the person who certifies it as complete.

This is not about distrust, and it is not about adding bureaucracy. It is about acknowledging a cognitive reality. The builder's mental model of a feature makes them genuinely blind to certain categories of failure, in the same way that a writer cannot proofread their own work reliably, no matter how skilled they are. The words on the page keep resolving into what they intended to write rather than what they actually wrote. Code works the same way.

Separating the roles, making "Done" a word that only a verifier can use, making the verification checklist explicit and exhaustive: these are small changes to how a team operates, but they shift the definition of completion in a way that catches problems before users do.


This is part 4 of the Building StepZero.eco series. Next: how we replaced a third-party carbon API with our own engine, and the decision framework we used to make that call. If you want to talk about how quality processes work in practice, we enjoy that conversation. Say hello.
