What Experience Looks Like in Design Reviews

Inexperienced engineers tend to talk a lot in design reviews. Experienced ones often don't.

This is frequently misinterpreted as disengagement. It isn't. It's pattern recognition at work.

A junior firmware engineer presents a new data acquisition system for an industrial press. The architecture is clean: sensors feed into an STM32, which aggregates readings and serves them over Modbus RTU to a supervisory PLC. There's error detection, retry logic, and graceful degradation. The presentation is thorough. Twenty slides. Clear diagrams.

The team asks questions. The junior engineer has answers. Protocol timing? Handled. Sensor failure modes? Covered. Integration with existing systems? Documented.

Thirty minutes in, the senior controls engineer—who's been silent the entire time—asks: "What happens when the press cycles faster than the sensor settling time?"

Pause.

"The sensors are rated for 100Hz sampling. The press cycles at 60 strokes per minute. We have margin."

"What happens when production pushes it to 75 strokes per minute to meet a deadline?"

Longer pause.

That question wasn't on any slide. It wasn't in the requirements. But everyone in the room who's been in production long enough knows: the system will eventually be pushed beyond its design envelope. Not because anyone is reckless, but because production pressures always find the gap between "rated for" and "tested at."
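
The arithmetic behind that pause is worth making explicit. At 60 strokes per minute the press has a full second per cycle; at 75 it has 800 ms. Whether that is still enough depends on the sensor's settling time, which the review never pinned down. A back-of-the-envelope sketch, with an assumed settling time standing in for the number nobody stated:

    /* Back-of-the-envelope margin check. SETTLE_TIME_MS is an assumed
     * placeholder -- the review never stated one, which is the point. */
    #include <stdio.h>

    #define SAMPLE_RATE_HZ  100.0   /* sensor rating from the review */
    #define SETTLE_TIME_MS  200.0   /* hypothetical settling time */

    static void check_margin(double strokes_per_min)
    {
        double cycle_ms  = 60000.0 / strokes_per_min;      /* one press cycle */
        double samples   = cycle_ms * SAMPLE_RATE_HZ / 1000.0;
        double margin_ms = cycle_ms - SETTLE_TIME_MS;      /* left after settling */

        printf("%4.0f spm: %5.0f ms/cycle, %4.0f samples, %5.0f ms margin\n",
               strokes_per_min, cycle_ms, samples, margin_ms);
    }

    int main(void)
    {
        check_margin(60.0);   /* the design point */
        check_margin(75.0);   /* the deadline-driven reality */
        return 0;
    }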

Experience changes what you listen for.

The Questions That Matter Change Over Time

Early in a career, design reviews focus on how something works.

Is the state machine correct? Is the timing analysis sound? Is the protocol implementation compliant? Are the error paths covered?

These are legitimate questions. They matter. But with experience, attention shifts to different territory:

  • What assumptions are being made about the environment?
  • Which constraints are fixed, and which are imagined?
  • What happens when this fails at the worst possible time?
  • Who will be awake at 3 AM when it does?
  • How will we know it's failing before it's catastrophic?

These questions don't show up on architecture diagrams. They emerge from having seen systems behave badly in familiar ways.

I remember reviewing a temperature control system early in my career. I focused on the PID tuning, the sensor accuracy, the control loop frequency. All technically sound.

What I missed: the system assumed sensor readings were always valid. When a sensor eventually failed by reporting plausible but frozen values, the control loop dutifully maintained those values—and a tank overheated. The system worked exactly as designed. The design just didn't account for sensors that fail by lying rather than going silent.

An experienced engineer would have asked: "What does a failing sensor look like to this algorithm?" Not because they're smarter, but because they've seen that failure mode before. Probably more than once.
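
In code, that question often reduces to a staleness check on the signal itself. A minimal sketch of the idea, assuming a thermal process noisy enough that a perfectly flat reading is suspect; the window and dead-band are illustrative, not values from the system in the story:

    /* A sensor that "lies" by freezing looks fine to a bounds check.
     * One cheap tell: a live thermal signal always shows some noise,
     * so a perfectly flat reading over many samples is suspect.
     * Window and dead-band below are illustrative values. */
    #include <math.h>
    #include <stdbool.h>

    #define STUCK_WINDOW     50      /* consecutive flat samples to flag */
    #define STUCK_EPSILON_C  0.05f   /* "no movement" band, degrees C */

    typedef struct {
        float last_value;
        int   flat_count;
    } stuck_detector_t;

    /* Returns true once the reading has been implausibly flat. */
    static bool sensor_looks_stuck(stuck_detector_t *d, float reading)
    {
        if (fabsf(reading - d->last_value) < STUCK_EPSILON_C) {
            if (++d->flat_count >= STUCK_WINDOW)
                return true;         /* plausible value, but frozen */
        } else {
            d->flat_count = 0;       /* signal is alive again */
        }
        d->last_value = reading;
        return false;
    }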

Silence Is a Signal

Experienced engineers are often quiet during the early part of a review.

They let the design present itself. They listen for what is emphasized—and what is rushed past. They note which risks are acknowledged and which are avoided. They pay attention to the energy in the room: where does the presenter get confident, and where do they get vague?

When they finally speak, it is rarely to propose an alternative design. It is to test the edges:

"What happens if Modbus requests start taking 200ms instead of 50ms?"

"How do we know this component is failing versus just slow?"

"Who owns the configuration updates when this is in production?"

"What's the plan when this needs to change and you're not here?"

These are not clever questions. They are uncomfortable ones. They don't have satisfying technical answers because they're not really technical questions—they're questions about risk, ownership, and operational reality.

The inexperienced reviewer says: "Have you considered using CRC32 instead of CRC16 for better error detection?"

The experienced reviewer says: "What happens when the checksum passes but the data is still wrong?"

Both questions are about data integrity. Only one is about what actually happens in production.
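
The experienced reviewer's question has a concrete shape: a CRC proves the bytes survived the wire, not that the sender put sane values in them. A sketch of the plausibility layer that belongs behind the checksum, with hypothetical field names and limits:

    /* A CRC says the bytes arrived intact; it says nothing about whether
     * the values in them make physical sense. Field names and limits
     * here are hypothetical. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint16_t pressure_kpa;
        int16_t  temp_decidegc;     /* tenths of a degree C */
        uint32_t sequence;          /* incremented by the sender */
    } reading_t;

    static uint32_t last_sequence;

    static bool reading_is_plausible(const reading_t *r)
    {
        if (r->pressure_kpa > 2000)            return false;  /* beyond the rig */
        if (r->temp_decidegc < -400 ||
            r->temp_decidegc > 1500)           return false;  /* outside sensor range */
        if (r->sequence == last_sequence)      return false;  /* stale or replayed frame */
        last_sequence = r->sequence;
        return true;   /* passed the checks a checksum can't do */
    }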

Experience Recognizes Deferred Decisions

Many designs appear solid because they defer hard decisions:

  • Retry logic described as "exponential backoff with appropriate limits" but never defined (a sketch of what "defined" looks like follows this list)
  • Operational ownership assumed but not assigned
  • Failure handling described vaguely as "graceful degradation"
  • Scaling concerns waved away with "we'll monitor and adjust"
  • Integration complexity acknowledged but postponed to "phase two"
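
The first item on that list is the easiest to un-defer, which makes leaving it vague telling. "Appropriate limits" become a design decision the moment the numbers are written down where reviewers can argue with them. A sketch with illustrative values:

    /* "Exponential backoff with appropriate limits" becomes a design
     * decision only once the limits exist on paper. These values are
     * illustrative -- the point is that they are now reviewable. */
    #include <stdint.h>

    #define RETRY_BASE_MS       100   /* first retry delay */
    #define RETRY_MAX_MS      10000   /* cap: never wait more than 10 s */
    #define RETRY_MAX_ATTEMPTS    8   /* after this, escalate -- don't loop forever */

    static uint32_t retry_delay_ms(unsigned attempt)
    {
        /* Clamp the shift so the doubling can't overflow. */
        unsigned shift = (attempt < 7u) ? attempt : 7u;
        uint32_t delay = (uint32_t)RETRY_BASE_MS << shift;
        return (delay > RETRY_MAX_MS) ? RETRY_MAX_MS : delay;
    }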

Experience recognizes deferral instantly—not because deferral is always wrong, but because deferred decisions always come back with interest.

"We'll tune the timeouts in production" sounds reasonable. What it means is: we will discover the correct timeout values by finding out which values cause problems. The production system and its operators will pay the tuition for that education.

"We'll add more detailed logging if needed" sounds pragmatic. What it means is: when this fails mysteriously, we will not have the information we need to understand why, and we will add logging in a panic, and that logging will probably be wrong the first time.

"We'll clarify ownership during deployment" sounds like responsible planning. What it means is: when something breaks at 2 AM, there will be a delay while people figure out who is supposed to be responding, and that delay will be expensive in ways no one budgeted for.

Production is very patient about collecting on these debts.

I watched a team defer a decision about how to handle partial sensor array failures. "We'll see how it behaves in production and tune the algorithm." Three months later, a single failed sensor caused the system to oscillate because the algorithm wasn't designed for asymmetric input. The fix required a firmware update that needed a maintenance window. The maintenance window took six weeks to schedule. The oscillation cost measurable efficiency every day until then.

Deferring that decision cost more than getting it wrong at design time would have.

The Absence of Overconfidence

One of the clearest markers of experience is restraint.

Experienced engineers are careful with certainty. They do not promise that something will work. They describe the conditions under which it probably will—and the ways it might not.

This is sometimes mistaken for pessimism. It isn't. It is respect for complexity.

Confidence says: "This will handle sensor failures."

Experience says: "This will handle sensors that fail by going silent. Sensors that fail by drifting slowly will look like environmental changes. Sensors that fail by reporting intermittently valid data will be harder to detect. We've added bounds checking for the first case. The other two will require operational awareness."
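
Of those three failure modes, only silence is cheap to detect in firmware: a staleness watchdog over the last accepted reading. Drift and intermittent lying need reference channels or redundancy, which is exactly why the honest answer defers them to operational awareness. A minimal sketch, with an assumed reporting interval:

    /* Of the three failure modes, only silence is cheap to catch:
     * a staleness watchdog over the last accepted reading. The timeout
     * assumes a nominal 100 ms report interval -- an illustrative
     * figure, not one from the story. */
    #include <stdbool.h>
    #include <stdint.h>

    #define STALE_AFTER_MS  500u     /* five missed reports = silent */

    typedef struct {
        uint32_t last_update_ms;     /* tick time of last accepted reading */
    } sensor_health_t;

    static bool sensor_is_silent(const sensor_health_t *h, uint32_t now_ms)
    {
        return (now_ms - h->last_update_ms) > STALE_AFTER_MS;
    }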

Confidence says: "This will scale to 100 nodes."

Experience says: "This will scale to 100 nodes if network latency stays below 50ms and nodes don't join simultaneously. If we exceed those conditions, the synchronization protocol will degrade. We'll see it as increased jitter in the timing measurements."
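
The value of that answer is that it names the symptom in advance, which means the symptom can be measured before anyone is debugging. A sketch of a jitter monitor built from that prediction; the period and threshold are assumptions:

    /* If degradation will announce itself as timing jitter, measure
     * jitter explicitly instead of waiting to notice it. Expected
     * period and alarm threshold are assumed figures. */
    #include <stdbool.h>
    #include <stdint.h>

    #define EXPECTED_PERIOD_MS  100u   /* assumed sync message period */
    #define JITTER_ALARM_MS      20u   /* assumed alarm threshold */

    typedef struct {
        uint32_t last_arrival_ms;
        uint32_t peak_jitter_ms;       /* worst deviation seen so far */
    } jitter_monitor_t;

    static bool jitter_alarm(jitter_monitor_t *m, uint32_t now_ms)
    {
        uint32_t period = now_ms - m->last_arrival_ms;
        uint32_t jitter = (period > EXPECTED_PERIOD_MS)
                          ? period - EXPECTED_PERIOD_MS
                          : EXPECTED_PERIOD_MS - period;

        m->last_arrival_ms = now_ms;
        if (jitter > m->peak_jitter_ms)
            m->peak_jitter_ms = jitter;
        return jitter > JITTER_ALARM_MS;
    }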

The difference is not about being negative. It's about being specific about what has been designed for and what has been assumed away.

When someone presents with absolute confidence, experienced reviewers get nervous. Absolute confidence means either the problem is trivial, or the presenter hasn't understood the problem space well enough to see the edges.

Most problems are not trivial.

Ownership Reveals Maturity

Designs that lack clear ownership often pass reviews easily. They are polite. They offend no one. They carefully avoid assigning responsibility for anything uncomfortable.

Experienced reviewers push on this immediately.

Who owns this in production? Who gets paged when it misbehaves? Who can say "no" when someone wants to add a feature that compromises the design? Who has the authority to simplify it later when it proves too complex?

If ownership is unclear, the design is incomplete—no matter how elegant the code.

I've seen systems designed by committee where every component had a different owner, and the interfaces between components were "shared responsibility." Shared responsibility is another way of saying "no one is responsible."

When that system started having integration issues, no single person had the authority to make decisions about tradeoffs. Every change required negotiation. The system's behavior was the outcome of those negotiations, not of any coherent design intent.

The experienced engineer asks: "Who can wake up at 3 AM, look at this system in an unknown state, and make a judgment call about whether to restart it or leave it alone?"

If the answer is "well, it depends on which part is having issues," the ownership model is broken.

Ownership isn't about credit. It's about who carries the mental model of the system's actual behavior—not its documented behavior, its actual behavior—and has the authority to act on that understanding.

Experience Is Often Mistaken for Negativity

The most experienced person in the room is often labeled "difficult" at least once.

They ask why something must exist at all. They question timelines that assume perfect execution. They introduce failure scenarios no one wants to think about. They point out that the schedule includes no time to discover what the team got wrong.

This is uncomfortable. It slows down momentum. It introduces doubt into presentations that felt confident.

What they are really doing is defending the future team from the present team's optimism.

That is rarely popular in the moment. It is deeply appreciated later—usually around 3 AM, when the pager goes off and someone realizes that the uncomfortable question they didn't want to answer during the review has become an urgent problem with no good solutions.

I've been that "difficult" person. I've asked questions that made presenters defensive. I've pointed out gaps that felt like criticisms. I've watched the energy in the room shift from enthusiasm to frustration.

And I've also gotten emails, months later, from those same presenters: "Remember when you asked about [that thing]? It happened. We were ready because we'd thought about it. Thank you."

The experienced reviewer is not trying to stop the design. They are trying to make it survivable.

Why Design Reviews Fail

Most failed design reviews fail quietly.

They approve systems that look reasonable, feel familiar, and fit within current organizational comfort. Everyone leaves the meeting satisfied. The design moves forward.

And then production teaches the lesson that the review avoided.

Design reviews fail when they optimize for approval rather than understanding. When uncomfortable questions are seen as obstruction. When experience is interpreted as negativity. When the goal is a signature rather than a map of the edges.

Good design reviews do not prevent failure. They prevent surprise.

A good review leaves everyone with a shared understanding of:

  • What has been designed for
  • What has been assumed away
  • Where the edges are
  • Who owns what happens at those edges

It does not guarantee success. It makes failure less catastrophic and more recoverable.

A Final Thought

Experience does not make you smarter in design reviews. It makes you more selective.

You stop optimizing for correctness and start optimizing for survivability. You stop asking whether a system can work and start asking whether it can endure. You stop evaluating designs in isolation and start evaluating them in context: the operational environment, the organizational structure, the people who will maintain it when you're gone.

That shift is subtle. And once you see it, you can't unsee it.

You start noticing the patterns: the deferred decisions, the vague ownership, the overconfidence, the assumptions baked into diagrams. You start asking the uncomfortable questions—not because you're difficult, but because you've seen what happens when no one asks them.

And you realize that the best design reviews are not the ones where everyone agrees.

They are the ones where everyone leaves slightly uncomfortable, but with clarity about why.
