Handbook to Iterative Systems Engineering

Volume III — Iterative Systems Engineering: In Practice

Pari Singh

Oct 14, 2025

A practical guide on how modern hardware teams apply CI/CD principles to deliver complex systems early and often.

Chapter 9 — How Iterative Teams Run the V

The traditional V-diagram in systems engineering, with its linear, single-pass journey from requirements to validation, made sense for simpler times. It assumed you'd get it right upfront, and verification at the end was mostly a rubber stamp. When mission risk demanded extreme caution and systems were less interconnected, that approach worked. But that's not the world we engineer in anymore.

Today's hardware teams, especially in fields like aerospace, robotics, energy, and autonomy, live with high uncertainty. Requirements evolve, integration cycles are blazing fast, and your early assumptions? They're often just plain wrong. These teams simply can't afford to verify only once, long after the design is "done." Instead, they embrace iterative V-cycles: complete design–build–test loops that repeat, refine, and drive continuous learning.

Think of it as CI/CD for hardware. It's not about deploying 20 times a day, obviously. It's about building a rhythm where verification isn't a phase, but an operational pulse. You test constantly. You feed results back into the design, sometimes within hours. You fix fast. You update design criteria in real time. And with every iteration, you build more confidence than the last.

CI/CD for Hardware (Continuous Integration/Continuous Delivery): An analogy from software development adapted for hardware. It refers to a disciplined approach of frequently integrating hardware and software components, running automated or rapid tests, and continuously feeding results back into the design process to accelerate learning and build confidence.

Design Criteria: Quantitative and testable specifications owned by subsystem teams that define the performance, physical, or operational characteristics of a component or subsystem.

Each cycle (V0, V1, V2, and beyond) is a bounded experiment. The goal isn't to be right upfront; it's to reduce risk and increase fidelity and confidence with each loop. Each loop reveals real integration issues, under real conditions, with real hardware. This chapter will walk you through how fast-moving teams scope and structure these iterations, how progress is tracked in tools, and what "good" looks like at each step.

Modern programs face shifting targets: evolving missions, uncertain physics, immature suppliers, and tight coupling with software. The cost of being wrong early is high, but the cost of discovering it late is catastrophic. This isn't about speed for its own sake. It's about truth. Revealing accurate understanding of your system's behavior and risks as early as possible. CI/CD for hardware is about testing daily, directly linking failures to design changes, clearly showing what's verified, and tightening that learning loop every single week.

We've all seen the classic NASA lifecycle:

Phase A (Concept) → B (Prelim Design) → C (Detailed Design) → D (Integration/Test) → E (Ops) → F (Decom)

V vs Spiral Model

The traditional V-model follows a single, upfront design-to-validation arc. Iterative teams run nested V-cycles (V0, V1, V2…), each structured as a design–build–test–learn loop. The spiral shows how each cycle compounds validated learning, systematically reducing risk and increasing system fidelity with every turn.

While NASA, like many organizations, is increasingly adopting iterative practices, the classic A–F sequence often represents a more traditional, sequential approach. This works when the path is known and stable. But fast-moving teams rarely go A → F. Instead, they operate with explicitly iterative V-cycles:

V0 (Exploratory Build) → V1 (Integrated Subsystem) → V2 (Flight-like Stack) → V3 (Cert-Ready System)

Traditional SE runs one long V-cycle (Phases A→F):

  • Freeze early, verify late

  • Integrate only at the end

  • Milestones act as phase gates

  • Assumes early correctness

Agile SE runs multiple small V-cycles (V0→V3):

  • Verify continuously across iterations

  • Integrate from the start

  • Milestones are learning checkpoints

  • Assumes early uncertainty

Each V is a focused design–build–test loop. Each one is engineered to reduce a specific risk. Teams might run 3–5 iterations before locking anything down.

This lets them learn early, compound confidence, and build towards certification rather than just assuming it from day one.

High-performing teams don’t wait to find problems. They build to find them.

Iteration (V-Cycle): A bounded, goal-oriented design-build-test-learn loop. Each iteration (e.g., V0, V1, V2) targets specific risks or unknowns, culminating in a demonstrable output and a learning checkpoint.

Example: Low-Cost Orbital Transfer Stage

Let's say you're building a low-cost orbital transfer stage (OTS) for a commercial launch provider. Your mission: reposition 200kg-class payloads in LEO using a lightweight propulsion module that interfaces with multiple smallsat buses.

In a traditional flow, you'd freeze your interface assumptions in Phase A, lock architecture in B, and build toward an integrated test in D. Your first true system-level verification might come 18–24 months in.

In a spiral V-model, you'd take a radically different approach:

  • V0 might validate basic thrust and interface protocols on a flat sat.

  • V1 could close the loop on your GNC and control thrust dynamically.

  • V2 might run in thermal vacuum with mission-like loads and durations.

  • V3 would finalize qualification articles, ready for review and flight certification.

Each V is bounded, goal-oriented, and structured to reveal whether your current assumptions hold up. You don't need to get it all right up front. You just need to structure how you learn.

Build "reps" with your team on the full lifecycle to build it into the culture.

Skyler Shuford

Skyler Shuford

COO, Hermeus

Build "reps" with your team on the full lifecycle to build it into the culture.

Skyler Shuford

Skyler Shuford

COO, Hermeus

Build "reps" with your team on the full lifecycle to build it into the culture.

Skyler Shuford

Skyler Shuford

COO, Hermeus

1. Iteration Planning: Scoping the V

Before starting an iteration, teams decide what they need to learn. They scope the V not around components or Gantt charts, but around risk.

Each V-cycle is a bounded engineering experiment. It should:

  • Target a specific set of unknowns or risks

  • Define a minimal configuration that exercises those risks

  • Include testable criteria, verification plan, and clear exit conditions

These V-cycles build on each other. V0 tests basic function. V1 integrates. V2 pushes full-system edge cases. V3 approaches review and certification. The CI/CD mindset starts here: each V is a mini pipeline – Scope → build → test → verify → feedback.
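To make that concrete, here's a minimal sketch in Python (hypothetical names, not any particular tool's data model) of an iteration captured as a bounded experiment: the risks it targets, the minimal configuration that exercises them, and the exit criteria that close the loop.

```python
from dataclasses import dataclass, field


@dataclass
class ExitCriterion:
    """A testable condition that must hold before the iteration can close."""
    description: str
    verified: bool = False


@dataclass
class Iteration:
    """One bounded V-cycle: scope -> build -> test -> verify -> feedback."""
    name: str                    # e.g. "V1 - Integrated Subsystem"
    target_risks: list[str]      # the unknowns this loop is meant to retire
    configuration: list[str]     # minimal hardware/software that exercises them
    exit_criteria: list[ExitCriterion] = field(default_factory=list)

    def is_complete(self) -> bool:
        """The loop only closes when every exit criterion is verified."""
        return all(c.verified for c in self.exit_criteria)


# Hypothetical scoping of V1 for the orbital transfer stage example below
v1 = Iteration(
    name="V1 - First end-to-end closed loop",
    target_risks=["Can avionics close the thrust loop?", "Control latency under load"],
    configuration=["Engine Block A", "Avionics Unit Rev C", "Simulated 500N load"],
    exit_criteria=[ExitCriterion("Closed-loop thrust control demonstrated on the rig")],
)
print(v1.is_complete())  # False until the exit criterion is verified by test
```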

Each V-cycle isn’t just an experiment. It’s a checkpoint that becomes the starting point for the next. SpaceX didn’t build Falcon 9 and then optimize. They launched v1.0, then iterated: v1.1 brought stretched tanks and new engines, Full Thrust added subcooled propellant, and Block 5 became the human-rated workhorse.

Each version locked in what worked, improved what didn’t, and built system-level confidence through flight.

Iteration objectives

Here's an example of how a team might define its iteration objectives for the OTS.

This approach doesn’t sacrifice quality. It builds quality through repetition, data, and system-level learning.

A well-scoped iteration avoids building too much or guessing too little. It’s an engineering hypothesis: What is the fastest way to prove or disprove our assumptions? This is how teams get fast without being reckless. Each iteration becomes a hypothesis test. And every V de-risks the next. This mirrors the rapid feedback loops seen in successful software CI/CD pipelines, adapted for the realities of physical systems.

2. Establishing Milestones and Baselines

Each iteration ends in a milestone. A milestone is a structured snapshot of the system's state at that moment:

  • The requirements as they stand.

  • The current architecture configuration.

  • The test coverage and results achieved.

  • The assumptions still in play.

Milestones serve two critical functions. First, they freeze the state of the system, allowing teams to clearly see what changed across iterations. Second, they provide traceability and evidence for decision-making.

These aren't just approval gates; they're deliberate moments for the team to reflect on assumptions, incorporate new knowledge, and adjust course based on verified data, ensuring every iteration builds on a solid foundation.

This is crucial for leadership and stakeholders to quickly grasp the system's current state, understand progress, and make informed decisions about resource allocation and future direction.

Milestone: A structured snapshot of the system's state at the end of an iteration. It captures the current requirements, architecture, test coverage, and remaining assumptions, serving as a learning checkpoint and evidence package.

Example: Milestone V1 — First End-to-End Closed Loop

Learning Goal: Can we close the thrust loop with current avionics?

Test Article:

  • Engine Block A

  • Avionics Unit Rev C

  • Power board Rev 2

  • Simulated load (500N)

Verifiable outcomes

Tools designed for iterative hardware development can automatically track all this. By V2, teams can look back and see exactly what changed and why. A good tool will capture what was known at the time and what changed going into V2, automatically flagging verification gaps and assumption deltas.
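As a rough illustration of what "track all this" can mean, here is a hedged sketch of a milestone treated as a frozen snapshot that later iterations are diffed against. The structures, IDs, and field names are hypothetical, not a specific product's schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MilestoneSnapshot:
    """Frozen state of the system at the end of an iteration."""
    name: str
    requirements: dict[str, str]   # requirement ID -> text as it stood
    verification: dict[str, str]   # requirement ID -> "pass" | "fail" | "untested"
    assumptions: frozenset[str]    # assumptions still in play


def diff_milestones(old: MilestoneSnapshot, new: MilestoneSnapshot) -> dict:
    """Summarize what changed between milestones and where gaps remain."""
    return {
        "changed_requirements": {
            rid for rid, text in new.requirements.items()
            if old.requirements.get(rid) != text
        },
        "new_assumptions": new.assumptions - old.assumptions,
        "retired_assumptions": old.assumptions - new.assumptions,
        "verification_gaps": {
            rid for rid, status in new.verification.items() if status != "pass"
        },
    }


v1 = MilestoneSnapshot(
    name="V1",
    requirements={"TEMP-REQ-23": "Avionics board shall operate below 70°C"},
    verification={"TEMP-REQ-23": "fail"},
    assumptions=frozenset({"airflow ducting aligned as drawn"}),
)
v2 = MilestoneSnapshot(
    name="V2",
    requirements={"TEMP-REQ-23": "Avionics board shall operate below 70°C"},
    verification={"TEMP-REQ-23": "pass"},
    assumptions=frozenset(),
)
print(diff_milestones(v1, v2))
```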

3. Requirements and Verification in Iterations

In old-school systems engineering, requirements are exhaustive and static. In agile, they’re progressive and verified as they go.

Teams don’t write 1,000 requirements and call it done. They start small and write the requirements that matter right now and link them to tests. Just enough to define the current scope.

Not all requirements are created equal and most are not static. We can distinguish between two levels of constraint:

  • Top-level mission requirements: These are fixed, outcome-driven goals (e.g. “must deliver 250kg to LEO” or “avionics board shall operate below 70°C”).

  • Lower-level design parameters: These are flexible and often traded during development — things like tank pressure margins, board layout, wiring harness length, or packaging.

Subsystem teams constantly trade these lower-level parameters to optimize the system as a whole. If GNC can handle more latency, propulsion might relax its throttle curve. If thermal margins improve, mass might increase slightly in exchange for faster integration.

These aren’t ad hoc decisions. They’re structured, traceable trades tracked in the same tools that manage requirements and architecture. This flexibility allows teams to adapt quickly without losing control.

You protect the mission goal, not a spreadsheet full of guesses.

Each iteration verifies a subset of the system. This is just enough to test critical behaviors and update the design. In your tools, you should link each requirement to:

  • A test case (sim, calc, rig, or flight)

  • Pass/fail status and history

  • Confidence level and last verified date

Teams review verification health weekly, seeing what passed, what failed, and what hasn’t been tested yet. Teams that truly do CI/CD in hardware treat test failures as data, not drama. They're opportunities to learn and refine.
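Here is one minimal way that linkage and the weekly review could be modeled, assuming a simple in-memory structure rather than a specific requirements tool; IDs, field names, and the confidence-update rule are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestRun:
    test_id: str       # a sim, calc, rig, or flight test
    passed: bool
    run_date: date


@dataclass
class Requirement:
    req_id: str
    text: str
    confidence: str = "assumed"            # assumed -> proposed -> verified
    runs: list[TestRun] = field(default_factory=list)

    def record(self, run: TestRun) -> None:
        """Attach a test result and let the evidence drive confidence."""
        self.runs.append(run)
        self.confidence = "verified" if run.passed else "proposed"

    def last_verified(self) -> date | None:
        passes = [r.run_date for r in self.runs if r.passed]
        return max(passes) if passes else None


def weekly_health(reqs: list[Requirement]) -> dict[str, list[str]]:
    """What passed, what failed, and what hasn't been tested yet."""
    report: dict[str, list[str]] = {"passed": [], "failed": [], "untested": []}
    for r in reqs:
        if not r.runs:
            report["untested"].append(r.req_id)
        elif r.runs[-1].passed:
            report["passed"].append(r.req_id)
        else:
            report["failed"].append(r.req_id)
    return report
```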

Milestones

Each milestone answers a different question. These are designed to drive learning, not just reviews.

4. Change Tracking and Decision Support

In complex hardware programs, change is inevitable. Assumptions will be overturned, test results will surprise, and better designs will emerge. The goal isn't to prevent change, but to structure and manage it effectively.

Effective teams don't just react to change; they engineer it. This means tracing root causes, precisely revising constraints, and systematically propagating updates across both requirements and architecture. Changes are not isolated; they are connected and their impact must be understood throughout the system.

Every iteration surfaces new information:

  • New test results

  • Revised design constraints

  • Architecture updates

  • Requirement changes

"Teams need to get comfortable with micro-waste to avoid macro-failure."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Teams need to get comfortable with micro-waste to avoid macro-failure."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Teams need to get comfortable with micro-waste to avoid macro-failure."

Pari Singh

Pari Singh

CEO, Flow Engineering

Test results reveal unexpected behavior. Design parameters shift. Architecture diagrams evolve. And requirements are refined based on what the team learns. This isn’t a one-time cleanup. It’s continuous.

Your tools should support that by default. Every requirement, architecture block, and assumption should be traceable. When something breaks, the impact should be obvious. And when something changes, that change should flow through the system.

In Practice: Thermal Test Failure

Trigger: TEST-2024-06-001-C (thermal load test) fails at 46s. Max board temp = 76.2°C.

Requirement affected: TEMP-REQ-23 "Avionics board shall operate below 70°C"

In tools: Requirement flagged. Owner notified.

Outcome:

  • Root cause: Misaligned airflow ducting identified.

  • Action: Cooling block updated in design.

  • New test case created: TEST-2024-06-009-B to verify the fix.

  • Plan: V2 milestone will include the revised thermal harness and the new test.

This way, decisions are traceable. No more wondering why something changed two iterations ago. Effective tools for iterative development show what happened, when, and why, allowing teams to map iterations across their real lifecycle, connecting every decision to clear, visible evidence.
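A hedged sketch of that trigger path, reusing the IDs from the example above; the notification function, owner handle, and retest naming scheme are stand-ins for whatever your tooling actually provides.

```python
from dataclasses import dataclass, field


@dataclass
class FailureRecord:
    test_id: str
    requirement_id: str
    observed: str
    root_cause: str | None = None
    follow_up_tests: list[str] = field(default_factory=list)


def notify(owner: str, message: str) -> None:
    # Stand-in for whatever channel the team actually uses (tool inbox, chat, email)
    print(f"[to {owner}] {message}")


def handle_test_failure(record: FailureRecord, owner: str) -> None:
    """Flag the requirement, notify the owner, and plan a retest once a fix exists."""
    notify(owner, f"{record.requirement_id} flagged: {record.test_id} failed ({record.observed})")
    if record.root_cause:
        retest_id = record.test_id + "-RETEST"   # hypothetical naming scheme
        record.follow_up_tests.append(retest_id)
        notify(owner, f"Root cause '{record.root_cause}' -> retest {retest_id} planned for next milestone")


failure = FailureRecord(
    test_id="TEST-2024-06-001-C",
    requirement_id="TEMP-REQ-23",
    observed="board reached 76.2°C at 46s",
    root_cause="misaligned airflow ducting",
)
handle_test_failure(failure, owner="thermal RE")
```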

5. Responsibility > Process

The most effective teams operate based on a philosophy of responsibility over process. No amount of process can replace this for getting things done right, efficiently. While processes provide a framework, superior teams prioritize individual and collective responsibility for system outcomes. This means engineers don't just build their components; they own the real-world performance of the entire system.

This emphasis on responsibility is not a rejection of process. Rather, it makes process meaningful.

When the individual making a decision also experiences its outcome, rigor follows naturally.

Team practices reinforce this mindset and serve as essential mechanisms for:

  • Surfacing risk effectively.

  • Accelerating learning through direct feedback.

  • Keeping iteration on track with appropriate structure.

Practices

Tooling Over Rulemaking

Traditional programs rely on gates, checklists, and control boards. These scale poorly. They reward compliance over clarity and delay signal in favor of ceremony.

High-performing teams coordinate through the work itself. Verification gaps surface the moment a test fails. Requirement conflicts appear as soon as a constraint shifts. Traceability is ambient, captured by default rather than assembled by hand.

Coordination should feel more like a social network than a command structure: real-time updates, embedded discussions, and live links between requirements, tests, and decisions reduce latency across the org.

By the time a team enters a V&V review or shares a milestone snapshot, the signal is already visible. What changed. What passed. What broke. What needs attention.

Great tools don't just store knowledge. They shape behavior. The best ones make sound engineering practice automatic.

"What you want is to codify the people. Give autonomy, creativity, extreme ownership. That’s the key."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"What you want is to codify the people. Give autonomy, creativity, extreme ownership. That’s the key."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"What you want is to codify the people. Give autonomy, creativity, extreme ownership. That’s the key."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

6. Common Anti-Patterns

Embracing iterative V-cycles is essential for navigating the uncertainty of modern hardware development. But wanting to be agile isn’t enough. Without discipline and awareness of common pitfalls, teams often fall into patterns that defeat the purpose of iteration.

The iterative V-cycle is a discipline. Each loop has a purpose: to reduce one set of risks, prove one set of assumptions, and increase one level of confidence. It’s not about speed for its own sake. It’s about learning with structure. If the traditional V is a single pass, the iterative V is a controlled loop. The more reps you run, the more confidence you build.

These traps slow progress, hide risk, and create false confidence. They mirror the same failure modes that CI/CD is designed to eliminate. To get real value from iteration, teams need to spot and avoid these habits.

That’s how fast teams make safe systems. This type of CI/CD approach works because the loop is tight. Next, we’ll look at how high-performing teams structure that convergence. Where leadership sets direction, teams build and test, and the real system takes shape in between.

Anti-Patterns-Chapter-1

"Get to the heart of what you're trying to learn or demonstrate in this iteration. If you build a machine that churns out iterations, you'll know that if you miss it on this one, you'll get it on the next."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Get to the heart of what you're trying to learn or demonstrate in this iteration. If you build a machine that churns out iterations, you'll know that if you miss it on this one, you'll get it on the next."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Get to the heart of what you're trying to learn or demonstrate in this iteration. If you build a machine that churns out iterations, you'll know that if you miss it on this one, you'll get it on the next."

Pari Singh

Pari Singh

CEO, Flow Engineering

Key Takeaways:

  • Agile SE employs multiple, smaller, bounded V-cycles (V0, V1, etc.) as continuous, risk-reducing experiments.

  • Each V-cycle focuses on specific unknowns, with defined configurations, tests, and exit criteria.

  • This "CI/CD for hardware" approach builds confidence incrementally, accelerating learning and de-risking the system faster than traditional methods.

Chapter 10 — Top-Down Intent Meets Bottom-Up Reality

In traditional programs, system design flows top-down: define the mission, derive requirements, allocate to subsystems, and review at the end. It assumes leadership can lock down intent and constraints upfront, and that subsystems can just fill in the blanks later.

But modern programs rarely follow that path. The system doesn’t emerge from one direction. The final system emerges in the middle.

Agile systems engineering doesn’t begin with a perfect spec. It begins with intent and assumptions. System-level goals are often rough at first. Subsystem constraints emerge from what teams can actually build, test, and learn.

The final system takes shape as top-down intent and bottom-up reality converge through iteration, feedback, and hard evidence.

Bottom-Up Reality: Constraints, limitations, and opportunities that emerge from the detailed design, testing, and capabilities of individual subsystems and components.

Top-Down Intent: Mission-level goals, system requirements, and overarching constraints derived from leadership, customers, contracts, or certification authorities, defining what success looks like for the entire system.

This chapter shows how high-performing teams structure that convergence in practice and how purpose-built tools help align intent, design, and test in real time.

Hypothetical Program: Autonomous Cargo Drone

Imagine you’re building a 300kg-class autonomous cargo drone for short-range military logistics. The mission: carry 50kg over 120km in under 90 minutes. Operate without a pilot, survive hot-and-high conditions, and return with full telemetry. That mission-level intent is clear. But the details? Not yet. Your avionics, batteries, thermal, and airframe teams are all still figuring out what’s feasible. So how do you move forward?

1. Top-Down: Mission Intent and System Needs

Top-down engineering begins with intent. This comes from leadership, customers, contracts, or certification bodies. It defines what success looks like for the system, even if the “how” is still unclear.

Example system needs:

  • Deliver 50kg payload over 120km within 90 minutes.

  • Autonomously navigate GPS-denied zones.

  • Maintain internal battery temps <55°C under 40°C ambient.

  • Achieve 20-minute turnaround between flights.

These become system-level requirements and constraints. In your requirement tools, you capture them as:

  • High-level mission and system requirements.

  • Linked rationale (e.g., RFP docs, stakeholder input).

  • Confidence scores (e.g., draft vs. confirmed).

  • Initial assumptions (e.g., payload size, altitude range).

2. Bottom-Up: Constraints From Design and Test

While the system engineers are defining the top-down intent, subsystem teams are surfacing the hard truths. They’re building, testing, and discovering the real-world constraints:

  • The battery team can only hit 750Wh/kg with a mature cell, not the ideal 900Wh/kg initially assumed.

  • The avionics are throttling above 65°C in preliminary thermal tests.

  • Propulsion peak output is falling off significantly at simulated high altitudes.

These aren’t just observations. They’re critical engineering inputs: bottom-up constraints that fundamentally shape what’s possible for the system.

"Modern teams don’t start with a hard design point. They start with a window. That window is dynamic, and it’s based on assumptions that evolve over time."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Modern teams don’t start with a hard design point. They start with a window. That window is dynamic, and it’s based on assumptions that evolve over time."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Modern teams don’t start with a hard design point. They start with a window. That window is dynamic, and it’s based on assumptions that evolve over time."

Pari Singh

Pari Singh

CEO, Flow Engineering

Subsystem owners (often called Responsible Engineers or REs) then define and own their specific design criteria:

  • Each criteria is quantitative and testable.

  • Tagged with a confidence level (e.g., assumed, proposed, or verified).

  • Linked to test results, models, or studies.

  • Includes rationale and uncertainty.

Responsible Engineer (RE): The individual accountable for the design, development, and verification of a specific component, subsystem, or interface. REs are empowered to propose design criteria and manage their elements iteratively.

"Everyone needs to think like a systems engineer. If your widget is amazing, but the system doesn’t deliver, no one actually cares about your widget being amazing and great."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Everyone needs to think like a systems engineer. If your widget is amazing, but the system doesn’t deliver, no one actually cares about your widget being amazing and great."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Everyone needs to think like a systems engineer. If your widget is amazing, but the system doesn’t deliver, no one actually cares about your widget being amazing and great."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

This table, ideally within your system engineering tool, becomes a live system model. Everyone sees what’s proposed, what’s been confirmed by test, and what still needs work. It allows for a single requirement to be linked directly to its owner, rationale, underlying assumptions, and relevant test links.
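For illustration, here is a minimal sketch of what a couple of rows in that live model might carry. The field names, IDs, and owners are hypothetical, loosely based on the cargo-drone numbers above.

```python
from dataclasses import dataclass, field


@dataclass
class DesignCriterion:
    """One subsystem-owned, testable design criterion in the live system model."""
    crit_id: str
    statement: str                # quantitative and testable
    owner: str                    # the Responsible Engineer
    confidence: str               # "assumed" | "proposed" | "verified"
    rationale: str                # why this value, and how uncertain it is
    linked_tests: list[str] = field(default_factory=list)


criteria = [
    DesignCriterion(
        crit_id="BATT-ENERGY-01",
        statement="Pack specific energy >= 750 Wh/kg with mature cells",
        owner="Battery RE",
        confidence="proposed",
        rationale="Cell testing did not support the original 900 Wh/kg assumption",
        linked_tests=["CELL-CHAR-003"],
    ),
    DesignCriterion(
        crit_id="AVIONICS-THERM-01",
        statement="Avionics shall not throttle below 65°C board temperature",
        owner="Avionics RE",
        confidence="assumed",
        rationale="Preliminary thermal test showed throttling above 65°C",
    ),
]

# Everyone can see what's proposed, what's verified, and what still needs work
for c in criteria:
    print(f"{c.crit_id:18} {c.confidence:9} owner={c.owner:12} tests={c.linked_tests}")
```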

Live system model

3. Where They Meet: Continuous Reconciliation

Every week, teams compare top-down intent with bottom-up constraints. What did we want to do? What can we actually do?

Continuous Reconciliation: The ongoing process of comparing top-down mission intent and system requirements with bottom-up design capabilities and test results, identifying and resolving misalignments in real-time.

What matters is that teams see where they’re out of sync and resolve it rapidly.

Example
  • System need: “Flight time ≥ 90 minutes”

  • Battery model: “Estimated runtime 74 minutes”

  • Result: A flag is raised. The system-level requirement is out of sync with a critical subsystem’s capability.

At this point, leadership can:

  • Approve a change to the mission envelope (e.g., reduce required flight time or payload).

  • Re-evaluate the system-level goal and explore alternative solutions (e.g., different battery chemistry, energy harvesting).

Purpose-built tools help make these misalignments visible by default. If a subsystem constraint breaks a system requirement, the conflict is flagged immediately. Reviewers see the full trace without digging through documents.

Visibility is key. In traditional systems, disconnects show up late, often leading to costly rework. In agile, you are designing your process to have them show up the same day. This means if a test fails, or a responsible engineer updates a design criteria, any affected system-level requirements are flagged, showing the complete chain of impact.
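A minimal sketch of that reconciliation check, assuming shared metric names and "at least this much" targets; a real tool would also carry units, the direction of each inequality, and trace links back to requirements and tests.

```python
def reconcile(system_needs: dict[str, float],
              subsystem_estimates: dict[str, float]) -> list[str]:
    """Flag every system need whose current subsystem estimate falls short."""
    flags = []
    for metric, target in system_needs.items():
        estimate = subsystem_estimates.get(metric)
        if estimate is None:
            flags.append(f"{metric}: no subsystem estimate yet")
        elif estimate < target:
            flags.append(f"{metric}: need {target}, current estimate {estimate} - out of sync")
    return flags


# The flight-time example above: intent says >= 90 minutes, the battery model says 74
print(reconcile(
    system_needs={"flight_time_min": 90.0},
    subsystem_estimates={"flight_time_min": 74.0},
))
```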

top-down-meets-bottom-up

4. Design Work Starts Early. Even If Specs Aren’t Final

REs and subsystem teams don’t wait for system-level specs to be finished. They start with assumptions and tentative targets and work forward as evidence rolls in.

Example: Cargo Drone Early Assumptions
  • Mechanical team: Assumes 125kg dry mass. Based on this, they start building structural mockups and running preliminary stress analyses.

  • Thermal team: Assumes a <55°C internal temperature goal. They immediately begin passive airflow studies and explore cooling options.

  • Battery team: Assumes a 900Wh target. They start with enclosure layout designs and investigate potential cell configurations.

  • Flight team: Uses a 25ms control target. They run flight dynamics simulations to understand stability and control authority.

These assumptions are explicitly flagged, not hidden. They live within your system engineering tools, traceable to their rationale and linked to planned test activities. This allows teams to iterate on design without waiting for perfect upfront definition.

In your tools, it’s important to enable your REs by letting them:

  • Propose design criteria based on their expertise and early findings.

  • Flag assumptions with clear rationale and expected verification methods.

  • Attach relevant rationale, test data, and uncertainty markers.

As the system evolves, these early assumptions either get verified by test, or they are updated based on new information. This creates a living, breathing design.

system-design-philosophy-traditional-vs-agile-se

5. Weekly Syncs to Align Intent and Constraint

Alignment doesn’t happen in documents. It happens in structured conversation. Top-down (system engineers) and bottom-up (subsystem teams/REs) teams review deltas weekly:

  • Which system-level assumptions changed this week?

  • Which subsystem constraints were updated?

  • What’s out of sync between our intent and our reality?

  • Are we still inside our overall mission envelope?

Subsystem changes should auto-propagate. You don’t need to chase Slack threads or spreadsheets. Your requirement software needs to show what’s new, what’s blocked, and what to do next.

A structured view within your tool can highlight:

  • Recent requirement changes.

  • Test results that invalidate assumptions.

  • Design criteria that conflict with system-level goals.

This makes tradeoffs explicit and facilitates rapid, informed decision-making.
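One possible way to assemble that structured view, sketched with hypothetical change-event records rather than any specific tool's change feed:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class ChangeEvent:
    when: date
    kind: str                 # "requirement_change" | "test_result" | "criterion_update"
    item_id: str
    summary: str
    invalidates: list[str] = field(default_factory=list)   # assumptions or requirements at risk


def weekly_sync_view(events: list[ChangeEvent], today: date) -> dict[str, list[ChangeEvent]]:
    """Group the last week's changes so the sync can walk through them quickly."""
    recent = [e for e in events if e.when >= today - timedelta(days=7)]
    return {
        "requirement_changes": [e for e in recent if e.kind == "requirement_change"],
        "invalidating_test_results": [e for e in recent
                                      if e.kind == "test_result" and e.invalidates],
        "criterion_updates": [e for e in recent if e.kind == "criterion_update"],
    }
```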

weekly-tradeoffs-and-sync

"You have to empower engineers to be able to have that creativity and autonomy to apply what they learn in the design and test phases back into updating the requirements."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"You have to empower engineers to be able to have that creativity and autonomy to apply what they learn in the design and test phases back into updating the requirements."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"You have to empower engineers to be able to have that creativity and autonomy to apply what they learn in the design and test phases back into updating the requirements."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

6. Common Anti-Patterns

anti-patterns-chapter-2

Mission-level intent defines the goal. Subsystem constraints define the boundaries. The real system lives in the space between and evolves over time.

Agile teams don’t fight this. They embrace it. They use tools and rituals to make alignment a rhythm, not a reactive cleanup:

  • Propose early assumptions based on best available data.

  • Link criteria to their rationale and planned tests.

  • Reconcile intent and constraint weekly.

  • Flag and resolve conflicts before they snowball.

This is how modern teams move fast without disconnect. This is how robust systems get built.

Systems emerge from both ends: top-down goals and bottom-up constraints. Tools designed for this iterative approach help teams visualize where they align and where they don’t. They give REs and SEs a shared view of what’s proposed, verified, or broken.

With effective tools, you can:

  • Link system-level requirements to subsystem criteria, assumptions, and tests.

  • Tag each assumption or criteria with confidence levels and rationale.

  • Flag conflicts automatically when subsystem constraints break system intent.

  • Run weekly reviews that surface changes across levels in real time.

Key Takeaways:

  • Modern systems emerge from the dynamic convergence of top-down mission intent and bottom-up subsystem reality.

  • Top-down defines "what success looks like," while bottom-up reveals "what's actually feasible."

  • Interfaces are owned by Responsible Engineers (REs) and are continuously reconciled.

  • Continuous reconciliation, often weekly, identifies misalignments early, driving real-time trade-offs and design updates rather than late surprises.

Chapter 11 — Continuous Verification: Test Early, Test Often

In traditional programs, verification is a gate. It happens late. After Preliminary Design Review (PDR), after Critical Design Review (CDR), after integration. You verify what you built, often long after critical design decisions were made. That delay is dangerous.

Modern teams can’t afford that. Their designs change too fast. Their architectures are too integrated. And their biggest risks come from what’s unknown, not what’s already documented.

Agile teams treat verification as a design input, not just a closeout activity. They test early, test often, and use every test to inform the next design decision.

Every design criteria should have a plan to verify it from day one. That doesn’t mean a final hardware test. It might start with a simulation, a calculation, or a basic integration check. What matters is the signal.

In a culture of responsibility, engineers are empowered and expected to:

  • Write their own test plans rather than adhering strictly to a checklist.

  • Track telemetry themselves to understand live system behavior.

  • Review failures directly from the hardware they built, fostering learning through experience.

"Things like temperature, thrust, dimensions, firmware logic—these aren’t static. They’re assumptions. They change daily. Especially early in the program."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Things like temperature, thrust, dimensions, firmware logic—these aren’t static. They’re assumptions. They change daily. Especially early in the program."

Pari Singh

Pari Singh

CEO, Flow Engineering

"Things like temperature, thrust, dimensions, firmware logic—these aren’t static. They’re assumptions. They change daily. Especially early in the program."

Pari Singh

Pari Singh

CEO, Flow Engineering

This approach ensures design decisions are made with real-world application in mind, knowing the engineer responsible for the design will also address any failures. This "design a testable system and test what you fly" philosophy leads to more robust systems and fewer surprises. Engineers are driven to think beyond their individual elements, considering integration, questioning assumptions, and proactively identifying problems. This chapter explains how iterative teams continuously verify their system across cycles, how they structure verification coverage and status in purpose-built tools, and how test failures trigger meaningful design updates, not just paperwork.

Example: Thrust Vectoring System

You’re building a vectored thrust system for a reusable upper stage. The mission requirement is simple: maintain control authority at up to 10° pitch under max thrust. But every assumption is in motion:

  • Can your actuators respond fast enough?

  • Will the gimbals hold position under thermal load?

  • How do you know the control software is compensating correctly?

You can’t answer these questions at the end of the program. You need to build confidence in real time. Test by test, loop by loop.

1. Start With Verifiable Design Criteria

Every design criteria should have a path to verification from the moment it’s written. If not, it’s not a requirement. It’s a wish.

Agile systems engineers write design criteria with:

  • A clear verification method (e.g., simulation, test, inspection).

  • A confidence level (progressing from assumed → proposed → verified → frozen).

  • Linked rationale (models, customer input, prior data).

In your tools, each requirement needs to be traceable to:

  • One or more test cases.

  • A verification method tag.

  • Pass/fail status and latest evidence.

  • Owner and next planned test.

The goal is to enable every Responsible Engineer (RE) to answer: What’s been tested? What’s failed recently? What design criteria are still assumptions? What’s next in the test queue?

Verification method maturity

The verification method matures alongside the system. Early iterations may only simulate. Later ones test real hardware. Your tools need to allow your whole team to track that progression.
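As a sketch of that progression, here is a hypothetical verification plan for one criterion from the thrust-vectoring example, mapping each iteration to its planned method; the criterion ID and method names are illustrative.

```python
# How verification of one criterion might mature across iterations.
VERIFICATION_PLAN = {
    "TVC-AUTH-01": {               # "maintain control authority at up to 10° pitch"
        "V0": "simulation",        # control model only
        "V1": "bench rig",         # actuators in the loop, simulated loads
        "V2": "hot-fire rig",      # real thrust, instrumented gimbal
        "V3": "flight-like TVAC",  # mission-like thermal and vacuum environment
    },
}


def method_for(criterion_id: str, iteration: str) -> str:
    """Look up how a criterion is planned to be verified in a given iteration."""
    return VERIFICATION_PLAN.get(criterion_id, {}).get(iteration, "not yet planned")


print(method_for("TVC-AUTH-01", "V1"))   # bench rig
print(method_for("TVC-AUTH-01", "V4"))   # not yet planned
```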

2. Verify Continuously, Not at the End

In traditional systems engineering, test is a phase. In agile systems engineering, teams don’t wait for CDR or PDR to verify functionality. Test cadence is continuous. Teams test what they can, when they can. It’s a rhythm:

  • Unit test plans run daily (simulation, CI rigs, code-in-the-loop).

  • Subsystem tests run weekly.

  • Full-stack integration tests happen each V-cycle.

  • Confidence Level: A qualitative or quantitative measure indicating the maturity, reliability, or certainty of a requirement, interface, architecture block, or assumption.

  • Assumed: Based on prior work, an educated guess, or a placeholder; requires future verification.

  • Proposed: Actively under evaluation, a target the team is working towards; not yet verified.

  • Verified: Supported by concrete test data, validated analysis, or direct evidence.

  • Frozen: Locked for integration, formal review, or production; indicates a high level of stability and verification.

Testing isn’t delayed until “the design is done.” It makes the design done. Verification isn’t an event. It’s a loop.

Modern teams automate this loop. This relentless, automated feedback cycle is the essence of CI/CD applied to hardware. Test rigs push results directly into your system engineering tool. Requirement statuses update automatically. Coverage reports are generated from linked tests. Change logs show what’s different since the last iteration. This gives the team instant, trustworthy visibility without manual chasing.

In this approach, test failures aren’t a final hurdle, but a signal to revise the spec. Traceability isn’t just for compliance. It’s used for active decision-making. The goal isn’t document completion. It’s continuous confidence-building.

In your tools, every test result should be linked directly to its corresponding requirement. As tests pass or fail, requirement statuses update automatically. This makes failures traceable, not buried. Confidence builds incrementally as each requirement is backed by real test data. This real-time verification chain is critical, showing how a single test failure can trigger a requirement change, for example.
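One hedged way to encode that automatic status update is a small confidence state machine. The transition rule below is an assumption for the sake of illustration, not a standard: passing evidence marks an item verified, freezing stays a deliberate manual step, and a failure re-opens anything that isn't frozen.

```python
from enum import Enum


class Confidence(Enum):
    ASSUMED = "assumed"
    PROPOSED = "proposed"
    VERIFIED = "verified"
    FROZEN = "frozen"


def apply_test_result(current: Confidence, passed: bool) -> Confidence:
    """Hypothetical update rule driven by incoming test evidence."""
    if current is Confidence.FROZEN:
        return current            # frozen items change only through formal review
    return Confidence.VERIFIED if passed else Confidence.PROPOSED


status = Confidence.ASSUMED
for passed in (True, True, False):    # two passing runs, then a regression
    status = apply_test_result(status, passed)
print(status)                         # Confidence.PROPOSED
```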

3. Failures Trigger Change

Test failures aren’t problems. They’re signals. The danger isn’t in failing. It’s in failing silently.

When a test fails in your system engineering tool:

  • The linked requirement is automatically marked ❌.

  • The Responsible Engineer (RE) is notified.

  • Associated assumptions are flagged or updated.

  • Impact is propagated to downstream blocks.

This immediate feedback and automated propagation of failure mirrors the 'fail fast, fix fast' mantra of effective CI/CD, ensuring issues are addressed at their source without delay.

V0 to V3 verification focus

Each step isn’t a repeat but a refinement. You’re building confidence, not just checking boxes.

Example: Thermal Failure on Gimbal

Test: TEST-021-C fails at 60s in TVAC, max temp = 78.3°C
Requirement: GIMBAL-THERM-02 fails
Action:

  • Owner notified automatically.

  • Root cause identified: heat soak via an aluminum bracket.

  • Cooling redesign initiated.

  • New requirement added for bracket thermal specification.

  • Retest planned for V3 with updated hardware.

Tools for iterative development log the change and trace its effects. This lets reviewers and certification bodies follow the decision path without digging through 20 slides. This real-time feedback contrasts sharply with traditional, bottom-loaded verification models.

"We take prototypes out there, fly them, gather real data, and iterate. Some of those tests fail. That’s fine—we learn"

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"We take prototypes out there, fly them, gather real data, and iterate. Some of those tests fail. That’s fine—we learn"

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"We take prototypes out there, fly them, gather real data, and iterate. Some of those tests fail. That’s fine—we learn"

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

4. System-Level Verification Health

Fast teams don’t wait for formal reviews to check verification status. They monitor it weekly. Most teams don’t know what’s verified and what’s still a guess. CI/CD teams do. That’s the difference between flying blind and flying fast.

verification status

In your tools, you should show verification coverage in real time, accessible for the whole team:

  • Percentage of requirements verified.

  • Percentage pass/fail/pending status.

  • Failures by subsystem or owner.

  • Recent regressions or retests.

  • Confidence deltas since the last milestone.

This lets teams rebalance: if guidance is 90% verified but power is 40%, you know exactly where to focus resources.
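A minimal sketch of that coverage roll-up, assuming each requirement record carries a subsystem tag and a current status; the field names and IDs are illustrative.

```python
from collections import defaultdict


def coverage_by_subsystem(requirements: list[dict]) -> dict[str, float]:
    """Percentage of each subsystem's requirements currently verified."""
    totals: dict[str, int] = defaultdict(int)
    verified: dict[str, int] = defaultdict(int)
    for req in requirements:
        totals[req["subsystem"]] += 1
        if req["status"] == "pass":
            verified[req["subsystem"]] += 1
    return {sub: 100.0 * verified[sub] / totals[sub] for sub in totals}


reqs = [
    {"id": "GNC-001", "subsystem": "guidance", "status": "pass"},
    {"id": "GNC-002", "subsystem": "guidance", "status": "pass"},
    {"id": "PWR-001", "subsystem": "power", "status": "fail"},
    {"id": "PWR-002", "subsystem": "power", "status": "pending"},
]
print(coverage_by_subsystem(reqs))   # {'guidance': 100.0, 'power': 0.0}
```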

As an engineering leader, your role is to:

  • Ensure test failures don’t pile up.

  • Adjust scope transparently and not sweep issues under the rug.

  • Empower REs to update design criteria quickly based on new data.

Your system engineering tool should constantly update design criteria as new test results come in, providing a dynamic verification dashboard.

5. Common Anti-Patterns

anti-patterns-chapter-3

In Agile, you build through test. You don’t finish a spec, then test it. You write a test, then use it to refine the spec.

Each iteration includes:

  • A comprehensive verification plan.

  • A clear test target.

  • A structured response loop for addressing results.

Effective system engineering tools let teams establish seamless feedback loops by linking every requirement to its test evidence and seeing live verification health across the system.

You don’t wait for PDR to know what’s real. No waiting for integration. No hidden regressions. Just a system that gets more real, one test at a time. This allows you to:

  • Link every requirement to one or more test cases (simulation, bench, flight).

  • Track real-time pass/fail status and last verified date.

  • Automatically flag failed requirements and notify owners.

  • Review verification coverage by subsystem, owner, or iteration.

Key Takeaways:

  • Verification is a design input, not a late-stage gate: every design criteria has a path to verification (simulation, rig, or flight) from the day it is written.

  • Test results link directly to requirements, so statuses, confidence levels, and coverage update automatically as evidence arrives.

  • Failures are signals, not setbacks: each one triggers a traceable design change, and weekly verification health reviews show exactly where to focus next.

Chapter 12 — Fast Architecture: Iterate Blocks, Freeze What’s Proven

In traditional systems engineering, architecture is front-loaded. Teams aim to define the entire system architecture early, allocate every requirement, and freeze the interfaces. But in fast-moving hardware programs, that rarely works. Early architectural decisions are based on unproven assumptions. Locking them in too soon creates expensive rework later.

Agile teams treat architecture as a living structure. They define just enough to support the next iteration. They treat architecture like code: block, test, revise. As the system evolves, the architecture gains fidelity, constraints are verified, and interfaces are solidified through test. The goal is not to get it right the first time; it's to iterate toward correctness.

This doesn't mean deferring architecture indefinitely or skipping detailed design. Instead, it means starting with a minimal, functional framework: high-level blocks that represent key capabilities and their relationships. Incrementally adding detail and rigor as learning occurs and risks are retired. The goal is to avoid upfront paralysis and costly rework caused by premature specificity.

Architecture Block (Block): A functional or physical grouping of system elements representing a discrete capability or component within the system architecture. In agile SE, blocks are initially high-level and gain detail incrementally through iterations.

"Get that initial system architecture and block diagram down, define the interfaces. That enables you to start developing and delivering pieces of the system independently."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Get that initial system architecture and block diagram down, define the interfaces. That enables you to start developing and delivering pieces of the system independently."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Get that initial system architecture and block diagram down, define the interfaces. That enables you to start developing and delivering pieces of the system independently."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Early on, when everyone starts moving, it might look chaotic. That’s why you need that initial architecture and direction. So when teams start vectoring, they’re generally going in the right direction. They might zig-zag, but that’s okay. That’s the point of iterative development. Build something, test it, learn, and correct your course."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Early on, when everyone starts moving, it might look chaotic. That’s why you need that initial architecture and direction. So when teams start vectoring, they’re generally going in the right direction. They might zig-zag, but that’s okay. That’s the point of iterative development. Build something, test it, learn, and correct your course."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Early on, when everyone starts moving, it might look chaotic. That’s why you need that initial architecture and direction. So when teams start vectoring, they’re generally going in the right direction. They might zig-zag, but that’s okay. That’s the point of iterative development. Build something, test it, learn, and correct your course."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

This chapter shows how fast-moving teams model their system incrementally, track architectural decisions over time, and balance flexibility with the need for alignment.

Example: Satellite Payload Controller

You’re developing a payload controller for a modular satellite bus. The system must integrate with multiple payload types (optical, radar, comms), maintain synchronization with the spacecraft clock, and survive radiation and thermal extremes.

The architecture must:

  • Support flexible payload interfaces (initially TBD).

  • Allow modular swaps between FPGA and CPU-based control.

  • Handle thermal constraints across the board stack.

You can’t solve all this in one pass. Instead, you define what matters most now and then refine as tests give you signal.

1. Start with Functional Blocks, Not the Whole System

Instead of defining every block from day one (full CAD or wiring diagrams), agile teams start with a high-level block diagram and flesh out only what’s needed for the next milestone.

These are rough, high-level representations that answer three questions:

  • What needs to exist?

  • What connects to what?

  • What are the top unknowns?

Early architectures are also built around risk, not completion. Each block should ideally surface a key unknown.

Examples:

  • Propulsion tank size impacts mass and integration envelope.

  • Thermal control method affects avionics layout.

  • Power routing scheme influences GNC performance and noise.

V0 architectures are intentionally rough. They’re not meant to be final—they're meant to get teams moving. The goal is to start framing the system so that subsystems can begin thinking about interfaces, integration points, and key unknowns. Early versions help surface what needs to be tested and serve as scaffolding for future design criteria as the system evolves.

They provide a necessary, shared mental model without burdening teams with unverified detail.

Architecture detail emerges as confidence builds. This progressive layering of detail ensures that architectural decisions are informed by the latest test data and verified understanding, rather than being based on early assumptions.

Example: Initial Block Model (V0)
v0-block-model

You don’t need to define every edge case. Just enough to get signal in V0.

In your system engineering tools, teams:

  • Create functional blocks (e.g., Payload Controller, Clock Sync Module, Comms Interface).

  • Link each block to requirements and test scope.

  • Assign ownership and a confidence level (tentative, confirmed, frozen).
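A minimal sketch of the block records described in the list above, with hypothetical names and owners drawn from the payload-controller example:

```python
from dataclasses import dataclass, field


@dataclass
class Block:
    """One functional block in the V0 architecture model."""
    name: str
    owner: str
    confidence: str = "tentative"          # tentative -> stable -> frozen
    linked_requirements: list[str] = field(default_factory=list)
    interfaces: list[str] = field(default_factory=list)   # names of connected blocks


v0_blocks = [
    Block("Payload Controller", owner="Avionics RE",
          linked_requirements=["PAYLOAD-IF-01"], interfaces=["Clock Sync Module"]),
    Block("Clock Sync Module", owner="Avionics RE",
          linked_requirements=["SYNC-01"], interfaces=["Payload Controller"]),
    Block("Comms Interface", owner="Comms RE"),   # top unknowns still open
]

# A quick view of what exists, who owns it, and what is still tentative
for b in v0_blocks:
    links = ", ".join(b.interfaces) or "(unconnected)"
    print(f"{b.name:20} {b.owner:12} {b.confidence:10} -> {links}")
```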

This approach to architecture development contrasts sharply with traditional methods:

v0-architecture-table

V0 Architecture Table: Early architecture model showing high-level blocks, ownership, and status. Helps teams frame the system, identify unknowns, and start defining interfaces without locking in premature detail.

2. Layer Detail Over Time

In early iterations, architecture changes frequently. That’s expected. What matters is making those changes traceable and intentional. The idea is to run quick trades and freeze late.

Architecture detail emerges as confidence builds. By V1 or V2, teams:

  • Add internal sub-blocks (e.g., power path, thermal mounts).

  • Attach interfaces between blocks (with underlying assumptions).

  • Specify key parameters (e.g., latency, throughput, size).

Your architecture tool should track these changes by iteration. You can see how each version evolved, what was added, and what got removed.

This gives a visual evolution of a single architecture block and its dependencies, and lets teams make a block trade and immediately see and update impacted requirements.

By V3, teams have often revised the architecture 3 to 5 times. This is normal. What’s important is the traceability: what changed, why, and what it affected.

Within your system engineering tool, teams can:

  • Compare architecture diagrams across milestones.

  • See exactly what blocks changed and when.

  • Review which interfaces or constraints evolved.

This helps systems engineers and leadership assess maturity, not just progress. A before-and-after diagram of V0 vs. V2 architecture makes the added detail easy to see.

[Figure: block-by-block-view]

Block-to-Block View: Zoomed-in view of two connected architecture blocks, showing their interface, ownership, and verification context. Useful for reviewing integration points and surfacing assumptions early.

Iterative Architecture Development Example
[Figure: iterative-architecture-development-example]

Each cycle adds detail. Architecture becomes a shared model, not just a frozen spec.

When a trade study or test result invalidates a decision:

  • Teams update the affected block.

  • Linked assumptions and requirements are updated automatically.

  • A new milestone snapshot is created to capture the change.

Subsystems should only freeze when the configuration is proven. Your tools need to support your team in tracking when a block is:

  • Tentative: Still in flux, internal consistency being explored.

  • Stable: Internally consistent, but external dependencies might still shift.

  • Frozen: Locked for external integration or formal review.

3. Tie Architecture to Verification

Architecture isn’t just a drawing. Each block should exist for a reason: to satisfy requirements and enable test. In a modern system model, every block links directly to:

  • The design criteria it supports.

  • The interfaces it owns or depends on.

  • The tests it must pass.

In your system engineering tool, architecture becomes a live model, not just a static diagram. Each block includes:

  • Traceability to linked requirements and design criteria.

  • Embedded test coverage and verification status (pass/fail/unknown).

  • Interface definitions tied to assumptions and test results.

  • Change history and rationale over time.

This gives teams and reviewers instant visibility into what’s in scope, what’s been verified, and what still needs work. A block’s detailed view, with its linked requirements, rationale, and test status, shows all of this at a glance.
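
As a sketch of how that visibility can be computed, a block’s verification status can be rolled up from its linked test results. The data and the roll-up rule below are invented for illustration.

```python
# Hypothetical roll-up of block verification status from linked test results.
block_tests = {
    "Payload Controller": {"TEST-001": "pass", "TEST-002": "unknown"},
    "Clock Sync Module":  {"TEST-010": "pass", "TEST-011": "pass"},
    "Comms Interface":    {},  # no tests linked yet
}

def rollup(results: dict) -> str:
    """Fail if anything failed; pass only if every linked test passed."""
    if not results:
        return "unknown"
    if "fail" in results.values():
        return "fail"
    return "pass" if all(r == "pass" for r in results.values()) else "unknown"

for block, results in block_tests.items():
    print(f"{block}: {rollup(results)}")
# Payload Controller: unknown, Clock Sync Module: pass, Comms Interface: unknown
```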

In Practice: Architecture-Guided Iteration

Let’s say you’re planning V2. You use your architecture tool to:

  • Identify blocks marked as “tentative.”

  • Check which blocks haven’t been tested yet.

  • Decide whether to freeze or revise based on the latest test results.

  • Tag blocks that will be in-scope for the next iteration.

This proactive approach keeps untested assumptions from becoming architecture debt.
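
A minimal sketch of that triage, assuming each block carries a status and a flag for whether it has passing test evidence behind it (field names and data invented for illustration):

```python
# Hypothetical V2 planning triage: surface blocks that are still tentative
# or have no passing test behind them.
blocks = [
    {"name": "Payload Controller", "status": "stable",    "verified": True},
    {"name": "Clock Sync Module",  "status": "tentative", "verified": False},
    {"name": "Comms Interface",    "status": "tentative", "verified": True},
]

v2_scope = [b["name"] for b in blocks if b["status"] == "tentative" or not b["verified"]]
print("Tag for V2 review:", v2_scope)   # ['Clock Sync Module', 'Comms Interface']
```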

4. Manage Interface Evolution

Interfaces often change the most. That’s where systems meet and where assumptions pile up.

In agile teams, interface specifications are:

  • Proposed early with a confidence level.

  • Verified through active integration testing.

  • Frozen only when sufficient test coverage and confidence are achieved.

Your tools should clearly show which interfaces:

  • Are confirmed or tentative.

  • Have passed end-to-end integration tests.

  • Still rely on unverified assumptions (e.g., bandwidth, physical alignment, timing).

We’ll talk about interfaces in depth in Chapter 13.

[Figure: interface-evolution]

Architecture evolves through iterations. V0 shows high-level blocks and rough signal paths—enough to test basic flow. V1 adds structured, testable interfaces with shared ownership and traceability.

5. Review Architecture as a Team Ritual

Architecture isn’t just a model. It’s a coordination mechanism. High-performing teams review it regularly.

Weekly Architecture Review

  • What blocks or interfaces changed since last week?

  • What’s still tentative? Why?

  • Which assumptions are verified? Which are not?

  • What’s being frozen for the next milestone?

In your tools, this happens through:

  • Architecture diffs (e.g., comparing V1 vs. V2).

  • Tagged blocks (e.g., “freeze pending,” “under review”).

  • Links to impacted requirements and test cases.

This turns architecture from a static planning artifact into a real-time source of truth.
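
An architecture diff can be as simple as comparing two milestone snapshots. The sketch below treats a snapshot as a set of block names, which is a deliberate simplification; a real diff would also cover interfaces, parameters, owners, and freeze status.

```python
# Hypothetical block-level diff between the V1 and V2 snapshots, capturing
# the FPGA-vs-CPU control trade from the payload controller example.
v1 = {"Payload Controller", "CPU Control Board", "Clock Sync Module"}
v2 = {"Payload Controller", "FPGA Control Board", "Clock Sync Module", "Thermal Strap"}

print("Added:  ", sorted(v2 - v1))   # ['FPGA Control Board', 'Thermal Strap']
print("Removed:", sorted(v1 - v2))   # ['CPU Control Board']
print("Kept:   ", sorted(v1 & v2))   # ['Clock Sync Module', 'Payload Controller']
```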

6. Common Anti-Patterns

[Figure: anti-patterns-chapter-5]

Architecture isn’t a phase. It’s a living model of system behavior—built up through iterations.

Agile teams don’t avoid architecture; they do it differently. They model what matters now. They freeze what’s been tested. They evolve the system one cycle at a time.

The goal isn’t to get the diagram perfect early. The goal is to use it to surface and retire unknowns. Early versions help teams align on interfaces, test key assumptions, and make better decisions as the system matures.

If a traditional architecture is a static blueprint, an agile architecture is a working model. It changes as the system gets more real and keeps everyone aligned on what’s true today.

Effective MBSE/Architecture tools help teams define functional blocks early, link them to constraints and tests, and track how the architecture matures without locking in guesses too soon. Each block is tied to its function, owner, dependencies, and current confidence level. As requirements and test results evolve, so do the blocks, and confidence builds over time. With such tools, you can:

  • Model architecture with versioned, editable blocks.

  • Link blocks directly to design criteria, assumptions, and verification plans.

  • Track freeze status and confidence levels across the system.

  • Compare architecture snapshots between milestones to see what changed and why.

Key Takeaways:

  • System architecture is a living structure that evolves incrementally, rather than being frozen upfront.

  • Teams start with high-level functional blocks, adding detail as knowledge and confidence build.

  • Architecture is explicitly tied to verification, with each block supporting requirements and enabling tests.

  • Iterative architecture allows for flexibility, risk retirement, and continuous alignment, avoiding premature commitments.


Chapter 13 — Interfaces in the Loop: Integration Isn’t a Phase

In traditional programs, integration is a single milestone. A thing you do once everything is "done." But by then, it’s often too late. Interfaces fail not because they’re inherently hard, but because they were never truly tested, just assumed.

Interface: A defined boundary or connection point between two or more system elements (hardware, software, human). Interfaces specify how elements interact, including physical, electrical, data, and timing characteristics. In agile SE, interfaces are treated as testable contracts.

"Everyone’s work needs to interface with others. We need to think about how data flows, how control flows, how software and hardware interact"

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Everyone’s work needs to interface with others. We need to think about how data flows, how control flows, how software and hardware interact"

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"Everyone’s work needs to interface with others. We need to think about how data flows, how control flows, how software and hardware interact"

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

This chapter shows how fast-moving teams define, own, and test interfaces from day one, and how purpose-built tools can turn them into live, traceable parts of the system.

Example: Propulsion ↔ Avionics Integration

You’re building a flight control system for a small launch vehicle. The avionics team owns the GNC (Guidance, Navigation, and Control) loop; the propulsion team owns the engine controller. They think they’re aligned on:

  • Power draw: 28V, 4A max.

  • Data link: via CAN bus.

  • Control rate: 50 Hz updates.

But these assumptions haven’t been tested together yet. Both teams are moving fast. If the interface isn’t clear and testable, it will break when it matters most, leading to costly delays or even mission failure.

1. Define Interfaces Early

Agile teams don’t wait for a formal Interface Control Document (ICD) to define interfaces. They start early, capture working assumptions, and tag them as tentative.

Here's how interface management differs:

[Figure: Interface management differences]

Even in early architectures, teams define how blocks interact: What flows between them? Signals? Power? Data? What timing constraints or physical boundaries exist? Who owns each side?

A good early interface specification includes:

  • Physical dimensions (mount points, connector types).

  • Electrical parameters (voltage, current, signaling).

  • Data protocols (bus type, message schema, timing).

  • Ownership and assumptions (who owns what, what’s still TBD).

Interfaces don’t need to be finalized—but they do need to exist. That’s how teams align early and avoid late surprises.

Tools for iterative development help teams:

  • Create interface objects between architecture blocks.

  • Specify electrical, data, timing, and physical characteristics.

  • Assign owners on both sides and tag status (tentative, proposed, frozen).

  • Link interfaces to assumptions, design criteria, and test coverage.

Each interface becomes a living part of the system model. Traceable, testable, and updated as the system evolves.
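
Here is a minimal sketch of such an interface object, populated with the propulsion ↔ avionics numbers from the example above. The structure and field names are hypothetical, not a specific tool's API.

```python
from dataclasses import dataclass, field


@dataclass
class Interface:
    name: str
    side_a: str                  # block on side A
    side_b: str                  # block on side B
    owner: str                   # single responsible engineer
    status: str = "tentative"    # tentative -> confirmed -> frozen
    electrical: dict = field(default_factory=dict)
    data: dict = field(default_factory=dict)
    assumptions: list = field(default_factory=list)
    tests: list = field(default_factory=list)


gnc_engine = Interface(
    name="GNC <-> Engine Controller",
    side_a="Avionics / GNC",
    side_b="Propulsion / Engine Controller",
    owner="RE-avionics",
    electrical={"voltage_V": 28, "max_current_A": 4},
    data={"bus": "CAN", "control_rate_Hz": 50},
    assumptions=["50 Hz update rate not yet verified end-to-end"],
    tests=["TEST-CAN-LOOPBACK-01"],  # illustrative test ID
)
```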

2. Treat Interfaces as Testable Contracts

Most integration issues don’t come from faulty hardware. They come from unverified assumptions about how that hardware interacts. That’s why high-performing teams treat interface test as a precondition for freezing any interface.

An interface isn't truly defined until its behavior has been verified through a test. It's a 'contract' because it specifies an agreement on how two system elements will interact, and it's 'testable' because that agreement can be programmatically verified.

If you can’t test an interface, it doesn’t really exist.

Teams build early test harnesses to check basic behavior:

  • Loopbacks: Simple tests to confirm signal path.

  • Mock subsystems: Simulating a missing component.

  • Simulated loads: Testing electrical or data throughput under expected conditions.

[Figure: Interface Test Coverage Example]

Keep each interface flagged until its tests pass, so that any breach of the interface 'contract' is immediately visible and actionable. Testing itself is iterative, with interface test fidelity progressing from V0 to V2.

The goal isn’t perfect fidelity in V0, but early signal. Test the basic behavior: does data flow? Does the control loop close? Is the voltage within bounds? These early tests are your 'unit tests' for interfaces, proving the contract holds even with preliminary hardware or mockups.
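
A sketch of what those V0 'unit tests' might look like in an automated harness. The bench hooks (`read_bus_voltage`, `send_frame`) and the voltage tolerance are invented placeholders for whatever your testbench actually exposes.

```python
# Early-signal checks for the GNC <-> engine controller interface (V0 harness).

def read_bus_voltage() -> float:
    return 28.1  # placeholder: would come from the bench supply / DAQ


def send_frame(frame: bytes) -> bytes:
    return frame  # placeholder loopback: would round-trip through the CAN harness


def test_voltage_within_bounds():
    v = read_bus_voltage()
    assert 26.0 <= v <= 30.0, f"28V bus out of bounds: {v} V"  # tolerance is an assumption


def test_data_flows():
    frame = b"\x01\x02\x03\x04"
    assert send_frame(frame) == frame, "loopback failed: data does not flow"


if __name__ == "__main__":
    test_voltage_within_bounds()
    test_data_flows()
    print("V0 interface checks passed")
```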

This continuous, automated interface testing builds the equivalent of 'integration tests' in software CI/CD, ensuring that interfaces are 'green' before broader system integration.
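
The 'green before integration' idea can then be reduced to a gate that refuses to proceed while any interface test is failing or has never run. A hypothetical sketch, with invented interface names and results:

```python
# Hypothetical pre-integration gate over interface test results.
interface_results = {
    "GNC <-> Engine Controller": "pass",
    "Payload <-> Clock Sync":    "unknown",  # never run
    "Payload <-> Comms":         "fail",
}

not_green = {name: r for name, r in interface_results.items() if r != "pass"}
if not_green:
    raise SystemExit(f"Blocking integration; interfaces not green: {not_green}")
print("All interfaces green; proceed with integration")
```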

Each interface should have:

  • One or more test cases linked directly to it.

  • Pass/fail status updated in real time.

  • Clear rationale for its freeze status (e.g., “Verified in V2 with test TEST-048-A”).

As tests pass, interface maturity increases. Until they do, the interface should stay flagged. And when underlying assumptions change, all linked blocks and requirements should be notified automatically by your tools.

3. Assign Interface Ownership

Every interface has two sides, but it must have one clear owner. Interfaces are where systems fail, because that’s where components meet and assumptions collide. Assigning clear ownership is critical to preventing issues.

In your tools, each interface should be:

  • Owned by a single Responsible Engineer (RE).

  • Linked to related test cases and architecture blocks.

  • Flagged automatically when underlying assumptions, components, or requirements change.

Now, teams can immediately see:

  • Who owns each interface.

  • Which interfaces changed this week.

  • Where verification gaps still exist.

This prevents undocumented changes, eliminates tribal knowledge, and avoids specification drift. It creates accountability and keeps gaps from getting buried in slides. Without a clear owner, interfaces become 'everyone's problem and no one's responsibility,' leading to late-stage blame and costly rework. The ability to link a mock interface signal test from a testbench directly to its interface object in your tool, showing a pass/fail result, is a powerful demonstration of this.

4. Review Interfaces Weekly

In fast-moving programs, interface review isn’t a phase. It’s part of the operating rhythm. Interfaces evolve with every iteration. What was tentative last week might be ready to freeze today or flagged as broken.

Each cycle, teams check:

  • What changed in our interfaces?

  • What’s still unverified?

  • Which interfaces are ready to freeze based on test evidence?

  • Where are we missing critical test coverage?

Effective tools give teams a clear, real-time view of interface maturity and risk. Each interface is:

  • Tagged by its status (tentative → confirmed → frozen).

  • Linked to the last test result.

  • Tracked for change across milestones.

This makes interface health visible, not buried in a spreadsheet.
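
A sketch of what that view could reduce to, assuming each interface record carries a status, a last test result, and a changed-this-week flag (all fields illustrative):

```python
# Hypothetical weekly interface review summary.
interfaces = [
    {"name": "GNC <-> Engine Controller", "status": "confirmed", "last_test": "pass",    "changed": False},
    {"name": "Payload <-> Clock Sync",    "status": "tentative", "last_test": "unknown", "changed": True},
]

for i in interfaces:
    flags = []
    if i["changed"]:
        flags.append("changed this week")
    if i["last_test"] != "pass":
        flags.append("unverified")
    if i["status"] == "confirmed" and i["last_test"] == "pass":
        flags.append("freeze candidate")
    print(f'{i["name"]:28s} {i["status"]:10s} {", ".join(flags) or "ok"}')
```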

Instead of waiting for a single, chaotic integration event, teams integrate continuously and use interface reviews to make evidence-based decisions, every single week.

[Figure: Review Interfaces Weekly]

5. Common Anti-Patterns

[Figure: common-anti-patterns-chapter-5]

Interfaces are the fault lines of a system. They evolve just like requirements and architecture. Treating them as static leads to hidden risk. Treating them as dynamic and testable makes integration safe, even at speed.

Agile teams don’t delay integration. They practice it from the beginning. It’s the core feedback loop that turns independent components into a working system. The earlier you start, the fewer surprises you’ll face and the faster you’ll ship something real.

You don’t have to pick between flexibility and rigor. With the right system in place, you can move fast and integrate with confidence.

"You don’t wait for perfection to ship something. You deliver incremental capabilities. Then keep improving."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"You don’t wait for perfection to ship something. You deliver incremental capabilities. Then keep improving."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

"You don’t wait for perfection to ship something. You deliver incremental capabilities. Then keep improving."

Adam Thurn

Adam Thurn

Chief Engineer, Space Missions, Anduril

In effective system engineering tools, interfaces are treated as first-class objects. They are owned, testable, traceable, and versioned. No more buried ICDs or assumptions hidden in slides. Each interface evolves with the system and is verified through test, not just paperwork. Every iteration builds interface maturity, not just functionality.

With tools, you can:

  • Define interfaces between blocks with detailed electrical, data, and timing specifications.

  • Link interfaces directly to requirements, architecture blocks, and relevant test cases.

  • Assign clear ownership and confidence levels on both sides of every interface.

  • Track version history, assumption changes, and verification coverage in real time.

  • Review interface maturity weekly: see what passed, what changed, and what’s ready to freeze.

Key Takeaways:

  • Integration begins at V0 and continues throughout the program, not just at the end.

  • Interfaces are treated as dynamic, testable contracts between teams, explicitly defined and owned.

  • Early and continuous interface testing is crucial to validate assumptions and prevent late-stage integration failures.

  • Clear interface ownership and regular reviews are essential to manage evolution, track maturity, and ensure seamless system assembly.


Appendix — Authoring Effective Design Criteria & Requirements

Requirements vs. Design Criteria: A Critical Distinction

In Agile Systems Engineering, requirements aren't static contracts. They're dynamic design criteria that guide iterative development and facilitate continuous verification. The terms are often used interchangeably, but understanding the nuance between "requirements" and "design criteria" is vital in an iterative context.

Requirement (The "What"):

  • Definition: A higher-level, system-wide constraint or capability derived from Top-Down Intent (e.g., mission, customer needs, regulations). It defines what the system must achieve.

  • When to Use: Defined early by Systems Engineers or program leadership. Relatively stable, though subject to reconciliation as understanding evolves.

  • Example: "The power storage system shall provide continuous energy supply for a minimum of 48 hours during grid outages."

Design Criteria (The "How"):

  • Definition: Actionable, testable specifications owned by Responsible Engineers (REs). They detail how a specific subsystem or component will meet a broader requirement, acting as a testable contract.

  • When to Use: Defined by subsystem REs as they develop solutions. These are negotiable and encourage trade-offs (e.g., mass, cost, power, schedule) within higher-level requirements.

  • Example: "The battery cell shall provide 200-220 Wh/kg energy density, measured at 25 °C and a C/2 discharge rate."

Why the distinction matters: Requirements establish the overall mission. In contrast, design criteria provide the specific, actionable targets that drive iterative build-test-learn cycles. This distinction is crucial because design criteria, being more granular, are where negotiation and trade-offs are actively encouraged among subsystem teams. This flexibility allows engineers to optimize for critical factors within the boundaries set by the higher-level "what." It's common for a single high-level requirement to break down into many detailed design criteria, each owned by an RE.
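
Because design criteria are written to be testable, they can often be expressed directly as checks against measured data. A sketch using the battery-cell criterion above; the measurement values and condition tolerances (±2 °C, ±0.05 C) are invented for illustration.

```python
# The criterion "200-220 Wh/kg at 25 °C and a C/2 discharge rate" expressed
# as a check against (invented) cell test data.
measurements = [
    {"cell": "SN-003", "temp_C": 25.1, "rate_C": 0.50, "wh_per_kg": 207.4},
    {"cell": "SN-007", "temp_C": 24.8, "rate_C": 0.50, "wh_per_kg": 196.9},
]

def meets_criterion(m: dict) -> bool:
    # Tolerances on test conditions are assumptions, not part of the source criterion.
    at_conditions = abs(m["temp_C"] - 25.0) <= 2.0 and abs(m["rate_C"] - 0.5) <= 0.05
    return at_conditions and 200.0 <= m["wh_per_kg"] <= 220.0

for m in measurements:
    print(m["cell"], "PASS" if meets_criterion(m) else "FAIL")
# SN-003 PASS; SN-007 FAIL (below the 200 Wh/kg floor)
```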

[Figure: design-criteria-example]

Download the Full Quick Guide (PDF)

A quick guide to writing clear, testable design criteria. Learn how fast-moving teams evolve requirements with traceability, ownership, and built-in verification.
