
Pari Singh
Oct 14, 2025
Handbook to Iterative Systems Engineering
Volume II — Leadership in Agile
Learn how to apply agile leadership to hardware engineering through effective scoping and concurrent engineering that meets regulatory standards. Drawing on the fastest, most innovative companies, learn to adapt quickly and deliver superior results.
Chapter 5 — Scoping in Practice: From Minimal Viable Technology to MVP
The Role of Judgment
When it comes to engineering complex systems, how do you decide what to build first? How do you balance ambition with the practical need to ship something? The secret lies in scoping: a practice that requires sharp judgment, a deep understanding of time as a resource, and the courage to confront failure head-on.
Scoping is a test of judgment. It’s about stripping down to the essentials. To the core problem. And pushing back on anything unnecessary.
Done right, scoping drives speed, learning, and progress. Done wrong, it leads to bloated timelines and missed opportunities.
One idea the leaders at SpaceX use often is the “future problem.” If it’s a “future problem,” we don’t talk about it today. Today is about making real, tangible progress on the 3-4 things that will give us new information. If we do that, we’re in the race, and tomorrow will come.
This chapter explores how leading companies like Astranis, Planet, and SpaceX reduced scope to the bare minimum, used scrappy prototypes to learn faster, and proved that shipping frequently—rather than perfectly—is the key to long-term success.
Astranis’ and Planet’s Minimal Viable Satellites
Today, Astranis builds complex geostationary satellites, a notoriously challenging domain. But they didn’t start there.
Their first cubesat, built for Y Combinator’s Demo Day, came together in just three weeks. The “cleanroom”? PVC pipes and shower curtains.
Later, to test their software-defined radio technology, Astranis launched LEO satellites. These early missions helped them iterate on full mission profiles, get more reps for the team from each launch, and steadily progress to a more challenging higher orbit.
Planet’s first satellites were famously scrappy. They used off-the-shelf antennas, purchased from consumer electronics stores, and attached them to their satellites. If this had been a NASA project, months—if not years—would have been spent designing custom antennas with extensive validation and verification processes.
By starting simple, Planet proved their concept worked. They reduced scope to the essentials and learned what they needed to iterate on for future versions.
SpaceX made the same reduction in early scope with Grasshopper. It wasn’t built to reach orbit—it was built to test and refine vertical takeoff and landing capabilities. By focusing on this narrow problem, SpaceX gained invaluable experience in reusability, paving the way for the Falcon 9 and Starship programs.
The core idea here is to get through the full cycle, from problem space to solution space, as fast as possible: an MVP of the whole mission profile. We’ll see later why this in particular is so important.
Counter Examples
In contrast, programs like NASA’s Space Launch System (SLS) illustrate the dangers of excessive scoping. With over $40 billion spent and no reusable capabilities, SLS highlights the risks of prioritizing perfection over progress.
Scoping effectively is an exercise in judgment. It requires answering questions like:
What is the true problem space we need to address right now?
How do we know we’ve reached the “right” moment to stop designing and start shipping?
What problem do we need to de-risk now? Is it building team muscle or creating evidence for investors?
The answer lies in continuously reducing scope. Push back on anything unnecessary and pare your goals down to the absolute essentials. Avoid making your MVP too big—focus on de-risking the most critical challenges first.
Learning Is the Real Goal
Scoping isn’t just about delivering a product—it’s about learning. Start with small, manageable goals and expand from there.
For example, SpaceX didn’t start Starship by aiming for orbit. They began with up-and-down tests to refine basic systems. The learnings from these tests weren’t just technical—they included lessons on how to deliver a product.
Scoping is about judgment, time, and overcoming the fear of failure. It’s about knowing what to build now versus later, embracing imperfection, and learning with every iteration.
Teams that master scoping will outpace their competitors, delivering faster, learning more, and building better products. The question isn’t, “Can we get it perfect?” It’s, “What can we ship today?”
The Fear of Failure
A major hurdle to adopting an MVP mindset is fear of failure. Engineers often resist “crappy” first versions to avoid looking subpar. But this mindset is counterproductive.
A three-year roadmap might feel safe, but it’s often an excuse to delay shipping. Real learning and progress happen when teams take risks and ship earlier than they’re comfortable.
So what happens when something goes wrong? Easy. Just make another decision. Immediately. What happens if that goes wrong? Make another. If that one goes wrong, you’re probably in the wrong business anyway.
Move the mental model from “what’s the perfect solution to the equation” to “shots on goal.” Play darts, not chess.
Time as a Requirement
Treat time as fixed and force as much scope into it as you can, not the other way around. The biggest cost in development is time (a large team multiplied by salaries).
On the cost of failure: if our MVP fails, we lose $100,000. OK, but how much does it cost if your team ships a month late? $3,000,000?
Ask yourself: With six months, what would you ship? Time constraints force teams to prioritize and focus on what matters most.
Instead of waiting to ship a perfect product next year, aim to ship five imperfect ones this year. Each iteration builds muscle—in your team, your processes, and your ability to deliver under pressure.
When scoping, time must be treated as a critical requirement. Every delay costs more than just money—it slows down learning.
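The arithmetic above can be made concrete. A minimal sketch, where the team size, salary, and failure-cost figures are illustrative assumptions rather than real program numbers:

```python
# Compare the cost of a failed MVP against the cost of schedule slip.
# All numbers are illustrative assumptions, not real program figures.

def monthly_burn(team_size: int, avg_loaded_salary_per_year: float) -> float:
    """Rough monthly burn rate: team size times loaded salary, per month."""
    return team_size * avg_loaded_salary_per_year / 12

def cost_of_delay(months_late: float, burn_per_month: float) -> float:
    """The cost of shipping late is dominated by the team's burn rate."""
    return months_late * burn_per_month

burn = monthly_burn(team_size=150, avg_loaded_salary_per_year=240_000)
mvp_failure_cost = 100_000          # hardware lost if the MVP fails
slip_cost = cost_of_delay(1, burn)  # one month of schedule slip

print(f"Monthly burn:       ${burn:,.0f}")
print(f"MVP failure cost:   ${mvp_failure_cost:,.0f}")
print(f"One month of delay: ${slip_cost:,.0f}")
# A single month of delay dwarfs the cost of a failed prototype.
```

Under these assumptions, one month of slip costs thirty times more than the failed MVP, which is why shipping early and risking failure is usually the cheaper bet.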
Building Repetitions: Why Shipping Matters
The biggest mistake teams make is assuming they know how to execute on something they’ve never done before.
Shipping continuously builds organizational muscle. It teaches teams how to:
Deliver products under tight constraints.
Handle failure gracefully and incorporate feedback.
Develop a culture of accountability and progress.
This iterative process is just as important for leadership and investors as it is for engineers. It builds trust that your team can deliver value, even under challenging conditions.
Even experienced teams are effectively starting from scratch when tackling a new challenge. This is why:
Short Cycles Matter: The faster you ship, the faster you learn.
Hardware vs. Software: Unlike software, hardware is tested against physics, not users. This makes iterations slower but even more critical.
High-Quality Product, Variable Process
99% of problems occur at the integration level, where subsystems must work together. That’s when you finally put everything together and fly. That’s when you discover how much real testing matters, and how little a rigid process can guarantee. You learn more by building and testing than by following a perfect set of rules. By starting small and testing early, teams can identify and address these issues before they spiral out of control.
This leads to a fundamental tradeoff in scoping:
High-Quality Product + Variable Process
High-Quality Process + Variable Product
You can either aim for a high-quality product by letting your process vary, or aim for a rigid process with the risk of ending up with a subpar product. You simply can’t have both. European companies often default to high process quality and end up sacrificing product agility in the process. Meanwhile, scrappy teams like SpaceX or Anduril allow their processes to vary freely, relying on iteration to reach a high-quality product. It’s a stark divide between next-generation teams and traditional aerospace.
Programs like Starliner prove that following elaborate procedures to the letter doesn’t guarantee success. You can still end up with costly oversights because you become so buried in steps and checklists that you lose sight of the actual engineering. In contrast, agile teams adopt an approach that feels messy in the beginning, but they deliver meaningful hardware quickly. At an early stage, their job isn’t to meet DoD-grade checks; it’s to ship prototypes, test them, fail fast, and refine.
That means you don’t apply the same stringent new product introduction processes to every single subsystem. Sometimes you can cut metal right away and learn by flying. A real prototype—no matter how rough—usually teaches you more than a year of design reviews. But once you have traction and the product starts to solidify, that’s when you layer in more formality, checklists, and rigorous validation. Process is there to help you move faster, not slow you down. Too much formality too soon hurts speed. Too little formality too late can undermine safety and reliability.
A messy process at the start is normal—and often good. Early-stage teams shouldn’t run DoD-grade checklists. They need to get prototypes out and learn quickly. That’s how you build a real product that works, is safe, and is easy to iterate.
Vary Process with Context: Don’t apply rigid new product introduction (NPI) protocols where functional safety doesn’t demand them, and don’t apply spaceflight-level rigor to a simple prototype.
Remove Process Early On: At the NPI or PDR stage, skip the heavy docs and cut metal. A few extra flights teach more than a thousand design reviews.
Add Rigor When You Have Something Real: Use process to accelerate progress, not slow it. Early products can be over-constrained by bureaucracy. Later products, once they have traction, may need more formal checks.
In short, you’re choosing between a high-quality product with a process that adapts to the context, or a highly regulated process that risks delivering something suboptimal. If you want to compete with the new wave of agile companies, it’s clear where you need to land.
Chapter 6 — Concurrent Engineering: Working in Parallel with Evolving Requirements
In a world where engineering challenges are growing exponentially in complexity, the old linear models of "A → B → C" no longer work. Today’s cutting-edge teams—from aerospace to automotive—are embracing concurrent engineering: a way of designing where problems are tackled simultaneously rather than sequentially. The result? Faster timelines, better collaboration, and fewer surprises late in the game.
But designing concurrently isn’t just about doing more at once—it’s about doing it right. As a leader, you must create the environment for real concurrency. It’s easy to say, “We’re doing concurrent engineering!” But are you sure? Many teams claim they are, yet they remain stuck in a watered-down waterfall. They still toss requirements over the fence, wait too long for feedback, and scramble to integrate at the end. This chapter explores the prerequisites, cultural shifts, and systemic changes organizations need to move beyond lip service and into real concurrency.
Let’s dive in.
Prerequisites
Design Criteria, Not Endless Requirements (See Chapter 7): The traditional approach of drafting thousands of detailed requirements, treated as unbreakable contracts, doesn’t align with concurrent design. Instead, teams need design criteria—flexible guidelines that empower engineers to innovate without bureaucratic drag.
Empowerment Matters: As a leader, you must give Responsible Engineers (REs) the authority to create and own their design criteria. This ensures those criteria remain grounded in real physics and data, not stale checklists.
Fewer is Better: Teams bogged down by thousands of rigid requirements won’t move quickly. Prioritize the critical few that drive 80% of the value. If it doesn’t shape mission-level performance, it’s negotiable.
Culture of Direct Collaboration: Concurrent engineering thrives on real-time, face-to-face communication. As a leader, you must enable direct access between subsystem owners. This prevents the siloed decision-making that can derail a fast-moving program.
Instant Collaboration: Instead of waiting for formal sign-offs, the best teams walk across the room (or hop into an online workspace) to solve problems. Latency kills speed. Leadership should eliminate these bottlenecks wherever they appear.
Authority Sits Where the Work Is: When Responsible Engineers have the power to decide, they move faster and care more about performance than process. This shift is cultural and begins at the leadership level.
Shorten the Feedback Cycle: As a leader, your goal is to minimize the gap between a design change and its impact on downstream teams. Synchronizing every two weeks—or even more frequently—keeps the momentum going. If it takes weeks to notice someone modified a vital interface, you’re essentially operating in a waterfall mode.
Throughput
Throughput is the volume of information that moves between people. Low throughput causes confusion, blind spots, and misalignment. Effective leaders increase throughput by ensuring:
Teams share relevant context at the right time
Communication is open, transparent, and efficient
People trust each other enough to exchange ideas freely
When delay is low and throughput is high, progress accelerates. Teams coordinate smoothly. Minor issues get flagged before they become major problems. Decision-making gets sharper.
Great leaders don’t just manage—they optimize communication across the organization. They create structures where ideas circulate, feedback is immediate, and no one is stuck waiting for clarity.
Delay
Delay is the time it takes for information to travel, decisions to be made, or problems to surface. High delay derails progress—people wait, guess, or lose momentum.
Strong leaders minimize delay by building environments where:
Feedback moves swiftly
Issues are spotted early and resolved promptly
Communication loops are short, direct, and clear
Strong leaders running cross-functional teams establish structures that keep ideas flowing, make feedback immediate, and ensure nobody is left waiting for clarity.
Problem vs. Solution Space
In concurrent engineering, it’s crucial to distinguish between the problem space (the goal to be achieved) and the solution space (how to achieve it). This separation avoids tunnel vision and gives teams the freedom to develop creative ideas.
Problem Space
The problem space defines what needs to be accomplished. It contains the design criteria—broad guidelines that outline desired outcomes without dictating the specific method. This flexibility keeps teams aligned with overarching objectives while allowing them to explore a wide range of approaches. Focusing on a single solution too soon can lock teams into suboptimal paths.
Solution Space
The solution space is where teams determine how to fulfill those goals. By keeping it open and iterative, engineers can test and refine their ideas continuously. It’s also vital to empower Responsible Engineers (REs) to write their own design criteria, preserving adaptability and encouraging innovation.
Leaders, meanwhile, manage mission-level requirements to provide overall direction without stifling creative thinking. Concurrent engineering relies on continuous feedback loops, where designs evolve in tandem with ongoing testing. By emphasizing outcomes rather than rigid processes, engineers can focus on meeting core design criteria effectively.
Concurrent engineering teams must integrate regulatory constraints into their design criteria from the start. For instance, engineers at SpaceX refer to their payload guide to ensure compliance while maintaining speed and flexibility. If you wait until the end to validate regulatory requirements, you’ll sink your timeline.
Design vs. Building: An Intertwined Process
In a concurrent engineering environment, design and building aren’t separate phases—they form a single, integrated loop. The traditional mindset of “design, then build, then test” is simply too slow.
Continuous Testing: There’s no such thing as “I tested it, and it’s done.” Systems must be tested against evolving criteria at every stage.
Daily Baselines: Concurrent teams baseline their designs daily, creating a living, adaptable framework. This approach minimizes the risk of late surprises and encourages ongoing innovation.
A test-driven culture further enhances adaptability by emphasizing continuous testing and iterative improvement. For example, Rivian runs over 95,000 tests every night, treating each failure as an opportunity to learn rather than a setback. Embracing failure in this way accelerates progress without fear.
Separating the problem space from the solution space not only improves design quality—it also fosters a culture of agility. Teams remain focused on desired outcomes, yet they have room to innovate and ensure they’re solving the right problems in the most effective ways.
The Race to Manufacture
In concurrent engineering, the subsystem that finishes first often sets the constraints for the rest. Speed becomes a competitive advantage—if your team is ready before everyone else, you define the parameters others must follow. This dynamic fosters a healthy “race to be first,” where teams eliminate waste, solve problems quickly, and share breakthroughs.
Building “reps” is crucial here. Each round of rapid development familiarizes teams with cross-functional collaboration and accelerates their comfort with change. Over time, these reps compound. A team that has cycled through multiple fast turnarounds will outpace one tackling concurrent engineering for the first time.
Deadlines act as forcing functions in this environment. When teams know they must show progress on a specific date, they’re compelled to deliver incrementally and learn continuously. Treat time as a design requirement, using tight schedules and smaller sprints to maintain momentum. Under these conditions:
Urgency discourages over-polishing; teams focus on core functionality over optional perfection.
Blockers can’t stay hidden; problems surface early, when they’re easier to fix.
Time pressure forces teams to deprioritize non-essential tasks, preventing bloat.
As a leader, you are the architect of these systems and cultures. Set clear deadlines, remove blockers, empower engineers, and celebrate small wins that fuel bigger successes. By doing so, you’ll create an organization that thrives on speed, adaptability, and continuous improvement.
The Role of Design Reliability
At the heart of concurrent engineering is a focus on design reliability—systems that perform under real-world conditions, not just in lab tests.
Avoiding Fragility: A design that shines in controlled tests but fails under stress isn’t a success. As a leader, insist on reliability as a core principle.
“No Such Thing as ‘I Tested It’”: Testing isn’t a one-time event. Systems are tested daily against changing requirements and real data. This constant validation drives incremental improvements.
The Role of Leadership in Concurrent Engineering
Your job is to set mission-level requirements while keeping flexibility at the subsystem level. You must also remove any structural obstacles to collaboration. Here’s what that looks like in practice:
Authority to the Responsible Engineers Make sure decisions are made by the people who own the subsystems. When REs have real power, they focus on performance over busywork.
Minimal “Requirements,” Maximum Design Criteria The team knows what truly matters—thrust, mass, thermal constraints—and everything else is fair game. This fosters creativity and real accountability.
Concurrent from Day One Hardware, software, propulsion, avionics—everyone starts together. They shape the mission architecture as it evolves, rather than waiting for a locked-down spec.
Real-Time Collaboration Problems can’t hide. If an engineer spots a risk, they walk over to the relevant owner or jump on a video call. Bottlenecks are flagged and fixed early.
Daily Design Baselines You can’t wait for monthly reviews. The design is a living document, updated as new data emerges.
Digital Information Sharing If it takes two weeks to notice design changes, you’re failing at concurrency. Modern tools are improving, but many teams still build custom solutions to ensure instant updates.
In short, your role is to balance speed with coherence. You don’t reject process; you make process serve the product, not the other way around. That’s how you lead teams to real concurrent engineering—where design, testing, and feedback all happen in parallel, and the entire organization moves faster than the competition.
Concurrent Engineering in Action: A Quick SpaceX Example
Start Early: Propulsion, avionics, thermal, and life support kick off on day one. They don’t wait for a “requirements doc.” They shape the architecture in real time.
Own the Bigger Picture: Responsible Engineers don’t just see the top-level mission goals; they own them. This sense of accountability drives better decisions.
Daily Baselines: The design is a moving target, but it’s captured every day. Everyone knows the latest specs, so integration never halts.
“Anti-Process”? Not at all. They focus on outcomes, not rigid checkpoints. That’s the essence of true concurrency.
When done well, concurrent engineering delivers tangible results: shorter lead times, improved product quality, and teams that learn at every step. As a leader, it’s up to you to create the structures, culture, and trust that make genuine concurrency possible. If you can do that, you’ll find your organization building better products—faster, smarter, and with fewer painful surprises at the eleventh hour.
Chapter 7 — Requirements vs. Design Criteria
We discussed Design Criteria and their cultural implications in Volume I. You should read “Requirements: A Modern Approach” before starting this chapter.
The distinction between requirements and design criteria isn’t just technical—it’s psychological.
Requirements carry a sense of rigidity: hard, unchangeable constraints (e.g., regulatory demands), often viewed as commandments that engineers can’t question.
Design Criteria, by contrast, are seen as flexible guidelines—inputs rather than fixed constraints. This subtle shift in terminology empowers engineers to challenge assumptions, propose trade-offs, and iterate more effectively.
Traditional Systems Engineering: The JPL Trap
Before we jump in, let’s baseline what’s to come against the legacy systems engineering model and its common failure points.
The Rigid Requirements Approach
Programs like NASA’s SLS start with Requests for Proposals (RFPs) and spend years—sometimes decades—finalizing tens of thousands of requirements. The goal is to eliminate all risks upfront. Once requirements are locked, billions of dollars are allocated, and program management becomes a Gantt-chart exercise. Changes are discouraged, creating a brittle system that struggles to adapt to unexpected challenges.
The Cost of Rigidity
Legacy systems often experience delays and cost overruns because:
Assumptions made early in the process are often wrong.
Changes in flight data or external conditions invalidate rigid requirements.
Delays compound as programs strive for near-perfect reliability before shipping.
Modern counter-examples: Many new, modern space companies are falling into a similar perfection trap. By aiming for a 95% probability of success and then extending timelines to reach 99.7%, they risk turning a 5-year program into a 10-year ordeal. Each delay justifies adding more requirements, which only increases complexity and cost.
Design Criteria vs. Requirements
As teams become more cross-functional and iterative, a new way of handling requirements has emerged. The best teams no longer use the IBM DOORS/1000-page document model.
Let’s break down two types of requirements by stakeholder:
External Requirements [customer/regulatory/suppliers/partner]
Internal Requirements [sub-teams]
External Requirements are often called Level 0/1 or Mission Requirements. Internal Requirements cover Levels 1/2 (system, sub-system, component).
Design Criteria (Internal Requirements)
These are softer, team-to-team agreements that make the mission work. They’re frequently renegotiated and owned by the engineer responsible for closing them out, often called the Responsible Engineer.
We use the term “Design Criteria” instead of “internal requirements” because it promotes a collaborative mindset. It also promotes a culture where requirements serve the mission profile, not the other way around.
Ownership: Assign one owner per design criterion to ensure accountability and clarity. This approach fosters a culture where engineers are empowered to innovate while minimizing the chaos of constant changes.
Design Criteria Should Be Written Immediately and Continuously Updated
The paradox of modern systems engineering: To start designing, you need a requirement. To write a requirement, you need the design.
How can you do this with incomplete information? You have to guess. Some call this an assumption or hypothesis.
The only way to break the cycle is to start somewhere and iterate. Your best guess will usually be in the right ballpark, enabling teams to move forward. Ideally, this guess comes from the Responsible Engineer.
It’s not enough to document design criteria. You also need to record the underlying assumptions driving those criteria. When assumptions change, design criteria must be updated to avoid cascading errors.
At SpaceX, design changes constantly—flight profiles, trajectories, and loads are in perpetual flux. This creates a ripple effect across subsystems, requiring teams to rerun loads analyses and update designs.
Explicit Labels: Teams should explicitly label design criteria, setting the expectation that these will change as the project evolves. Make it clear when a design criterion is an “assumption.” Teams should regularly sync and update each other on changes.
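One lightweight way to make these labels and owners explicit is to track each design criterion as a small record. A minimal sketch—the field names, statuses, and example values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignCriterion:
    """A team-to-team agreement, owned by one Responsible Engineer."""
    name: str
    value: float
    unit: str
    owner: str                  # the Responsible Engineer accountable for it
    status: str = "assumption"  # "assumption" until backed by analysis/test
    rationale: str = ""         # the underlying assumption driving the number
    updated: date = field(default_factory=date.today)

    def validate(self, rationale: str) -> None:
        """Promote the criterion once evidence backs the number."""
        self.status = "validated"
        self.rationale = rationale
        self.updated = date.today()

# Example: a dry-mass budget, explicitly labeled as an assumption.
dry_mass = DesignCriterion(
    name="stage-2 dry mass", value=110.0, unit="kg",
    owner="jsmith", rationale="scaled from v1 structure, minus 10% for new alloy",
)
assert dry_mass.status == "assumption"  # downstream teams know this will change
```

The point isn’t the tooling; it’s that every criterion carries its owner, its assumption, and its status, so when an assumption changes, everyone can see which numbers need rework.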
Design Criteria Should Be Traded Off Across Iterations
Requirements should not drive design blindly. Inputs must constantly be revisited and updated as conditions evolve.
Mass margins are a good example. You may find some buffer to give to another team.
Time is a critical factor in requirements. You may hit your 100kg mass requirement but take six months to do it. Or, you could hit 110kg and save six months. Design Criteria make these trade-offs explicit.
By actively reevaluating these parameters, teams can quickly identify synergy or conflicts across multiple subsystems. This ensures no single requirement inadvertently constrains the broader system.
Design Criteria Drive Ownership to Engineers (Responsible Engineers)
If requirements are continuously traded off, who’s best suited to handle it? The engineer who owns the part. They are the Responsible Engineers—they own both the part and the requirement.
Every requirement owned by a person provides clear accountability. Talk to the Responsible Engineer before starting analysis; you might find circumstances have changed or buffers are off.
Design Criteria also empower engineers, giving them real ownership of the system. This mindset is crucial.
Having decision-making in the hands of the RE fosters a culture where changes are made swiftly and with technical depth. It also closes the loop between design accountability and real-world performance.
External Requirements (customer/regulatory/suppliers/partners)
The ideal scenario is to treat external parties as collaborators. If you can negotiate as freely with external parties as internal teams, you’ll be faster and more successful.
For example, you could present a customer with a trade-off: “We can deliver 10,000kg to LEO on time, or 8,000kg a year earlier. Is weight or schedule more important?”
Traditional systems engineers might say requirements are fixed and must be locked down early. This isn’t true. Development cycles are long, and things change over time.
The same applies to regulation. If you’re designing a nuclear reactor, it’s in your best interest to ensure it’s safe. Sometimes, regulators may have additional needs that aren’t relevant. In these cases, treat top-level requirements traditionally but coach customers and regulators toward a more flexible approach.
Keep internal requirements as desirements, and expand that mindset as far as possible.
Summary of differences between requirements and design criteria
Aligning Requirements and Design Criteria
Here’s how modern engineering organizations can avoid the pitfalls of legacy approaches:
Set Real Requirements
Identify the real requirements—those that absolutely cannot change. These are often fewer than you think (e.g., 10 at the program level, not 1,000). Examples include strict regulatory constraints and mission-critical parameters.
Define Design Criteria
Label all other inputs as design criteria, explicitly setting the expectation that they are subject to change. Use design criteria as inputs to models or performance targets, connecting them to simple analyses whenever possible (e.g., thrust linked to Delta-V).
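The “connect criteria to simple analyses” point can be sketched with the ideal (Tsiolkovsky) rocket equation, which ties mass and specific-impulse design criteria directly to a mission-level delta-V. The vehicle numbers below are illustrative assumptions, not real hardware:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, wet_mass_kg: float, dry_mass_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_wet / m_dry)."""
    return isp_s * G0 * math.log(wet_mass_kg / dry_mass_kg)

# Design criteria as model inputs (illustrative numbers, not a real vehicle):
isp = 340.0      # engine specific impulse, s
wet = 120_000.0  # fully fueled stage mass, kg
dry = 8_000.0    # dry-mass design criterion, kg

baseline = delta_v(isp, wet, dry)
heavier = delta_v(isp, wet, dry + 500)  # the cost of 500 kg of mass growth

print(f"baseline dv: {baseline:,.0f} m/s")
print(f"with +500kg: {heavier:,.0f} m/s ({baseline - heavier:,.0f} m/s lost)")
```

With the dry-mass criterion wired into the model like this, a Responsible Engineer can immediately quantify what a proposed mass change costs at the mission level instead of debating it in the abstract.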
Streamline the Process
Program-Level Requirements: Systems engineers write down the top-level requirements, associating each with a confidence level (e.g., “best guess”).
Design Criteria by REs: Responsible Engineers (REs) define design criteria based on program-level requirements, iterating in close collaboration with other teams.
Side-Shuffle Between Teams: Teams validate and adjust their criteria against each other, ensuring alignment across subsystems.
Bi-Weekly Cadence: Synchronize every two weeks to update and validate assumptions, minimizing feedback loops.
Just start building without hard requirements
Cost is driven by time, not by metal. SpaceX probably spends $250M+ a month (a made-up number), so $8M+ a day. If you wait to pin down full requirements to the penny, you’ll have spent $500M without shipping or learning anything.
Skate to where the puck is going to be.
What are the real requirements for Starship? Cost per kg, launch rate, full reusability, Mars.
Re-affirm the top-line requirements, and trust your engineers to look after the bottom line. Requirements will come, but probably after we’ve started designing. Expect mistakes, and try to call it learning.
Example: SpaceX’s Design Review Model
SpaceX achieves agility by integrating requirements and design criteria into a dynamic, stage-gated process:
Back-of-the-Envelope Calculations: Initial high-level analysis to determine feasibility.
Preliminary Design Review (PDR): Within 3-6 months, teams present rough designs and concepts.
Critical Design Review (CDR): Detailed CAD models and geometries are finalized. This is the go/no-go moment for manufacturing.
Flight Readiness Review (FRR): Final checks before launch, ensuring all systems are go for flight.
This iterative approach allows SpaceX to adapt quickly while maintaining rigorous standards.
Conclusion
The future of engineering lies in embracing change. By distinguishing between hard constraints (requirements) and flexible inputs (design criteria), teams can build faster, innovate more effectively, and adapt to an ever-changing landscape.
SpaceX’s approach—where requirements serve the mission, not the other way around—is a model for modern systems engineering. For legacy organizations stuck in the JPL trap, the lesson is clear: stop pretending nothing will change, and start building systems that thrive on it.
Chapter 8 — Regulatory and Agile: Iterative Certification for Real-World Safety
“Using agile for safety-critical systems will get people killed.” Wrong.
People often say “Agile vs. Regulatory” or “Agile vs. Safety,” implying they are in conflict. This is a misconception. The misunderstanding comes from a flawed assumption: that iteration means “just do stuff and see what happens.” It doesn't.
Consider SpaceX Dragon as an example.
When people see the process and rigour required to go from Crew Dragon V3 to V4, they assume the same level of detail existed for Dragon V1. It didn't.
The assumption is understandable: people look at the end state and conclude they must start there, designing everything perfectly from day one. That is incorrect. It all comes down to scoping and iteration.
The biggest mistake? Designing backward from regulations. Certification docs from the FAA don’t design aircraft.
Instead, we advocate for designing iteratively and bringing in regulatory considerations as you go. But it’s a fine balance. You also shouldn’t wait until the end to validate regulatory requirements or you risk sinking your timeline.
Even teams that take an agile approach fall into a set of common traps when designing safety-critical systems. Let's walk through how to avoid them.
Iterations in the Context of Certification
Iteration means taking calculated risks early in development to reach production-level safety faster and at lower cost.
Each step is planned to work the first time—though the program as a whole might not work perfectly at first.
Most agile companies progress through something like the following iterations:
V0: Something that looks good (e.g., for investors) but doesn't work at all.
Tech Demo: A functioning system or subsystem to prove you're real. This is not the end goal, but it's real engineering done quickly.
Integrated Iterations (V1, V1.1, etc.): Continuously test and improve the core system (e.g., core engine tests, cooling system, powertrain).
Complete System Prototype: More advanced, closer to a functioning product.
Real Product: Something that truly works. This is what you might bring to the FAA, though it's still not "flyable" or "road legal" yet.
Certified Product: The final product that meets regulatory requirements.
Achieving these iterations requires more than just good engineering. Three key teams must work together along this path.
Engineering: Engineers primarily work in CAD. They are the people who do the actual hands-on development.
Systems Engineering: Systems engineers live in requirements tools (e.g., Flow), Excel, ICDs, and meetings. They determine what to build, why it's needed, and how everything integrates.
Certification: Certification teams work in FAA/EASA documents, forms, and reports. They prove the system is safe to regulators and keep them satisfied—often years before a final product is complete.
However, these three teams operate on very different cadences and speak very different “languages.”
A natural question is, “How do we bring them together into one integrated unit?”
Short Answer: Don’t.
Reason: They are doing three different jobs. You don't want them to become a single group, because each role demands a different approach and focus. Yet, in order to succeed:
Engineering & Systems Engineering should operate on the same cadence (requirements → design → iterate).
Certification runs on a different cadence (what do we need to start doing now to satisfy regulators long-term?).
When teams ignore these differences or try to force a single, unified process, they often derail agile efforts. Common failure points include misaligned timelines, over-documented initial designs, and late regulatory surprises. Leadership is key to managing these diverse cadences. By guiding each group to focus on its unique objectives—and aligning them at key milestones—agile methods can thrive, even in safety-critical projects.
Prototype → Certified Product: Parallel Paths
Designing safety-critical systems requires more than just good engineering—it demands early and ongoing attention to the certification process. Leadership sets the tone by integrating certification needs into each step of development, rather than tacking them on at the end. This ensures compliance and safety grow in tandem with the product.
[Visualisation of the parallel paths and different cadences of engineering/systems and certification]
Engineering/Systems: Move quickly with iterative designs, adding more requirements and detail over time.
Certification: Works in parallel, preparing the evidence and regulatory framework needed to certify the final product.
Treat Iterations as Configurations/Variants
Early prototypes should have only a handful of requirements (e.g. thrust, flight, power on/off). With each iteration, teams add more fidelity, data, and proof. By the time you reach a certified vehicle, you may have 10,000+ requirements.
That doesn’t mean you should start with 10,000.
Each iteration increases in complexity and fidelity.
Early prototypes (2–3 requirements).
Successive iterations add more capabilities and more proof.
Certified vehicle (10,000+ requirements).
Implementing Agile Certification in Practice
Hire Certification People Early: They are on the critical path, ensuring safety and compliance are integrated from the beginning. Delaying their involvement risks expensive rework and design drift.
Certification Won't Be Doing the Main Bulk Immediately: Early on, they set up frameworks for documentation, testing, and risk assessment. This groundwork keeps the project aligned and prevents last-minute chaos.
Establishing two parallel tracks helps keep design flexible while meeting regulatory requirements.
One track handles Design Requirements (Design Criteria), which is softer and evolves rapidly. The other focuses on Certification (Regulatory Requirements), which is more rigid and aligned with specific standards (FAA, EASA, NASA, etc.). By mapping these requirements back to the systems, both design and certification stay connected without blocking each other’s progress.
Design Requirements Project: Begin as one project and split later if it grows too large. Expect the details and fidelity of requirements to change with each iteration, reflecting lessons learned and new engineering data.
Certification Project: Centralize all certification activities to maintain clarity on standards. Mark each requirement with the relevant regulatory framework and capture the complete set of obligations, ensuring none are overlooked.
Embrace Parallel Development: You can't certify what doesn't yet exist, so certification and design must advance in parallel—often for years—until the real product is ready. This approach allows each track to focus on its core tasks while still contributing to the overall safety and compliance of the final product.
Leadership’s Role in Agile Certification
Provide Ongoing Support: Leadership should ensure certification experts have the resources and authority to guide design decisions, even if their workload is light at first.
Champion Cross-Functional Communication: Encourage direct collaboration between Engineering, Systems Engineering, and Certification. Each team must understand how its work impacts the others.
Align With Regulatory Timelines: Leaders should anticipate regulatory checkpoints, funding, and staffing needs so that certification doesn’t become a bottleneck.
Emphasize Transparency: Create a culture where findings—both successes and failures—are shared openly, building trust with internal teams and regulators.
Recommendations
Develop a Real Relationship with Regulators. Bring engineering leads, systems engineers, and certification folks together in conversations. Build trust by maintaining open lines of communication and sharing progress regularly.
Go Above and Beyond the Reports. Provide regulators access to core design and engineering data, not just final deliverables. Show your work: share test data, assumptions, and engineering rationale so regulators see the depth of your safety culture.
Track Your Assumptions. Don’t just track requirements; track the design inputs (NQA1). Document where these inputs come from and how they evolve. These assumptions often shape the entire safety case, so clarity is vital.
Additional Notes and Clarifications
Throughout each iteration, (1) capabilities gradually increase, and the role of a Requirements Engineer (RE) spans systems engineering, project management, and hands-on design.
This (2) iterative method is more common in space (like SpaceX) but can still work in traditional aerospace, where it’s sometimes seen as harder to adopt.
Aim for (3) fixed reliability, increasing capability. Despite the need for flexibility, “iteration” doesn’t mean “just do stuff and see what happens.” Each step is planned to work on its own, though the overall program might not be flawless at first.
Within each iteration, there’s often a (4) mini-waterfall—from requirements to design, build, and test. Turbulence is normal because you’re testing assumptions by doing, not just by analysis.
Avoid the (5) analysis trap: perfecting every calculation can slow real progress. Often, it’s better to build something and learn from tangible data than to rely on endless models.
In summary
Iterate quickly on engineering and systems. Keep certification in the loop and evolving. By the time you reach your final product, you’ll have all the data and requirements needed to prove compliance—and you won’t be stuck designing everything up-front.
Start small with minimal requirements for early prototypes.
Iterate rapidly to refine design and increase functionality.
Run certification work in parallel, building trust with regulators and mapping the final, large set of requirements.
Don’t let regulations drive the initial design; regulations and certification should inform the design as you progress, not dictate it from day one.