Requirements Management

“AI Fitness Standards”: Why Contractors Must Adopt AI for Requirements Management in 2026

Anne Wen

Earlier this year, the Department of War (DoW) released a memo outlining its strategy to become an AI-first department. The memo calls for sweeping AI adoption by establishing technological “AI fitness standards.”

Just as every service member must meet minimum physical thresholds, the DoW is now expected to meet a baseline for AI use, which includes deploying AI quickly, using the latest AI models within 30 days of public release, eliminating bureaucratic blockers to AI adoption, and building systems on open architectures that support third-party integration without prime contractor involvement.

“Becoming an ‘AI-First’ warfighting force requires more than integrating AI into existing workflows. It requires re-imagining how existing workflows, processes, TTPs, and operational concepts would be designed if current AI technology existed when they were created,” the memo says.

What the Department of War's AI Fitness Standard Means for Defense Contractors

These new “AI fitness standards” apply to the DoW, but they signal a broader shift in the defense ecosystem.

AI will be built into acquisition decisions, and contractors who cannot demonstrate comparable AI-enabled workflows are increasingly exposed competitively. AI integration is moving from differentiator to baseline expectation.

“Operationally, contractors should plan for materially faster development and deployment cycles, continuous field experimentation with user feedback in days rather than years,” says a report by law firm Holland & Knight describing the memo’s impact.

Requirements management is a key indicator of the kind of operational discipline the DoW is now measuring. How a contractor manages requirements—the speed and accuracy of traceability, the ability to surface compliance gaps, the auditability of the record—reflects whether it is applying AI effectively to engineering workflows.

Why Requirements Management Is the Right Place to Start

Of all the places AI can be applied in a defense program, requirements management has the highest leverage-to-risk ratio. A single requirements record touches every phase of the acquisition lifecycle, from initial capability definition through design reviews, verification, and sustainment.

The cost of getting it wrong is concrete. Compliance matrix errors surface at audit. Undocumented requirement changes between PDR and CDR invite scope creep. Broken verification traces leave no clear owner when a program review demands answers.

AI-driven requirements management addresses these failure modes directly, not by replacing engineer judgment, but by making it faster and more reliable to maintain a traceable, audit-ready record across a complex program.

What AI-Driven Requirements Management Actually Looks Like

Engineering teams can use AI to perform or assist with discrete tasks throughout the requirements lifecycle. Rather than replacing engineers or automating the entire process, AI is used to accelerate specific activities that traditionally require large amounts of manual effort: searching across requirement sets, identifying gaps in coverage, suggesting traceability links, classifying requirements, and assessing verification coverage.

AI-driven requirements management focuses on reducing manual burden. By quickly analyzing relationships between requirements, tests, and specifications, AI can surface missing links, identify gaps in verification coverage, and highlight areas where requirements may be incomplete or inconsistent. Engineers still make the final decisions, but they can move through analysis and documentation work significantly faster.

AI in requirements management can generally be separated into two categories: read actions and write actions.

Read Actions: High Value, Lower Risk

AI searches, filters, and analyzes requirements data to surface insights for engineers to review, but it does not modify the system of record.

These tasks deliver immediate productivity gains with minimal risk. AI can search across large requirement sets, filter artifacts by risk level or compliance status, identify orphaned requirements, surface missing parent–child relationships, and flag gaps in verification coverage. The AI retrieves and organizes information that would otherwise require manual investigation.

Engineers might run queries such as:

“Show all high-risk requirements with no linked verification evidence in this program.”

“Are there missing parent–child links between these two specifications?”

“Which requirements in this compliance matrix reference external standards that have not been mapped?”
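Queries like these reduce to read-only filters over the requirements record. A minimal sketch of the first query, using a hypothetical `Requirement` record (the field names here are illustrative, not Stell's schema):

```python
from dataclasses import dataclass, field

# Hypothetical requirement record; a real tool exposes a richer schema.
@dataclass
class Requirement:
    req_id: str
    risk: str                                      # "high" | "medium" | "low"
    verification_links: list[str] = field(default_factory=list)

def high_risk_unverified(reqs: list[Requirement]) -> list[str]:
    """Read action: surface high-risk requirements with no linked
    verification evidence. Nothing in the record is modified."""
    return [r.req_id for r in reqs
            if r.risk == "high" and not r.verification_links]

reqs = [
    Requirement("SYS-001", "high", ["TST-014"]),
    Requirement("SYS-002", "high"),
    Requirement("SYS-003", "low"),
]
print(high_risk_unverified(reqs))  # ['SYS-002']
```

Because the function only reads and returns data, it carries none of the risks that come with modifying the system of record.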

Write Actions: Powerful, But Requiring Guardrails

AI can also support write actions within the requirements environment. These include creating traceability links, classifying requirements, updating compliance mappings, and applying structured updates to requirement records. These capabilities are powerful, but they require careful guardrails.

In a production system that serves as the program’s system of record, any modification to requirements data must be explicitly controlled. Allowing an AI to modify requirements without engineer approval introduces clear compliance and auditability risks. For that reason, write actions should always be permission-gated and executed only after explicit engineer approval.
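The gating pattern can be sketched in a few lines. This toy `TraceStore` (an illustration, not Stell's implementation) lets the AI only *propose* a traceability link; the write lands after an engineer approves it, and every applied change is logged:

```python
from dataclasses import dataclass

@dataclass
class ProposedLink:
    source_id: str
    target_id: str
    rationale: str

class TraceStore:
    """Toy system of record. AI write actions are permission-gated:
    a proposal does not touch the record, approval applies it and
    appends an audit-trail entry naming the approving engineer."""
    def __init__(self) -> None:
        self.links: set[tuple[str, str]] = set()
        self.pending: list[ProposedLink] = []
        self.audit_log: list[str] = []

    def propose(self, link: ProposedLink) -> None:
        self.pending.append(link)              # queued, no write yet

    def approve(self, link: ProposedLink, engineer: str) -> None:
        self.pending.remove(link)
        self.links.add((link.source_id, link.target_id))
        self.audit_log.append(
            f"{engineer} approved {link.source_id} -> {link.target_id}: "
            f"{link.rationale}")

store = TraceStore()
suggestion = ProposedLink("SYS-002", "TST-021", "verifies shock requirement")
store.propose(suggestion)                      # AI-suggested write, gated
store.approve(suggestion, engineer="jdoe")     # explicit human approval
```

The audit log is the point: when a program review asks who linked what and when, the answer is in the record, not in someone's memory.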

This philosophy influenced several design decisions when Stell built its AI agent for requirements management, Zelda.

The AI Fitness Self-Assessment: Six Questions to Ask About Your Requirements Tools

Before your next proposal or program review, run this check against your current requirements environment. These questions are calibrated to the posture the Department of War is now setting:

  1. Can your team search across all requirements in a program by risk, compliance status, or verification coverage—without exporting to Excel?

  2. When a requirement changes, can you identify all downstream items affected in minutes rather than hours?

  3. Is your requirements traceability matrix (RTM) a living, queryable record — or a deliverable that gets reconstructed at each program review?

  4. Does your tool provide a clear audit trail of who made changes and when, accessible without prime contractor support?

  5. If you brought in an AI agent today, would it operate within your existing access controls — or would it require special permissions that create a security gap?

  6. Can your requirements data be shared securely with subcontractors and government customers through a controlled portal?

Teams that cannot answer "yes" to most of these questions are operating below the emerging baseline, carrying program risk into every review, every audit, and every new proposal they submit.
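Question 2 above, change-impact analysis, is mechanically a traversal of the trace graph. A minimal sketch, assuming trace links are stored as a parent-to-children adjacency map (the shape of the map is an assumption for illustration):

```python
from collections import deque

def downstream_items(trace: dict[str, list[str]], changed: str) -> set[str]:
    """Breadth-first walk of trace links collecting every item
    affected, directly or transitively, by a changed requirement."""
    affected: set[str] = set()
    queue = deque([changed])
    while queue:
        item = queue.popleft()
        for child in trace.get(item, []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

trace = {
    "SYS-001": ["SW-010", "HW-004"],   # system req drives SW and HW reqs
    "SW-010": ["TST-014"],             # SW req is verified by a test
}
print(sorted(downstream_items(trace, "SYS-001")))
# ['HW-004', 'SW-010', 'TST-014']
```

When the links are maintained continuously rather than reconstructed per review, this answer takes seconds instead of a spreadsheet hunt.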

Book a demo to see how Stell approaches AI-driven requirements management for defense programs.

Frequently Asked Questions About AI Requirements Management

What does the Department of War's January 2026 AI Strategy mean for defense contractors' internal tools?

The January 2026 memorandum establishes AI fitness standards as a baseline expectation for DoW components, directing the Joint Force toward faster deployment, model parity, and open data architectures. The memo does not address contractors directly, but its procurement implications are being tracked closely across the industrial base. Contractors who cannot demonstrate AI-enabled workflows and data discipline are increasingly at a disadvantage as these criteria inform acquisition decisions.

How does AI improve a requirements traceability matrix?

A requirements traceability matrix (RTM) is the documentation record linking each requirement to its source, its downstream implementation items, and its verification evidence. AI improves the RTM by automating candidate link suggestions, flagging missing or broken traces, and enabling real-time search across the matrix rather than manual spreadsheet review.
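The flagging step can be made concrete. A minimal sketch of an RTM gap check, assuming each row links a requirement to a source and a verification item (the row shape and IDs are hypothetical):

```python
def rtm_gaps(rows: list[dict], known_artifacts: set[str]) -> list[tuple[str, str]]:
    """Flag missing or broken traces in a toy RTM.
    Each row: {"req": ..., "source": ..., "verification": ...}."""
    problems = []
    for row in rows:
        for kind in ("source", "verification"):
            ref = row.get(kind)
            if not ref:
                problems.append((row["req"], f"missing {kind} trace"))
            elif ref not in known_artifacts:
                problems.append((row["req"], f"broken {kind} trace: {ref}"))
    return problems

rows = [
    {"req": "SYS-001", "source": "CDD-3.1", "verification": "TST-014"},
    {"req": "SYS-002", "source": "CDD-3.2", "verification": None},
    {"req": "SYS-003", "source": "CDD-9.9", "verification": "TST-015"},
]
known = {"CDD-3.1", "CDD-3.2", "TST-014", "TST-015"}
print(rtm_gaps(rows, known))
# [('SYS-002', 'missing verification trace'),
#  ('SYS-003', 'broken source trace: CDD-9.9')]
```

An AI layer adds value on top of checks like this by suggesting candidate links to close the flagged gaps, which an engineer then reviews and accepts.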

Are AI-driven requirements management tools compatible with secure DoW environments?

Compatibility with secure DoW environments depends on the infrastructure underlying the tool, not AI capability alone.

Stell’s AI agent, Zelda, runs within AWS GovCloud's environment. When a user submits a query, data is processed within AWS GovCloud, not routed to an external service. Stell does not train on user data, and the model powering Zelda does not retain requirements, specifications, or conversation history.

How should contractors evaluate whether their requirements management tool is AI-ready?

Evaluate on four dimensions:

  • Capability - Can the tool perform AI-assisted search, gap analysis, and traceability?

  • Control - Does AI require engineer approval before executing write actions?

  • Security - Does the tool operate within the appropriate infrastructure for defense-adjacent data?

  • Access - Does the AI operate within the user's existing permissions, with no special agent privileges?

Tools that meet all four criteria are positioned to support the AI fitness standard the Department of War is now setting for the programs those tools serve.


BOOK A DEMO

Ready to replace your legacy workflow?

See how Stell turns scattered docs and manual traceability into a single, audit-ready platform in a 30-minute demo tailored to your program.
