
Next Move: Practical insights on regulatory and policy developments in tech
Next Move discusses the latest regulatory and technology policy developments and how risk leaders can react. Read the latest issue on Responsible AI.
The Trump administration recently issued a policy directive on artificial intelligence with broad implications for industry. Released on April 3 by the Office of Management and Budget (OMB) ― which oversees implementation of the president’s regulatory agenda ― the new standards outline how federal agencies must use and govern AI to drive innovation.
The OMB guidance represents a significant milestone — not only because it establishes minimum requirements for how federal agencies deploy and govern AI, but because it will likely set the tone for the private sector as well. Federal policies often shape corporate norms, especially in an area such as AI risk management, where many organizations have been seeking clarification on expectations at the federal level while sorting through a patchwork of state AI laws.
The standards, announced in Memorandum M-25-21, set a baseline for accelerating AI adoption, strengthening governance and building public trust. They place significant emphasis on Responsible AI and risk management, imposing requirements for use case inventorying, risk tiering, testing and ongoing monitoring, accountability and human oversight.
Businesses should take their cue and plan accordingly. They now have clear direction on the administration’s approach to AI risk management ― one that’s aligned with other leading frameworks, including PwC’s Responsible AI approach.
The OMB memo takes a risk-based approach to AI governance, establishing broad expectations for all agency use of AI while imposing stricter requirements for systems with greater potential impact. This methodology aligns with frameworks such as the EU AI Act and the NIST AI Risk Management Framework. By calibrating governance to the level of risk posed by each use case, the memo enables institutions to accelerate AI adoption while maintaining appropriate safeguards.
The memo calls for an inventory of AI use cases, an assessment of each for potential risk, and a determination as to whether a use case qualifies as “high-impact” — defined as having a legal, material or significant effect on rights, safety or access to services. For these systems, agencies must implement a set of minimum risk management practices, including pre-deployment testing, AI impact assessments, ongoing monitoring, human oversight and intervention, operator training, accessible appeals mechanisms and public feedback channels. These practices must be documented, and use must be discontinued if systems fail to meet the standards.
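To make these requirements concrete, here is a minimal, illustrative sketch of how an organization might model the memo’s inventory-and-tiering workflow in code. Every name here (AIUseCase, MINIMUM_PRACTICES, the high-impact flags, the example assistant) is our own assumption for illustration; M-25-21 prescribes the practices themselves, not any particular data format.

```python
from dataclasses import dataclass, field

# The minimum risk management practices M-25-21 requires for
# high-impact use cases (paraphrased from the memo).
MINIMUM_PRACTICES = [
    "pre-deployment testing",
    "AI impact assessment",
    "ongoing monitoring",
    "human oversight and intervention",
    "operator training",
    "accessible appeals mechanism",
    "public feedback channel",
]

@dataclass
class AIUseCase:
    name: str
    description: str
    # Flags set during risk assessment. "High-impact" means a legal,
    # material or significant effect on rights, safety or access to services.
    affects_rights: bool = False
    affects_safety: bool = False
    affects_access_to_services: bool = False
    practices_documented: set = field(default_factory=set)

    @property
    def high_impact(self) -> bool:
        return (self.affects_rights
                or self.affects_safety
                or self.affects_access_to_services)

    def outstanding_practices(self) -> list:
        """Minimum practices not yet documented for a high-impact use case."""
        if not self.high_impact:
            return []
        return [p for p in MINIMUM_PRACTICES if p not in self.practices_documented]

    def may_continue(self) -> bool:
        # Use must be discontinued if the system fails to meet the standards.
        return not self.outstanding_practices()

# Example: a hypothetical benefits-eligibility assistant affects access to
# services, so it tiers as high-impact and is gated on the full practice set.
assistant = AIUseCase(
    name="benefits-eligibility-assistant",
    description="Answers questions about eligibility for public benefits",
    affects_access_to_services=True,
    practices_documented={"pre-deployment testing", "AI impact assessment"},
)
print(assistant.high_impact)              # True
print(assistant.outstanding_practices())  # the five practices still missing
print(assistant.may_continue())           # False until every practice is met
```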
Certain requirements may be waived but only under narrow, well-defined conditions. Waivers have to be justified through a system-specific risk assessment approved by the agency’s chief AI officer and centrally tracked and reported to OMB. If an AI system fails to meet the required standards and can’t be remediated, agencies must discontinue its use.
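The waiver path lends itself to the same treatment. The sketch below is again purely illustrative (PracticeWaiver, the register and the record values are hypothetical), but it captures the controls the memo describes: a system-specific risk assessment, chief AI officer approval, and central tracking so waivers can be reported to OMB.

```python
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional

@dataclass
class PracticeWaiver:
    """Illustrative record of a waived minimum practice for one AI system."""
    use_case: str
    waived_practice: str
    risk_assessment_ref: str      # the system-specific risk assessment on file
    caio_approved: bool           # chief AI officer sign-off
    approval_date: Optional[date] = None

    def is_valid(self) -> bool:
        # A waiver stands only if it is justified by a documented
        # risk assessment and approved by the chief AI officer.
        return bool(self.risk_assessment_ref) and self.caio_approved

# Central register so waivers can be tracked and reported to OMB.
waiver_register: list = []

def omb_report() -> list:
    """Flatten the register for centralized tracking and reporting."""
    return [asdict(w) for w in waiver_register]

# Hypothetical entry; identifiers and dates are placeholders.
waiver_register.append(PracticeWaiver(
    use_case="benefits-eligibility-assistant",
    waived_practice="public feedback channel",
    risk_assessment_ref="RA-2025-014",
    caio_approved=True,
    approval_date=date(2025, 6, 1),
))
assert all(w.is_valid() for w in waiver_register)
```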
This level of rigor underscores the seriousness of the federal government’s expectations and reinforces that Responsible AI isn’t optional — it must be operationalized and enforced. For the private sector, it signals the level of accountability regulators may begin to expect more broadly as AI oversight frameworks continue to mature.
Consistency with leading practices and PwC’s Responsible AI framework

The OMB memo’s expectations and minimum standards align with how leading institutions are already thinking about AI governance, and with how PwC has been approaching this issue. Like the EU AI Act, the memo emphasizes capabilities that enable risk-based governance: inventorying use cases, risk-tiering to identify high-impact or high-risk use cases, applying pre-deployment testing and continuous monitoring, and establishing clear roles, responsibilities and accountability around AI use cases.
PwC’s Responsible AI framework, as shown below, underscores the importance of developing foundational risk management capabilities, effective operating models and governance structures, and AI lifecycle management standards. The OMB requirements echo and validate this approach.
| Framework area | PwC RAI framework component | Covered? | Example supporting quote from M-25-21 | Page |
| --- | --- | --- | --- | --- |
| Foundational capabilities | Responsible AI principles | Yes | “Agencies must also include plans to update any existing internal AI principles and guidelines to ensure consistency with this memorandum” | 12 |
| Foundational capabilities | AI use case inventory | Yes | “Each agency…must inventory its AI use cases at least annually, submit the inventory to OMB, and post a public version...” | 12 |
| Foundational capabilities | AI risk taxonomy | Yes | “Methods for Understanding AI Risk Management…The term ‘risks from the use of AI’ refers to risks related to efficacy, safety, fairness, transparency, accountability, appropriateness…” | 23 |
| Foundational capabilities | AI risk intake and tiering | Yes | Agencies must establish “a process for determining and documenting AI use cases as high-impact” | 11 |
| Operating model and governance | Lifecycle roles & responsibilities | Yes | “Agencies must allocate appropriate resources and responsibilities…Agency heads are responsible for establishing Chief AI Officers... with accountable officials assuming risk” | 2, 10, 13 |
| Operating model and governance | Governance committee & escalations | Yes | “Each CFO Act Agency must convene its relevant agency officials to coordinate and govern issues related to the use of AI…Agency AI governance boards include a chair at the Deputy Secretary level... and appropriate representation from key stakeholder offices.” | 11 |
| Operating model and governance | AI risk and control matrix | Implied | “Agencies must implement the following minimum risk management practices for high-impact AI use cases…” | 15 |
| Operating model and governance | Training and communication | Yes | “Agencies should leverage AI training programs and resources... to strengthen the technical skills of staff...” | 9 |
| AI lifecycle management | Development and deployment standards | Yes | “The AI impact assessments must be documented and address or include, at a minimum: the intended purpose of the AI and its expected benefit…the quality and appropriateness of the relevant data and model capability…the potential impacts of using AI...” | 16 |
| AI lifecycle management | AI pre-deployment testing | Yes | “Agencies must develop pre-deployment testing and prepare risk mitigation plans that reflect real-world outcomes…” | 15 |
| AI lifecycle management | AI monitoring and observability | Yes | “Agencies must conduct testing and periodic human review of AI use cases… Ongoing monitoring must be designed to detect unforeseen circumstances, changes to an AI system after deployment, or changes to the context of use or associated data.” | 17 |
| AI lifecycle management | Risk tracking and reporting | Yes | “Agencies…must continue with all relevant reporting requirements, including updating their annual AI use case inventory, compliance plans, and reporting as requested by OMB.” | 2 |
| AI lifecycle management | Policies across risk domains | Yes | “AI risk management policies must be written…Agencies must revisit and update where necessary their internal policies on IT infrastructure…data…cybersecurity and privacy…” | 12, 13 |
For organizations looking to align with expectations outlined in the OMB memo, several foundational, “no regrets” steps can set the right trajectory.
When integrated strategically, these foundational capabilities form the bedrock of Responsible AI and can position your organization to navigate a wide range of frameworks and regulations, including the EU AI Act, the NIST AI Risk Management Framework and other emerging regulatory and industry expectations.
The OMB memo provides much-needed clarity for companies that are just beginning to implement Responsible AI by offering a clear signal on the types of practices and safeguards expected at the federal level. For companies that have already been developing these capabilities, it offers reassurance that their efforts align with emerging federal expectations. The memo reinforces that Responsible AI isn’t just a set of principles but a set of concrete, operational requirements that institutions, public and private, will increasingly be expected to meet.