The next wave of responsible AI

Four ways to help protect customers and win their trust in the age of agentic commerce

  • April 30, 2026

As companies deploy more AI agents to act on behalf of consumers and businesses—making personalized recommendations, managing inventory, and completing transactions autonomously—new responsibilities and risks are emerging. Here’s how to manage them.

AI agents are everywhere—on tech platforms and on your own company’s website, where consumers are dispatching them to browse, compare, and evaluate your products and services. Businesses are deploying them to enhance speed, personalization, and operational efficiency across every stage of the commerce journey. But will consumers trust an e-commerce experience powered by agents? Increasingly, the answer comes down to whether they feel safe doing so.

The trust gap in agentic commerce is significant, and brands should work on closing it. That means addressing multiple dimensions, including cybersecurity, fraud, consent, and the specific risks that emerge when AI agents make financial decisions on consumers’ behalf. Malicious actors are already probing AI systems for weaknesses—finding ways to manipulate decision logic, redirect purchases, impersonate brands, and access back-end payment data without authorization. And many companies aren’t ready.

The good news is that security, privacy, and consumer empowerment can be built in from the start. Brands that move decisively to deploy agents responsibly can build the kind of durable brand loyalty that translates directly into long-term growth.

Only 6%

of business and tech leaders express confidence in having addressed all of the vulnerabilities surveyed.

Source: PwC’s 2026 Global Digital Trust Insights

What consumers want: clear indications from companies that their agents can be trusted

People are increasingly using AI to discover products—especially millennials, Gen Z, and even kids in Generation Alpha—but many still aren't comfortable letting an AI agent complete the purchase. The questions running through their minds are valid: What am I actually getting in exchange for my data? What exactly am I consenting to, and where does that data go? What security is in place to protect me? Has this been tested, and is it continuously monitored?

Those aren’t unreasonable questions. Consumers want to know what protections exist if an agent makes a bad decision—or charges them more than another customer for the same product. They want assurance that agents are acting on their wishes. They want protection against unauthorized transactions: if an agent is compromised or manipulated into a purchase they didn't sanction, they want to know they won't be left holding the bill. And they want to be able to revoke permissions, review what an agent has done on their behalf, and override decisions at any point.

Companies that offer this level of transparency and control—instead of so-called “black box AI”—are likely to boost both sales and long-term loyalty. With regulations tightening, integrating privacy and safety controls from the start is also far less costly than retrofitting them later under legal pressure. But most companies are behind on responsible AI and haven’t upgraded to meet the rise of agentic commerce.

Agentic commerce can’t scale unless safety is part of the design.

Four actions to promote trust and enable agentic commerce

The brands that lead in this environment are the ones that deploy the most capable agents responsibly, visibly, and in ways their customers can verify. Here is what that looks like in practice.

  • Provide clear notice and consent options, and be transparent about data usage. Inventory and assess your agentic AI use cases end to end. Document data inputs, consent requirements, contractual obligations, and guidelines for agents' activities—and modify them where needed. Then make it easy for customers to understand how their data is being used, what they are gaining by consenting, and how transaction data will inform the system going forward. Build platforms that allow users to grant, manage, and revoke permissions for AI agents acting on their behalf—including access to financial data—and provide mechanisms that explain how agents operate and how their decisions affect consumers.

    Establish a data taxonomy and implement automated metadata tagging to flag and segregate youth data from general user data, enabling cleaner compliance workflows and more reliable deletion processes. Build retention, deletion, and archival controls designed for the continuous data streams that agentic systems generate—including derived preferences, behavioral logs, and records of delegated authority. This matters especially as Gen Alpha enters the picture: protecting minors is no longer just a reputational consideration. It’s a legal imperative.
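    The grant, manage, and revoke pattern described above can be sketched as a small consent ledger. This is a minimal illustration under assumed names (the `ConsentLedger` class and scope strings such as `"execute:purchase"` are hypothetical), not a production design:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Grant:
    agent_id: str
    scope: str                      # e.g. "read:order_history", "execute:purchase"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

class ConsentLedger:
    """Records which scopes a customer has delegated to which agents,
    so grants can be checked, revoked, and audited at any time."""

    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def grant(self, agent_id: str, scope: str) -> Grant:
        g = Grant(agent_id, scope, datetime.now(timezone.utc))
        self._grants.append(g)
        return g

    def revoke(self, agent_id: str, scope: str) -> None:
        # Mark matching active grants as revoked; history is kept for audit.
        for g in self._grants:
            if g.agent_id == agent_id and g.scope == scope and g.revoked_at is None:
                g.revoked_at = datetime.now(timezone.utc)

    def is_allowed(self, agent_id: str, scope: str) -> bool:
        return any(
            g.agent_id == agent_id and g.scope == scope and g.revoked_at is None
            for g in self._grants
        )

    def audit_trail(self, agent_id: str) -> list[Grant]:
        # Everything the agent was ever permitted to do, revoked or not.
        return [g for g in self._grants if g.agent_id == agent_id]
```

    The point of the design is that revocation never deletes history: the customer can always review what was delegated and when, which is exactly the user-facing audit capability the text calls for.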
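    The taxonomy-and-tagging step can be illustrated with a simple record classifier. The tag names and record fields below are hypothetical stand-ins for a real data taxonomy:

```python
def tag_record(record: dict) -> dict:
    """Attach taxonomy tags so regulated data can be segregated and deleted reliably."""
    tags = set(record.get("tags", []))
    age = record.get("age")
    if age is not None and age < 18:
        tags.add("youth")            # routes to stricter retention/deletion rules
    if "payment_token" in record:
        tags.add("financial")
    record["tags"] = sorted(tags)
    return record

def segregate(records: list, tag: str) -> tuple:
    """Split a batch into records carrying a given tag and the rest."""
    hits = [r for r in records if tag in r.get("tags", [])]
    rest = [r for r in records if tag not in r.get("tags", [])]
    return hits, rest
```

    Once youth data lives in its own partition, deletion requests and retention rules can be applied to that partition alone instead of scanning the whole store.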

  • Strengthen boundaries. Design data clean room capabilities and implement controls that limit third-party access to sensitive data. Track what data is shared with third parties and for what purpose, and establish procedures to regularly recertify that access, including requesting deletion from third-party systems where applicable. Ensure sensitive data is de-identified through anonymization, pseudonymization, or tokenization wherever possible to reduce exposure and limit downstream risk.

    Adopt leading practices such as payment card industry (PCI) security standards, zero trust architecture—including zero trust principles for agent-to-agent communication—and machine identity segmentation to continuously verify access to data and control agent privileges. Secure your API design and microservices architecture, and implement identity and access management across both human and machine identities to prevent lateral movement in the event of a compromise. These aren't just technical measures—they're the infrastructure of consumer confidence, and the foundation on which operational resilience depends.
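    The tokenization approach mentioned above can be sketched as a vault that swaps sensitive values for opaque tokens and keeps the mapping to itself. A real deployment would sit behind an HSM-backed service, but the shape is the same; the `TokenVault` name and token format here are illustrative:

```python
import hashlib
import hmac
import secrets
from typing import Optional

class TokenVault:
    """Replaces sensitive values with opaque tokens.
    The value-to-token mapping never leaves the vault."""

    def __init__(self, key: Optional[bytes] = None) -> None:
        self._key = key or secrets.token_bytes(32)
        self._forward: dict = {}    # value -> token
        self._reverse: dict = {}    # token -> value

    def tokenize(self, value: str) -> str:
        tok = self._forward.get(value)
        if tok is None:
            # Keyed hash so tokens are stable per vault but useless outside it.
            tok = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()[:16]
            self._forward[value] = tok
            self._reverse[tok] = value
        return tok

    def detokenize(self, token: str) -> str:
        # Only the vault can reverse a token; downstream systems never can.
        return self._reverse[token]
```

    Downstream systems (and agents) handle only tokens, so a compromise there exposes nothing reversible; that is the exposure-limiting property the bullet describes.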

  • Secure the decision layer. This is where agentic commerce faces its most distinctive risks. Agentic systems introduce failure modes that traditional security architectures weren't built to handle: model drift, hallucinations, adversarial prompt injection, and API misconfigurations that can expose payment credentials or manipulate order logic. A malicious actor doesn't need to breach your perimeter. They may only need to manipulate what your agent believes it's been asked to do.

    Leading practices to counter this include red-teaming agents’ logic by simulating adversarial prompts and attack scenarios, establishing AI-specific incident response and resilience playbooks, and using LLM observability mechanisms to track potential prompt abuse, model drift, and emerging risk signals before they become customer-facing failures. Implement resilience planning, backup policies, and recovery time and point objectives aligned to agentic system failure modes—not just traditional IT outage scenarios.

    Making these protections visible matters too. Real-time fraud alerts and user-facing audit trails give customers concrete evidence that you're looking out for them. Disclose how agent-generated data is used for model training and continuous improvement, including any resulting impacts on model behavior and risk posture. In a market full of vague "privacy-first" claims, that kind of specificity is itself a source of competitive advantage.
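    One way to picture red-teaming an agent’s logic: run a library of adversarial prompts through the agent’s input screen and report any that slip past. The patterns and the `agent_fn` stub below are illustrative toys, not a real detection ruleset (production screens combine classifiers, policy checks, and human review):

```python
import re

# Illustrative injection signatures; a real screen would be far broader.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"change the (shipping|payment) (address|method)",
]

def flag_prompt(prompt: str) -> bool:
    """Pre-screen agent input for known prompt-injection phrasings."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def red_team(agent_fn, adversarial_prompts: list) -> list:
    """Return the prompts that evaded the screen, paired with the agent's response.
    A non-empty result is a finding to fix before launch."""
    return [
        {"prompt": p, "response": agent_fn(p)}
        for p in adversarial_prompts
        if not flag_prompt(p)
    ]
```

    The harness shape matters more than the patterns: every new evasion found in testing or production monitoring becomes another case in the library, so coverage compounds over time.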

  • Test before you launch, and keep watching after you do. Deploying an agent is not a one-time event. Consumers and regulators are beginning to want more than assurances; they want evidence. So the retailers and brands that earn lasting trust will be the ones willing to show their work publicly, not just internally.

    That means publishing the results of pre-deployment testing, not just conducting it. Confirm that specific guardrails are in place and share evidence they perform as intended, before your agent ever reaches a customer. Post-launch, give consumers and stakeholders visibility into how your systems are monitored: what you're watching for, how often, and what happens when something trips an alert. Disclose clearly how agent-generated data—including behavioral logs and interaction histories—feeds back into model training, and what governance controls ensure that process doesn't introduce new risk. Retailers that get ahead of this won't just avoid regulatory scrutiny. They'll own a meaningful point of differentiation at a moment when most of their competitors are still figuring out what their agents are doing.
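The "show your work" idea can be made concrete as a pre-launch check suite that exercises a guardrail and emits a named, publishable pass/fail report. The spending cap and approved-merchant rules below are hypothetical examples of the guardrails a team might verify, not a recommended policy:

```python
def guarded_purchase(amount: float, cap: float, approved: set, merchant: str) -> str:
    """A minimal purchase guardrail: unapproved merchants are blocked,
    and over-cap amounts escalate to the user instead of executing."""
    if merchant not in approved:
        return "blocked"
    if amount > cap:
        return "needs_user_approval"
    return "approved"

def prelaunch_report() -> dict:
    """Run named guardrail checks; the report itself is the evidence to publish."""
    return {
        "over_cap_escalates": guarded_purchase(500, 100, {"acme"}, "acme") == "needs_user_approval",
        "unknown_merchant_blocked": guarded_purchase(10, 100, {"acme"}, "shady") == "blocked",
        "in_policy_purchase_passes": guarded_purchase(20, 100, {"acme"}, "acme") == "approved",
    }
```

Because each check has a human-readable name and a boolean outcome, the same report can gate the release pipeline internally and back the public claims the article recommends making.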

Making safety a source of growth

Agentic commerce is already here, and it’s accelerating. Companies that don't protect consumers in this environment face real risks to their reputation—and could miss out on new opportunities, leaving growth on the table. But brands that embed trust, privacy, and consumer empowerment into each layer of the experience stand to earn something increasingly rare: consumers’ confidence that the brand is actively and effectively looking out for their best interests. In a crowded market, that confidence isn’t just good ethics. It’s durable competitive advantage.

Contact us

Jason Colo

Principal, Cyber, Risk and Regulatory, PwC US

Brett Croker

Principal, Data Risk and Privacy, PwC US

Aparna Giridharadas

Partner, PwC US

Ali Furman

Consumer Markets Industry Leader, PwC US

Eric Shea

Commerce Lead, PwC US
