The next “customer” many companies will face is not a person clicking through your journey. It’s an AI agent acting on that person’s behalf — or a procurement bot acting for a business — with permission to search, compare, negotiate, purchase, and manage post‑purchase tasks. Cognizant describes this as the “agentic internet”: an ecosystem of AI-enabled tools and agents “working on behalf of consumers” to autonomously locate, evaluate, purchase, and maintain products and services. In the same research, AI-friendly consumers are projected to account for up to 55% of consumer purchasing activity by 2030.
This changes the centre of gravity in CX. Agents will still influence experience, but they will increasingly enforce execution: data quality, API reliability, permissioning, identity, pricing integrity, fulfilment truth, returns logic, and operational resilience. In other words: the “experience” becomes a contract between systems.
That’s not theoretical. Visa reports that 47% of U.S. shoppers already use AI tools for at least one shopping task and that hundreds of secure, agent‑initiated transactions have been completed with partners; Visa also says it is working with 100+ partners, with 30+ building in its sandbox and 20+ agents/agent enablers integrating directly. At the same time, platform operators are drawing hard boundaries: Reuters has documented the Amazon–Perplexity AI dispute over an agentic shopping tool, including a federal injunction and a subsequent temporary appellate stay.
The practical implication is uncomfortable but liberating: you cannot “design your way out” of broken execution. If an agent cannot safely verify what’s true (availability, total cost, delivery window, returns eligibility, identity, authorisation), it will route around you — or it will be blocked from transacting altogether.
This article lays out a distinct, execution‑architecture view of agentic customers: what’s changing, what evidence we have, and what to build first — across CX, systems, APIs, and data quality.
1. Thesis: CX is becoming an execution contract
Agentic customer functionality is not “a better chatbot.” It is the capability for an AI system to form a goal, evaluate options, take actions across tools and vendors, and learn from outcomes — under explicit user or organisational delegation. In many categories, the agent is effectively your new front door: it decides if a human ever sees your designed journey.
Two forces are converging:
1) Agentic demand: consumers and businesses are adopting AI to reduce cognitive load and time‑to‑purchase. Cognizant’s modelling and survey work (with Oxford Economics) puts a hard number on the scale: AI‑friendly consumers could drive up to 55% of purchasing activity by 2030.
2) Execution gating: platforms, payment networks, and sellers are formalising what agents may do, how they must identify themselves, and what “trusted” automation looks like — because unauthorised automation is now a security, fraud, and integrity problem, not a UX nit.
So the thesis is simple:
Agentic customers don’t reward your “experience” narrative. They reward your ability to execute safely, verifiably, and predictably as a set of machine‑readable commitments. When your execution is clean, agents can transact. When it’s fuzzy, agents cannot trust you — and trust, for agents, is mostly an engineering property.
Traditional CX work focused heavily on the “evaluate” step of that loop: messaging, persuasion, and journey design. Agentic CX forces you to operationalise the “act” and “learn” steps: policy checks, identity, data integrity, and execution reliability — at system speed.
Evidence and analysis
The fastest way to see the pattern is to look across real deployments where agents either transact, negotiate, embed into customer workflows via APIs, or trigger platform policy responses.
Comparative view of real company cases
The thread across these cases is not “cool AI.” It is execution exposure: what can be safely automated, what must remain human‑confirmed, and how systems express “truth” in a way machines can verify.
The next sections unpack what this means for CX and systems.
2. The demand side is shifting from journeys to delegated outcomes
Cognizant’s multi‑country research frames the shift bluntly: by 2030, AI‑friendly consumers could account for up to 55% of purchasing activity. The report also describes a three‑wave progression where AI moves from product discovery into aftersales and then into more automated purchasing, emphasising that consumer adoption is driven by convenience but constrained by a desire for control at high‑stakes moments.
This is the key: Agentic customers don’t remove human preferences — they encode them. Which means you should expect two simultaneous realities:
- More automation in low‑stakes categories (subscriptions, repeat purchases, standard SKUs, stable services).
- More explicit consent and “human check” gates where risk is high (payments, identity, regulated products, disputes).
If your CX model assumes a human will patiently interpret exceptions (“inventory is wrong but maybe I can call support”), that assumption breaks. An agent will either (a) fail fast and pick an alternative it can verify, or (b) ask for clarification — but only if your systems make that interaction cheap and unambiguous.
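Making that interaction “cheap and unambiguous” in practice means returning a structured verdict instead of a human‑oriented error page. Below is a minimal, hypothetical sketch (the field names `ok`, `code`, and `retryable`, and the in‑memory inventory, are illustrative assumptions, not any real vendor’s API): the agent gets something it can act on, fail fast against, or escalate to its principal.

```python
# Hypothetical sketch: an agent-facing order check that returns a
# machine-actionable verdict rather than prose the agent must interpret.

def check_order(request, inventory):
    """Return a structured verdict for a delegated purchase request."""
    item = inventory.get(request["sku"])
    if item is None:
        # Non-retryable: the agent should fail fast and pick an alternative.
        return {"ok": False, "code": "SKU_UNKNOWN", "retryable": False}
    if item["stock"] < request["qty"]:
        # Retryable with context: the agent can ask its principal to adjust.
        return {
            "ok": False,
            "code": "INSUFFICIENT_STOCK",
            "retryable": True,
            "available_qty": item["stock"],
        }
    # Verifiable truth: total cost and delivery window, not marketing copy.
    return {
        "ok": True,
        "total_cost": round(item["unit_price"] * request["qty"], 2),
        "delivery_days": item["delivery_days"],
    }

inventory = {"SKU-1": {"stock": 3, "unit_price": 19.99, "delivery_days": 2}}
print(check_order({"sku": "SKU-1", "qty": 2}, inventory))
print(check_order({"sku": "SKU-1", "qty": 5}, inventory))
```

The design choice that matters is the `retryable` split: it is what lets an agent decide between clarification and abandonment without a human reading anything.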
Commerce is becoming “agentic by infrastructure”, not by UI
Visa’s December 2025 update is notable because it shifts agentic commerce from a vague prediction into infrastructure milestones: hundreds of secure, agent‑initiated transactions completed with partners in controlled pilots. It also quantifies the behavioural baseline: 47% of U.S. shoppers use AI tools for at least one shopping task.
The most important detail, though, is not the percentage. It’s the operational framing: Visa claims the ecosystem is moving from “find” to “buy,” and it spells out partner counts (100+ partners; 20+ integrating). That is classic platform play: standardize the execution layer so agents can transact in predictable ways.
For companies, this means agentic readiness is less about adding an “AI shopping assistant” and more about making your transactional core compatible with delegated, auditable execution. Payments networks, not just retailers, are now shaping what “good execution” must look like.
3. The market is already litigating the boundaries of agentic customers
The Amazon–Perplexity dispute matters because it reveals what breaks when agents try to transact in a world built for humans (sessions, scraping, behavioural assumptions, ad-driven interfaces). Reuters reports Amazon’s allegation that Perplexity’s agentic tool covertly accessed customer accounts and disguised automated activity as human browsing. Reuters further reports that in March 2026 a federal judge granted Amazon a temporary injunction blocking Perplexity’s agent on Amazon, and that shortly after, the U.S. Court of Appeals for the 9th Circuit temporarily stayed that order while it considers the request for a longer pause.
Two execution lessons fall out of this:
1) “User wants it” is not enough. Marketplace operators will enforce permissioning and integrity rules, especially where customer accounts and payments are involved.
2) Agentic customers trigger a security model change. If your identity and bot-detection layers cannot distinguish “legitimate delegated automation” from malicious automation, you either block too much (losing innovation) or allow too much (losing trust). Visa explicitly positions “Trusted Agent Protocol” as a way to separate legitimate agents from malicious ones.
This is why agentic CX is inseparable from security engineering.
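To make the distinction concrete: one common pattern is to register agents and have them sign their requests, so “legitimate delegated automation” becomes cryptographically checkable. The sketch below is NOT Visa’s Trusted Agent Protocol (whose mechanics are not described here); it is a generic illustration using HMAC signatures, with all names and the shared-secret scheme as assumptions.

```python
import hashlib
import hmac

# Illustrative scheme only: each registered agent holds a shared secret
# and signs the request body; unknown or tampered requests fail closed.
REGISTERED_AGENTS = {"shopper-agent-42": b"per-agent-secret"}

def sign(agent_id: str, body: bytes) -> str:
    """Agent side: produce an HMAC-SHA256 signature over the request body."""
    secret = REGISTERED_AGENTS[agent_id]
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(agent_id: str, body: bytes, signature: str) -> bool:
    """Merchant side: accept only automation we can attribute and verify."""
    secret = REGISTERED_AGENTS.get(agent_id)
    if secret is None:
        return False  # unknown automation: block or challenge
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"action": "add_to_cart", "sku": "SKU-1"}'
sig = sign("shopper-agent-42", body)
print(verify("shopper-agent-42", body, sig))  # legitimate, registered agent
print(verify("unknown-bot", body, sig))       # unregistered automation
```

Real protocols add key rotation, replay protection, and delegation scopes, but the core property is the same: legitimacy becomes verifiable, not inferred from browsing behaviour.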
Platforms are moving from “UX guidelines” to “agent access policies”
If the Amazon dispute shows the hard enforcement end, marketplace policies show the soft governance end: the rules that will become defaults.
- For instance, Modern Retail documents that eBay updated its robots.txt with a “Robot & Agent Policy” and explicitly prohibits “buy‑for‑me agents” or end‑to‑end flows attempting to place orders without human review; it also notes eBay tightened rules on its cart subdomain to block automated agent access (with an exception described in the reporting).
If you are a merchant, manufacturer, insurer, bank, travel provider, or utility, assume that agent access control will become normal — and your ability to participate will depend on how cleanly you offer machine-ready execution surfaces.
Procurement proves the point: execution wins when negotiation becomes machine-to-machine
Retail “buy‑for‑me” gets headlines, but procurement is where agentic customers become brutally practical: bots negotiate and execute purchase orders.
A vendor blog from Pactum cites Walmart outcomes of 3% savings and 83% supplier satisfaction; as vendor-reported figures, they warrant independent verification, but the direction is clear.
Data quality and APIs are becoming part of the product
If you want a clean demonstration of “execution as CX,” look at Moody’s. Their messaging is consistent across product and communications: the advantage is not the model; it’s the trusted, permissioned data estate and the ability to embed it into customer workflows.
In its 2023 Research Assistant launch, Moody’s reports pilot metrics:
- Users could save up to 80% of time on data collection and up to 50% on analysis; overall, the tool could save up to 27% of a financial analyst’s typical task time.
- Banks embedding the solutions report decision times reduced by up to 80% and loan processing cycles accelerated by as much as 15 times.
The connective tissue to CX is simple: for agentic customers, your product increasingly includes your data contract (schemas, freshness, provenance) and your execution contract (APIs, latency, error rates, idempotency, audit).
4. Adoption is accelerating, but trust and ROI discipline are the differentiators
There is real adoption, but it is uneven and hype‑distorted.
From Capgemini’s research on agentic AI:
- 23% of organisations are piloting AI agents; 12% have implemented at partial scale; 2% at scale; 30% are exploring and 31% are preparing for near‑term experimentation.

From the Thomson Reuters Institute’s 2026 report (professional services):
- Only 15% of professionals say their organisation uses agentic AI now; 53% say their organisations are in planning or consideration; 77% expect agentic AI to be central to workflows by 2030 [20].
- Only 18% say their organisations collect ROI metrics; 40% don’t know whether ROI is measured.
The combined insight is the sober one: execution architecture is the filter that determines whether you stay in the majority stuck at exploration and pilots or join the thin slice that scales with trust. Agentic customers intensify this because “trust” is not just brand trust — it is operational verifiability.
Why “machine readability” is now a CX primitive
If agents are going to act, they need structured truth. The tooling ecosystem already assumes this.
Google’s Merchant Center documentation describes structured data markup as a machine‑readable representation of product data that helps platforms “understand and process” content reliably; it highlights that structured data can support automatic item updates and reduce issues caused by price and availability mismatches.
This is the point that many CX programmes miss: to an agent, structured data is not “metadata.” It is the interface. If you cannot express your truth in a machine‑verifiable way, you will be invisible or untrusted in agentic decision loops.
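As a concrete illustration of structured truth as an interface, here is a sketch that emits schema.org `Product`/`Offer` JSON-LD — the vocabulary Google’s structured-data tooling consumes — so an agent can verify price and availability without scraping a rendered page. The product values and the helper’s name are illustrative assumptions.

```python
import json

# Sketch: emit schema.org Product JSON-LD so price and availability are
# machine-verifiable facts, not pixels. Values here are illustrative.

def product_jsonld(name, sku, price, currency, in_stock):
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            # Availability is an enumerated URI, not free text an agent
            # would have to guess at.
            "availability": "https://schema.org/InStock"
            if in_stock
            else "https://schema.org/OutOfStock",
        },
    }

markup = product_jsonld("Travel Kettle", "SKU-1", 19.99, "USD", True)
print(json.dumps(markup, indent=2))
```

The payoff named in Google’s documentation — automatic item updates, fewer price/availability mismatches — follows directly from this markup being generated from the same system of record that powers checkout, not maintained by hand.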
Tested, practical recommendations
The goal is not to “build an agent.” The goal is to make your business safely executable by agents. That requires prioritization, as we are doing at Samsung Group.
Below is a first checklist designed for typical enterprise constraints: legacy platforms, fragmented data, and security/compliance realities. Owners and effort levels are planning estimates, and the KPIs are designed to be measurable in practice. The source is the eGlobalis South Korean team in partnership with Samsung Group.
It is not “scientific,” but it has worked well for us. It will not map exactly to your company, but it is a path.
Overview: technical checklist
Effort key: S = weeks; M = 1–2 quarters; L = 2+ quarters (dependent on legacy complexity).
A direct warning: if you cannot measure “agent success rate” end‑to‑end, you’re still at the demo phase — regardless of how impressive the interface looks.
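That metric is simpler than it sounds if your logs record delegated goals rather than page views. A minimal sketch, with an entirely hypothetical event shape (`goal_started` / `goal_completed` joined on `goal_id`):

```python
# Sketch: end-to-end agent success rate from execution logs.
# Event names and shape are illustrative, not a real schema.

def agent_success_rate(events):
    """Share of agent-initiated goals that completed end-to-end."""
    goals = [e for e in events if e["type"] == "goal_started"]
    done = {e["goal_id"] for e in events if e["type"] == "goal_completed"}
    if not goals:
        return 0.0
    return sum(1 for g in goals if g["goal_id"] in done) / len(goals)

log = [
    {"type": "goal_started", "goal_id": "g1"},
    {"type": "goal_completed", "goal_id": "g1"},
    {"type": "goal_started", "goal_id": "g2"},  # failed mid-checkout
    {"type": "goal_started", "goal_id": "g3"},
    {"type": "goal_completed", "goal_id": "g3"},
]
print(agent_success_rate(log))  # 2 of 3 goals completed
```

The hard part is organisational, not computational: instrumenting every surface an agent touches so a “goal” has one id from intent to fulfilment.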
Conclusion
This is no longer about designing experiences. It is about making outcomes reliable, verifiable, and safe to execute under delegated authority. If your data, APIs, and policies are not aligned, agents will not compensate — they will bypass you.
Governance becomes the backbone. It defines what can be executed, under which conditions, and with full accountability. Strategy follows by prioritizing machine-readable truth, consistent execution, and measurable outcomes.
In this model, trust is not a perception — it is proven through performance.
Companies that succeed will not be those that experiment more, but those that execute better — with clarity, control, and discipline at scale.
👉 Stay ahead of CX, AI, and innovation trends — Subscribe to my weekly LinkedIn Newsletter, “CX Insights by Ricardo S. Gulko.”
If this article resonated with you, feel free to share it — and let’s connect on LinkedIn for more insights and future posts: Ricardo Saltz Gulko
My columns appear in several respected CX publications:
- My recent articles on Eglobalis: https://www.eglobalis.com/blog/
- My recent articles on CMSWire: https://www.cmswire.com/author/ricardo-saltz-gulko/
- My articles on CustomerThink, where I am the #1 author: https://customerthink.com/author/rgulko/
- My German articles on CMM360: https://www.cmm360.ch/author/ricardo/






