Introduction
Artificial Intelligence is now embedded in customer interactions across industries worldwide. From telecom and banking to retail and automotive, organizations in B2C and B2B spaces are deploying AI solutions for customer service, sales, and personalization. Yet too often these initiatives backfire. Poor design and execution of AI lead to customer stress, inefficiency, and loss of trust. Automation gone wrong can frustrate users, siloed implementations create inconsistency, and misaligned strategies fail to deliver returns. The following analysis highlights how well-intentioned AI projects undermine customer experience (CX) when executed poorly. We begin with a summary of the top strategic and operational mistakes organizations make when preparing for and implementing AI, along with their impact on CX.
Summary of Common AI Project Mistakes and Their Impact on Customer Experience:

Use the table above to spot common mistakes and as a practical starting point to assess what your business truly needs from AI, because what you want to implement may not always align with what delivers real value. For a strategic evaluation, read Agentic AI: How to Evaluate if Your Business and Customers Need It (Strategic Framework) at:
https://www.eglobalis.com/agentic-ai-how-to-evaluate-if-your-business-and-customers-need-it-strategic-framework/
1. No Clear AI Strategy or ROI Focus

One of the costliest mistakes is jumping into AI without a solid strategy. Many companies adopt AI because it’s trending, not because they have identified a clear customer problem to solve. Without defined objectives and key performance indicators (KPIs), AI projects drift aimlessly. For example, a financial services firm might implement an AI chatbot “because competitors have one,” but if it isn’t aligned to a specific service gap or efficiency goal, it adds little value. Executives across industries have poured billions into AI pilots only to find that up to 95% fail to deliver expected ROI.
The outcome is wasted budget and fragmented efforts. From a customer perspective, lack of strategic focus means the AI does nothing materially better for them – it might answer questions, but not the ones customers really care about, or it automates a process that wasn’t a pain point to begin with. The result is a missed opportunity (or outright failure) to improve CX. Customers remain with the same frustrations as before, and now the company has an expensive system to maintain with no clear benefit. This erodes internal support for AI and leaves customers underwhelmed. Senior leaders should treat AI initiatives like any other major investment: define the customer-centric use cases, set measurable success criteria (e.g. faster resolution time, higher satisfaction scores), and ensure the project serves a broader CX strategy. Without this discipline, AI becomes a solution looking for a problem – an approach that virtually guarantees poor returns and indifferent customer reactions.
2. Unrealistic Expectations and Hype
Closely related to strategy issues is the tendency to overhype AI as a silver bullet. Leaders may be sold on grand visions of AI revolutionizing customer experience overnight. In reality, deploying AI effectively is complex and iterative. High expectations can clash with hard reality – for instance, a retail CEO might expect an AI-driven virtual assistant to handle all customer inquiries immediately after launch, only to discover it can only solve basic questions and often makes mistakes on edge cases. Overpromising AI capabilities sets up both customers and internal teams for disappointment. There have been cases in consumer electronics and automotive industries where AI-powered features (like voice assistants or “smart” self-service kiosks) were marketed aggressively, but upon release they performed poorly, confusing users and generating support calls. When expectations aren’t managed, customers feel misled. A prime example was an international airline that introduced an AI “smart agent” touted as being as helpful as human staff – but during its first real test (a major flight disruption), the system gave canned responses and failed to assist anxious passengers. Public backlash was swift, and the airline had to apologize and revert to extra human support. The lesson is clear:
AI is powerful, but it’s not magic. Successful companies pilot new AI services quietly, learn from small-scale deployment, and train the AI (and the organization) gradually. Setting realistic expectations means telling customers what the AI can and cannot do, and ensuring leadership understands that meaningful improvements in CX will take continuous refinement, not just flipping a switch. Underpromise and overdeliver – not the other way around.
3. Poor Data Quality and Management
AI’s effectiveness hinges on data. If the data feeding an AI model is incorrect, outdated, biased, or fragmented across silos, the customer experience will suffer. This mistake manifests in multiple ways. In banking and finance, for example, AI-driven recommendation systems have offered customers products that don’t fit their needs because the underlying customer data was incomplete or miscategorized. Imagine a bank’s AI suggesting a premium credit card to a student with low income, simply because the data model wasn’t updated with the latest customer profile – a clear miss that annoys the customer and makes the bank look out of touch. Data silos are another common culprit. A telecom company might have billing info in one system and support interactions in another; if an AI customer service bot can’t access both, it may give answers that ignore the customer’s full history. One real-world incident involved an insurance firm’s AI app misquoting policy details because it wasn’t pulling the latest policy updates from the database, leading to customer confusion and compliance headaches. “Garbage in, garbage out” is painfully true in AI.
For customers, poor data quality translates to wrong answers, irrelevant suggestions, or even offensive mistakes (for instance, an AI addressing a customer by the wrong name or making culturally insensitive recommendations because of faulty data assumptions). Beyond frustrating individuals, such errors chip away at brand credibility. Organizations must invest in data cleansing, integration, and governance as foundational work before deploying AI. In practice, this means unifying customer records, ensuring training data reflects reality and diversity, and continuously monitoring outputs for errors. Skimping on this groundwork is a recipe for AI that leads customers astray and diminishes their confidence in digital channels.
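As a concrete illustration of "monitoring outputs for errors" and basic data hygiene, here is a minimal, hypothetical sketch of a data-quality gate that rejects incomplete or stale customer profiles before they ever reach a recommendation model. The field names and the freshness window are assumptions for illustration, not any real system's schema.

```python
from datetime import date, timedelta

# Hypothetical data-quality gate: field names and the freshness window
# are illustrative assumptions, not a real schema.
REQUIRED_FIELDS = {"customer_id", "income_band", "product_history"}
MAX_PROFILE_AGE = timedelta(days=180)  # assumed freshness window

def profile_is_usable(profile: dict, today: date):
    """Return (usable, reasons): whether a profile is fit for the recommender."""
    reasons = []
    missing = REQUIRED_FIELDS - profile.keys()
    if missing:
        reasons.append(f"missing fields: {sorted(missing)}")
    updated = profile.get("last_updated")
    if updated is None or today - updated > MAX_PROFILE_AGE:
        reasons.append("profile stale or never updated")
    return (not reasons, reasons)

# A long-unrefreshed student profile is held back before the model can
# suggest an ill-fitting premium product.
profile = {"customer_id": "C-1", "income_band": "low",
           "product_history": ["basic_account"],
           "last_updated": date(2023, 1, 5)}
usable, why = profile_is_usable(profile, today=date(2024, 6, 1))
```

The point of a gate like this is that the bank-AI scenario above (a premium card offered on an outdated profile) becomes impossible by construction: the model never sees data that fails the check.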
4. Fragmented Integration (Siloed AI Implementations)
Another major mistake is treating an AI solution as a standalone tool rather than embedding it into end-to-end customer journeys. When AI isn’t well-integrated with other systems and channels, customers experience disjointed service. A classic example occurs in omnichannel retail: a customer interacts with a chatbot on the website about a product issue, but when they call support later, the human agents have no record of the chatbot conversation. The customer has to start over from scratch – repeating information and losing patience. This fragmentation often arises when companies hastily bolt on an AI application without aligning it with their CRM, databases, and workflows. In the telecom sector, for instance, some providers launched AI assistants on their mobile app that could report outages or troubleshoot device settings. However, if the AI couldn’t create a service ticket in the back-end system, the customer still had to call the hotline, effectively nullifying any efficiency gain. Poor integration not only wastes the customer’s time but also causes inconsistencies: the AI might quote a different policy or price than a live agent because they’re drawing from different sources. Such inconsistencies undermine trust — customers begin to question which answer is correct or why the experience is so clumsy. Internally, poorly integrated AI becomes a burden as employees have to bridge the gaps manually. To avoid this, organizations should design AI as part of the unified customer experience fabric. That means involving IT architecture early: connect the chatbot to customer databases, ensure the AI in a smart kiosk syncs with inventory systems, and so on. When integration is done right, AI can seamlessly hand over context to humans and vice versa, and information flows without repetition. Customers then feel like the company knows and values their time, rather than making them navigate a maze of unconnected bots and people.
5. Over-Automation: Removing the Human Touch
Leveraging AI to automate processes can yield great efficiencies, but a common mistake is pushing automation too far. Some organizations attempt to eliminate human involvement entirely from customer service, sales, or support, entrusting every interaction to chatbots, voice response systems, or automated decision engines. The reality is that certain moments in a customer journey demand human empathy, creativity, or flexibility that AI (today) cannot provide.
Over-automation tends to backfire in complex or emotionally charged situations. Consider the telecom industry: an Australian mobile provider introduced an AI-only online support system to handle all inquiries. It worked for simple tasks like checking data usage, but when customers had billing disputes or technical issues not in the script, the AI simply repeated help articles and generic apologies. Frustrations mounted. In fact, that telco’s customers publicly vented that the virtual assistant was “a complete waste of time” and even “a virtual moron” because it could not recognize when it was out of depth. Similarly, in the banking sector, a multinational bank that routed nearly all customer emails to an AI responder saw resolution rates plummet — the AI gave standardized replies but missed the nuances of fraud reports and mortgage hardship requests, leaving vulnerable customers feeling neglected. Over-automation also showed its limits in the non-profit world: an organization replaced a sensitive help hotline with a chatbot, only to find the bot giving inappropriate, even harmful responses in situations that required human compassion. These examples illustrate that removing the human touch entirely undermines CX and can even cause harm. Customers resent a company that appears to value cost savings over genuine service. The better approach is “right-sizing” automation: use AI to handle high-volume, low-complexity tasks and assist humans with information retrieval or suggestions. But always leave room for human intervention where judgment, empathy, and creativity are needed. Companies known for great service often publicize that they use AI to support their agents, not replace them. This balanced approach prevents the brand-damaging scenarios that pure automation can trigger and ensures customers feel cared for rather than processed by a machine.
6. No Human Escalation Path (Failing to Handoff)
Even the best AI systems have limits – and when those limits are reached, a smooth transition to a human agent is critical. A serious mistake, often related to over-automation, is not providing an easy and immediate “exit” to a human helper. We’ve all experienced it: a customer service bot that won’t take no for an answer. It keeps offering self-service options or repeating “I’m sorry, I didn’t understand that,” instead of connecting to a person. This scenario infuriates customers. In the UK and Europe, telecom and utility companies have faced public backlash when their AI-driven phone menus and chatbots effectively trapped users in loops. One European parcel delivery company’s chatbot infamously spiraled into nonsense when it couldn’t locate a package – it started apologizing repeatedly and even insulted itself, but crucially it never handed the issue to a human. The conversation went viral as an example of AI run amok. A more common (but equally damaging) case happened in Australia: a telecom customer desperately tried to get a chatbot to summon a human, repeating the word “human” dozens of times. The bot kept responding with scripted prompts and even misinterpreted a name as a service request, utterly failing to escalate. That customer, and the thousands who saw the exchange on social media, walked away with the impression that the company simply didn’t care.
The impact on customer experience is severe – customers feel powerless and angry when there’s no escape hatch. They often abandon the purchase or cancel their account out of frustration. Companies also risk higher costs down the line: an unresolved issue can become a formal complaint or legal problem. To fix this, organizations must bake in escalation protocols. This means programming bots to recognize phrases like “I want to talk to a representative” or detecting frustration (e.g., repeated requests) and then seamlessly routing to a human agent with context of the conversation so far. It also means keeping live support available as a safety net, even if AI handles first-tier interactions. Brands that get this right tend to advertise their hybrid approach, reassuring customers that a person is always accessible. In summary, an AI without a human fallback is a ticking time bomb for customer ire.
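The escalation protocol described above can be sketched in a few lines. This is a hypothetical illustration, not production routing logic: the trigger phrases, the frustration threshold, and the queue name are all assumptions.

```python
# Hypothetical escalation logic: trigger phrases, the frustration
# threshold, and the queue name are illustrative assumptions.
ESCALATION_PHRASES = {"human", "agent", "representative", "person"}
MAX_FAILED_TURNS = 2  # assumed threshold for repeated failed bot turns

def should_escalate(message: str, failed_turns: int) -> bool:
    """Escalate on an explicit request for a person or on mounting frustration."""
    words = set(message.lower().replace("?", " ").replace("!", " ").split())
    return bool(words & ESCALATION_PHRASES) or failed_turns >= MAX_FAILED_TURNS

def hand_off(transcript: list) -> dict:
    # Route to a live agent WITH the conversation so far, so the
    # customer never has to repeat themselves.
    return {"queue": "live_agent", "context": transcript}
```

Note the two independent triggers: saying “human” once should be enough, and even without an explicit request, a couple of failed bot turns escalates anyway. Passing the transcript in the hand-off is what prevents the “start over from scratch” experience described earlier.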
7. Under-Personalization: One-Size-Fits-All Experiences
Personalization is a core promise of AI in customer experience – the ability to tailor recommendations, support, and communications to each individual. A surprising mistake many organizations make is failing to capitalize on this and delivering generic experiences despite advanced AI tools. Under-personalization can be as simple as an e-commerce site showing the same “popular products” to every visitor, or a bank’s mobile app offering a blanket promotion that doesn’t fit the customer’s profile. Customers have come to expect that companies use the data they have to make interactions more relevant. When that doesn’t happen, it feels like a letdown or a sign the company doesn’t understand them. For example, large consumer electronics retailers in North America and Asia invested in AI-driven marketing platforms to customize offers, yet some still blast out mass emails with irrelevant products – say, advertising gaming accessories to a customer who only buys kitchen appliances. This not only annoys customers (leading them to ignore communications or unsubscribe), but also leaves money on the table. In the Middle East, telecom operators discovered that churn increased when they sent generic retention offers; many high-value customers left, later citing that the company “should have known what I needed.” Essentially, under-personalization undermines the very efficiency and delight that AI is supposed to create. Sometimes the cause is organizational silos or caution around data privacy that results in a minimalistic approach. Other times it’s a lack of insight into what customers truly want, so the AI is just automating guesswork. The impact is subtle but cumulatively damaging to CX: interactions feel impersonal and transactional, not smart or proactive. To avoid this, businesses need to use AI’s analytic power to truly know their customers – but responsibly. 
That could mean integrating purchase history, browsing behavior, and service records (with consent) to tailor the next offer or response. A bank in Europe found success by using AI to pre-fill forms and suggest relevant support articles based on a customer’s profile, greatly speeding up service calls and impressing customers with the sense that “the bank knows me.” This is the opposite of under-personalization: it’s using AI to add personal touches at scale. The key is to focus AI on enhancing customer relevance, not just automating generic interactions.
8. Ignoring Ethics, Bias and Transparency
AI deployments that ignore ethical considerations can trigger serious customer experience crises and public relations nightmares. Mistakes in this category include algorithmic bias – where the AI system unintentionally discriminates against a group – and lack of transparency about how AI is being used, especially with personal data. Customers today are increasingly sensitive to these issues. In the finance sector, a well-known example emerged when an AI-driven credit decision system offered significantly lower credit limits to women compared to men with similar profiles, causing outrage over gender bias. In that case, the backlash tarnished the bank’s image and led to regulatory scrutiny; many affected customers felt betrayed and took their business elsewhere. Bias can creep in from unrepresentative training data or flawed assumptions in the model, and if a company isn’t actively testing for it, customers on the wrong side of that bias will have a very poor experience. Another scenario is content or language bias: a voice AI assistant might struggle to understand certain accents, leading to consistently worse service for some ethnic groups – an unacceptable disparity that erodes trust in the brand. Beyond bias, privacy is a huge ethical dimension. If customers sense that an AI is using their personal information in a creepy way (like overly intimate product recommendations) or fear that their data isn’t safe, they pull back. A recent global survey found the majority of consumers worry about misuse of their personal data by AI. We also have examples of AI systems that went off the rails morally: a chatbot launched in Asia as a friendly conversationalist started spewing offensive comments after learning from trolls, because the developers hadn’t put proper content moderation in place. 
This was not just a technical failure but an ethical one – the company had to shut it down amid public outrage, and apologetic statements did little to rebuild trust with users and the community. Transparency is equally important; if customers are interacting with a bot, they should know it’s a bot (not be deceived into thinking it’s a human), and they should know what data it’s using. Companies that are proactive here – for instance, clearly labeling AI interactions and publishing guidelines for ethical AI use – tend to foster more trust. Ignoring ethics might not cause an immediate operational issue, but it is a slow (and sometimes sudden) poison for customer loyalty. Brands must actively audit their AI for fairness and bias, involve diverse stakeholders in development, and be upfront about how AI is enhancing service. In short, if an AI hurts fairness or privacy, it directly hurts customer experience and brand reputation.
9. Inadequate Employee Training and Change Management
Behind every successful AI implementation is an organization that has prepared its people to work alongside the technology. A common mistake, however, is rolling out an AI solution without properly training employees or addressing how their roles and processes will change. This oversight leads to a disconnect between the AI’s potential and the reality of service delivery. For example, a global bank introduced an AI tool for customer support agents to recommend next best actions during calls. But at launch, many agents ignored the suggestions or even felt offended by them, perceiving the AI as an intruder or a comment on their abilities. The root cause was a lack of training and change management – management hadn’t taken time to explain how the AI would help agents or to get their input, so frontline staff simply didn’t trust the system. The immediate impact on customers was inconsistency: some agents used the AI prompts (sometimes clumsily due to insufficient training), while others stuck to old methods. Customers received mixed messages and uneven service quality as a result. In another case, an Asia-Pacific e-commerce company implemented AI to automate fraud detection and order approvals. They failed to train the operations team on the AI’s criteria or how to handle exceptions; when the AI started flagging a high volume of legitimate orders as fraud (false positives), staff were overwhelmed and unsure how to override decisions. Customers faced unusual delays and order cancellations that the support team struggled to explain, leading to confusion and lost sales. These scenarios underscore that AI is not a plug-and-play solution – it changes how work gets done. If employees are not on board and educated, the customer experience will suffer no matter how advanced the algorithm. Moreover, frontline staff are crucial for providing feedback to improve the AI. Without their engagement, the AI remains static and suboptimal. 
To avoid this mistake, executives should invest in robust training programs before and during AI deployment. This includes not just technical training on new systems but also contextual training on new workflows and the role employees play in supervising or collaborating with AI. Change management efforts – clear communication, involvement of employees in design, and addressing “what’s in it for me” – are vital to get buy-in. When employees understand that AI is a tool to empower them (e.g., handling routine tasks so they can focus on complex customer needs), they are more likely to embrace it. A well-trained, AI-augmented team can deliver far superior CX than AI or humans alone. In short, treating the AI rollout as purely a tech project, rather than a people project, is a mistake that will echo in every customer interaction.
10. No Continuous Improvement (Set and Forget Mentality)
Implementing an AI solution is not the finish line – it’s the starting point. A serious mistake companies make is adopting a “set and forget” mentality, where they deploy an AI system and then fail to regularly maintain or enhance it. Unlike static software, AI systems learn from data and can also drift over time as customer behavior changes or new data emerges. If you’re not monitoring and updating them, their performance can degrade. The consequences for customer experience can be quite visible. Think of a consumer electronics firm that launched an AI-powered recommendation engine for its online store a few years ago. Initially it boosted sales by suggesting relevant accessories to customers. But as new products came out and trends shifted, the algorithm wasn’t retrained – so it kept recommending older or out-of-stock items, irritating customers and causing confusion. Some customers even questioned whether the company was intentionally pushing obsolete inventory. Another example: a chatbot for a government service in the Middle East was launched to answer citizens’ questions. It worked well on FAQs it was trained on. But after policy changes and a year of new citizen queries, the bot hadn’t been updated with fresh information. It began giving incorrect answers about procedures, forcing people to call or visit offices anyway. Trust in the digital channel plummeted, and usage dropped. In the automotive industry, there have been cases where AI-driven navigation or voice assistants in cars were not updated regularly – drivers found that the assistant didn’t recognize newer slang or destinations, making it less useful over time compared to smartphone apps that constantly learn. All these illustrate that an AI that isn’t evolving becomes a liability. Additionally, continuous improvement includes listening to customer feedback and analyzing AI’s errors. 
For instance, if customers keep asking a chatbot the same thing that it cannot answer, that’s a strong signal to update the bot’s knowledge base or alter its dialogue. Without such feedback loops, small problems fester into big frustrations. Leading companies treat AI models like a product that needs periodic tuning: they allocate teams to review performance metrics, retrain models with recent data, patch vulnerabilities, and enhance features as customer needs evolve. This ongoing optimization is essential to keep the AI helpful and relevant. On the flip side, when organizations neglect their AI after deployment, they essentially leave customers in a time capsule – interacting with a system that grows more out-of-date each day. Eventually, customers notice and lose confidence, and the initial investment in AI yields diminishing returns. Continuous improvement isn’t just about avoiding failure; it’s how AI keeps its promise of improving customer experience as times change.
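As a minimal sketch of such a feedback loop, the snippet below simply counts normalized unanswered questions from chat logs and surfaces the most frequent gaps for a knowledge-base review. A real system would cluster semantically similar phrasings; the naive normalization here is only illustrative.

```python
from collections import Counter

# Naive feedback loop: count normalized unanswered questions so the
# most frequent knowledge gaps rise to the top of the review queue.
# A real system would cluster semantically similar phrasings.
def top_knowledge_gaps(unanswered, n: int = 3):
    normalized = [q.strip().lower().rstrip("?") for q in unanswered]
    return Counter(normalized).most_common(n)

logs = ["How do I port my number?", "how do i port my number",
        "Where is my refund?", "How do I port my number?"]
gaps = top_knowledge_gaps(logs)
```

Even something this simple turns the bot’s failures into a prioritized work list: the question customers keep asking and not getting answered is the first one to add to the knowledge base.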
Conclusion: Designing AI Programs that Truly Enhance Customer Experience
The ten mistakes above highlight a fundamental theme: successful AI in customer experience is not just about technology, it’s about strategy, integration, and a human-centered approach. For senior leaders, the strategic takeaway is clear – AI must be implemented with the customer’s experience as the North Star. That means every AI project should begin by asking: how will this make things better for our customers, and how will we know it’s working?
To avoid lack of ROI and misalignment, tie AI initiatives to concrete CX outcomes (faster resolution, higher satisfaction, increased loyalty). Set realistic goals and measure them. One practical approach is to start with pilot projects targeting specific pain points – for example, reducing the average wait time in contact centers by using an AI triage system – and expanding based on results. This guards against the temptation of deploying AI everywhere without purpose.
Leaders should also foster collaboration between technology teams and customer-facing teams early on. Breaking down silos is both an organizational and technical endeavor: ensure that AI systems are integrated with core databases and channels so they have a 360° view of the customer and can provide consistent answers. At the same time, encourage different departments (IT, customer service, marketing, compliance) to share knowledge during development. This cross-functional planning can preempt issues like poor data quality, bias, or lack of integration that often only become apparent after deployment if teams work in isolation.
Maintaining the human touch in an AI-driven world is paramount. As several examples illustrated, a hybrid “AI + human” model is often the sweet spot. Executives should design AI processes such that AI handles the heavy lifting but humans handle the exceptions and emotional moments. For instance, use AI to instantly retrieve customer info and simple answers, empowering human agents to focus on empathy and complex problem-solving. Make sure that at any critical juncture, customers have an outlet to reach a person. This kind of safety net not only saves customer relationships when things go wrong, but also gives customers confidence to engage with AI in the first place, knowing there’s always a parachute.
Avoiding the over-automation trap also involves company culture and policy. If cost-cutting is the only driver for AI in customer experience, take a step back – cutting costs at the expense of customer happiness will backfire long-term. Instead, frame AI projects as enhancing service quality (which, incidentally, often reduces costs through efficiency after it improves CX). Many leading organizations have publicly shifted their narrative: rather than “we’re automating to serve you cheaper,” they say “we’re using AI to serve you better.” And when things go wrong, responsible companies acknowledge it and fix the mix of AI and human involvement rather than doubling down on a failing automation strategy.
Trust and transparency are strategic pillars as well. Customers will reward companies that respect their data and treat them fairly. So, incorporate ethics reviews into your AI design process. Test algorithms for biased outcomes before they scale. Establish clear data privacy guidelines – collect only what you need, explain to customers how AI uses their information to help them, and give them control where possible. When an AI touches decisions that materially affect customers (like credit approvals, pricing, or important advice), have human oversight and clear explainability. Strategically, this reduces the risk of public missteps and builds a brand reputation for innovation with integrity.
From an operational standpoint, prepare your people and processes for AI. Invest in training programs to upskill employees on new AI tools and on interpreting AI outputs. Frontline staff should understand that the AI is there to assist, not judge or replace them. Involve them in fine-tuning the system – for example, a customer service team could have regular meetings with the AI developers to provide feedback from actual customer interactions. This inclusion not only improves the technology but also increases employee buy-in. Change management should address fears and highlight opportunities: when employees see that AI can eliminate drudgery (like logging tickets or searching knowledge bases) and help them shine in front of customers, they become champions of the new tools rather than detractors.
Finally, adopt a mindset of continuous improvement and agility for AI programs. Post-launch, treat the AI like a living part of your service team that needs coaching and care. Set up a monitoring dashboard for key performance metrics (accuracy, resolution rates, customer sentiment) and assign owners to react when numbers dip. Regularly update the AI’s knowledge with new product info, policies, and answers to new frequently asked questions. It can be useful to schedule periodic reviews – for instance, quarterly audits of AI decisions and customer feedback – to decide on retraining models or adding capabilities. Executive oversight is important here: make sure teams have the resources and mandate to iterate. The companies that see sustained success with AI in CX are those that refine their systems based on real-world learning. They stay ahead of customer expectations by evolving their AI just as their customers evolve.
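A monitoring check of this kind can be as simple as comparing current metrics against agreed floors. The sketch below is a hedged illustration: the metric names and thresholds are assumptions a team would set for itself, not industry standards.

```python
# Hypothetical health check for an AI service: metric names and floors
# are assumptions a team would agree on, not industry standards.
THRESHOLDS = {
    "answer_accuracy": 0.90,   # share of bot answers judged correct
    "containment_rate": 0.60,  # issues resolved without human hand-off
    "csat": 4.0,               # post-interaction satisfaction (1-5)
}

def metrics_needing_attention(current: dict) -> list:
    """Return the metrics that dipped below their agreed floor."""
    return sorted(m for m, floor in THRESHOLDS.items()
                  if current.get(m, 0.0) < floor)

week = {"answer_accuracy": 0.86, "containment_rate": 0.71, "csat": 4.2}
alerts = metrics_needing_attention(week)
```

The value is in the ownership model around it: each flagged metric should have a named owner with the mandate to retrain, patch, or redesign, as described above.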
In conclusion, improving customer experience with AI is absolutely achievable – many organizations are getting it right by avoiding the pitfalls outlined above. It comes down to thoughtful leadership. Executives who approach AI as a tool to augment human service, who demand integration and ethics, who prepare their organization for change, and who continually learn and adapt – those are the leaders who will deliver standout customer experiences in the AI era. By steering clear of these common mistakes, companies can harness AI’s power to reduce customer effort, personalize interactions, and build trust at scale. Done right, AI will not undermine customer experience; it will become a cornerstone of a modern, loyalty-winning CX strategy.
If this article resonated with you, feel free to share it — and let’s connect on LinkedIn for more insights and future posts: Ricardo Saltz Gulko
We invite you to subscribe to the open ECXO.org (European Customer Experience Organization) and be part of shaping our Global Business Network, now open for individual access: https://ecxo.org/individuals/
Ricardo Saltz Gulko writes columns in several respected CX publications:
- My recent articles on Eglobalis: https://www.eglobalis.com/blog/
- My recent articles on CMSWire: https://www.cmswire.com/author/ricardo-saltz-gulko/
- My articles on CustomerThink, where I am the top-ranked author: https://customerthink.com/author/rgulko/
- My German articles on CMM360: https://www.cmm360.ch/author/ricardo/
Data Sources:
- Why 74% of Enterprise CX AI Programs Fail — And How to Make Them Work – Eglobalis – https://www.eglobalis.com/why-74-of-enterprise-cx-ai-programs-fail-and-how-to-make-them-work/
- Agentic AI: How to Evaluate if Your Business and Customers Need It (Strategic Framework) – CustomerThink – https://customerthink.com/agentic-ai-how-to-evaluate-if-your-business-and-customers-need-it-strategic-framework/
- AI-Powered Customer Service Fails at Four Times the Rate of Other Tasks – Qualtrics (Oct 7, 2025) – https://www.qualtrics.com/articles/news/ai-powered-customer-service-fails-at-four-times-the-rate-of-other-tasks/
- Why Are Traditional Bank Chatbots Failing Customers? – Galileo Financial Technologies (2025) – https://www.galileo-ft.com/lp/traditional-bank-chatbots-failing-customers/
- CX in the AI Era: Leveraging Data to Fuel Loyalty – Eglobalis – https://www.eglobalis.com/cx-in-the-ai-era-leveraging-data-to-fuel-loyalty/
- AI Copilots to Agents: Shaping Employee Experience & Trust – Eglobalis – https://www.eglobalis.com/ai-copilots-to-agents-shaping-employee-experience-trust/
- The 6 Biggest Chatbot Fails & How to Avoid Them – MoinAI (Oct 1, 2025) – https://www.moin.ai/en/chatbot-wiki/chatbot-fails
- 7 AI disasters that prove humans are irreplaceable in customer service – AnswerConnect (Natalie Ruiz, 2023) – https://www.answerconnect.com/blog/business-tips/ai-customer-service-disasters/
- Customers rage as Telstra’s online support robot fails – Starts at 60 (Mar 9, 2018) – https://startsat60.com/media/news/telstra-codi-virtual-support-system-struggling-customers-frustrated
- These AI Mistakes Are Slowly Killing Your Customer Experience – Forbes (Mar 3, 2022) – https://www.forbes.com/sites/theyec/2022/03/03/these-ai-mistakes-are-slowly-killing-your-customer-experience/
- 93% of Executives Admit Their Customer Experience Is ‘Broken’ – Press Release (WSJ Intelligence & Code and Theory, Oct 2, 2025) – https://www.prnewswire.com/news-releases/93-of-executives-admit-their-customer-experience-is-broken-302573023.html
- AI in Customer Service: Billion-Dollar Mistake When Deployed Wrong – CMSWire (Tue Sottrup, Oct 29, 2025) – https://www.cmswire.com/customer-experience/ai-in-customer-service-billion-dollar-mistake-when-deployed-wrong/
- Top 10 Chatbot Fails and How to Avoid Them – Comm100 (Mar 24, 2024) – https://www.comm100.com/blog/top-10-chatbot-fails-and-how-to-avoid-them.html