Organizations and leaders can currently pick and choose from hundreds of Artificial Intelligence (AI) platforms and enterprise solutions on the market, each solving a different gap in customer experience, services, and efficiency. All these solutions have their own approaches, pros and cons, and algorithms, aiming to enhance and simplify Customer Experience (CX) and generate better performance in the customer’s eyes. The pandemic drove all types of organizations to turn to AI, which had already been solving issues and improving experiences before the pandemic. With such a variety of options for closing the “customer gap,” we need to step back for a second and analyse how each solution will answer our customers’ needs – as well as what kinds of risks it will present for our brand and its reputation if we do not use it responsibly and carefully.
AI can reduce costs if implemented correctly – positively impacting waiting times, headcount, and operational costs at any touch point – by answering questions much faster than a human can, and often just as accurately.
AI is evolving so fast that it will become increasingly difficult to tell the difference between human and AI interactions. As the pandemic accelerates this progress, we need to think about what will really solve problems for our customers in a way that leaves our brands and reputations better off.
BCG GAMMA’s Chief AI Ethics Officer, Steven Mills, together with Elias Baltassis, Maximiliano Santinelli, Cathy Carlisi, Sylvain Duranton, and Andrea Gallego (also from BCG GAMMA), recently presented key points on the intentionality, ethics, principles, responsible use, and realities of implementing AI solutions. The figure below presents the most important aspects of launching a responsible AI initiative to ensure customer adoption.
Physical and digital experiences are merging, as are technologies like AI, the Internet of Things (IoT), ultra-sensitive sensors, and augmented and virtual reality, all of which aim to offer consumers a range of enhanced experiences. Development on all of these accelerated during 2020.
The more data insights we receive and the better we understand customers’ triggers, the more we learn about what moves companies and individuals to make a purchase or to stay with us as customers. An added benefit: we learn how to hyper-personalize their experiences. At the end of the day, many technologies depend on the same things: emotions, experience, services, and data analytics, as well as your solution’s value and its ability to solve customer issues. As the ability to digest and analyse data improves, companies will develop better algorithms to obtain a 360-degree view of the customer. The more we embrace AI-powered automation in customer service operations, the faster and more efficiently we will be able to deliver solutions and better service.
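As a toy illustration of what such a 360-degree view can mean in practice, the sketch below (in Python, with purely hypothetical field names) folds events from several touch points into a single profile per customer:

```python
# A toy sketch of building a "360-degree view": folding raw events from
# several touch points into one profile per customer. All field names
# (customer_id, channel, type) are illustrative assumptions.
from collections import defaultdict

def build_profiles(events):
    """Aggregate touch-point events into one profile per customer."""
    profiles = defaultdict(lambda: {"channels": set(), "purchases": 0, "tickets": 0})
    for e in events:
        p = profiles[e["customer_id"]]
        p["channels"].add(e["channel"])
        if e["type"] == "purchase":
            p["purchases"] += 1
        elif e["type"] == "support_ticket":
            p["tickets"] += 1
    return dict(profiles)

events = [
    {"customer_id": "c1", "channel": "web", "type": "purchase"},
    {"customer_id": "c1", "channel": "phone", "type": "support_ticket"},
    {"customer_id": "c2", "channel": "app", "type": "purchase"},
]
print(build_profiles(events))
```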
What would we like to happen in the future, and what is currently happening? The following are three areas of focus.
1. The imperative to avoid invasive technologies
Ethical concerns about how AI is used make technological risk pervasive. According to a recent report by Futurum Research, the challenge lies in balancing the need to extract data for analysis against the implementation of technologies that can be perceived as intrusive in the customer’s life. The report found that 47 percent of consumers said asking an AI assistant (like Alexa or Siri) for help might be a good way to get information faster, but 38 percent admitted to having concerns about personal exposure and dealing with the technology. Certain technologies that can be part of an immersive experience or living services (as mentioned in my previous article) can also be problematic. In B2B software solutions, we encounter exactly the same issues. The difference – if any – is that the technologies are used to lead our company forward, rather than for purely personal reasons.
2. The line between limits, ethics, and the lack thereof
Where should we draw the line for AI, data analytics, and data collection? I cannot point to a definitive answer, because every country is so far treating the question in a different way. As these intrinsically connected technologies evolve – robotics, machine-to-machine communication, drones, IoT, AI, data analytics, and more – we need to define the lines for everyone. It’s like a nuclear agreement: if only the USA and Russia sign it, all other nuclear powers will still be free to do whatever they wish.
For years, AI has been successfully used in a variety of sectors, from chatbots to the automotive industry. The easier question to answer would be: Where is AI not being used yet?
The challenge is how to place ethics above boardroom priorities that put profit ahead of people. I wish I could say this is a thing of the past, but many companies still operate this way, even if it’s not intentional. I am not saying that profit is not, or should not be, the aim of a business. I mean that fairness toward customers should be placed above any self-interest. Profit is often the result of substantial, ethical work on behalf of employees, customers, partners, and stakeholders. Unfortunately, we still have companies that unintentionally (?) break customer trust, and in some cases were even aware of the issue. This can seriously undermine any past efforts to develop a brand people trust. Even if your CX is perceived as great, with one breach of trust you will lose your key assets: employees and customers.
Customers often love AI and machine learning (ML). Businesses and customers alike reap the benefits of these complementary technologies. What we all want – based on a recent BCG survey – is something quite simple.
The public wants to see responsibility and principles, along with training programs, to ensure that everything is built on a secure framework that can be accepted and adopted by all. You can see this in the graph below:
That’s not much to ask of a technology that brings increasing benefits to humans and businesses. There are several ways to tackle this issue, but the most important one is rooted in the culture of your organization. Culture is still a key part of your company, since it helps your employees and leaders to properly communicate the company’s values and principles, to develop an execution plan for them, and then to execute it. The same applies to AI. You need a leader to drive AI in your organization, just as your C-suite drives other domains through the CIO, CMO, and CCO. The AI or business intelligence leader should define and propagate a culture of AI ethics across the company, live and preach ethics, and establish the right teams and governance, alongside the right tools, to manage the company’s principles. Another – but much more complex – solution would be a global agreement among the countries currently leading the use and development of AI. But we are still far from that.
The main outcome of AI governance and a leadership-driven program is a framework that guides the development of services and products under certain pre-defined rules. This would close the chasm between leadership, R&D leaders, designers, customers, and other employees. One example of an organization working in this area is RIKEN – one of the largest research institutions in Japan – which is currently discussing these precise topics in order to manage the balance between Japanese customers and companies.
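To make the idea of pre-defined rules concrete, here is a minimal, illustrative Python sketch of how such rules might be encoded as automated release gates; the rule names are my own assumptions, not RIKEN’s or any published standard:

```python
# A minimal sketch of encoding pre-defined governance rules as
# machine-checkable release gates. The rule names are illustrative
# assumptions, not a published standard.
REQUIRED_CHECKS = {
    "ethics_review_completed",
    "bias_audit_passed",
    "ai_statement_published",
    "appeal_process_defined",
}

def release_gate(product, completed_checks):
    """Allow release only if every pre-defined governance rule is satisfied."""
    missing = REQUIRED_CHECKS - set(completed_checks)
    if missing:
        print(f"{product}: blocked, missing {sorted(missing)}")
        return False
    print(f"{product}: cleared for release")
    return True

release_gate("chatbot-v2", {"ethics_review_completed", "bias_audit_passed"})
```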
3. Defending employee and customer interests
Now imagine that ethics prevail. Imagine the present and the near future, in which AI can already decide certain things, and a bit further out, when AI will decide perhaps much more. What happens if the wrong decision is made? Based on an initial framework, problems of that nature could be mitigated up front by establishing procedures that enable employees, customers, and partners to contest decisions they believe to be biased, unfair, or even discriminatory. As companies increasingly adopt these technologies and related solutions, customers should demand transparent AI policies and AI statements to ensure a fair way of doing business. What needs to happen to ensure transparency and fairness?
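As one way to picture such a procedure, here is a minimal Python sketch of a contestable decision record; the fields and the appeal flow are hypothetical, not a reference implementation:

```python
# A minimal sketch of a contestable decision record, assuming a simple
# in-memory model. Field names and the appeal flow are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass
class DecisionRecord:
    """An auditable record of one automated decision."""
    subject_id: str            # the customer or employee affected
    model_version: str         # which model produced the decision
    inputs: dict               # the features the model actually saw
    outcome: str               # e.g. "approved" / "declined"
    explanation: str           # the human-readable reason given to the subject
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    contested: bool = False
    review_note: Optional[str] = None   # filled in by a human reviewer on appeal

def contest(record, reason):
    """Flag a decision for human review; the original record is kept for audit."""
    record.contested = True
    record.review_note = f"pending human review: {reason}"

# Example: a customer appeals an automated credit decision.
decision = DecisionRecord(
    subject_id="customer-123",
    model_version="credit-scorer-2.1",
    inputs={"income": 52000, "tenure_months": 18},
    outcome="declined",
    explanation="Tenure below the 24-month threshold.",
)
contest(decision, "Tenure was calculated from the wrong start date.")
print(decision.contested, decision.review_note)
```

Keeping the original record intact while flagging it for review matters: the audit trail is what lets a customer, or a regulator, see exactly what the system decided and why.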
The paradox of ethics in AI
As I was writing this article – which is based on a conversation I recently had with my largest customer, a South Korean enterprise with a large line of products that depend on AI in one way or another – I came to understand that this was not only my concern, but also that of the European Parliament. The concern is how to develop and implement technologies that ensure the protection of customers and employees, as well as the transparency of companies.
As we consider this topic of AI and transparency to customers, I would like to pose the following questions:
- How will companies ensure that consumers are protected from unfair and discriminatory commercial practices, as well as from risks caused by AI-driven solutions and services?
- How will businesses guarantee greater transparency in these processes?
- How will businesses ensure that only high-quality and unbiased datasets are used in algorithmic decision systems? (One simple dataset check is sketched after this list.)
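To make the third question concrete, here is a minimal, illustrative Python sketch of one simple dataset check – the demographic parity gap; the field names and the threshold are assumptions, and real audits would combine several complementary metrics:

```python
# A minimal sketch of one common (and deliberately limited) dataset check:
# the demographic parity gap, i.e. how much the positive-outcome rate
# differs across groups. Field names and the 0.10 threshold are
# illustrative assumptions; real audits combine several metrics.
from collections import defaultdict

def demographic_parity_gap(records, group_key, label_key):
    """Largest difference in positive-label rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
gap = demographic_parity_gap(data, "group", "approved")
print(f"parity gap: {gap:.2f}")  # 0.33 here; one might flag anything above 0.10
```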
“We have to make sure that consumer protection and trust is ensured, that the EU’s rules on safety and liability for products and services are fit for purpose in the digital age,” said Belgian Greens/EFA member Petra De Sutter, chair of the Internal Market and Consumer Protection Committee.
My question for you: how would you define a framework that would protect customers and employees? What have I missed? I hope you enjoyed this article and will share it, too.
Merry Christmas and a Happy and Prosperous 2021! Stay safe!