AI is calling: Is your contact centre ready for consumer bots?
Industry experts in EMEA weigh in on the impacts of the new AI customers

AI customers are here, and contact centres are already feeling the impact. A recent TTEC Digital whitepaper examined how bots can place calls, navigate IVRs, request account actions, and escalate complaints, often faster and more persistently than humans. The next “caller” may be software acting on someone’s behalf, and contact centre leaders need to decide now how to verify it, serve it, and keep customers safe.
In the ECCA webinar, “AI Is Calling: Is Your Contact Centre Ready for Customer Bots?,” a panel of expert customer service leaders explored how bots are changing customer interactions and what leaders should do next. The themes from the webinar focused on balancing convenience with safety, building governance, updating verification, and preparing frontline teams.
The expert panel
Hosted by Stephen Yap, Research Director, CCMA UK, who framed the core questions around the operational, legal, and people impact of AI customers, the session featured:
- Andy Cook: Head of Colleague & Customer Success at AXA Health
- Julie Kay: Service Optimisation Consultant
- Marko Ivanovic: Director of Digital and Data at Haleon
- Wayne Kay: Vice President of Sales Leadership, EMEA, at TTEC Digital
AI customers are already here
The panel agreed that AI bots are already calling businesses, gathering information, attempting transactions, and pushing issues toward resolution. This is not a “someday” roadmap item. It affects call flows, identity and access, fraud controls, and how teams work.
A key point was scale: as more people delegate routine tasks to AI tools, contact centres should expect higher volumes of automated interactions. Teams built only for traditional callers risk being overwhelmed by the speed, persistence, and repetition of AI.
Convenience vs. safety: The defining trade-off
The discussion kept returning to a hard balance: customers want fast, low-friction service, but contact centres can’t trade away protection to deliver speed.
Andy Cook emphasized planning for worst-case scenarios, then designing controls, escalation paths, and fail-safes before businesses interact with AI callers.
Stephen Yap added that heavy-handed restrictions could create new friction for customers who use AI for accessibility or convenience. The practical takeaway: make deliberate “allow / limit / block” decisions by journey, not a single blanket rule.
Accountability is already here, even if rules evolve
Marko Ivanovic highlighted that regulations, such as the EU AI Act, may not define every bot-to-human pattern yet, but expectations are moving toward transparency, disclosure, and accountability for AI deployed by organizations.
In practice, that means governance can’t be an afterthought. It needs to show up in vendor selection, testing, monitoring, and incident response. Governance needs clear ownership, especially in high-risk scenarios where privacy, fraud, or customer harm is possible.
Frontline operations will change
Julie Kay focused on what AI callers mean for human agents. Human-to-human service is often built on empathy and rapport. Bot-mediated interactions may require more structure, with clearer language, tighter confirmations, and disciplined documentation to reduce misunderstandings and ensure consistent outcomes.
That shift will affect:
- Training and coaching: New scripts and escalation habits must account for AI callers.
- QA scorecards: Clarity and policy compliance will become even more important.
- Hiring and skills: Precision and judgment will be critical in handling ambiguous situations.
- Metrics: Teams will need to redefine “good” when interactions are delegated or partially automated.
Verification and policies are especially important for vulnerable customers
Wayne Kay reinforced the need to strengthen detection and verification. If AI callers fail to reliably identify themselves, contact centres need layered checks that don’t depend on self-disclosure.
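Layered checks of this kind can be thought of as combining several weak signals rather than trusting any single one, with self-disclosure treated as just one input. As a rough sketch, the signals, weights, and threshold below are illustrative assumptions, not anything described by the panel:

```python
# Hypothetical layered check: combine weak signals instead of relying on
# the caller declaring itself a bot. Weights and the threshold are
# illustrative assumptions, not a tested detection model.
SIGNAL_WEIGHTS = {
    "declared_ai": 1.0,          # caller self-identified as an AI agent
    "fast_ivr_navigation": 0.4,  # menu choices faster than human reaction time
    "synthetic_voice": 0.5,      # voice-analysis flag from the telephony stack
    "repeat_call_pattern": 0.3,  # many near-identical calls from one source
}

def ai_caller_score(signals: set) -> float:
    """Sum the weights of the signals observed on this call."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in signals)

def route(signals: set, threshold: float = 0.6) -> str:
    """Send the call to a bot-aware flow once the layered score crosses the threshold."""
    return "bot_aware_flow" if ai_caller_score(signals) >= threshold else "standard_flow"
```

The point of the layering is visible in the last case: no single behavioural signal is decisive, but two together can be.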
The panel also came back to a critical priority: supporting vulnerable customers safely and consistently.
Used well, AI can expand access and make it easier for people to get help. However, it needs clear guardrails. A blanket ban on AI agents could limit support for those who benefit from delegation, while allowing AI callers without controls could introduce new risks.
The path forward is policy: define what an AI agent can do, how delegated authority is verified, and how decisions and outcomes are documented.
What leaders can do now
Across the panel, the message was consistent: don’t wait for perfect clarity. Start cross-functional planning across CX, operations, risk, legal, and IT.
A practical plan:
- Map high-risk journeys such as payments, account changes, complaints, and vulnerable customer scenarios
- Set “allow / limit / block” policies by risk tier
- Modernise verification beyond knowledge-based questions
- Update agent training and QA for bot-aware workflows and escalation
- Establish governance with monitoring and incident response
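One way to make the “allow / limit / block” step of the plan concrete is a small per-journey policy table that also records required verification checks and an audit trail. The journey names, check names, and decisions below are hypothetical placeholders, not policies recommended by the panel:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical policy table: for each journey, what a delegated AI agent
# may do and which verification checks must pass first.
POLICY = {
    "balance_enquiry":  {"action": "allow", "checks": ["account_match"]},
    "address_change":   {"action": "limit", "checks": ["account_match", "otp_to_owner"]},
    "payment_transfer": {"action": "block", "checks": []},
}

@dataclass
class Outcome:
    journey: str
    decision: str
    audit: List[str] = field(default_factory=list)  # documented reasons for the decision

def evaluate(journey: str, passed_checks: set) -> Outcome:
    """Decide allow/limit/block for a delegated AI caller and record why."""
    rule = POLICY.get(journey, {"action": "block", "checks": []})  # default-deny unknown journeys
    out = Outcome(journey, rule["action"])
    if rule["action"] == "block":
        out.audit.append("journey blocked for AI agents by policy")
        return out
    missing = [c for c in rule["checks"] if c not in passed_checks]
    if missing:
        out.decision = "escalate_to_human"
        out.audit.append(f"verification incomplete: {missing}")
    else:
        out.audit.append("all required checks passed")
    return out
```

Defaulting unknown journeys to “block” mirrors the panel’s advice to plan for worst-case scenarios first, then relax controls deliberately.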
The AI customer is here. Is your team ready? Early adopters who act now on detection, governance, agent training, and operational playbooks will gain a real edge: stronger protection, smoother experiences, and less disruption as AI-driven calls ramp up.
Want a practical AI roadmap?
Download the "AI Is Calling" whitepaper for real-world scenarios and clear next steps.