What the King's Speech Means for AI in Contact Centers
In today's King's Speech, the first under the new Labour Government, King Charles III announced plans for his government to “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. This could have significant implications for the contact center industry.
Jonathan Mckenzie, Senior AI Contact Center Product Manager at 8x8, described today's announcements as a positive start but believes the Government could go further.
He said: “It is promising to see the Government’s commitment to advance the UK’s Artificial Intelligence (AI) sector announced in the King’s Speech, and we’re keen to see how tighter AI regulations could shape the contact center industry.
“Whilst laxer monitoring of AI allows contact centers to deploy Large Language Models (LLMs) more freely to track customer behaviour and trends, improve efficiency and cut costs, tighter regulation would be beneficial for a number of reasons. Firstly, contact centers handle a great deal of personal and sensitive information, so the Government should consider enforcing stricter data privacy and security measures.
“Secondly, the Government should introduce guidelines for the ethical use of AI. This would prevent biases and discrimination against any customer groups in customer service interactions, which is currently a prominent concern around LLM adoption in contact centers.
“Lastly, regulating AI more closely should encourage innovation within the sector, pushing contact centers to adopt advanced AI technologies to stay competitive whilst adhering to legal standards. This in turn should drive quicker development and adoption of effective AI technologies that enhance customer experience without compromising service quality.
“Ultimately, Generative AI and LLMs have real potential to transform the contact center industry and ensure agile customer service. However, the Government needs to act swiftly with AI regulation to prevent toxic chatbot responses and abuse from becoming a greater issue.”