The EU AI Act entered into force last week, on August 1st, 2024, and it marks a substantial milestone in governments’ efforts to keep their citizens and businesses safe as both the private and public sectors adopt artificial intelligence. This post details the scope and consequences of the act, especially for the regulated businesses Ushur serves with automation applications. Ushur has an international presence, so we have been weighing the act’s consequences and thinking about how our customers can best automate their CX in this context, where automation is more often driven by AI than not.
The EU AI Act (AI Act) takes a risk-based approach, categorizing each AI use case into one of four groups: unacceptable risk, high risk, limited risk, and minimal risk.
Systems used for social scoring or for behavioral manipulation toward dangerous behavior represent an unacceptable risk. There is no appetite in the EU for AI systems that emotionally manipulate people or businesses.
The next group, high-risk applications, includes systems that support critical infrastructure or law enforcement. The act describes these categories broadly rather than naming specific products, but it’s not hard to intuit the kinds of systems it intends to regulate closely.
A more common consideration, however, is how the AI Act affects applications like chatbots, which fall into the limited-risk group. Chatbots resemble many of the deployments Ushur clients are familiar with, because the Ushur CXA Platform uses automation technologies, machine learning, and generative AI to make customer experiences more fluid, flexible, and powerful. So what does the EU AI Act say about them?
Chatbots are increasingly common as an interface between brands and their customers, and the EU AI Act requires those engagements to make it clear and transparent when a customer is interacting with a machine. In other words, AI agents deployed to engage with customers can’t hide the fact that they are not real humans!
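In practice, that transparency requirement can be met by disclosing the assistant’s nature before any substantive exchange. Here is a minimal sketch of that pattern in Python, purely illustrative and not Ushur’s implementation, assuming a hypothetical send_message callback for whatever messaging channel is in use.

```python
# Illustrative only: open every chat session with a machine-interaction
# disclosure, in the spirit of the AI Act's transparency rules for chatbots.
# send_message is a hypothetical callback for the messaging channel in use.

AI_DISCLOSURE = (
    "You're chatting with an AI assistant. "
    "Type 'agent' at any time to reach a human."
)

def start_conversation(send_message, greeting: str) -> None:
    """Send the disclosure first, then the normal greeting."""
    send_message(AI_DISCLOSURE)  # disclose before any substantive exchange
    send_message(greeting)

if __name__ == "__main__":
    # Stand-in for a real messaging channel.
    start_conversation(print, "Hi! How can I help with your policy today?")
```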
The implications of the EU AI Act are interesting to consumers, but significant to businesses weighing which technology partners can or cannot help them navigate this kind of novel regulation. At Ushur, we build our chat interfaces and AI features to comply with the act and follow best practices, so customers always understand with whom they’re engaging.
Ushur has been thinking about the risks of AI services since the company’s inception, and we consider more than just the categories of risk included in the EU AI Act.
There are other risks to keep in mind, too, like the reputational risk associated with a misdirected AI solution. What does your business stand to lose reputationally if a bad actor convinces its GenAI chatbot to speak in hateful language? Or if it shows up in the news as having been manipulated for commercial gain?
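One common mitigation for that first scenario is an output guardrail that screens each generated reply before it reaches the customer. The sketch below is a simplified illustration of the idea, not Ushur’s moderation logic; the block list and the generate_reply function are hypothetical placeholders.

```python
# Illustrative only: a minimal output guardrail. BLOCKED_TERMS and
# generate_reply are hypothetical placeholders, not a production rule set.

BLOCKED_TERMS = {"example_slur", "free giveaway"}  # placeholder rules

FALLBACK_REPLY = (
    "I'm sorry, I can't help with that. Let me connect you with a human agent."
)

def moderate(reply: str) -> str:
    """Pass the reply through only if it clears a basic content check."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return FALLBACK_REPLY
    return reply

def respond(generate_reply, user_message: str) -> str:
    """Generate a candidate reply, then run it through the guardrail."""
    return moderate(generate_reply(user_message))

if __name__ == "__main__":
    # Demo with a stand-in generator that just echoes the user.
    print(respond(lambda message: f"You said: {message}", "What plans do you offer?"))
```

Real deployments layer much more than a keyword check, such as classifier-based moderation and human escalation, but the principle of inspecting output before it ships is the same.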
Consider also operational risks, like long-running AI projects stalling. Businesses spend months and millions of dollars to deploy responsive, flexible AI-first solutions, and the wrong partners, with insufficient understanding of how to deploy enterprise-grade applications, put those deployments in jeopardy.
Ushur has spent a decade operationalizing enterprise-quality automated customer experiences. We have deployed document processing models to accelerate work that depends on document-bound data, and we have leveraged conversational automation to better understand the people engaging with our platform. Ushur was founded as an automation and AI platform from day one, and that focus is built into our DNA, from the models we train and place guardrails around to the security and compliance of the platform.
Customer experiences driven by AI-first capabilities are exciting to think about, and troubling for business leaders who feel they are missing out.
When evaluating whether AI-driven projects are the right fit for your business, it’s important to think about the risks that arise during and after implementation, but don’t forget the danger of choosing not to use AI at all. Internal and external processes eventually grow unwieldy without the support of automation, and AI-first automation solutions are more flexible than deterministic solutions alone.
Compared to technologies like data storage, artificial intelligence projects are relatively new. The risks feel more substantial, but they are largely understood. The quality of a project’s outcome depends on the cleanliness and quality of its data and on advisors with real expertise. That expertise, paired with technology with proven success, makes implementing and using AI achievable.
At Ushur, we believe AI built to improve everyone’s lives should always be implemented in safe, compliant ways, and we take the necessary steps to ensure all organizations can implement CXA solutions that give their customers frictionless, engaging, and fulfilling experiences.
The EU AI Act, which has just entered into force, is an example of how the world is responding with caution to the newest and most exciting developments in artificial intelligence. Technology is advancing quickly, and legislators are developing frameworks to respond accordingly.
Businesses with an international presence must know how to respond to changes in the legal landscape, and at the very least they need close technology partners who can guide them through the evolving implications. If the risks associated with the AI that drives your customer experiences are critical to strategize around, Ushur is here to support you. With a decade of experience building, designing, and automating secure, compliant customer experiences for regulated enterprises, Ushur can serve as your close technology and digital transformation partner. Get in touch with us at ushur.com/request-demo.