How to fix the AI trust gap in your business

Although people use AI extensively for personal and business purposes, they don’t necessarily fully trust it.

For business leaders, AI is no longer optional but a competitive necessity. Embedding it across the business, from assistants to automated processes, can boost productivity and open new revenue streams.

CIOs and Chief Data Officers face the difficult job of steering their companies through digital transformation. Even as employees and customers adopt AI tools, they don’t fully trust them, raising new governance concerns and the risk of reputational damage.

Plenty of use, but not enough trust

In the UAE – which is known for quickly adopting new technology – 97 percent of people use AI for work, study, or personal life, according to a report from KPMG.

That’s one of the highest rates in the world, but the widespread use masks deep concerns. The same report found that 84 percent of people would only trust AI systems if assured they were being used in a trustworthy way, and 57 percent believe stronger regulation is needed to make AI feel safe.

Clearly, even though people are using AI a lot, trust hasn’t caught up. And it’s not just happening in the UAE.

Data from the UK shows a similar, and perhaps more mature, trust gap. KPMG found that only 42 percent of people in the UK are willing to trust AI. While 57 percent accept or approve of its use, 80 percent believe stronger rules are needed to ensure it is used responsibly.

These figures should worry business leaders. Some 78 percent of people in the UK are concerned about negative outcomes from AI, and only 10 percent say they are aware of the AI rules that already exist in the country. The data suggests that even in technologically mature markets, trust and understanding lag far behind adoption.

When 80 percent of an important market wants stronger rules, it is a signal to leaders that governance is falling short. Launching a new AI customer service tool into a market that already distrusts AI is a significant reputational risk.

The gap between AI adoption and AI trust will define the next stage of digital transformation, according to Lei Gao, CTO at SleekFlow.

“Adoption is no longer the issue; accountability is. People are comfortable using AI as long as they believe it’s being used responsibly,” explains Lei. “In customer communication, for example, users trust AI when it behaves predictably and transparently. If they can’t tell when automation is making a decision, or if it feels inconsistent, that trust starts to erode.”

For companies operating in high-adoption, low-trust markets, building trust should be the priority. Lei believes leaders must embed transparency, consistency, and human oversight into every AI interaction. That means clear governance rules, whether a company is running foundation models on AWS Bedrock, managing data in a Dell AI Factory, or deploying assistants like SAP Joule.

A plan to build AI trust in your business

To close the gap, Lei suggests three principles that turn AI strategy from a technical issue into a question of governance:

  1. Be transparent about when AI is in use. Customers and employees value honesty. Lei suggests making it clear when people are talking to AI and when a human takes over; clarity builds trust.
  2. Use AI to augment people, not sideline them. This is key to driving internal adoption and reducing resistance. Lei says AI should make people better at their jobs, not replace them.
  3. Audit AI for tone and fairness. This is where governance becomes an ongoing process. Lei notes that responsible AI use is continuous work, not something to launch and forget. Maintaining trust and compliance means regularly reviewing tone, bias, and how well the AI handles edge cases.
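The first two principles can be sketched in code. The example below is a minimal illustration, not a SleekFlow implementation; all names, messages, and the confidence threshold are hypothetical. It shows an AI reply that always discloses itself and an escalation path that hands the conversation to a human when the AI is not confident enough to answer.

```python
# Hypothetical sketch: disclose when AI is responding, and hand off
# to a human agent below a confidence threshold. Names and threshold
# are illustrative assumptions, not any vendor's actual API.
from dataclasses import dataclass

AI_DISCLOSURE = "You're chatting with our AI assistant."
HANDOFF_NOTICE = "Connecting you with a human agent."

@dataclass
class Reply:
    text: str
    source: str  # "ai" or "human" — always recorded, never hidden

def respond(draft: str, confidence: float, threshold: float = 0.75) -> Reply:
    """Send the AI draft only when confidence clears the threshold;
    otherwise escalate so the customer always knows who is replying."""
    if confidence >= threshold:
        return Reply(text=f"{AI_DISCLOSURE} {draft}", source="ai")
    return Reply(text=HANDOFF_NOTICE, source="human")

print(respond("Your order ships tomorrow.", 0.9).source)   # ai
print(respond("I'm unsure about that refund.", 0.4).source)  # human
```

Logging the `source` field on every reply also supports the third principle: it gives auditors a record of which decisions were automated when reviewing tone and fairness later.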

The UAE has proven that AI can be adopted faster than almost anywhere else, but deployment speed is no longer the best measure of success. The challenge for business leaders is to show that their AI systems are not just powerful but reliable, fair, transparent, and able to earn trust.

“The next milestone is trust and showing that automation can work in the service of people, not just performance metrics,” Lei concludes.

See also: OpenAI connects ChatGPT to enterprise data to surface knowledge


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

