
AI projects fail due to trust


Emerson Taymor

September 30, 2024

AI is everywhere these days. The hype is off the charts, especially around tools like ChatGPT and other generative AI technologies. There’s no denying that the excitement is real. 

But if you look at the numbers, this hype has not necessarily translated into real value. Research from the Reuters Institute shows that while many people have experimented with ChatGPT, it hasn’t become part of most people’s daily, or even monthly, lives.

Reuters Institute research on ChatGPT usage by country

The same holds true in the enterprise world. Bain Consulting’s recent research highlights this perfectly. Between October 2023 and February 2024, they tracked enterprise AI adoption rates. Initially, many companies were tinkering with AI—testing out proof-of-concept (POC) versions of generative AI solutions. But when it came time to move these solutions into production, the numbers took a nosedive. In fact, production deployments actually decreased during this period, which is concerning.

Bain Consulting data on enterprise adoption rates of GenAI in development vs production

Why do AI projects fail?

Harvard’s research indicates that 80% of AI projects fail. The ultimate reason is trust.

When Bain Consulting asked executives what was holding them back from moving faster with generative AI, the top reasons fell into three buckets: trust, talent, and data readiness. 

Bain Consulting Survey

BCG’s research backs this up as well, with seven out of the top ten concerns about generative AI revolving around trust issues.

BCG Consulting Survey

Trust Issues in AI: The Main Culprits

So, what’s stopping businesses from fully embracing AI? Here are the most common trust concerns:

  1. Traceability: Many AI systems are black boxes. Without understanding where the data comes from or how conclusions are drawn, companies are hesitant to trust AI with important decisions.
  2. Factual Inaccuracy: AI can hallucinate or provide factually wrong information. No one wants to make critical decisions based on faulty data.
  3. Privacy: Concerns about data privacy and potential breaches loom large. Companies are understandably cautious about how their and their customers’ data is handled.
  4. Regulatory Hurdles: Navigating the legal landscape around AI is tricky. Fear of non-compliance adds another layer of complexity.
  5. Bias: AI models can inadvertently make biased decisions, leading to unintended and sometimes harmful outcomes.
  6. Unreproducible Outcomes: AI can be a bit of a mystery. When models produce inconsistent results, it’s hard to build trust.

So, What Do We Do About It?

Jensen Huang, NVIDIA’s CEO, nailed it when he said,

“Trustworthiness is a fundamental property of our technology.”

Building trust isn’t just a buzzword; it’s essential. And at the heart of trustworthy AI are six core principles:

The Six Principles of Trustworthy AI

  1. Speed: AI can supercharge the product development process, allowing for more experiments and faster execution. The faster we iterate, the quicker we find what works. You can find specific tools and tips on this in our AI-powered product development playbook.
  2. Small Wins: Start small and build momentum. Many AI projects fail because they aim too high right out of the gate. Focusing on “singles” rather than “home runs” creates a series of small successes that gradually build trust.
  3. Privacy First: Privacy concerns are at the heart of trust issues. Prioritize obfuscating user data before sending it to large language models (LLMs). Services like Private AI can help de-identify data in real time, ensuring privacy is always front and center (a minimal redaction sketch follows this list).
  4. Explainable AI: Ditch the black box. We build trust by making AI systems transparent and user-friendly. Design patterns should be easy to understand, and testing these designs with end-users is crucial for building confidence in the system.
  5. Model Agnostic: Don’t get locked into a single AI model. Today, OpenAI’s ChatGPT might be the best; tomorrow, it could be Meta’s LLaMA or Google’s Gemini. Stay flexible and always choose the best model for your specific needs (the second sketch after this list shows one way to keep that flexibility).
  6. Expert in the Loop: AI isn’t ready to fly solo just yet. Always have a human expert review AI outputs before they reach customers. This human-in-the-loop approach adds an extra layer of security and trust.
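
To make “Privacy First” concrete, here is a minimal sketch of de-identifying a prompt before it ever leaves your systems. The regex patterns, placeholder labels, and the redact_pii and ask_llm helpers are illustrative stand-ins, not Private AI’s actual API; a dedicated service covers far more entity types and edge cases than a few regexes.

```python
import re

# Minimal privacy-first sketch: strip recognizable PII from a prompt before it
# is sent to any LLM. Patterns and labels here are illustrative assumptions,
# not a production-ready scrubber.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything that looks like PII with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ask_llm(prompt: str, call_model) -> str:
    """Scrub the prompt first, then hand it to whatever model client you use."""
    return call_model(redact_pii(prompt))

if __name__ == "__main__":
    raw = "Customer jane.doe@example.com (555-867-5309) is asking about her invoice."
    print(redact_pii(raw))
    # Customer [EMAIL] ([PHONE]) is asking about her invoice.
```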
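
Staying model agnostic is mostly an architecture decision: put a thin interface between your product code and whichever provider is best today. The ChatModel protocol and the adapter classes below are hypothetical names used for illustration; the point is that the calling code never imports a vendor SDK directly, so swapping providers is a one-line change.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface product code depends on; providers live behind it."""
    def complete(self, prompt: str) -> str: ...

class OpenAIChat:
    """Illustrative adapter; wire this to the vendor SDK you actually use."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the hosted model's client here")

class LocalLlamaChat:
    """Illustrative adapter for a self-hosted open-weights model."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call your own inference endpoint here")

def summarize_ticket(ticket_text: str, model: ChatModel) -> str:
    # Product code only knows about ChatModel, so moving from one provider to
    # another is a change at the call site, not a rewrite.
    return model.complete(
        f"Summarize this support ticket in two sentences:\n{ticket_text}"
    )
```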

Two More Keys: Governance & Change Management

Aside from these principles, two other critical components can help build trust in AI projects: governance and change management.

  1. Governance: The size and scope of your governance team should align with the risk level of your AI projects. Higher-stakes projects may require larger, more specialized committees. This governance body should operate outside the core team, offering an objective layer of oversight.
  2. Change Management: Let’s face it—AI is still a bit scary. To catalyze adoption, you need to get your teams comfortable with AI. Encourage them to play with tools like ChatGPT or other free online models to build familiarity. Regularly socialize wins and learnings within the organization. This helps build momentum and fosters an AI-ready culture.

The Path Forward

Yes, trust in AI is a complex issue. But by following these principles and adopting a thoughtful approach to governance and change management, you can break down those barriers. Even though over 50% of executives discourage their teams from using AI, there is a path forward. By building trust, showcasing small wins, and focusing on explainable, privacy-first models, we can unlock the real value of AI for both businesses and their customers.

Watch our webinar to go into more depth

So, while the journey to trustworthy AI might seem daunting, it’s entirely possible—and worth every step. Let’s go out there and build an AI future that everyone can trust.
