Trustworthy AI: From Pilot to Production Success
NotebookLM
Episode 2
16 minutes
In this episode, we dive into the world of Trustworthy AI and uncover why most AI projects struggle to move from pilot to production. Learn about the six key principles for building trust in AI, including speed, small wins, privacy, explainable AI, model agnosticism, and the importance of having an expert in the loop. Featuring insights from real-world case studies like our work with Royal Caribbean and InfoBeans’ AI-driven product development tool, Expona, this episode offers actionable strategies for implementing AI solutions that are not just effective but also trusted. Discover how to align AI projects with business goals, ensure data privacy, and create a collaborative human-AI relationship that drives successful adoption. Whether you’re an AI enthusiast or a business leader looking to implement AI, this episode will guide you on your journey to building AI systems people trust.
Transcript
All right. So this week we’re diving deep into trustworthy AI. And wow, you guys sent in some really
interesting stuff. A webinar transcript, a slide deck, both about launching successful AI projects. Yeah, it
seems like everybody is trying to escape pilot purgatory these days. Right? We’re done with the
experiments. Time to make AI work in the real world. Absolutely. And the timing couldn’t be better.
What’s fascinating is there’s this massive wave of enthusiasm around AI right now. Oh, yeah. Totally.
Everyone’s talking about it. But here’s the thing. The actual adoption rate, it’s telling a different story.
Yeah, it’s like everyone’s window shopping, but very few are actually buying. That’s a great way to put it.
The data from Bain Consulting really jumped out at me. Only about 30% of companies are actually using
AI in production. It’s a pretty big gap. And those charts, those charts on slides eight, nine and ten from
the deck, those really show that difference between like the hype and then the reality. Exactly. And it
highlights a really important point. It’s not just about the tech itself. You know, one of the biggest
roadblocks is this simple fact. People don’t use what they don’t trust. Yeah. Makes sense? Like, if it feels
risky, why would you use it? Exactly. It’s funny. There’s that, um, that image on slide one. Did you see
that one with the. Oh, yeah. The elephant. The little lock on its leg? Yes. Such a powerful visual. Right.
It’s like this big, powerful thing held back by this tiny little lock. And it just, it makes you realize it’s not
always about brute strength. It’s about perceived security. Yeah. And it’s the same with AI. You know, we
can geek out about algorithms all day long, but if users don’t trust the system, it’s a non-starter. Yeah. It’s
DOA, it’s like, what’s that old saying? Like, trust takes years to build, seconds to break, and forever to
repair. Right? With AI, it’s like you might not even get to the repair phase if that trust is broken early on.
People like, nope, I’m out. Absolutely. And that’s why understanding the reasons behind this lack of
trust is so crucial, right? If we can get at the root of that, then we can start to figure out how to build
systems that people actually trust and use, right? Yeah. So it’s not enough to just build it. Yeah. You’ve
got to make sure that people actually believe in it and are going to use it. Exactly. And if you look at say
like, uh, slide 12, it actually breaks down some of the main concerns that companies have. Like, you see
a lot of hesitation around things like tech readiness, you know, is this thing actually ready for prime
time and then data quality garbage in, garbage out. Right, exactly. And then, of course, there’s that
overarching lack of trust that just kind of permeates everything. Yeah. If you don’t trust it, you don’t trust
it. And I feel like slide 13 really drives that point home. They talk about this lack of trust in what they’re
calling the three T’s. Have you heard this? I have, yeah. Trust, talent, and data. Yeah. It’s like that three-legged
stool. You know, if one of those legs is wobbly, the whole thing collapses. Exactly. Okay, so we know
trust is important, but how do we actually, like, build it when it comes to something as complex as AI?
That is the million dollar question, right. And thankfully, this webinar actually lays out some really
practical steps and key principles for building trustworthy AI, and I’ve got to say, some of these were real
aha moments for me. Yeah, for me too. Let’s dive in. What was the first one? Okay. So principle number
one is speed. And I know what you’re thinking: isn’t AI all about speed? But it’s not just about being fast.
It’s about using AI to accelerate our work, not necessarily replace it. Right. It’s about working smarter,
not harder. And if you think about it, even those earlier AI models, they were already showing significant
time savings. I mean, Cornell did some research on this, and they found that on certain tasks you could see
time savings of like 15%. Wow. That was years ago. Exactly. Imagine the potential now. Right. So it’s
like, let’s use this tool to free up our time and our brainpower so that we can focus on the things that
really require that human touch. Absolutely. Right. Like, who wouldn’t want to get like 15% of their day
back? Sign me up, I know, right? And speaking of shortcuts, I did love that the webinar actually offers,
like, an AI-driven product development playbook at the end. Oh yeah, I was going to say that’s in the
show notes. Everyone go get that. Download that right now. It’s like a cheat sheet for building better
products faster. But you know, speed without direction that can lead to disaster. Totally. It’s like easy to
go off the rails if you’re not careful. Exactly. And that’s where principle number two comes in. Small
wins. Okay. So don’t go for the moon landing on day one. I like it. Start small. Exactly. Don’t try to boil the
ocean. Yeah. Focus on those achievable projects. The low hanging fruit, the things that you can get done
quickly and that can demonstrate, you know, return on investment and really build that internal
confidence. Yeah. Because if you can prove it works on a small scale, yeah. Then it’s easier to get buy in
for those bigger, more ambitious projects down the road. Right, 100%. And too often you see these
projects, they get bogged down in those overly ambitious goals right out of the gate. And then what
happens? They stall out. They never even get off the ground. Yeah. It’s demoralizing. You need those
early wins. Yeah. Okay. So we’ve got speed. Small wins. What’s next? All right, principle number three.
And this one’s huge. Especially in today’s world. Privacy is king. Oh, absolutely. Yeah. You can’t talk
about trust without talking about data privacy these days. No way. It’s paramount. It’s not just good
practice. It is essential for building and maintaining trust. And you know what really drove this home for
me was that anecdote about the Royal Caribbean project. Oh, yeah. Okay. So for those who haven’t
gotten to that part of the webinar yet, they used facial recognition to speed up, get this, the whole
car-to-bar process for cruise passengers. Can you imagine? You get off the ship, get scanned, you go
straight to the bar, 500% faster, they said. Ambitious, yes, but not without its challenges, you know, especially when
you consider like, the compliance requirements around sensitive data. We’re talking passports. Visas?
Yeah. You don’t want to mess around with that stuff. No, not at all. But they were smart about it. If you
look at slide 46, it actually outlines their approach. They worked really closely with TSA and customs,
implemented, like, really robust data security measures. They even had automatic deletion of unused
photos. So they were really thinking ahead. Yeah, they’re like, let’s anticipate every concern and address
it head on. Exactly. Because they knew a single misstep with that kind of data would be a disaster, both
for user trust and their brand reputation. Totally. Okay. So we’re cruising along here with these principles. What’s
number four? Number four is tackling that infamous black box problem. It’s explainable AI. Okay. So it’s
not enough for AI to be right. It’s got to be able to explain why it’s right. Exactly. Transparency builds
trust. It can’t just be magic. People need to be able to see the logic underneath. And Google’s done a lot of
research on explainable AI and they emphasize these five key principles. One, you need clear
expectations. Two, user-friendly explanations. Three, smooth onboarding. Four, transparency into the
decision-making process. And five, it’s all about building that user confidence. Yeah, I feel like it’s that whole show
your work thing, but for AI, it can’t just be like, here’s the answer. Trust me, you’ve got to show how you
got there. Exactly. Which brings us very nicely to principle number five, being model agnostic, because
in the world of AI getting too attached to one specific model, that is a recipe for trouble. Yeah. Especially
since it’s changing so fast. Didn’t, uh, Ethan Mollick, didn’t he say something like, this is the worst AI
you’ll ever use? He did. Isn’t that great? Because it’s true. There’s always something newer, faster, better.
Just around the corner. Always. It’s like that image. Was it the cheetah? Slide 35. Slide 35. Yeah. That
cheetah. Always evolving, always adapting, always on the move. AI’s kind of the same way. It is. And
that’s why you got to design your systems, you know, to be flexible, to be able to integrate those new
models as they emerge. You don’t want to be stuck with something that’s like obsolete a year from now.
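The model-agnostic design described here can be sketched in a few lines. This is a hypothetical illustration, not code from the webinar: the adapter names and stub backends are made up, and a real system would call vendor SDKs behind the same interface.

```python
# A minimal sketch of a model-agnostic design: application code depends
# only on a small uniform interface, so a new model can be swapped in
# without rewriting the rest of the system.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModelAdapter:
    """Wraps any model behind one uniform call signature."""
    name: str
    generate: Callable[[str], str]


# Each backend is just a function from prompt to text. These stubs
# stand in for real vendor SDK calls.
REGISTRY: Dict[str, ModelAdapter] = {
    "model_a": ModelAdapter("model_a", lambda p: f"[model_a] {p}"),
    "model_b": ModelAdapter("model_b", lambda p: f"[model_b] {p}"),
}


def answer(prompt: str, model: str = "model_a") -> str:
    # Switching horses midstream is a one-line config change,
    # not a rewrite.
    return REGISTRY[model].generate(prompt)


print(answer("Summarize the release notes."))             # routed to model_a
print(answer("Summarize the release notes.", "model_b"))  # routed to model_b
```

The point of the pattern is that when something newer, faster, better comes along, only a new entry in the registry changes; everything calling `answer` stays the same.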
Yeah. Don’t be afraid to, like, switch horses midstream if it means getting to the finish line faster. Okay,
onto the final principle. Number six, expert in the loop. So even with all these advancements, it seems like
we’re not quite ready to hand over the reins to AI completely. And that’s a good thing, right? The goal
here is not for AI to replace us, but to augment our capabilities. It helps us work smarter, more
efficiently. But it doesn’t replace that human element. It’s that human-plus-machine superhuman idea. Exactly.
And that PwC Google project is a great example of this. They used AI to assist dispatchers, not replace
them. So it streamlined their workflow, freed them up for more strategic tasks, but they still had that
human oversight. It wasn’t just like, okay, you handle all the calls now. Exactly. And you know what? I
bet that probably addressed some job security concerns in the process. It’s always good to see AI being
used to enhance jobs, not eliminate them. Right, exactly. It’s about collaboration, not replacement. So
we’ve got the six principles for building trustworthy AI: speed, small wins, privacy, explainability, model
agnosticism, and expert in the loop. Sounds great in theory, but how does it actually work in the real
world? That’s what I’m saying. Let’s dive into these real world case studies. See these principles in action.
First up, Royal Caribbean. Yes, the car-to-bar cruise ship challenge. Talk about ambitious. I mean, that
image on slide 42 of that massive ship. Yeah, that’s a customer experience you want to get right. And
they did. And you know what? They did it by following those six principles. So first, speed. They had a
proof of concept ready in a week, a live test in four weeks. A week? A week. That’s insane. Talk about
impressive. That’s how you make a statement. Exactly. And it’s not just about being fast. It’s about using
that speed strategically, you know, to get buy in from those stakeholders to build confidence. Right.
Because if you come in and you’re like, yeah, we’ve got this AI thing, but then it takes like a year to
develop, people are going to lose interest. But if you can show them something tangible quickly, that’s
huge. Totally. Those early playtests were crucial. Yeah, that’s huge for getting people excited and getting
them to buy in. Absolutely. And ultimately, you know, those playtests paved the way for that bigger
press event. You know that splash they made? It’s a perfect example of small wins, big impact. Just like
building a house. You know, you don’t start with a roof. You lay a solid foundation first. Love that
analogy. And speaking of foundations, let’s talk about privacy because they weren’t messing around when
it came to principle number three. Oh no, not at all. Especially with that whole like sensitivity around
passport and visa data. I mean, they knew they had to be extra cautious with that. Yeah. One misstep
there and it’s game over. But it sounds like they really were on top of it. They were. They knew the stakes.
Working with TSA and customs, offloading that data to secure environments, even having automatic deletion
of unused photos. Smart. They really dotted their i’s and crossed their t’s. They did, because a privacy
breach in that situation? Nightmare. Total PR nightmare. Okay, so they’re fast. They’re smart about data.
But can we talk about that God mode thing for a second? Oh yeah, that was cool. Explainable AI taken to
the next level. It was. It’s like they gave their team, you know, that ability to actually go in and
understand the AI’s decision making process and, if necessary, even override its choices. Right. It wasn’t
just like, okay, AI, you’re in charge now. Yeah. There was still that human oversight, which I think is so
important. Oh, absolutely. 100%. It builds trust, right. You know, the AI is not just out there making
decisions in a vacuum. Exactly. And it ensures that, like, ultimately the AI is serving the passengers, not
the other way around. Okay. So speed, privacy, explainability. They nailed it. But they didn’t forget about
principle number six either. The expert in the loop. Nope. They really thought about the whole user
journey, not just the tech itself. They made sure that like even if you didn’t use the app as a passenger,
there were still alternative options. You know, they had that human backup just in case. Smart. So even if
the technology hiccups, there was still that safety net. Exactly. And it sounds like from what they shared,
it was a huge success. Yeah, a real win for Royal Caribbean. Okay. So that was a pretty epic example.
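The expert-in-the-loop pattern running through this case study — the AI proposes, a person can inspect, approve, or override before anything happens — can be sketched like this. This is a hypothetical illustration, not Royal Caribbean's actual system; the names and fields are made up for the example.

```python
# A minimal sketch of "expert in the loop": the AI's output is only a
# suggestion until a human reviewer signs off, and a human edit always
# wins over the AI's proposal.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Suggestion:
    ai_output: str
    human_override: Optional[str] = None  # set when the expert edits
    approved: bool = False                # set when the expert signs off


def final_decision(s: Suggestion) -> str:
    """Nothing ships without human approval; the override takes priority."""
    if not s.approved:
        raise PermissionError("Awaiting human review")
    return s.human_override if s.human_override is not None else s.ai_output


# The expert approves the AI's suggestion as-is...
s = Suggestion(ai_output="Route passenger to expedited lane")
s.approved = True
print(final_decision(s))

# ...or overrides it, and the human choice is what goes out.
s2 = Suggestion(ai_output="Route passenger to expedited lane",
                human_override="Route passenger to manual check",
                approved=True)
print(final_decision(s2))
```

The design choice worth noting is that the unapproved path raises rather than falling back to the AI's answer, which is what keeps the human, not the model, in charge.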
What about a slightly smaller scale project? Expona, the AI-driven product development tool from
InfoBeans. That one really stood out to me. Yeah, it’s a great example of how starting small, you know,
with a really focused solution can actually snowball into something much bigger, which I love. I’m all
about that iterative approach. And that’s what they did. They started with, like, one low-fidelity tool, which,
if you’re looking at the slides, that’s slide 50, and then they just gradually expanded it, you know, based on
feedback based on business needs until it became this like full suite. Yeah. Proof that you don’t have to
boil the ocean on day one. Exactly. And they were all about speed and efficiency, too. Which, I mean, who
isn’t these days, right? Like if you look at slide 51 and slide 52, you can actually see how quickly they
were able to, like, test and compare those different models. ChatGPT, Gemini, PromptHub. They tried
them all. It’s like speed dating for AI, right? But it’s that agility that’s essential when you’re working with
technology that’s changing so rapidly. It is. You gotta be ready to adapt. So did they end up sticking with
one? So they ended up going with GPT-4. It seemed like the best fit for their needs, at least for now.
But the cool thing is they designed their system to be model agnostic, which means that they can basically
just like plug and play with whatever new advancements come down the line. Smart. So they’re not, like
locked into anything. Exactly. They’re playing the long game. Nice. So we talked about speed. We talked
about that small wins approach. But it also seemed like they really prioritized explainable AI. And you
know keeping that human in the loop. Oh yeah. They really struck that balance between like AI
assistance and human control. So it’s not just AI spitting out answers. No, no. They were very intentional
about making sure that humans were still making the final decisions, I like that. So how did they do that?
So they did it in a couple of ways. First, they actually had like some really thoughtful UI elements, things
like loading animations that actually communicate progress. So you’re not just staring at a blank screen
wondering what the AI is doing. And again, you can see an example of this on like slide 53. Yeah, I was
just looking at that. That makes a huge difference, right? Just those little things to make it feel less like a
black box. And then they also made sure that, like, humans always had the final say on edits, you know,
overrides that kind of thing. I like that. So it’s like AI is there to assist, to make suggestions, but
ultimately it’s still a human driven process like collaboration, not automation. I love it, it’s like they took
the best of AI, the best of human capabilities, and combined them into this, like, super-powered tool. And I
feel like slides 54 and 55 really capture that synergy. They do. And you know what? It’s a great segue
into what we really want to leave our listeners with today, because after diving into all of this amazing
research, all these inspiring examples. It’s your turn. Are you ready to take off on your own AI project?
I’m feeling so much more prepared. Let’s do it. That’s what we like to hear that checklist on slide 56 of
Key Questions to Ask before you even start. That is going straight on my wall. It’s gold. It is. Strategic
alignment, risk management, data compliance, tech integration, human impact. They’ve got it all covered
because you can have the most amazing AI in the world. But if it doesn’t align with your business goals,
if it doesn’t have that human buy in, it’s not going to succeed. And speaking of alignment, that
governance framework you were talking about earlier. Yeah. So important. Essential, especially if you’re
thinking about AI driven product development. You know, you need that independent oversight,
especially around things like data privacy. And slide 57 actually outlines a really great structure for that.
So helpful. It’s like they thought of everything they really did. And the last thing I’ll say is it’s not just
about the technical side. It’s about creating that culture. Where your team, your employees, they see AI as
a tool for empowerment, not a threat. So important because if people are afraid of it, they’re not going to
use it. Exactly. And that roadmap for change management on slide 58, with the emphasis on like having a
clear vision, celebrating those successes, that really resonated with me. It’s about bringing everyone along
on the journey. Absolutely. Which ultimately leads to, you know, greater buy in and just better outcomes
all around. And speaking of journeys, that final thought from the webinar, the one about how a computer
is just a rock that we’ve tricked into thinking. So good. It really makes you think, right? Especially when
you see that image on slide 61. That stone head with the wires. Are our anxieties about AI actually more
about us than the technology itself? It’s a fascinating question. Are we projecting our own fears about job
displacement, about the unknown onto these systems that we’ve created? I mean, it makes you wonder,
are we more afraid of what AI could become or what it reveals about us? Exactly. And the reality is the
future of AI. It’s not predetermined. It’s being written right now by the choices we make, by the values
that we embed in these technologies. So as we wrap up this deep dive, what’s the one thing you want our
listeners to take away from all of this, especially those who are about to embark on their own AI
adventures? Build trust, start small. Prioritize the human element. Remember, you’re not just building
algorithms, you’re shaping the future. I love that. Build trust, start small, prioritize people. Such a great
note to end on. And with that, this has been another deep dive. We’ll be back next week with another
exciting topic. Until then, keep learning, keep exploring, and as always, keep asking those big questions.