
Slow down the process - AI design patterns


Emerson Taymor

February 27, 2025

Slow down the process so that people trust the system. Even if your system can complete a task instantaneously, make it seem like it is working a little harder. People trust AI more when they think it is taking longer than it actually is.

Why do we need AI design patterns?

AI experiences are different from normal digital experiences: they are non-deterministic, and their behavior changes across many scenarios. People don’t trust what they don’t understand, and they don’t want their software to seem like it is rolling dice. As a result, the data shows people don’t trust AI systems. This is the first post in a series on AI design patterns meant to help you improve the outcomes and adoption of your AI experiences, all part of our Design-led AI framework.

Slow and fast thinking 

You’ve probably heard of Nobel laureate Daniel Kahneman’s model of human thinking: fast and slow thinking. It lends itself perfectly to the AI design pattern “Slow down the process.” The framework highlights two distinct modes of thinking:

  • System 1 (S1): Fast, intuitive brain processing where you make decisions based on pattern matching
  • System 2 (S2): Deliberate reasoning requiring more time to pause and reflect

If you haven’t read Kahneman’s book, “Thinking, Fast and Slow,” I suggest ordering it now.

The counterintuitive nature of speed & trust

We build technology to execute tasks and solve problems faster. The human brain is one of the most magnificent things in existence, but computers can still process information many thousands of times faster.

Technology and AI are designed to be as fast as possible. Our brains don’t always agree. Instant results can feel suspicious.

As humans, we equate time spent with effort, thoughtfulness, and quality. If your doctor gave you a diagnosis as soon as you finished sharing your symptoms, without pausing, what would you think?

You probably would be a little skeptical.

Humans subconsciously treat socially interactive technology, like chatbots, as human-like and apply human traits to it. Research on chatbots shows that slightly slowing down responses increases trust.


We can see this as far back as the early 1990s, when Coinstar launched one of the first automated coin-counting machines. The machines could count thousands of coins a second, but they were so fast that people didn’t trust the results. After early testing, Coinstar intentionally delayed showing how much money people had put in. The result? Dramatic increases in customer satisfaction and willingness to use the system.

AI design pattern: “Slow Down the Process”

Intentionally add friction or artificial delays to make users feel like the system is thinking. This “labor illusion” principle is documented as a major trust win in the research behind Benevolent Deception in Human-Computer Interaction.

The key principles behind this pattern include:

  • Perceived effort matters more than actual effort
  • Delays should be purposeful and not annoying
  • Tie delay to an explanation
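The principles above can be sketched in code. This is a minimal, illustrative TypeScript sketch, not a prescribed implementation: the step messages, durations, and the 8-second cap are all assumptions you would tune through testing. The idea is that every artificial pause carries an explanation, and the total delay is capped so the pause stays purposeful rather than annoying.

```typescript
// One "thinking" step: an explanation shown to the user, plus how long to
// linger on it. Both values are illustrative assumptions.
type ThinkingStep = { message: string; delayMs: number };

// Cap the total artificial delay so the system never feels stalled.
// If the plan runs long, scale every step down proportionally.
function paceSteps(steps: ThinkingStep[], maxTotalMs = 8000): ThinkingStep[] {
  const total = steps.reduce((sum, s) => sum + s.delayMs, 0);
  if (total <= maxTotalMs) return steps;
  const scale = maxTotalMs / total;
  return steps.map((s) => ({ ...s, delayMs: Math.round(s.delayMs * scale) }));
}

// Play the steps back one at a time, surfacing each explanation while we wait.
async function showThinking(
  steps: ThinkingStep[],
  render: (message: string) => void
): Promise<void> {
  for (const step of paceSteps(steps)) {
    render(step.message);
    await new Promise((resolve) => setTimeout(resolve, step.delayMs));
  }
}
```

In practice, `render` would update your loading UI, and the step messages would mirror what users expect the system to be doing, even if the real computation finished instantly.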

Real-world examples of this pattern in action

Field Services case study

While working on a field services platform, we discovered that one of our key personas, dispatchers, spent a lot of time planning the schedule of routes by hand. This is obviously a task that a computer is much faster (and better) at than a human.

In our early prototypes, the dispatcher would press a “Schedule job” button and we would immediately show the recommended field technician and order. In essence, our designs matched the speed of our system. But when we tested this prototype, we found dispatchers didn’t trust the results at all; they immediately started poking holes in the output, finding reasons to say it was wrong.

So for our next prototype, we slowed down the process. Even though we were able to instantly predict the right job to schedule, we paused and showed the system “thinking” through the steps.

Our chatbot, Patrick, slowed down the process and shared what was going into the decision process.

TurboTax case study

TurboTax also followed this pattern when launching its updated system in 2021, creating a “fake loader” animation that runs for eight seconds, rotates through random audit checklist items, and visualizes savings being found.

Like us and Coinstar, they found that two-thirds of people were less likely to double-check results after this 8-second verification loading state was introduced.


How to apply this in design-led AI

You should look to introduce a delay when people expect a task to take more time than your system actually needs. You can uncover this by testing product experiences for trust and watching for users who react negatively to, or are confused by, instant results.

And if you are looking at data, watch for places where people try to edit results or spend an unusually long time on the post-AI decision step.
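To make those signals concrete, here is a rough sketch of how you might compute them from a product event log. The event shape and field names are assumptions for illustration; your analytics schema will differ.

```typescript
// Hypothetical event shape: one record per AI recommendation shown to a user.
type AiDecisionEvent = {
  editedResult: boolean;        // did the user override or edit the AI output?
  secondsOnDecisionStep: number; // dwell time on the post-AI decision step
};

// Two distrust signals from the article: how often people edit results,
// and how long they linger before accepting a recommendation.
function trustSignals(events: AiDecisionEvent[]) {
  const editRate =
    events.filter((e) => e.editedResult).length / events.length;
  const sorted = events
    .map((e) => e.secondsOnDecisionStep)
    .sort((a, b) => a - b);
  const medianSeconds = sorted[Math.floor(sorted.length / 2)];
  return { editRate, medianSeconds };
}
```

A high edit rate or an unusually long median dwell time on the decision step is a hint that users don’t trust the output, and a candidate spot for this pattern.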

Best practices for the AI design pattern – “Slow down the process”

  • Add an explanation of what is happening. Even if it doesn’t directly mirror what your system is doing, think through what your users will expect to happen, and show that in your loading step instead of a default loader.
  • Use human-like pacing; for example, a chatbot pausing to think.
  • Avoid excessive delays. Trust drops and frustration rises if the user feels stalled. Determine the right balance and amount of time to slow down the process; this will depend heavily on perceived effort and require testing.
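Human-like pacing for a chatbot can be sketched simply: even if the full answer is ready instantly, stream it in sentence-sized chunks with a short pause between them. This is an illustrative sketch only; the sentence-splitting heuristic and the 350 ms pause are assumptions to tune against real users.

```typescript
// Split a reply on sentence boundaries so pauses land where a person
// would naturally pause. Falls back to the whole reply if nothing matches.
function chunkReply(reply: string): string[] {
  return reply.match(/[^.!?]+[.!?]*\s*/g) ?? [reply];
}

// Emit the reply one sentence at a time with a short, human-like pause.
async function streamReply(
  reply: string,
  emit: (chunk: string) => void,
  pauseMs = 350
): Promise<void> {
  for (const chunk of chunkReply(reply)) {
    emit(chunk);
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}
```

Here `emit` would append each chunk to the chat UI; the per-chunk pause gives the impression of a response being composed rather than dumped.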

Ultimately, the goal isn’t to slow people down arbitrarily as they pursue their goal; it is to match human expectations, which builds trust.

I will share more AI design patterns in future posts to help you get people to use your products more and improve business outcomes.

