The AI Hype Trap: Salesforce's Costly 'I Told You So' Moment

10xTeam · December 07, 2025 · 6 min read

There’s a unique satisfaction in saying, “I told you so.” It’s a powerful moment when you predict a bad idea, watch it unfold, and then see the inevitable backtracking. But what’s even better is when you can add a little bit of tinfoil-hat prediction to the mix.

This story is a beautiful, cautionary tale.

The Inevitable Reversal

First, let’s look at the timeline. On September 2nd, 2025, the Salesforce CEO confirmed 4,000 layoffs, attributing the decision to needing “less heads with AI.” Fast forward to December 27, 2025, and the headlines read that Salesforce regrets firing those 4,000 experienced staff after replacing them with AI. Yes, that’s right. Just a few months into the grand experiment, they realized their mistake.

The core of the issue is revealed in this statement: “Salesforce is pulling back from its heavy reliance on large language models after encountering reliability issues that have shaken executives’ confidence.”

Initially, I had a somewhat rose-colored view of these executives. Not because they were firing thousands of people and bragging about it in press conferences, but because I assumed they were at least using the AI they were promoting. I was wrong. After reading about their failures, I realized these leaders likely don’t use AI to any meaningful extent. Sure, they probably rewrite an email and marvel at the black-box magic turning corporate speak into even better corporate speak. But they aren’t using it in any real, consequential way.

The Developer’s Insight

This is where we, as developers, have a unique advantage. We have some of the best insight into AI because we use it on large projects. We have to maintain context, continuously ask questions, and navigate the back-and-forth of development. We understand the numerous pitfalls and how a single, misplaced word can send an AI spiraling into the weirdest directions.

So when I see executives expressing surprise that their AI strategy failed, I don’t see confidence; I see a lack of experience. They were told AI could solve everything. Their limited experience with text generation for corporate emails convinced them it was intelligent enough to replace people. They saw a “Hello, World!” application and thought they no longer needed software developers.

“Deterministic Triggers”: A Fancy Name for Programming

The saga with the home security company Vivint perfectly illustrates this ignorance. Vivint, which uses Salesforce’s Agentforce for its 2.5 million customers, experienced these reliability problems firsthand. Despite clear instructions to send satisfaction surveys after each customer interaction, Agentforce would sometimes fail to send them for unexplained reasons.

The solution? Vivint worked with Salesforce to implement deterministic triggers to ensure consistent survey delivery.

Let’s be clear about what “deterministic triggers” are. Back in my day, we called it programming. They are simply programming the action directly. Instead of waiting for a magic black box to maybe make a call, they are hard-coding the logic.

Note: The logic is simple. When a customer call ends, send the survey. The end.

Imagine trying to prompt an AI to send a survey based on the context of a conversation, rather than just executing a command when the call ends.

Here’s a simplified look at the two approaches:

// The "magic" AI approach they seemed to be trying
ai.processConversation(conversation).then(result => {
  // Hope the AI correctly infers that a survey is needed
  if (result.shouldSendSurvey) {
    sendSurvey(conversation.customerId);
  }
});

// The "deterministic trigger" (aka basic programming)
eventBus.on('callEnded', (call) => {
  // Just send the survey. No magic needed.
  sendSurvey(call.customerId);
});

This is maybe a few lines of code, wrapped behind a nice interface. The fact that they had to “discover” this solution shows the massive ignorance caused by the AI hype hysteria among executives. They see a hammer, use it once, and declare it the greatest tool ever, to be used for everything.

If they had tried to build something, or even just sat in on a customer call and fed it to the AI, they would have quickly seen its limitations. They could have run simple tests on previous conversations to see all the places it goes wrong. Can you handle a 5% error rate with customers? Can you handle accidentally giving customers discounts of unknown amounts? If not, you must change your strategy.
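That kind of offline test is trivial to sketch. The following is a minimal, hypothetical example — `pastConversations`, `mockAiDecision`, and the record shape are all stand-ins I’ve invented for illustration; in reality you would replay real transcripts through the actual model and compare against what a human agent actually did:

```javascript
// Each record pairs a past transcript with what *should* have happened.
const pastConversations = [
  { transcript: "I'd like to cancel my order.", expected: { sendSurvey: true, discount: 0 } },
  { transcript: "Just checking my delivery date.", expected: { sendSurvey: true, discount: 0 } },
  { transcript: "This is the third time I'm calling!", expected: { sendSurvey: true, discount: 0 } },
];

// Stand-in for the model: a deliberately brittle heuristic,
// just to show the shape of the evaluation harness.
function mockAiDecision(transcript) {
  return {
    sendSurvey: !transcript.includes("third time"),   // "forgets" the survey on angry calls
    discount: transcript.includes("cancel") ? 10 : 0, // hands out an unwanted discount
  };
}

// Replay every conversation and count decisions that diverge from expectation.
function evaluate(conversations, decide) {
  let errors = 0;
  for (const { transcript, expected } of conversations) {
    const actual = decide(transcript);
    if (actual.sendSurvey !== expected.sendSurvey || actual.discount !== expected.discount) {
      errors++;
    }
  }
  return { errors, errorRate: errors / conversations.length };
}

const report = evaluate(pastConversations, mockAiDecision);
console.log(`${report.errors} wrong decisions out of ${pastConversations.length}`);
```

An afternoon of running something like this against historical calls tells you your real error rate before a single customer is affected — which is exactly the homework that appears to have been skipped.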

The Tinfoil Hat Theory

This is the point where the tinfoil hat comes on. Remember the announcement about firing 4,000 people to be replaced by AI? I’ve come to realize this was likely never about whether the AI worked or not. This was about making a quick buck on the stock exchange.

This is, of course, an allegation. A conspiracy theory. But who goes on the internet and brags about firing 4,000 people? You do it in the age of AI, claiming you’re replacing everyone with your Agentforce, and then use that press conference to sell your product to other companies.

If this is true, it’s disgusting behavior. Using people’s lives as a marketing tool to show other companies that they can do it, too, is astonishing. And nothing is sweeter than the subsequent reversal: “Well, okay, maybe we shouldn’t have fired all 4,000 people. Maybe we goofed up a little bit. Our bad.”

The Annoying Future of AI Integration

I do believe that AI has the ability to displace some of these roles. A world exists where 80% of customer calls—simple requests like canceling an order—can be handled perfectly by an AI.

However, a significant portion of calls, perhaps 10-20%, will still go off the rails. Weird discounts will be given. Incorrect items will be sent for free. Orders will be canceled when they shouldn’t be. The next few years will be incredibly annoying as everyone tries to integrate AI into everything, and we, the customers, will have to deal with the fallout. An order will be wrong, canceled, or sent to the wrong address because of some stupid AI error.

On the bright side, maybe we can prompt-inject our way to an amazing discount. If I can manipulate a company’s poorly designed AI to give me a discount, that’s on them. They’re the ones who fired people to save money.

The future is looking both brighter and darker at the same time.

This is a frustrating cycle to watch. People are making decisions with other people’s lives without understanding the technology they claim is a fantastic replacement. It’s stupid, annoying, and ignorant. These executives should feel the effect of their decisions. Someone, namely the board, should hold them accountable and say, “You’re incompetent. Get out.”


