2025 was mostly a year of tire-kicking.
Most of our conversations started with a shrug: "I think I have an AI budget I’m supposed to spend?" It was about checking boxes.
But in 2026, the mood has shifted. People are ready to deploy, but they’re also skeptical. The question isn't "Do we have the money?" anymore—it's "Is this actually going to solve a problem, or is it just a distraction that’s going to burn my credibility?"
I get it. The last thing an L&D leader needs is to roll out a "chatbot" that managers roll their eyes at.
The 30-Day "Stress Test"
We had a casual first call with a large enterprise team. Instead of going straight into a proposal and a months-long procurement debate, we tried something new: we gave the L&D team 30 days of live access.
We ran a series of short, 30-minute sessions where they could go hands-on, stress-test the tool, and try to break it. What we discovered is that most enterprise buyers are hungry to actually use something rather than hear about it. AI beyond chatbots is simply too abstract until you experience it directly.
“What’s different about Ren is that it fits the way we think about manager development — not as a replacement for our coaching priorities, but as the infrastructure that makes them actually happen. The prep time reduction was immediate. But what surprised us was the shift in the quality of conversations our managers were having.”
— Head of Manager Development, Fortune 500 manufacturing
Creating the Space to Decide
A few weeks in, the conversation naturally shifted. Once they could see the value for themselves (some of them were managers, some not), they started thinking out loud about where in the org a pilot would make the most sense. We'd built trust, yes, but what mattered more was that they had the space to ask the questions that were rumbling around the org but hadn't yet taken shape. It moved the conversation beyond features and into the things that actually determine whether a tool lives or dies in an organization:
The Mundane & The Serious: We had the room to tackle everything from "What does this button actually do?" to deep-dives into privacy, security, and data handling.
The Scenario Test: They ran dozens of informal scenarios with colleagues outside their own team to see if Ren was actually different from the generic bots they were familiar with. They wanted to know whether others saw the nuance too.
The People Test: Most importantly, they evaluated the tool against the people they know best: their own stressed-out managers. They weren't asking "Is this good AI?"; they were asking "Will our people actually use this?"
The Technical Bridge: We had the time to answer some of the gating IT and technical questions in real-time, while the tool was actually being used, rather than in a vacuum.
Let’s talk about your coaching infrastructure
Most L&D leaders I speak with are trying to figure out where AI actually fits. Not as a shiny distraction, but as a way to make their existing culture, and their managers, better.
If you're looking at your 2026 roadmap and wondering how to bridge the gap between your existing programs, the noise in the market, and the actual goal (helping your managers with the most impactful conversations in their day), I'd love to show you a different approach to evaluating an AI coaching solution.
— Jonathan
