

Bottom Line Up Front: This article uses a simple family dinner to illustrate a deeper enterprise truth: building trust in AI starts with small, low-risk interactions. You’ll learn why organizations must go beyond access and actively create space for experimentation—encouraging employees to solve “salmon-sized problems” before applying AI to complex, high-stakes decisions. Trust, like competence, grows through repeated practice. This story makes the case for embedding safe, practical AI use into daily workflows—one question, one outcome, one success at a time.
It started with a fish.
A very large fish.
Louellen came home from Costco with a whole side of salmon—thick, pink, barely fitting in the fridge. She handed it to me like a prize catch, grinning with a clear message: Don’t Screw This Up! I nodded, accepting the responsibility with the seriousness of someone entrusted with both a gift and a challenge.
You should know something important here. Louellen hates fishy fish—the kind that reminds you exactly where it came from. Salmon is acceptable only under strict conditions: fresh, lemony, and devoid of oceanic memory. Peter adores salmon and Lenie happily follows suit, while Hannah remains wary, holding her fork like a skeptic prepared for disappointment. The stakes were high—extremely high.
Years ago, I might have opened a cookbook or texted a friend. Last year, I probably would have scrolled through blogs cluttered with autoplay videos and pop-up ads promoting “life-changing” salmon rubs. But this time, I opened an AI chat window and typed:
“How do I cook a pound of Costco salmon so it doesn’t taste fishy?”
The response arrived immediately: clear instructions for roasting, grilling, or pan-searing. Temperatures. Techniques. Flavor profiles. Specific tips to combat that dreaded fishiness: lemon, garlic, fresh herbs, olive oil.
I scanned the ingredient list and reality hit: I was missing about a third of it. No dill, no shallots, no Dijon mustard. And I didn't have a one-pound fillet—I had a five-pound slab. So, I kept going:
“What can I use instead of Dijon mustard?”
“What’s a good herb swap if I don’t have dill?”
“What if I don’t have a grill?”
“Can I bake it instead?”
“How do I adjust this for a five-pound salmon?”
The chat turned conversational—like speaking with a clever friend who didn’t tire of twenty questions. I listed ingredients I did have: lemons, thyme, garlic, rosemary, fresh parsley from our garden. The AI adapted effortlessly, suggesting a lemon-thyme rub, parsley-garlic compound butter, baking directions tailored for my oven, and even instructions on filleting the fish to fit in my pan.
It felt genuinely collaborative.
Sure, I still made judgment calls—like assigning cooking tasks to each child—but AI stood patiently beside me, like a helpful sous-chef who magically knew my pantry contents, garden herbs, and family tastes.
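A quick aside for the technically curious: that "magic" is mostly bookkeeping. Below is a minimal sketch of the same multi-turn pattern, assuming the OpenAI Python client (any chat API with a message history works the same way). The model name and questions are illustrative stand-ins; I was just typing into a chat window.

```python
# A minimal sketch of the multi-turn exchange above, assuming the
# OpenAI Python client; the model name and questions are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

questions = [
    "How do I cook a pound of Costco salmon so it doesn't taste fishy?",
    "What can I use instead of Dijon mustard?",
    "How do I adjust this for a five-pound salmon?",
]

# The entire history is resent with every request; that running
# transcript is what lets the model "remember" the pantry and the fish.
messages = []
for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have
        messages=messages,
    )
    answer = reply.choices[0].message.content
    print(f"Q: {question}\n\nA: {answer}\n")
    messages.append({"role": "assistant", "content": answer})
```

Nothing mystical is happening: the model's "memory" is simply the running transcript sent back with every question, which is why each answer could build on the pantry, the garden herbs, and the five-pound problem.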
Dinner was a triumph. Louellen asked for seconds. Hannah cautiously tasted, nodded approval, and returned for more. Peter wondered aloud if we could eat salmon weekly. Lenie licked her fork clean after her fourth helping.
That salmon taught me something significant—not just about olive oil and lemon zest—but about trust.
In the enterprise, we're bombarded with the message that AI changes everything. Strategy decks overflow with phrases like "AI-first," "transformational tools," and "productivity gains." I've written some of those decks myself, and I'll likely write more. But many people are stuck where I was before that salmon dinner—unsure what it actually means to use AI in practice. They hesitate, unclear on where it fits, where it helps, and where it might get in the way.
Cooking dinner with AI carries minimal risk—if I ruin the salmon, pizza’s always an option. But in a large organization? Trusting AI becomes significantly harder. Real stakes emerge—compliance, safety, ethics, jobs, reputation.
Here’s the crucial takeaway:
People in enterprises don’t just need AI access. They need permission to use it.
They require small, safe spaces to experiment—salmon-sized problems—before tackling million-dollar decisions.
It matters immensely to build a culture where employees can freely ask AI small questions, get thoughtful responses, and develop decision-making muscle memory. AI cannot define your values, your strategy, or your politics. It can help you prepare the salmon, but it won't know who sits at your table or what they might refuse.
That remains firmly your responsibility.
Trust starts when it's given, and it deepens through mutual reinforcement. So, what can you trust AI with today to foster that mutual confidence? What's one small, salmon-sized step you can take right now toward building trust in AI?