How to Build AI with Integrity — Why I Started Storytell
May 28, 2025
I didn’t start Storytell because I wanted to build an AI company. I started it because I wanted to build trust.

For years, I’ve worked at the intersection of storytelling, product, and people. I’ve seen how the tools we build shape the stories we tell about ourselves, our work, and our world. I’ve also seen how often those tools leave people confused, overwhelmed, or excluded.
So when the generative AI boom took off, I felt both excitement and unease.
The technology was extraordinary. But the rollout was chaotic. AI systems were being marketed as all-knowing, but in practice, their reasoning was often invisible. Outputs changed without warning. Prompts broke for no clear reason. And for many teams, it became hard to tell whether the AI was actually helping or just hallucinating more confidently.
That gap between promise and reality is where Storytell began.
Making AI more explainable, not just more powerful
Storytell helps teams organize knowledge, expand ideas, and communicate better using generative AI. But our deeper goal is to build AI that’s accountable and aligned with how humans actually think and work.
From day one, we focused not just on what Storytell could do, but on how it did it:
- Transparency by design: Users can see how a prompt is structured, what knowledge it draws from, and where claims come from. No black box magic.

- Context that stays with you: Outputs can be grounded in your own materials, so the AI reflects your perspective, not generic internet data.

- Human-led workflows: Storytell is built for collaboration between humans and AI. You choose the prompt, define the context, and stay in control of how the response is shaped, so AI becomes a thought partner, not an autopilot.
It’s tempting to treat AI as a shortcut. But building with integrity means slowing down where it matters, asking not just “Can we ship this?” but “Should we?” and “How will this affect the people using it?”
Integrity in practice
To build AI with integrity, we’ve embedded principles into our product process:
- We publish changelogs so users can track how Storytell evolves.
- We test not just for accuracy but for usefulness and alignment with real-world context.
- We involve users in shaping how features grow, treating feedback as part of the design process.
- We create room for experimentation while staying grounded in ethical guardrails.

This is also why I started Building Humane Tech: to bring others into the conversation. Whether we're prototyping better onboarding flows or co-creating new metrics for accountability, we believe this work can't happen in silos. Integrity is a collective effort.
The stakes of what we build
The decisions we make about AI today won’t just shape tomorrow’s products; they’ll shape tomorrow’s values.
It’s easy to be swept up in the speed of innovation. But speed without integrity leads to shallow, extractive systems. It leads to users being left out of their own loop. It leads to tools that confuse more than they clarify.
Storytell exists to offer an alternative. A way to build AI that is more grounded, more transparent, and more human.
We’re still learning. Still adjusting. Still asking the hard questions. But we’re also proud of the foundation we’ve built and the community around it.
That’s why I started Storytell. Not just to make AI better, but to make it accountable to the people who use it.