Building with Integrity: Navigating Startup Realities & Humane Tech UX
How do we build a successful startup while staying true to humane tech principles?
March 11, 2025, 9:00 am PT
This workshop explores the realities of balancing business growth with ethical design. The Storytell Crew will participate in the session, sharing real-life practices we use to align UX with humane tech values. We also want your input—help us refine and expand these practices so they can be applied more broadly. Join us to wrestle with challenges, exchange insights, and co-create a future where tech serves people first.
What to Expect:
- Insights from Storytell – How we apply humane tech in UX design.
- Hands-on UX Activities – Explore ethical, human-centered design solutions.
- Interactive Discussions – Share challenges and exchange ideas.
- Practical Takeaways – Actionable strategies for your own organization.
- Co-Creation & Feedback – Help shape better humane tech practices.

Erika Anderson
Erika Anderson brings a deep background in storytelling, team dynamics, and ethical product design. As Chief Customer Officer at Storytell, she focuses on creating tools that empower users while emphasizing empathy, transparency, and long-term thinking. Her insights into humane tech, featured on Building Humane Tech, are both thought-provoking and grounded in practical action.
Past Webinars
Building with Integrity: Navigating Startup Realities & Humane Tech UX
How do we design AI that truly serves people? That was the driving question behind our latest workshop, where we explored ways to make Storytell more transparent, intuitive, and trustworthy. The event brought together a mix of our Crew and external participants, each offering insights on AI usability, prompt engineering, and user experience. Our goal was simple: assess Storytell through a humane technology lens and rapidly prototype improvements based on real user feedback.
Key themes and discussions
1. What makes AI humane?
The discussion kicked off by defining what humane AI should look like—focusing on design choices that support user agency, inclusivity, and clarity. We noted that AI should:
- Provide transparency around how it processes and generates responses.
- Be intuitive and accessible, especially for those new to AI.
- Avoid manipulative or addictive design patterns.
We evaluated what Storytell does well in this space and where improvements can be made.

2. Rapid prototyping and feedback loops
The hands-on portion of the workshop focused on real-time UX evaluation and prototyping solutions. Using collaborative tools like FigJam and Excalidraw, we explored three key questions:
- How might we help users feel comfortable with AI and unlock its potential to support them at work and home?
- How might we help people craft clearer, more effective AI requests?
- How might we build trust in AI-generated responses?
The breakout groups presented their findings in a show-and-tell session, demonstrating early prototypes and reflecting on key takeaways.
Breakout session insights
Helping AI users feel comfortable
The group pivoted from focusing on beginners to power users, recognizing that long-term AI memory could enhance the experience for both. They proposed drawing on Story Tiles™ from past chat history to inject relevant context into AI responses.

Improving prompting and well-formed requests
Participants explored ways to nudge users toward better-formed prompts. They emphasized that people come to Storytell with a specific job to be done, and the system should adapt accordingly. Understanding why a user is there could allow Storytell to better guide them through available data and possible analyses.

Building trust in AI responses
Inline citations emerged as a critical feature for trust. Participants also stressed the need for clear, conversational language rather than technical jargon. Additionally, they explored ways to allow users to dive deeper into sources and verify AI-generated insights.

Balancing transparency and efficiency
One of the most debated topics was the balance between AI transparency and usability. While users want to understand AI’s reasoning, too much explanation can create friction. The proposed solution? A tiered transparency approach:
- Summary-level responses for quick interactions.
- Expandable sections for those wanting to explore citations, reasoning, and alternative viewpoints.
This method ensures Storytell remains accessible to both casual users and power users without overwhelming either group.
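The tiered transparency idea can be illustrated with a minimal data shape: a response that renders only a summary by default, with reasoning and citations available on expansion. This is a hypothetical sketch, not Storytell's actual API; names like `TieredResponse` and `Citation` are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    # Hypothetical inline-citation record: where the excerpt came from.
    source: str
    excerpt: str

@dataclass
class TieredResponse:
    """An AI answer split into a quick summary plus expandable detail."""
    summary: str
    reasoning: str = ""
    citations: list = field(default_factory=list)

    def render(self, expanded: bool = False) -> str:
        # Casual users see only the summary; power users expand
        # to see the reasoning and the cited sources.
        if not expanded:
            return self.summary
        lines = [self.summary]
        if self.reasoning:
            lines.append(f"Reasoning: {self.reasoning}")
        for c in self.citations:
            lines.append(f"[{c.source}] {c.excerpt}")
        return "\n".join(lines)
```

The design keeps a single response object serving both audiences: the UI decides how much to reveal, rather than generating separate "simple" and "detailed" answers.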
Takeaways and next steps
From this workshop, several priorities emerged for Storytell’s continued development:
- Better onboarding for new AI users. The Crew will explore ways to guide first-time users with contextual tips and example prompts.
- Stronger AI memory capabilities. Inspired by Story Tiles™, there's potential for longer-term AI recall based on prior interactions.
- Refined feedback loops. Users should have an easier way to refine AI responses through interactive adjustments rather than static outputs.
- Clarity in AI-generated insights. The introduction of inline citations and transparent reasoning will further strengthen user trust.
The workshop reaffirmed Storytell’s commitment to building AI in public—actively inviting feedback to create a more human-centered platform.
To learn more about humane tech, visit our Building Humane Tech website, join our Slack community, or attend a Humane Tech meetup.
Become an Alpha or Beta Tester
Get early access to features as we release them by becoming an alpha or beta tester. Here's how to sign up: https://web.storytell.ai/early-access