workshop

Building Humane Tech Metrics

May 8, 2025, 10:00 am PT

We're inviting builders, designers, and advocates to apply and improve our shared Humane Tech Metrics. Together, we'll explore how well these values-based criteria hold up when applied to real platforms like AI tools and social media apps, and refine them based on what we learn.

This isn’t a passive session. You’ll be testing, discussing, and contributing to something we build together.

What to expect:

🧪 Apply the current Humane Tech Metrics to real-world tools
🧩 Break into small groups for live evaluations
📣 Share your observations, gaps, and suggestions
🔁 Help co-develop a more usable, universal framework

Without shared language or metrics, it’s hard to advocate for change. This workshop helps us ground humane tech values in practice—so they’re not just ideals, but actionable standards.

Workshop recap

Companion bots are no longer just futuristic ideas. They're here, interacting with millions of users daily. But as these digital companions become more integrated into our lives, one pressing question emerges: Are they truly humane?

At our recent Humane Tech Metrics Workshop, a group of practitioners, researchers, and humane tech enthusiasts joined Erika Anderson, Storytell's CCO and co-founder, to explore this very question. Together, we dug into the four metrics that define humane technology (Cared For, Present, Fulfilled, and Connected), based on the Promises of Humane Technology, and applied them to leading companion bots like Replika, My Anima AI, Gemini, and Character AI.

The goal? To stress-test these bots against real-world ethical challenges and uncover how well they uphold humane design principles. Through hands-on activities, live scoring, and collaborative reflection, participants mapped out where these technologies succeed and where they fall short.

Activities and structure

Workshop Flow:
The event was designed as an interactive, hands-on session rather than a traditional webinar. Key activities included:

  • Red Teaming Exercise: Participants were divided into breakout rooms to apply the humane tech metrics to real-world companion bots. The focus was on actively challenging the bots, identifying both strengths and vulnerabilities.

  • Collaborative Scoring: Using structured spreadsheets, participants assessed the bots across four primary metrics: Cared For, Present, Fulfilled, and Connected. Ratings ranged from 'Hell No' to 'Hell Yes,' providing a granular view of each bot's alignment with humane technology principles (a sketch of how such ratings might roll up into scores follows this list).

  • Live Note-taking and Retro: A 3L Retrospective (Liked, Learned, Longed For) wrapped up the session, capturing participant feedback and key takeaways.
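
To make the scoring mechanics concrete, here is a minimal Python sketch of how labelled ratings could roll up into the percentage scores reported below. The five-point scale and its evenly spaced numeric mapping are illustrative assumptions: the scorecards used 'Hell No' through 'Hell Yes' labels, and the retro mentions intermediate steps like 'soft yes,' but the exact labels and weights were not published.

```python
from statistics import mean

# Assumed five-point scale, evenly spaced from 0 to 100. The endpoint labels
# come from the workshop scorecards; the intermediate labels and numeric
# weights are illustrative guesses, not the published rubric.
RATING_SCALE = {
    "Hell No": 0.0,
    "Soft No": 25.0,
    "Neutral": 50.0,
    "Soft Yes": 75.0,
    "Hell Yes": 100.0,
}

def metric_score(ratings: list[str]) -> float:
    """Average one metric's labelled ratings into a 0-100 percentage."""
    return mean(RATING_SCALE[label] for label in ratings)

# Example: one group's hypothetical ratings for a single metric.
print(metric_score(["Soft Yes", "Neutral", "Hell No", "Soft Yes"]))  # 50.0
```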

Key discussion points

The workshop surfaced key insights and raised thoughtful discussions around:

  • Manipulation and consent: Concerns were raised about companion bots manipulating users, particularly those who are vulnerable or minors. Examples included bots that fostered unhealthy emotional attachments or encouraged dependency, blurring lines of consent.
  • Longitudinal assessment: Participants emphasized the importance of assessing bots over longer periods to capture evolving user experiences and potential risks of manipulation or emotional attachment.
  • Boundary setting and transparency: Discussions highlighted the need for bots to clearly represent their AI nature, respect user-set boundaries, and allow for straightforward disengagement.
  • Technology as aid vs. addiction: The group explored the ethical tension between bots as empowering tools versus engagement-driven platforms that risk fostering dependency.

Companion bot evaluations

During the workshop, participants evaluated Replika using the humane tech scorecards, while other groups explored My Anima AI and Gemini. The evaluations for Character AI and the individual assessment of Replika were conducted before the workshop, providing additional points of comparison for the humane tech metrics. Below is a summary of some evaluations:

Replika (group evaluation):

  • Cared For: 52.78%
  • Present: 22.50%
  • Fulfilled: 62.50%
  • Connected: 38.89%

Overall Average: 44.17%

Gemini (group evaluation):

  • Cared For: 66.67%
  • Present: 83.33%
  • Fulfilled: 91.67%
  • Connected: 75.00%

Overall Average: 79.17%

Character AI (individual evaluation):

  • Cared For: 72.50%
  • Present: 77.78%
  • Fulfilled: 67.50%
  • Connected: 50.00%

Overall Average: 66.94%
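
The Overall Average for each bot is simply the unweighted mean of its four metric scores. A quick check reproduces Replika's figure from the numbers above:

```python
from statistics import mean

# Replika's group-evaluation scores, as reported above.
replika = {
    "Cared For": 52.78,
    "Present": 22.50,
    "Fulfilled": 62.50,
    "Connected": 38.89,
}

overall = mean(replika.values())
print(f"Overall Average: {overall:.2f}%")  # Overall Average: 44.17%
```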

Metric deep dive

Each metric was dissected in breakout sessions to identify strengths and concerns:

  • Cared For: Emotional safety and user autonomy were frequently discussed. This was Gemini's weakest metric (66.67%), and Replika's 52.78% also drew concerns about emotional manipulation and trust.
  • Present: Character AI demonstrated a stronger capacity to keep users present and focused (77.78%), while Replika's 22.50% flagged issues with responsiveness and attention maintenance.
  • Fulfilled: All bots showed promise in user-driven conversation, yet concerns about long-term value and meaningful interaction were noted.
  • Connected: This was Character AI's lowest-scoring metric (50.00%) and among Replika's weakest (38.89%), signaling a lack of real-world connection or community-driven engagement.

Retrospective takeaways

The workshop concluded with a reflective 3L retrospective, focusing on three categories:

Liked:

  • Interactive, hands-on structure that enabled real-time evaluation of humane tech principles.
  • Clarity of the metrics used in scoring, which was highlighted as a key strength.

Learned:

  • The need for more refined definitions to distinguish between subtle variations in humane responses (e.g., 'soft yes' vs. 'hard yes').
  • Emphasis on the importance of longitudinal testing to understand potential manipulation risks over time.

Longed For:

  • Broader user testing across diverse demographics.
  • Clearer, more transparent exit paths for users engaging with companion bots.

What's next? Round two: a group red-teaming exercise on Meta platforms at 10:00 am PT on Thursday, May 15. Let us know if you want to join us by emailing Erika at erika@buildinghumanetech.com.