Storytell Architecture Updates and Developments in Browser-Based Machine Learning

Storytell's new architecture, data model and in-browser LLM experiment

Aaron Greenlee
June 11, 2024
Hi, everyone! The engineering team has been hard at work improving how Storytell works. We're excited to share some of these updates with you and give you a glimpse into the future of our platform:

  1. Architecture and Data Model Updates
  • Overview:
    • The control plane (the "brain" of the system) coordinates various tasks such as content ingestion, machine learning services, and user interactions. We're continuously improving the architecture to provide a seamless user experience.
    • The model plane will have a multi-model planner that breaks down user intents into multiple tasks for efficient and targeted content generation. An LLM router will choose the best model based on the task, ensuring optimal performance and cost-effectiveness.
  • Data model improvements
    • We're implementing a user aggregate system that tracks user events (e.g., account creation, email verification) as a linear timeline. This allows for better auditing, data integrity, and the ability to reinterpret events without complex data migrations.
    • Every operation in the system will be represented and tracked, allowing for cost accumulation and transparency. This will enable us to optimize performance, analyze user behavior, and potentially offer flexible pricing models in the future.
  2. Running Machine Learning Models in the Browser
  • Proof of concept: We successfully ran BERT, a transformer model commonly used for text classification, directly in the browser. This opens up a world of possibilities for enhancing the user experience and reducing latency.
  • Implications for user experience: By running models in the browser, we can provide near-instant results, enabling features like auto-complete suggestions and real-time model selection based on user input. Imagine typing and receiving immediate, context-aware suggestions powered by machine learning models running right in your browser!
  • Cost-saving benefits: Offloading computation to the user's browser reduces the load on our systems, potentially leading to cost savings that we can pass on to our users.
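To make the LLM router idea above concrete, here's a minimal sketch of how a router might pick a model per task. The task names, model names, tiers, and cost figures are illustrative assumptions, not Storytell's actual configuration:

```typescript
// Hypothetical LLM router: pick the cheapest model whose capability
// tier covers the task. All names and costs are illustrative.
type Task = "classify" | "summarize" | "generate_long_form";

interface ModelSpec {
  name: string;
  tier: number;            // higher = more capable
  costPer1kTokens: number; // USD, illustrative
}

const MODELS: ModelSpec[] = [
  { name: "small-fast-model", tier: 1, costPer1kTokens: 0.0005 },
  { name: "mid-model",        tier: 2, costPer1kTokens: 0.003 },
  { name: "frontier-model",   tier: 3, costPer1kTokens: 0.03 },
];

// Minimum capability tier each task needs (assumed values).
const REQUIRED_TIER: Record<Task, number> = {
  classify: 1,
  summarize: 2,
  generate_long_form: 3,
};

function routeModel(task: Task): ModelSpec {
  const candidates = MODELS
    .filter((m) => m.tier >= REQUIRED_TIER[task])
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens);
  if (candidates.length === 0) throw new Error(`no model for task: ${task}`);
  return candidates[0]; // cheapest model capable of the task
}
```

A real router would also weigh latency, context-window size, and current availability, but the core idea is the same: route each sub-task from the planner to the cheapest model that can handle it.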
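The event-timeline and cost-tracking ideas from the data-model section can be illustrated with a small event-sourcing sketch: user state is never mutated in place, it is recomputed by folding over an append-only list of events. Event names and fields here are illustrative, not Storytell's actual schema:

```typescript
// Illustrative event-sourcing sketch: user state is derived by
// replaying an append-only timeline of events.
type UserEvent =
  | { kind: "AccountCreated"; email: string; at: number }
  | { kind: "EmailVerified"; at: number }
  | { kind: "OperationBilled"; costUsd: number; at: number };

interface UserState {
  email: string | null;
  emailVerified: boolean;
  totalCostUsd: number; // costs accumulate per tracked operation
}

const initialState: UserState = {
  email: null,
  emailVerified: false,
  totalCostUsd: 0,
};

// Pure reducer: replaying the same timeline always yields the same
// state, so events can be reinterpreted later (new fields, new rules)
// without a complex data migration.
function replay(events: UserEvent[]): UserState {
  return events.reduce<UserState>((state, ev) => {
    switch (ev.kind) {
      case "AccountCreated":
        return { ...state, email: ev.email };
      case "EmailVerified":
        return { ...state, emailVerified: true };
      case "OperationBilled":
        return { ...state, totalCostUsd: state.totalCostUsd + ev.costUsd };
    }
  }, initialState);
}
```

Because the timeline itself is the source of truth, auditing is just reading the event list, and changing how state is derived only means changing the reducer.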
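The in-browser inference pattern boils down to: load the model once (the expensive step), then reuse it for every keystroke so classification feels instant. The sketch below shows that plumbing only; the stub classifier stands in for a real in-browser model such as a BERT variant, which in practice would be loaded via a runtime like ONNX Runtime Web or transformers.js:

```typescript
// Sketch of in-browser inference plumbing. `loadModel` and the stub
// classifier are placeholders for a real BERT-style model.
type Classifier = (text: string) => { label: string; score: number };

// Pretend model load (in reality: fetch weights, initialize a
// WASM/WebGPU runtime). This is the slow, one-time step.
async function loadModel(): Promise<Classifier> {
  // Stub: a trivial rule-based classifier standing in for BERT.
  return (text: string) =>
    text.trim().endsWith("?")
      ? { label: "question", score: 0.9 }
      : { label: "statement", score: 0.9 };
}

// Load once, reuse for every input: per-keystroke classification
// never pays the model-load cost again.
let modelPromise: Promise<Classifier> | null = null;

async function classify(text: string) {
  modelPromise ??= loadModel();
  const model = await modelPromise;
  return model(text);
}
```

Wiring `classify` to an input field's change events (typically debounced) is what enables the real-time suggestions and model selection described above, with no round trip to a server.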

