Cost tracking, file upload diagnostics, automation, and model updates
March 28, 2025
In our March 28 engineering demo, the team shared updates on internal tools for usage tracking and file processing, improvements to automation with Actions, changes to our LLM router, and new prompt-level metrics. These updates aim to improve reliability, support, and flexibility for anyone using Storytell.
Cost tracking to support platform reliability
Storytell now tracks ingestion and prompt activity internally. This helps our team monitor how the platform is being used and identify any unusual spikes in usage.
This helps us:
- Catch performance issues earlier.
- Resolve support requests faster.
- Maintain high usage limits without risking platform slowdowns.
A daily usage cap is now in place. It’s set high enough that normal use won’t be affected. If someone does hit the limit, our team has the tools to respond quickly.
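For those curious about the mechanics, here's a minimal sketch of what a daily cap check can look like. The names and the cap value are illustrative assumptions, not Storytell's actual internals:

```typescript
// Hypothetical sketch of a daily usage cap check. All names and the
// cap value are illustrative assumptions, not Storytell's real internals.
const DAILY_PROMPT_CAP = 10_000; // set well above normal daily use

interface UsageStore {
  // Returns the number of prompts an org has run since midnight UTC.
  getUsageToday(orgId: string): Promise<number>;
  increment(orgId: string): Promise<void>;
}

async function checkAndRecordPrompt(store: UsageStore, orgId: string): Promise<void> {
  const used = await store.getUsageToday(orgId);
  if (used >= DAILY_PROMPT_CAP) {
    // Surfacing a clear error lets the team respond quickly instead of
    // letting runaway usage degrade the platform for everyone else.
    throw new Error(`Daily prompt cap reached for org ${orgId}`);
  }
  await store.increment(orgId);
}
```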
More efficient file support with upload diagnostics
A new job detail view in Beacon helps our team debug issues with uploads more efficiently. It shows every processing step, when a job started, how long it took, and whether it succeeded or failed.
This helps us:
- Identify slow or failing uploads.
- Spot file-specific issues more quickly.
- Provide faster, clearer support when something goes wrong.
The result is faster issue resolution and fewer delays when uploading files.
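To make the idea concrete, a job record like the one Beacon renders might be shaped roughly like this. The field names are assumptions for the sketch, not the actual schema:

```typescript
// Illustrative shape of a per-upload job record, loosely matching what
// the Beacon job detail view displays. Field names are assumptions.
interface ProcessingStep {
  name: string;           // e.g. "extract-text", "chunk", "embed"
  startedAt: Date;
  durationMs: number;
  status: "succeeded" | "failed";
  error?: string;         // populated when status === "failed"
}

interface UploadJob {
  jobId: string;
  fileName: string;
  startedAt: Date;
  totalDurationMs: number;
  steps: ProcessingStep[];
}

// Flag jobs worth a closer look: any failed step, or an unusually slow one.
function needsAttention(job: UploadJob, slowThresholdMs = 60_000): boolean {
  return job.steps.some(
    (s) => s.status === "failed" || s.durationMs > slowThresholdMs
  );
}
```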
Actions: Schedule or automate prompt-based workflows
Actions let users define prompts that run automatically—on a schedule, on demand, or when new content is added to a workspace.
Updates shared this week included:
- Ability to set up notification triggers (e.g., send a text when a condition is met).
- Support for linking prompts to tagged content or specific Concept relationships.
- Work underway to connect Actions with the prompt library for easier setup and reuse.
The goal is to reduce repetitive prompting and help users act on new information as it becomes available.
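As a rough illustration of how such a definition could be modeled, here's a sketch. The trigger shapes and field names are assumptions based on the behaviors described above, not Storytell's actual API:

```typescript
// Hypothetical shape of an Action definition; trigger kinds and fields
// are assumptions inferred from the behaviors described in this post.
type ActionTrigger =
  | { kind: "schedule"; cron: string }          // run on a schedule
  | { kind: "onDemand" }                        // run when the user asks
  | { kind: "newContent"; workspaceId: string } // run when content is added
  | { kind: "condition"; expression: string };  // e.g. a metric crosses a threshold

interface ActionDefinition {
  name: string;
  prompt: string;            // the saved prompt this Action runs
  trigger: ActionTrigger;
  tags?: string[];           // optionally scope the prompt to tagged content
  notify?: { channel: "sms" | "email"; to: string }; // e.g. send a text on a condition
}

// Example: summarize anything new added to a workspace each weekday morning.
const morningDigest: ActionDefinition = {
  name: "Morning digest",
  prompt: "Summarize all content added in the last 24 hours.",
  trigger: { kind: "schedule", cron: "0 8 * * 1-5" },
  notify: { channel: "email", to: "team@example.com" },
};
```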
Gemini 2.0 Pro added to the LLM router
Gemini 2.0 Pro has replaced Gemini 1.5 Flash and several lower-performing models (like Claude 3 Haiku and Llama 3.2 3B) in Storytell’s LLM router.
This change improves:
- Model response quality.
- Performance across more complex prompt types.
- Alignment with vendor-supported models (e.g., Google deprecating Gemini 1.5).
No user action is needed—routing happens automatically based on the prompt content.
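Conceptually, content-based routing comes down to inspecting the prompt and picking a model from the pool. Here's a toy sketch, where the pool, the tiers, and the complexity heuristic are all assumptions rather than the router's real logic:

```typescript
// Toy sketch of content-based model routing. The pool, tiers, and the
// complexity heuristic are illustrative assumptions only.
const MODEL_POOL = {
  fast: "some-lightweight-model", // placeholder for a cheaper model in the pool
  strong: "gemini-2.0-pro",       // the newly added model for complex prompts
} as const;

function routePrompt(prompt: string): string {
  // Toy heuristic: treat long prompts, or those asking for analysis,
  // as "complex" and send them to the stronger model.
  const looksComplex =
    prompt.length > 2_000 || /\b(analyze|compare|summarize)\b/i.test(prompt);
  return looksComplex ? MODEL_POOL.strong : MODEL_POOL.fast;
}

console.log(routePrompt("What's our refund policy?"));         // some-lightweight-model
console.log(routePrompt("Analyze Q1 churn across segments.")); // gemini-2.0-pro
```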
Improvements to mentioning for faster answers
Mentioning—our feature that helps surface relevant people, files, and concepts as you write or ask questions—has been improved for greater consistency and speed.
These updates improve:
- How reliably mentions show up in longer threads or large workspaces.
- Accuracy when linking to related content, especially when multiple concepts are involved.
- The system’s ability to detect which mentions are most useful based on context.
This helps reduce the time you spend searching or rephrasing, and increases the chance that the right reference shows up when you need it.
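To give a flavor of what "most useful based on context" can mean, here's a toy ranking function that blends overlap with the current thread and recency. The fields and weights are purely illustrative, not how mentioning actually scores candidates:

```typescript
// Toy sketch of ranking mention candidates by context. The scoring
// fields and weights are illustrative assumptions only.
interface MentionCandidate {
  label: string;                       // person, file, or Concept name
  kind: "person" | "file" | "concept";
  lastTouched: Date;                   // recency signal
  threadOverlap: number;               // 0..1: overlap with the current thread
}

function rankMentions(candidates: MentionCandidate[], now = new Date()): MentionCandidate[] {
  const score = (c: MentionCandidate) => {
    const daysOld = (now.getTime() - c.lastTouched.getTime()) / 86_400_000;
    const recency = Math.max(0, 1 - daysOld / 30); // decay over ~30 days
    return 0.6 * c.threadOverlap + 0.4 * recency;  // blend context with recency
  };
  return [...candidates].sort((a, b) => score(b) - score(a));
}
```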
Become an Alpha or Beta Tester
Get early access to features as we release them by becoming an alpha or beta tester. Here's how to sign up: https://web.storytell.ai/early-access