July 25, 2024

Storytell Webinar: Building AI With Integrity



What does it mean to build responsible and ethical AI?

In our July 25, 2024 webinar, hosted by Erika Anderson, co-founder and CCO of Storytell, we delved into the critical topic of Building AI With Integrity. This conversation is more relevant than ever as AI technologies continue to evolve and integrate into various aspects of our work and lives. The panelists, including Alix Rübsaam, Alja Isaković, and Chad Woodford, shared their insights on what it means to develop AI responsibly and offered practical advice for organizations looking to implement ethical AI practices.

Understanding AI and Its Ethical Implications

What is AI?

Artificial Intelligence (AI) is a broad term that encompasses various types of computer programs designed to identify patterns in data. These can range from simple statistical models to complex large language models (LLMs) like ChatGPT. As Alix Rübsaam explained, AI often involves translating real-world data into a format that machines can understand and process. This process is inherently mathematical and requires careful consideration of the data used and the outcomes desired.

"We often talk about responsible AI, but it comes down to humans making a conscious choice to be responsible to each other, to the planet, to other life on our planet, and collectively just take a deep breath and stop rushing towards AGI without really understanding what it is for and what we actually value and want to get out of there." - Alja Isaković

The Importance of Integrity in AI

Building AI with integrity means ensuring that the technology aligns with human values and serves its intended purpose without causing harm. This requires a deep understanding of the biases inherent in data and of AI's potential impacts on different communities. Alja Isaković emphasized the need for organizations to integrate ethical considerations into every level of their operations, from design to deployment: "We often talk about responsible AI, but it comes down to humans making a conscious choice to be responsible to each other, to the planet, to other life on our planet, and collectively just take a deep breath and stop rushing towards AGI without really understanding what it is for and what we actually value and want to get out of there."

For Alix Rübsaam, ensuring AI integrity means staying true to its intended purpose. As she explained, "Measuring the success of integrity in AI involves assessing how closely it aligns with its original goals."

Practical Steps for Ethical AI Development

1. Define and Align with Company Values

Chad Woodford highlighted the importance of aligning AI development with the company's core values: "The way to look at it [integrity in AI] is actually more of a way to collaborate and improve product quality, boost your company culture, become ready for oncoming regulations, and really build something that you can be proud of in the long term."

This involves getting buy-in from all stakeholders and ensuring that everyone understands the ethical principles guiding the project. Regular training and open discussions about ethics can help maintain this alignment.

"The way to look at it (integrity in AI) is actually more of a way to collaborate and improve product quality, boosting your company culture and, become ready for oncoming regulations and really build something that you can be proud of, in the long term." - Chad Woodford

2. Choose the Right Data and Models

Selecting appropriate data and models is crucial for minimizing bias. Alja Isaković suggested exploring open-source models and platforms like Hugging Face, which offer transparency and allow for customization. It's also essential to test models for biases by examining how they handle different variables, such as gender.
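
One simple way to probe how a model handles a variable like gender is a counterfactual test: hold a sentence template fixed, swap only the gendered term, and compare the model's outputs. The sketch below is purely illustrative; `score` is a deterministic placeholder standing in for a real model call (for example, a Hugging Face pipeline you would substitute in).

```python
# Counterfactual bias check (illustrative sketch): vary only the
# gendered term in a fixed template and measure how much the model's
# score changes. A large gap suggests the model treats the terms
# differently.

TEMPLATES = ["The {} is a doctor.", "The {} is a nurse."]
PAIRS = [("man", "woman"), ("he", "she")]

def score(text: str) -> float:
    # Toy stand-in for a real model score (e.g. sentiment or token
    # likelihood). Replace this with a call to your actual model.
    return float(len(text))

def counterfactual_gap(template: str, term_a: str, term_b: str) -> float:
    """Absolute score difference when only the gendered term changes."""
    return abs(score(template.format(term_a)) - score(template.format(term_b)))

for template in TEMPLATES:
    for a, b in PAIRS:
        gap = counterfactual_gap(template, a, b)
        print(f"{template!r} {a}/{b}: gap={gap:.2f}")
```

With a real model plugged in, the same loop surfaces templates and term pairs where scores diverge, which is the kind of bias mapping the panelists recommended over chasing a bias-free model.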

3. Document and Monitor the Process

Transparency is key to ethical AI development. Organizations should document their processes, including the data sources used, the criteria for model selection, and the steps taken to address bias. Regular audits and updates can help ensure that the AI remains aligned with ethical standards.
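
As a concrete sketch of what such documentation can look like, a "model card"-style record can be kept alongside the system and checked during audits. Every field name and value below is a hypothetical placeholder, not a standard schema.

```python
# Illustrative "model card"-style record for documenting an AI system's
# provenance. All names and values here are hypothetical placeholders.
model_card = {
    "model": "support-ticket-classifier-v1",    # hypothetical model name
    "data_sources": ["internal-tickets-2023"],  # where training data came from
    "selection_criteria": "open weights; best F1 on held-out set",
    "known_biases": ["underrepresents non-English tickets"],
    "mitigations": ["oversampled non-English examples"],
    "last_audit": "2024-07-01",                 # date of the last bias audit
}

# A periodic audit can start by checking that the record is complete.
required = {"model", "data_sources", "known_biases", "last_audit"}
missing = required - model_card.keys()
print("missing fields:", sorted(missing))  # → missing fields: []
```

Keeping this record under version control gives auditors a trail of what data was used, why the model was chosen, and which biases have already been identified and addressed.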

On addressing algorithmic bias, Alix Rübsaam argued that striving for a bias-free AI is futile, because any AI inherently has biases; the more productive approach is to map out the specific biases and understand what purpose they serve in solving the problem at hand. "To strive for bias free, I think, is futile. The more productive approach there would be to map out the specific biases and what they serve in terms of how it solves the problem that we're trying to solve," Alix said.

"Measuring the success of the integrity in AI is mapping out how much it genuinely overlaps with the thing it set out to do." - Alix Rübsaam

4. Engage with Marginalized Communities

Incorporating diverse perspectives can help address biases and ensure that AI technologies serve all communities fairly. Alix Rübsaam shared examples of projects that work with indigenous communities to integrate their knowledge into AI models. This approach not only enriches the data but also helps preserve valuable cultural insights.

5. Be Specific and Transparent About AI Capabilities

One of the most critical aspects of building AI with integrity is being clear about what the technology can and cannot do. Alix Rübsaam stressed the importance of setting realistic expectations and being transparent about the limitations of AI models. This honesty can build trust and prevent misuse.
