Episode 178

Safe Ecto Migrations and AI Updates

November 21st, 2023

29 mins 54 secs

About this Episode

In this episode, we revisit the Safe Ecto Migrations guide and get an update on its improvements. We also discuss the role and importance of open-source AI models. We cover updates to the Elixir LangChain library, the advantages of self-hosted AI models like Mistral, and how to run Bumblebee on Fly.io GPUs. Tune in for an insightful blend of database best practices and the cutting edge of AI in Elixir, plus more!

Show Notes online - http://podcast.thinkingelixir.com/178

Elixir Community News

Do you have some Elixir news to share? Tell us at @ThinkingElixir or email at show@thinkingelixir.com

Discussion Resources

  • 7:43 - David introduces and explains Safe Ecto Migrations.
  • Updates to the Safe Ecto Migrations guide, covering additional safety features and the latest improvements.
  • Review of text column performance, showing that text columns perform the same as VARCHAR types.
  • Examples of non-immutable expressions in database contexts.
  • Highlighting an error that can occur when backfilling data without a sort order (see the backfill sketch after this list).
  • Suggestion that Common Table Expressions (CTEs) offer a more reliable method for certain database operations.
  • David's call for a library to help run database operations through a UI, pointing to a desire for better tooling.
  • Considering the use cases when developing and implementing database safety tools.
  • 18:47 - Mark discusses new Fly.io GPU hardware, model improvements, and the Bumblebee tool.
  • The Mistral LLM and its capabilities in the AI space.
  • Insights into running Bumblebee on GPUs and performance considerations (see the Bumblebee sketch after this list).
  • Why the ability to self-host AI models like Mistral matters for developers and users.
  • OpenAI's outage interrupted Mark's AI-powered workout trainer.
  • Outlining the Elixir LangChain library's goals, its roadmap, and its potential impact on AI and data processing.
  • Discussion on how Large Language Models (LLMs) are effectively used for data extraction tasks.
  • Discussion on what an AI router is and what problem it solves.
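
Related to the backfill and CTE points above, here is a minimal, hypothetical sketch of a batched backfill that orders by a unique id column so each batch is deterministic. The repo, table, and column names (MyApp.Repo, "users", normalized_name) and the batch size are illustrative assumptions, not code from the guide or the episode.

```elixir
import Ecto.Query

defmodule MyApp.BackfillUsers do
  # Hypothetical module: names and batch size are illustrative.
  @batch_size 500

  def run(last_id \\ 0) do
    # An explicit order_by on a unique column keeps each batch stable and
    # allows keyset pagination (id > last_id) instead of OFFSET.
    ids =
      MyApp.Repo.all(
        from u in "users",
          where: u.id > ^last_id and is_nil(u.normalized_name),
          order_by: [asc: u.id],
          limit: @batch_size,
          select: u.id
      )

    case ids do
      [] ->
        :done

      ids ->
        MyApp.Repo.update_all(
          from(u in "users", where: u.id in ^ids),
          set: [normalized_name: "backfilled"]
        )

        run(List.last(ids))
    end
  end
end
```

Running each batch as its own short statement keeps locks brief, which is the broader theme of the safe-migrations guidance.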
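
For the Bumblebee discussion, a rough sketch of loading an open model such as Mistral and serving it from Elixir is shown below. The Hugging Face repo name, compile options, and the use of the EXLA compiler are assumptions for illustration, not a setup described in the episode.

```elixir
# Assumes Bumblebee, Nx, and EXLA are installed with a GPU-enabled XLA build.
repo = {:hf, "mistralai/Mistral-7B-Instruct-v0.1"}

{:ok, model_info} = Bumblebee.load_model(repo)
{:ok, tokenizer} = Bumblebee.load_tokenizer(repo)
{:ok, generation_config} = Bumblebee.load_generation_config(repo)

serving =
  Bumblebee.Text.generation(model_info, tokenizer, generation_config,
    # Compiling for fixed shapes avoids repeated XLA compilation on the GPU.
    compile: [batch_size: 1, sequence_length: 1024],
    defn_options: [compiler: EXLA]
  )

Nx.Serving.run(serving, "Summarize safe Ecto migrations in one sentence.")
```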

Find us online