“Unleashing the Developer’s Arsenal: Harnessing the Power of AI and LLM” is a special meetup hosted by Databricks and Xebia at Scala Days.
The talks included at the meetup are:
Why Dolly is just the beginning for open LLM models
Brought to the forefront by OpenAI’s ChatGPT, large language models (LLMs) are the focus of this session, which opens with a primer on their purpose and importance. We will also discuss the historical context of building ever larger general-purpose LLMs. Within this context, we showcase and demonstrate Dolly 1.0, which initially democratized the magic of ChatGPT with open models, and then Dolly 2.0, the world’s first truly open instruction-tuned LLM. Dolly is a proof of concept showing that LLMs can be fine-tuned for under $100 and with far less data than previously thought (Dolly uses an open-source, commercially usable dataset of roughly 15,000 instruction records).
But Dolly is just the beginning for open LLM models: there are many other strong open models, such as Hugging Face’s Open Assistant and MosaicML’s MPT. Proprietary models like OpenAI’s GPT-4 are extremely powerful and certainly have an important place in your LLM toolbox. But you can also fine-tune smaller open models on your own data, so that your data becomes your competitive advantage - i.e., your data is your IP.
Integrating AI Workflows into your Project: Introducing Xef.ai will be presented by Xebia Functional’s CTO, Raúl Raja.
This talk will introduce the new Xef.ai library, how it works, and how it allows developers of all skill levels to introduce AI into their programs.
Xef is a multiplatform library, initially available for Scala and Kotlin, with ongoing development and plans to support many more languages.
The Xef library provides a set of primitives for integrating AI workflows into projects. Developed by Xebia Functional’s research and open-source team, it draws inspiration from the low-level architecture of Python’s LangChain and from the functional and error-handling DSLs provided by Kotlin’s arrow-kt.io library. Xef furthers this approach with the ai DSL, which treats AI workflows as pure functional values that statically track the effects and errors the AI can produce alongside its regular output predictions. Interacting with an AI through precise types and pure values makes it easy to reason about the behavior of an AI program and to compose AI workflows with other software components predictably and reliably.
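To make the idea of “AI workflows as pure functional values” concrete, here is a minimal Scala sketch of that style. This is purely illustrative and is not Xef’s actual API: the `AI` type, `AiError`, and the stubbed `summarize` call are all hypothetical names invented for this example, and the “model call” is a fixed stub rather than a real LLM request.

```scala
// Illustrative sketch only - NOT the real Xef API.
// Models an AI workflow as a pure value whose possible errors are
// tracked in the types alongside its regular output.

sealed trait AiError
final case class PromptFailure(msg: String) extends AiError

// An AI "program" is just a function from a prompt to either
// an error or a typed result; composing programs stays pure.
final case class AI[A](run: String => Either[AiError, A]) {
  def map[B](f: A => B): AI[B] =
    AI(prompt => run(prompt).map(f))

  def flatMap[B](f: A => AI[B]): AI[B] =
    AI(prompt => run(prompt).flatMap(a => f(a).run(prompt)))
}

object Demo {
  // A stubbed "model call": returns a fixed completion instead of
  // calling an LLM, so the example is self-contained and deterministic.
  val summarize: AI[String] =
    AI(prompt => Right(s"summary of: $prompt"))

  // Composition via map: the error channel is threaded automatically.
  val wordCount: AI[Int] =
    summarize.map(_.split("\\s+").length)

  def main(args: Array[String]): Unit =
    println(wordCount.run("large language models")) // prints Right(5)
}
```

Because `AI[A]` is an ordinary value, workflows compose with `map`/`flatMap` (and therefore Scala `for`-comprehensions), and a caller can see from the type alone that running the program may fail with an `AiError`.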
If you already have a Scala Days Seattle ticket, you’re all set! If not, visit the Seattle Spark+AI Meetup page to RSVP.
For more information visit the Scala Days blog.