Large Language Models (LLMs), like ChatGPT, have shown remarkable performance on a wide range of tasks. But there are still unsolved issues with these models: they can be confidently wrong, and their knowledge becomes outdated. GPT also has no access to the information stored in your own data. In this talk, you'll learn how to use Haystack, an open source framework, to chain LLMs with other models and components to overcome these issues. We will build a practical application using these techniques. And you will walk away with a deeper understanding of how to use LLMs to build NLP products that work.
You can apply LLMs to solve various NLP and NLU tasks, such as summarization or question answering. These models have billions of parameters they can use to effectively store some of the information they saw during pre-training. This enables them to show deep knowledge of a subject, even if they weren't explicitly trained on it.
Yet, this capability also comes with issues. The information stored in the parameters can't easily be updated, so the model's knowledge may become stale. The model also won't have any of your custom data, such as your company's knowledge base. And sometimes the model simply makes things up; we call that hallucination.
Cases of hallucination can be hard to spot. The model may be very confident while making up a response. It may even make up fake citations and research papers to support its claims.
Haystack is an open source NLP framework for pragmatic builders. Developers use it to build NLP applications, such as question answering systems, neural search engines, or summarization services. Haystack provides all the components you need to build an actual NLP application, which differentiates it from other NLP frameworks.
It provides document conversion, pre-processing, data storage, vector databases, and model inference. It also wraps all these components in a neat pipeline abstraction. You can use a pipeline to run your application as a reliable and scalable service in production.
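To give a feel for the pipeline idea, here is a toy sketch in plain Python. This is illustrative only: the `Pipeline` class, `add_node` method, and component functions below are simplified stand-ins, not Haystack's actual graph-based API.

```python
# Toy sketch of a pipeline abstraction: each component transforms a shared
# dict of results, and the pipeline runs them in registration order.
# Illustrative only -- Haystack's real Pipeline is richer and graph-based.

class Pipeline:
    def __init__(self):
        self.components = []

    def add_node(self, component, name):
        # Register a named component; components run in insertion order here.
        self.components.append((name, component))

    def run(self, query):
        results = {"query": query}
        for name, component in self.components:
            results = component(results)
        return results

# Two hypothetical components: a pre-processor and a mock reader.
def lowercase_query(results):
    results["query"] = results["query"].lower()
    return results

def mock_reader(results):
    # A real reader or LLM would produce an answer; this is a placeholder.
    results["answer"] = f"Answer to: {results['query']}"
    return results

pipe = Pipeline()
pipe.add_node(lowercase_query, name="Preprocessor")
pipe.add_node(mock_reader, name="Reader")
print(pipe.run("What Is Haystack?")["answer"])
```

The value of the abstraction is that each component has a uniform interface, so you can swap in a different retriever, reader, or generator without touching the rest of the application.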
In this talk, machine learning engineers, data scientists, and NLP developers will learn how Haystack integrates with LLMs, such as GPT-3. We will show how to use the pipeline abstraction and retrieval-augmented generation to address issues like stale knowledge and hallucination. We will also provide a practical example by showing how to create a personal assistant for knowledge workers. Each step will be accompanied by open source code examples. By the end of the talk, you will have seen these concepts applied in practice, and you will be able to build an assistant for your own use case.
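The retrieval-augmented generation pattern can be sketched in a few lines of plain Python. This is a toy illustration under assumed components: the keyword-overlap scorer stands in for a real retriever (e.g. BM25 or dense embeddings), and the prompt it builds would be sent to an LLM such as GPT-3.

```python
# Toy retrieval-augmented generation: retrieve the documents most relevant
# to the query, then build a prompt grounded in that retrieved text. Grounding
# the model in up-to-date documents counters stale knowledge and hallucination.

documents = [
    "Haystack is an open source NLP framework.",
    "Pipelines chain components like retrievers and readers.",
    "LLMs can hallucinate when they lack grounding documents.",
]

def retrieve(query, docs, top_k=2):
    # Score each document by word overlap with the query (a crude stand-in
    # for a proper retriever) and return the top_k matches.
    q = set(query.lower().replace("?", "").split())
    scored = sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, docs):
    # Instruct the model to answer only from the retrieved context.
    context = "\n".join(docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

query = "What is Haystack?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
```

In a real Haystack application, the retriever would query a document store holding your own data, so the generated answer is grounded in knowledge the base model never saw during pre-training.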