Software engineering is all about getting computers to do what we want them to do. As machine learning methods have improved, they've introduced a new way to specify the desired behaviour. Instead of writing code, you can prepare example data. Large language models are now starting to introduce a third option: instead of example data, you can provide a natural language prompt. Writing a prompt is far quicker than building a good set of training examples, but it's also a much less precise way to get the behaviour you want. There's also no reliable way to incrementally improve the results, even if better performance would be very valuable to you. Essentially, this new approach has a high floor, but a low ceiling.

In this talk, I'll show how large language models such as GPT-3 complement rather than replace existing machine learning workflows. Initial annotations are gathered from the OpenAI API via zero- or few-shot learning, and then corrected by a human decision maker using the Prodigy annotation tool. The resulting annotations can then be used to train and evaluate models as normal. This process results in higher accuracy than can be achieved from the OpenAI API alone, with the added benefit that you'll own and control the model at runtime.
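As a rough illustration of the first step in this workflow, the sketch below asks a GPT-3 model for zero-shot entity suggestions on a piece of text. The label set, prompt wording, and model name are illustrative assumptions, not the exact recipe from the talk, and the legacy `openai` Python client (pre-1.0) is assumed.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical label set for illustration only.
LABELS = ["DISH", "INGREDIENT", "EQUIPMENT"]

def zero_shot_ner(text: str) -> str:
    """Ask a GPT-3 model for entity suggestions, zero-shot.

    The prompt format and model name here are assumptions for the
    sketch; any instruction-following completion model would do.
    """
    prompt = (
        f"Extract entities of the types {', '.join(LABELS)} from the "
        "text below. Answer with one line per entity, in the form "
        "LABEL: entity text.\n\n"
        f"Text: {text}\nEntities:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,   # deterministic output for annotation suggestions
        max_tokens=256,
    )
    return response["choices"][0]["text"].strip()

print(zero_shot_ner("Sauté the chopped onions in a cast-iron skillet."))
```

In the full workflow, suggestions like these would be parsed into the annotation tool's input format (for Prodigy, JSONL tasks with the text and suggested spans), accepted or corrected by a human annotator, and then exported as training data for a model you run yourself.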
Speaker: Ines Montani