---
layout: post
title: Finetuning GPT3
date: 2022-11-26 07:09:00 -0800
description: Using the OpenAI API to finetune GPT3 on a custom dataset
---

Large language models have quite extraordinary capabilities. They can generate text, translate languages, and even answer questions. However, they are not perfect (fun fact: this last sentence was actually suggested by GitHub Copilot, which is itself powered by a language model 😆). If you have a specific task in mind, you may want to finetune the model on your own dataset.

The OpenAI API allows you to do just that: you can finetune GPT3 and then use it as you wish. In the notebook below, I show how to finetune GPT3 to predict the title of an arXiv paper from its abstract. Hope you enjoy it!
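To give a rough idea of the workflow before diving into the notebook, here is a minimal sketch using the (pre-1.0, late-2022) `openai` Python package. The file name, the `papers` list, and the prompt/completion separators are placeholders you would adapt to your own data:

```python
import json
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

# Build a JSONL training file: each line maps an abstract (prompt) to a title (completion).
# `papers` is a hypothetical list of (abstract, title) pairs loaded from your own data.
papers = [
    ("We introduce a method for ...", "A Method for ..."),
]
with open("arxiv_titles.jsonl", "w") as f:
    for abstract, title in papers:
        # The separator and " END" stop token follow the conventions suggested by
        # OpenAI's data preparation tooling at the time; adjust as needed.
        f.write(json.dumps({"prompt": abstract + "\n\n###\n\n",
                            "completion": " " + title + " END"}) + "\n")

# Upload the training file and launch a fine-tuning job (legacy FineTune endpoint).
upload = openai.File.create(file=open("arxiv_titles.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="curie")
print(job["id"])  # use this id to poll the job status and retrieve the finetuned model
```

The notebook walks through the same steps in more detail, including building the dataset from actual arXiv abstracts.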

Open In Colab

One disclaimer: to use the OpenAI API you need to register an account, and using the API has a monetary cost.