Finetuning GPT-3: Make an AI chatbot that responds to your DMs

Aleem

Dec 28, 2022

4 min read

When I first watched Silicon Valley, this scene stuck with me.

One of the characters, Gilfoyle, made a chatbot that chatted exactly like him – from mannerisms to vocabulary to humor. It completely fooled his coworker, Dinesh. Today, we’ll do just that by fine-tuning GPT-3!

Some questions I’ll answer in this 4-minute read 📓

  • WTF is fine-tuning?!

  • Why would we use it over OpenAI playground?

  • How do I get all the data needed?

  • What do I need to train my own model?

By the end of this note, you will have a custom-trained GPT-3 model that talks like you and responds to your DMs without any supervision.

This is a simple note and you'll learn a lot. The cool thing is you can also extend what you learn to create celeb chatbots. Imagine a Socrates or Steve Jobs bot!

Finetuning GPT-3 explained

I’m sure you’ve seen GPT-3 around the web. You’ve probably played with OpenAI's latest experiment, ChatGPT, or at least used one of the copywriting tools such as CopyAI or Jasper. These are some of the first examples of generally available tools that harness artificial intelligence.

When OpenAI built GPT-3, it was designed as a general language model able to serve many different tasks – with just a couple lines of context, we can make it talk in the voice of popular people (like how I built GPT Santa!). This has a shortcoming, though: it takes a ridiculous number of tokens to capture the intricacies of people it wasn’t trained on.

Tokens? What are tokens? Why do they matter?

Tokens are the chunks of text GPT-3 actually reads and writes – roughly four characters, or three-quarters of a word, each. Prompts are limited (and billed) by token count, so stuffing thousands of examples of how you talk into every prompt is simply infeasible. Fine-tuning will allow us to effectively add on to GPT-3, creating our own model with everything about you baked in 🍞.
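
Want to see tokens concretely? Here's a quick sketch using OpenAI's tiktoken library (a separate pip install, not part of the setup below) to count the tokens in a piece of text:

import tiktoken

# davinci uses the r50k_base encoding (the same BPE family as GPT-2)
enc = tiktoken.encoding_for_model("davinci")

text = "Fine-tuning will bake your personality into GPT-3."
tokens = enc.encode(text)
print(len(tokens), "tokens")        # tokens, not words, are what you pay for
print(enc.decode(tokens) == text)   # encoding round-trips losslessly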

So when should you fine-tune?

It's actually more straightforward than you think. For projects where a lot of extra context isn't required, fine-tuning doesn't make sense. When we’re trying to teach the model an esoteric topic that takes a bunch of data (like a chatbot that behaves like you), fine-tuning makes sense!

GPT-3 is trained on the internet, so it already has context on a lot of historical figures, celebrities, famous people, content, and culture. If your project depends on any of these, you can skip fine-tuning!

Collecting the data

Let's get shippin'! First, we’ll need data – a sh*t ton of it. Find a source of data that's relatively "easy" to scrape and has decent volume in terms of messages you have sent. For many, this will be Discord, Slack, or WhatsApp.

In my case, I am most active on Discord, so I threw together a lil script where you can copy & paste DMs and it puts them in fine-tuning format. If you want to import from another source, it's all good! Just format the data the same way – see the sketch below. Understand the code rather than just duplicating it.

Each formatted entry must look like this:

{"prompt":"question from fren ->\n\n###\n\n","completion":" your answer! END"}

If you need an idea, just ask ChatGPT to elaborate.
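
The copy & paste script boils down to something like this – a minimal sketch that assumes a dms.txt file with alternating lines (a friend's message, then your reply); adapt the parsing to however you export your chats:

import json

# Assumed input: alternating lines – their message, then your reply
with open("dms.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

with open("data.jsonl", "w") as out:
    # Pair each incoming message with the response you sent
    for their_msg, your_reply in zip(lines[::2], lines[1::2]):
        entry = {
            "prompt": f"{their_msg} ->\n\n###\n\n",
            "completion": f" {your_reply} END",
        }
        out.write(json.dumps(entry) + "\n")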

Time to train

Once you have ~150 entries, it’s time to train!

I like to think of our data as some dough — and we’re now about to bake it into some bread.

We’ll first need to set up our environment.

Go ahead and make a new folder, naming your JSON file as data.jsonl. (Note: it’s .jsonl, not .json)

Then, install the CLI using the following command so we can do fancy OpenAI commands from our terminal:

pip install --upgrade openai

Next, link your account to the terminal so we know where the model should end up:

export OPENAI_API_KEY="<OPENAI_API_KEY>"

Now let’s ensure all your data is formatted well for OpenAI.

You’ll notice some oddities in your data, like the ->\n\n###\n\n separator and the END marker.

Those are there so GPT-3 can tell where a prompt ends and a completion begins – and where to stop generating. Run OpenAI's data-prep tool over your file:

openai tools fine_tunes.prepare_data -f <PATH_TO_DATA.JSONL>

After you accept the suggestions, it’ll spit out a file named data_prepared.jsonl – this is our final file to train with. It’s about time to start baking!

You’ll see I selected davinci as our model. I wouldn’t recommend switching this: while ada and curie are cheaper, they make getting good results significantly more difficult.

openai api fine_tunes.create -t <PATH_TO_DATA_PREPARED.JSONL> -m davinci
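
Training runs as a background job and can sit in a queue for a while. If your stream disconnects, you can re-attach using the job ID the create command printed:

openai api fine_tunes.follow -i <YOUR_FINE_TUNE_JOB_ID>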

If you are stuck or are facing blockers, reach out and I can help you!

Time to play!

Congrats!! You now have your own custom-trained model :).

Now the question is: how the heck do I use this thing?!

Well, that’s simple. Just open up the Playground, switch the model to your fine-tuned one, and add END as your stop sequence.
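
You can also call it from code. Here's a minimal sketch using the (pre-v1) openai Python library – the model name is a placeholder, so swap in the one your fine-tune job printed:

import openai

openai.api_key = "<OPENAI_API_KEY>"  # or rely on the OPENAI_API_KEY env var

# Placeholder – copy the real name from your fine-tune job's output
FINE_TUNED_MODEL = "davinci:ft-personal-2022-12-28-00-00-00"

def reply(message: str) -> str:
    # Mirror the training format: the prompt ends with the separator,
    # and generation stops at the END marker
    response = openai.Completion.create(
        model=FINE_TUNED_MODEL,
        prompt=f"{message} ->\n\n###\n\n",
        max_tokens=150,
        temperature=0.8,
        stop=[" END"],
    )
    return response["choices"][0]["text"].strip()

print(reply("yo, what are you up to this weekend?"))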

Got stuck somewhere or have questions? Don't hesitate to reach out and I can help you!

Autorespond to DMs

Since you can now access this like any other GPT-3 model, you can set up cloud functions that auto-respond to your messages by talking to OpenAI’s API.

Some cool examples I found include hitting Twilio to send texts, an API for Discord bots, and even sending Twitter DMs!
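
As an illustration, here's a minimal sketch of a Discord auto-responder built on the discord.py library – it assumes a bot account from the Discord developer portal, a DISCORD_BOT_TOKEN environment variable, and the same placeholder model name as before:

import os
import discord
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
FINE_TUNED_MODEL = "davinci:ft-personal-2022-12-28-00-00-00"  # placeholder

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    # Ignore our own messages and only answer direct messages
    if message.author == client.user or not isinstance(message.channel, discord.DMChannel):
        return
    completion = openai.Completion.create(  # blocking call – fine for a toy bot
        model=FINE_TUNED_MODEL,
        prompt=f"{message.content} ->\n\n###\n\n",
        max_tokens=150,
        temperature=0.8,
        stop=[" END"],
    )
    await message.channel.send(completion["choices"][0]["text"].strip())

client.run(os.environ["DISCORD_BOT_TOKEN"])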

Congrats! You have your own AI chatbot!

By now, you should have a sick chatbot that responds to your DMs in a way that feels like you. The more interesting your conversation dataset, the more personality your custom chatbot will take on.

The use cases for bots like this will only grow as the models evolve. From customer service to personal productivity – scheduling appointments or answering the questions you get asked most often – there are so many possibilities.

Hope you had fun! Can't wait to see what you create with this.

– Aleem

Join the world's best builders for a 6-week sprint

Come join the best builders from around the world to build wild ideas in web3, ML/AI, gaming, bio-anything. You've got what it takes – all you need to do is apply!