GPT-3 by Full Stack Python

GPT-3

GPT-3 is a neural network trained by the OpenAI organization with significantly more parameters than previous generation models.

There are several variations of GPT-3, which range from 125 million to 175 billion parameters. The different variations allow the model to better respond to different types of input, such as a question-and-answer format, long-form writing, and human language translation (e.g. English to French). The large number of parameters makes GPT-3 significantly better at natural language processing and text generation than the prior model, GPT-2, which had only 1.5 billion parameters.

GPT-3 can currently only be accessed through an API provided by OpenAI, which is in private beta.

What’s so special about GPT-3?

The GPT-3 model can generate texts of up to 50,000 characters, with no supervision. It can even generate creative Shakespearean-style fiction stories in addition to fact-based writing. This is the first time that a neural network model has been able to generate text of acceptable enough quality that it is difficult, if not impossible, for a typical person to tell whether the output was written by a human or by GPT-3.

How does GPT-3 work?

To generate output, GPT-3 has a very large vocabulary of words, which it can combine to generate sentences. These words are sorted into different categories (nouns, verbs, adjectives, etc.), and for each category there is a “production rule” that can be used to generate a sentence. The production rules can be modified with different parameters.

A few examples:

  • noun + verb = subject + verb
  • noun + verb + adjective = subject + verb + adjective
  • verb + noun = subject + verb
  • noun + verb + noun = subject + verb + noun
  • noun + noun = subject + noun
  • noun + verb + noun + noun = subject + verb + noun + noun

In addition, GPT-3 is able to understand negations, as well as the use of tenses, which allows the model to generate sentences in the past, present and future.
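
To make the production-rule idea above concrete, here is a toy Python sketch of how categorized words could be combined according to such rules. The word lists and rules are invented purely for illustration; they are not GPT-3's actual vocabulary or internals.

    import random

    # Toy word lists sorted into categories, standing in for a real vocabulary.
    words = {
        "noun": ["the model", "the developer", "the text"],
        "verb": ["generates", "reads", "predicts"],
    }

    # Each production rule is a sequence of categories that yields a sentence.
    rules = [
        ("noun", "verb", "noun"),  # subject + verb + noun
        ("noun", "verb"),          # subject + verb
    ]

    def generate_sentence(rule):
        """Fill a production rule with a randomly chosen word from each category."""
        return " ".join(random.choice(words[category]) for category in rule) + "."

    for rule in rules:
        print(generate_sentence(rule))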

Does GPT-3 matter to Python developers?

GPT-3 is not that useful right now for programmers other than as an experiment. If you get access to OpenAI’s API, then Python is an easy language to use for interacting with it, and you could use its generated text as input to your applications. Although there have been some impressive initial experiments in generating code for the layout of the Google homepage, JSX output, and other technical demos, the model will not (yet) put developers who are coding real-world applications out of a job.
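
If you do have beta access, a minimal Python call to the API looks roughly like the following. This sketch assumes the openai Python package and an API key issued through the private beta; the prompt and parameter values are illustrative and the beta-era API could change.

    import openai

    # API key issued when you are accepted into the private beta.
    openai.api_key = "YOUR_API_KEY"

    # Ask the largest available model to complete a prompt.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Explain what a web framework does:",
        max_tokens=64,
        temperature=0.7,
    )

    print(response["choices"][0]["text"])

The returned text could then be fed into your own application, for example to draft documentation or suggest replies.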

How was GPT-3 trained?

At a high level, training the GPT-3 neural network consists of two steps.

The first step requires creating the vocabulary, the different categories and the production rules. This is done by feeding GPT-3 books. For each word, the model must predict the category to which the word belongs, and then a production rule must be created.

The second step consists of creating a vocabulary and production rules for each category. This is done by feeding the model sentences. For each sentence, the model must predict the category to which each word belongs, and then a production rule must be created.

The result of the training is a vocabulary, and production rules for each category.
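
As a rough illustration of those two steps, the toy Python sketch below tags each word in a sentence with a category and records the observed category sequence as a production rule. The hand-written lookup table stands in for the model's learned predictions; this only mirrors the description above and is not how GPT-3 is actually trained.

    # Hand-written lookup standing in for the model's category predictions.
    category_of = {
        "model": "noun", "text": "noun", "developer": "noun",
        "generates": "verb", "reads": "verb",
    }

    vocabulary = {}   # category -> set of words seen in that category
    rules = set()     # observed category sequences ("production rules")

    for sentence in ["model generates text", "developer reads text"]:
        categories = []
        for word in sentence.split():
            category = category_of.get(word, "noun")  # default unknown words to "noun"
            vocabulary.setdefault(category, set()).add(word)
            categories.append(category)
        rules.add(tuple(categories))

    print(vocabulary)  # {'noun': {'model', 'text', 'developer'}, 'verb': {'generates', 'reads'}}
    print(rules)       # {('noun', 'verb', 'noun')}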

The model also has a few tricks that allow it to improve its ability to generate texts. For example, it is able to guess the beginning of a word by observing the context of the word. It can also predict the next word by looking at the last word of a sentence. It is also able to predict the length of a sentence.
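
The next-word trick can be illustrated with a tiny Python sketch that counts which word follows which in a small corpus and then picks the most frequent follower. This is a simple bigram counter shown only for illustration, not GPT-3's actual mechanism.

    from collections import Counter, defaultdict

    corpus = "the model writes text and the model predicts the next word".split()

    # Count which word follows each word in the corpus.
    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None if unseen."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # "model" appears most often after "the"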

While those two steps and the related tricks may sound simple in theory, in practice they require massive amounts of computation. Training 175 billion parameters in mid-2020 cost in the ballpark of $4.6 million, although some other estimates calculated it could cost up to $12 million depending on how the hardware was provisioned.
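
As a back-of-envelope check on that figure, the commonly cited public estimate works out as follows. The numbers below (roughly 355 V100 GPU-years of compute at about $1.50 per GPU-hour of cloud time) are assumptions taken from that estimate, not official OpenAI figures.

    # Assumed figures from the public cost estimate, not official numbers.
    gpu_years = 355                 # approximate V100 GPU-years of training compute
    price_per_gpu_hour = 1.50       # approximate cloud price in US dollars

    gpu_hours = gpu_years * 365 * 24
    estimated_cost = gpu_hours * price_per_gpu_hour
    print(f"${estimated_cost:,.0f}")  # roughly $4.7 million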

GPT-3 resources

These resources range from the broad philosophy of what GPT-3 means for machine learning to specific technical details about how the model is trained.

GPT-3 tutorials

SOURCE: https://www.fullstackpython.com/gpt-3.html
