GPT-3
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.
Like its predecessor, GPT-2, it is a decoder-only transformer deep neural network, which replaces recurrence- and convolution-based architectures with a technique known as "attention". This attention mechanism allows the model to focus selectively on the segments of input text it predicts to be most relevant. GPT-3 has 175 billion parameters, each stored with 16-bit precision; since each parameter occupies 2 bytes, the model requires 350 GB of storage. It has a context window of 2,048 tokens and has demonstrated strong "zero-shot" and "few-shot" learning abilities on many tasks.
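The storage figure follows directly from the numbers above; a minimal Python sketch of the arithmetic, using only values quoted in this article:

```python
# Storage needed for GPT-3's weights at 16-bit (2-byte) precision.
n_parameters = 175e9          # 175 billion parameters
bytes_per_parameter = 2       # 16-bit precision = 2 bytes per parameter

total_bytes = n_parameters * bytes_per_parameter
print(f"{total_bytes / 1e9:.0f} GB")  # -> 350 GB
```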
On September 22, 2020, Microsoft announced that it had licensed GPT-3 exclusively. Others can still receive output from its public API, but only Microsoft has access to the underlying model.
Background
According to The Economist, improved algorithms, more powerful computers, and a recent increase in the amount of digitized material have fueled a revolution in machine learning. New techniques in the 2010s resulted in "rapid improvements in tasks", including manipulating language.
Software models are trained to learn by using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain". One architecture used in natural language processing (NLP) is the transformer, a deep learning model introduced in 2017. A number of NLP systems are capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.
On June 11, 2018, OpenAI researchers and engineers published a paper introducing the first generative pre-trained transformer (GPT): a type of generative large language model that is pre-trained on an enormous and diverse corpus of text, followed by discriminative fine-tuning to focus on a specific task. GPT models are transformer-based deep-learning neural network architectures. Previously, the best-performing neural NLP models commonly employed supervised learning from large amounts of manually labeled data, which made it prohibitively expensive and time-consuming to train extremely large language models. The first GPT model was known as "GPT-1", and it was followed by "GPT-2" in February 2019. Created as a direct scale-up of its predecessor, GPT-2 had both its parameter count and dataset size increased by a factor of 10: it had 1.5 billion parameters and was trained on a dataset of 8 million web pages.
In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which it claimed was the "largest language model ever published at 17 billion parameters". It performed better than any other language model at a variety of tasks, including summarizing texts and answering questions.
Training and capabilities
On May 28, 2020, an arXiv preprint by a group of 31 engineers and researchers at OpenAI described the development of GPT-3, a third-generation "state-of-the-art language model". The team increased the capacity of GPT-3 by over two orders of magnitude from that of its predecessor, GPT-2, making GPT-3 the largest non-sparse language model to date. Because GPT-3 is structurally similar to its predecessors, its greater accuracy is attributed to its increased capacity and greater number of parameters. GPT-3's capacity is ten times larger than that of Microsoft's Turing NLG, the next-largest NLP model known at the time.
Lambda Labs estimated a hypothetical cost of around US$4.6 million and 355 years to train GPT-3 on a single GPU in 2020, with lower actual training time achieved by using more GPUs in parallel.
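To make the estimate concrete, the sketch below shows how 355 single-GPU years shrink under data parallelism. The GPU count is a hypothetical figure chosen for illustration, not a number reported for the actual training run, and the scaling is idealized:

```python
# Hypothetical: wall-clock time for 355 single-GPU years of compute.
single_gpu_years = 355   # Lambda Labs' single-GPU estimate
n_gpus = 1024            # hypothetical cluster size (not from the source)

# Assumes perfect linear scaling; real runs lose efficiency to communication.
wall_clock_days = single_gpu_years * 365 / n_gpus
print(f"~{wall_clock_days:.0f} days")  # -> ~127 days
```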
Sixty percent of the weighted pre-training dataset for GPT-3 comes from a filtered version of Common Crawl consisting of 410 billion byte-pair-encoded tokens. Other sources are 19 billion tokens from WebText2 representing 22% of the weighted total, 12 billion tokens from Books1 representing 8%, 55 billion tokens from Books2 representing 8%, and 3 billion tokens from Wikipedia representing 3%. Fuzzy deduplication of the training data used Apache Spark's MinHashLSH. GPT-3 was trained on hundreds of billions of words and is also capable of coding in CSS, JSX, and Python, among other languages.
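MinHashLSH approximates Jaccard similarity between documents, so near-duplicates can be found without comparing every pair. The following PySpark sketch shows the general pattern; the column names, similarity threshold, and hash-table count are illustrative choices, not values reported by OpenAI:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, HashingTF, MinHashLSH

spark = SparkSession.builder.appName("fuzzy-dedup-sketch").getOrCreate()

docs = spark.createDataFrame([
    (0, "the quick brown fox jumps over the lazy dog"),
    (1, "the quick brown fox jumped over the lazy dog"),
    (2, "an entirely unrelated document about transformers"),
], ["id", "text"])

# Represent each document as a sparse set-of-tokens vector.
tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
vectors = HashingTF(inputCol="words", outputCol="features",
                    numFeatures=1 << 18).transform(tokens)

# MinHashLSH approximates Jaccard distance between the token sets.
lsh = MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=10)
model = lsh.fit(vectors)

# Self-join: pairs within Jaccard distance 0.2 are near-duplicate candidates.
pairs = model.approxSimilarityJoin(vectors, vectors, 0.2, distCol="jaccard")
pairs.filter("datasetA.id < datasetB.id").show()
```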
Since GPT-3's training data was all-encompassing, it does not require further training for distinct language tasks. The training data contains occasional toxic language, and GPT-3 occasionally generates toxic language as a result of mimicking it. A study from the University of Washington found that GPT-3 produced toxic language at a level comparable to that of similar natural language processing models such as GPT-2 and CTRL. OpenAI has implemented several strategies to limit the amount of toxic language generated by GPT-3. As a result, GPT-3 produced less toxic language than its predecessor model, GPT-1, although it produced both more generations of toxic language and toxic language of higher toxicity compared to CTRL Wiki, a language model trained entirely on Wikipedia data.
On June 11, 2020, OpenAI announced that users could request access to its user-friendly GPT-3 API, a "machine learning toolset", to help OpenAI "explore the strengths and limits" of this new technology. The invitation described how the API had a general-purpose "text in, text out" interface that can complete almost "any English language task", instead of the usual single use case. According to one user with access to a private early release of the OpenAI GPT-3 API, GPT-3 was "eerily good" at writing "amazingly coherent text" with only a few simple prompts. In an initial experiment, 80 US subjects were asked to judge whether short (~200-word) articles were written by humans or by GPT-3. The participants judged correctly 52% of the time, doing only slightly better than random guessing.
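The "text in, text out" interface was exposed as a completions endpoint. A minimal sketch using the early (pre-1.0) openai Python client is shown below; the prompt, parameter values, and placeholder key are illustrative:

```python
import openai  # early (pre-1.0) client interface; requires an OpenAI API key

openai.api_key = "sk-..."  # placeholder

# "Text in, text out": send a prompt, receive a completion.
response = openai.Completion.create(
    engine="davinci",       # the largest GPT-3 model exposed by the API
    prompt="Translate to French: 'Good morning, everyone.'",
    max_tokens=32,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```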
On November 18, 2021, OpenAI announced that enough safeguards had been implemented that access to its API would be unrestricted. OpenAI provided developers with a content moderation tool that helps them abide by OpenAI's content policy. On January 27, 2022, OpenAI announced that its newest GPT-3 language models (collectively referred to as InstructGPT) were now the default language models used on its API. According to OpenAI, InstructGPT produced content that was better aligned to user intentions by following instructions better, generating fewer made-up facts, and producing somewhat less toxic content.
Because GPT-3 can "generate news articles which human evaluators have difficulty distinguishing from articles written by humans", GPT-3 has the "potential to advance both the beneficial and harmful applications of language models". In their May 28, 2020 paper, the researchers described in detail the potential "harmful effects of GPT-3", which include "misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting". The authors draw attention to these dangers to call for research on risk mitigation.
GPT-3 is capable of performing zero-shot and few-shot learning (including one-shot).
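The distinction lies entirely in how many worked examples appear in the prompt; no gradient updates happen in either case. The strings below are illustrative prompt formats in the style of the GPT-3 paper, not outputs from the model:

```python
# Zero-shot: a task description only, no examples.
zero_shot = "Translate English to French:\ncheese =>"

# One-shot: a single demonstration precedes the query.
one_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

# Few-shot: several demonstrations precede the query.
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)
```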
In June 2022, Almira Osmanovic Thunström wrote that GPT-3 was the primary author of an article about itself, that they had submitted it for publication, and that it had been pre-published while awaiting completion of its review.
GPT-3 models
There are many models in the GPT-3 family, some serving different purposes than others. In the initial research paper, OpenAI described eight sizes of the main GPT-3 model, ranging from GPT-3 Small (125 million parameters) through GPT-3 Medium (350M), GPT-3 Large (760M), GPT-3 XL (1.3B), GPT-3 2.7B, GPT-3 6.7B, and GPT-3 13B, up to the full 175-billion-parameter model.
Half of the models are accessible through the API, namely GPT-3-medium, GPT-3-xl, GPT-3-6.7B and GPT-3-175b, which are referred to as ada, babbage, curie and davinci respectively. While the sizes of the API models were not originally disclosed by OpenAI, EleutherAI announced the mapping between model sizes and API names in May 2021. These model sizes were later confirmed by OpenAI, but the sizes of subsequent models have not been disclosed.
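Putting the two sources together, the mapping can be written out directly; the parameter counts for ada and babbage are inferred from the paper's medium and xl sizes via EleutherAI's announced correspondence:

```python
# API name -> (paper model name, parameter count), per EleutherAI's mapping.
api_models = {
    "ada":     ("GPT-3-medium", 350_000_000),
    "babbage": ("GPT-3-xl",     1_300_000_000),
    "curie":   ("GPT-3-6.7B",   6_700_000_000),
    "davinci": ("GPT-3-175b",   175_000_000_000),
}
```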
GPT-3.5
Generative Pre-trained Transformer 3.5 (GPT-3.5) is a subclass of GPT-3 models created by OpenAI in 2022.
On March 15, 2022, OpenAI made available new versions of GPT-3 and Codex in its API with edit and insert capabilities, under the names "text-davinci-002" and "code-davinci-002". These models were described as more capable than previous versions and were trained on data up to June 2021. On November 28, 2022, OpenAI introduced text-davinci-003. On November 30, 2022, OpenAI began referring to these models as belonging to the "GPT-3.5" series, and released ChatGPT, which was fine-tuned from a model in the GPT-3.5 series. OpenAI does not consider GPT-3.5 models to be part of GPT-3.
Models
There are three models:
Chat: gpt-3.5-turbo
Text completion: text-davinci-003, text-davinci-002
GPT-3.5 with browsing
On April 10, 2023, OpenAI introduced a new variant of its GPT-3.5 series model, known as GPT-3.5 with Browsing (ALPHA). This updated model was described as building upon the capabilities of its predecessors "text-davinci-002" and "code-davinci-002". The GPT-3.5 with Browsing (ALPHA) model incorporated the ability to access and browse online information, leading to more accurate and up-to-date responses to user queries.
The GPT-3.5 with Browsing (ALPHA) model was trained on data up to September 2021, giving it more information than previous GPT-3.5 models, which were trained on data up to June 2021. The model aimed to provide developers and users with an advanced natural language processing tool that can effectively retrieve and synthesize online information.
To enable browsing capabilities, OpenAI implemented a new API that allows the GPT-3.5 with Browsing (ALPHA) model to access selected online resources during operation. This feature allows users to ask questions or request information with the expectation that the model will deliver updated, accurate, and relevant answers based on the latest online sources available to it.
On April 27, 2023, OpenAI made the GPT-3.5 with Browsing (ALPHA) model publicly available to ChatGPT Plus users, allowing more people to access its new features.
InstructGPT
InstructGPT is a fine-tuned version of GPT-3.5 trained on a dataset of human-written instructions.
Reception
Applications
GPT-3, specifically the Codex model, was the basis for GitHub Copilot, a code completion and generation software that can be used in various code editors and IDEs.
GPT-3 is used in certain Microsoft products to translate conventional language into formal computer code.
GPT-3 has been used in CodexDB to generate query-specific code for SQL processing.
GPT-3 has been used by Jason Rohrer in a retro-themed chatbot project named "Project December", which is accessible online and allows users to converse with several AIs using GPT-3 technology.
GPT-3 was used by The Guardian to write an article about AI being harmless to human beings. It was fed some ideas and produced eight different essays, which were ultimately merged into one article.
GPT-3 was used in AI Dungeon, which generates text-based adventure games. It was later replaced by a competing model after OpenAI changed its policy regarding generated content.
GPT-3 is used to aid in writing copy and other marketing materials.
A 2022 study from Drexel University suggested that GPT-3-based systems could be used to screen for early signs of Alzheimer's disease.
Reviews
In a July 2020 review in The New York Times, Farhad Manjoo said that GPT-3's ability to generate computer code, poetry, and prose is not just "amazing", "spooky", and "humbling", but also "more than a little terrifying".
Daily Nous presented a series of articles by nine philosophers on GPT-3. Australian philosopher David Chalmers described GPT-3 as "one of the most interesting and important AI systems ever produced".
A review in Wired said that GPT-3 was "provoking chills across Silicon Valley".
The National Law Review said that GPT-3 is an "impressive step in the larger process", with OpenAI and others finding "useful applications for all of this power" while continuing to "work toward a more general intelligence".
An article in the MIT Technology Review, co-written by deep learning critic Gary Marcus, stated that GPT-3's "comprehension of the world is often seriously off, which means you can never really trust what it says." According to the authors, GPT-3 models relationships between words without having an understanding of the meaning behind each word.
Jerome Pesenti, head of the Facebook AI lab, said GPT-3 is "unsafe", pointing to the sexist, racist and other biased and negative language generated by the system when it was asked to discuss Jews, women, black people, and the Holocaust.
Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3's responses about mental health issues, the AI advised a simulated patient to commit suicide.
Noam Chomsky expressed his skepticism about GPT-3's scientific value: "It's not a language model. It works just as well for impossible languages as for actual languages. It is therefore refuted, if intended as a language model, by normal scientific criteria. [...] Perhaps it's useful for some purpose, but it seems to tell us nothing about language or cognition generally."
Luciano Floridi and Massimo Chiriatti highlighted the risk of "cheap production of good, semantic artefacts".
OpenAI's Sam Altman himself criticized what he called "GPT-3 hype", acknowledging GPT-3 "has serious weakness and sometimes makes very silly mistakes... AI is going to change the world, but GPT-3 is just a very early glimpse."
Criticism
GPT-3's builder, OpenAI, was initially founded as a non-profit in 2015. In 2019, OpenAI broke from its usual open-source standards by not publicly releasing GPT-3's predecessor model, citing concerns that the model could facilitate the propagation of fake news. OpenAI eventually released a version of GPT-2 that was 8% of the original model's size. In the same year, OpenAI restructured to be a for-profit company. In 2020, Microsoft announced the company had exclusive licensing of GPT-3 for Microsoft's products and services following a multi-billion-dollar investment in OpenAI. The agreement permits OpenAI to offer a public-facing API such that users can send text to GPT-3 to receive the model's output, but only Microsoft will have access to GPT-3's source code.
Large language models, such as GPT-3, have come under criticism from a few of Google's AI ethics researchers for the environmental impact of training and storing the models, detailed in a paper co-authored by Timnit Gebru and Emily M. Bender in 2021.
The growing use of automated writing technologies based on GPT-3 and other language generators has raised concerns regarding academic integrity and raised the stakes of how universities and schools will gauge what constitutes academic misconduct such as plagiarism.
OpenAI's GPT series was built with data from the Common Crawl dataset, a conglomerate of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports this training data includes copyrighted material from the BBC, The New York Times, Reddit, the full text of online books, and more. In its response to a 2019 Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation from the United States Patent and Trademark Office (USPTO), OpenAI argued that "Under current law, training AI systems [such as its GPT models] constitutes fair use," but that "given the lack of case law on point, OpenAI and other AI developers like us face substantial legal uncertainty and compliance costs."
See also
BERT (language model)
Hallucination (artificial intelligence)
LaMDA
Gemini (language model)
Wu Dao
GPT-4
GPTZero