An Introduction to GPT-3

When you think about the future of artificial intelligence (AI) technology, it’s likely that you don’t think of auto-completion. However, you probably should. In July 2020, OpenAI released a beta version of GPT-3, a new auto-completion program that could well define the next decade of AI programming.

In this blog post, I will introduce you to OpenAI’s GPT-3 model, and present the strengths, limitations, and potential for this new technology. 

What is GPT-3?

Generative Pre-trained Transformer 3 (GPT-3) is an AI technology developed by OpenAI, an AI research company co-founded by Elon Musk and Sam Altman. It is the third model in OpenAI’s GPT series of autoregressive language models.

GPT-3 is a language model that uses deep learning to produce human-like text. Like other language-processing systems, GPT-3 predicts the most probable next words given the preceding text and automatically offers the likeliest continuation. It is similar to the auto-completion you see when you type in the Google search bar or in the messaging application on your phone.

How it Works

When GPT-3’s application programming interface (API) receives a small piece of text, it returns new text based on that entry. The entry can be a phrase, a task, a question, or any other kind of expression.
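
As a rough sketch of what this looks like in practice, here is a minimal Python client for a GPT-3-style completion request. The endpoint URL, parameter names, and response shape follow OpenAI’s 2020-era beta API as I understand it, and the `build_completion_request` helper is my own illustrative wrapper, not part of any official library:

```python
import json
import urllib.request

def build_completion_request(prompt, max_tokens=32, temperature=0.7):
    """Build the JSON payload a GPT-3-style completion endpoint expects."""
    return {
        "prompt": prompt,            # the entry: a phrase, task, or question
        "max_tokens": max_tokens,    # upper bound on the generated continuation
        "temperature": temperature,  # higher values -> more varied completions
    }

def complete(prompt, api_key):
    """Send the prompt to the API and return the model's continuation."""
    payload = build_completion_request(prompt)
    req = urllib.request.Request(
        "https://api.openai.com/v1/engines/davinci/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The model's continuation lives in the first choice's "text" field.
    return body["choices"][0]["text"]

# Usage (requires a valid beta API key):
# print(complete("Once upon a time", api_key="YOUR_API_KEY"))
```

Whatever the exact endpoint, the pattern is the same: the entry goes in as plain text, and the API responds with the text the model considers the most likely continuation.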

GPT-3’s auto-completion success rests on the amount of data from which it can gather statistical information. GPT-3 was trained on data from a wide range of sources, including Common Crawl, books, news articles, and web pages. Where GPT-2, GPT-3’s predecessor, had 1.5 billion parameters, GPT-3 has 175 billion. To put that in perspective, the entire English Wikipedia makes up only about 0.6 percent of GPT-3’s training data.

Supervised vs. Unsupervised Machine Learning

There are two types of machine learning: supervised and unsupervised. The first, supervised machine learning, requires well-labeled data. This means that any data the model reviews is tagged with the correct answer.

For example, you can tell the model that a photo of a cat is a cat. The algorithm will then create a specialized numerical identification for all cat photos you label. The more labeled examples the model has, the better its predictions become. You can also label items as “not a cat” to differentiate them from your cat labels. Once the data sets are complete, the model is able to analyze unlabeled data and predict whether the new data is a cat or not.
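
The cat/not-a-cat idea can be sketched in a few lines of Python. This is a toy nearest-neighbor classifier, not anything GPT-3 uses; the two-number feature vectors are invented purely for illustration. The key point is that every training example carries a label, and prediction means matching new data against those labeled examples:

```python
def nearest_neighbor(labeled_data, query):
    """Predict the label of `query` as the label of its closest training example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(labeled_data, key=lambda item: distance(item[0], query))
    return best[1]

# Each training example is (features, label) -- the label is the "correct answer".
training = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not a cat"),
    ((0.2, 0.1), "not a cat"),
]

print(nearest_neighbor(training, (0.85, 0.75)))  # -> cat
```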

GPT-3 is an unsupervised machine learning model. Unsupervised machine learning removes the need for labeled data and instead allows the model to discover structure in the data on its own. With unsupervised machine learning you can perform more complex processing tasks, but the results can be more unpredictable.
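
To contrast with the labeled example above, here is an equally small unsupervised sketch: a single k-means-style assignment step that splits unlabeled numbers into two clusters. The seeding strategy (using the two most distant points as centers) and the data are my own illustrative choices; the point is that no example carries a label, yet structure still emerges:

```python
def two_cluster(points):
    """Split unlabeled points into two clusters -- no labels involved."""
    # Pick the two points farthest apart as the cluster centers.
    seeds = max(
        ((a, b) for a in points for b in points),
        key=lambda pair: abs(pair[0] - pair[1]),
    )
    clusters = ([], [])
    for p in points:
        # Assign each point to the nearer center.
        idx = 0 if abs(p - seeds[0]) <= abs(p - seeds[1]) else 1
        clusters[idx].append(p)
    return clusters

print(two_cluster([0.1, 0.2, 0.15, 0.9, 0.95]))  # -> ([0.1, 0.2, 0.15], [0.9, 0.95])
```

The algorithm was never told which group is which, or even that there are groups; it discovered the split itself. That freedom is what makes unsupervised results both powerful and harder to predict.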

Utilization of GPT-3

Though GPT-3 is an auto-completion tool, it can be used for a number of tasks, including:

  • Maintaining a conversation. 
  • Taking on a specific role in a conversation. For instance, you can ask to talk with a story character about any kind of topic.
  • Translating text between languages.
  • Translating a sentence into a mathematical expression.
  • Generating news articles.
  • Designing/creating interface layouts. 
  • Creating pieces of code.
  • Translating code to different programming languages.
  • Summarizing text.
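
All of these tasks are expressed the same way: as text for the model to complete. As a hypothetical illustration of one item from the list above, here is how an English-to-French translation task might be framed as a few-shot prompt. The example sentences and layout are my own; GPT-3 would be expected to continue the pattern after the final “French:”:

```python
def translation_prompt(sentence):
    """Frame a translation task as a text-completion pattern."""
    examples = [
        ("Hello.", "Bonjour."),
        ("Thank you.", "Merci."),
    ]
    # Demonstrate the pattern with a couple of solved examples...
    lines = ["English: %s\nFrench: %s" % pair for pair in examples]
    # ...then leave the last answer blank for the model to fill in.
    lines.append("English: %s\nFrench:" % sentence)
    return "\n".join(lines)

print(translation_prompt("Good night."))
```

The same trick (show a few solved examples, then an unsolved one) covers conversation, summarization, code translation, and the other tasks listed above.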

Now that GPT-3 is in beta testing, new uses for it are constantly being discovered.

Limitations of GPT-3

GPT-3 represents a significant step in the evolution of AI technology. However, despite its potential, there are still limitations to the GPT-3 model: 

  • The model is trained to calculate the probability of words based on the previous text, but it cannot understand the text it receives and generates.
  • The model cannot think independently, which limits usability. 
  • If the model generates or receives text that is meaningless, it will never know, because it does not understand the context; nor is it able to say that it does not understand.

Additional Use Cases for GPT-3

OpenAI has been quite reserved about the potential uses and use cases of GPT-3. The proposed use cases include:

  • Summarizing and simplifying contracts to make them more understandable and easier to use.
  • Improving keyboard predictions on devices.
  • Writing first drafts of emails, letters, and other communications. 
  • Mass marketing across the web.
  • Creating stories and characters for video games to allow varied user experiences. 

As more users interact with GPT-3, it is likely that OpenAI will continue to update the proposed use cases for the technology.

GPT-3 is not an open source program; therefore, users must be granted permission to use it. To receive use rights for GPT-3, users must fill out a form explaining how they intend to use the model. Each request is evaluated individually by OpenAI before approval. OpenAI has also been closely investigating ways to mitigate misuse of the GPT-3 API, to avoid cases that cause social, physical, or mental harm.

Though the full scope of GPT-3’s potential is not yet known, we can conclude that it has successfully ushered our world into a new phase of AI technology. 
