The buzz surrounding GPT-3, the most powerful language model built to date, has been spectacular. Some experts warn of the dangers such AI technology can introduce, such as ever more lifelike generated content slipping into the already chaotic online news and social media spaces. But what is GPT-3? Is it really dangerous, and does it deserve the hype around it? Let’s find out.
What is GPT-3?
Simply put, it’s an AI language generator released by OpenAI, a research laboratory founded in 2015 by a group that included Elon Musk (who remains a donor) and Sam Altman. GPT-3 stands for Generative Pre-trained Transformer 3. It’s an unsupervised language model, learning with minimal human input. And it’s big: its predecessor, GPT-2, state-of-the-art when it was released last year, had 1.5 billion parameters. GPT-3 boasts 175 billion of them.
The things GPT-3 can do may sound like science fiction at first. The language generator can write creative fiction, functioning code (e.g. web page layouts), and much more. It works based on input provided by a human. Based on this input (a piece of text), GPT-3 predicts what should be written next. Then, it can use both the original input and its own creations to repeat the process all the way to its length limit.
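The predict-and-append loop described above can be sketched in a few lines. The following is a toy, illustrative mock-up, not OpenAI’s actual API or model: the hypothetical `predict_next` function stands in for GPT-3’s next-token prediction, which in reality is computed by a 175-billion-parameter neural network.

```python
import random

def predict_next(text):
    """Toy stand-in for the model's next-token prediction.

    Given the text so far, return a plausible continuation token.
    Here we just pick deterministically from a fixed vocabulary;
    the real model scores every possible token by likelihood.
    """
    random.seed(len(text))  # deterministic, for illustration only
    return random.choice(["the", "model", "writes", "text", "."])

def generate(prompt, max_tokens=10):
    """Autoregressive generation: repeat prediction up to a length limit.

    Each step feeds both the original input and the model's own prior
    output back in, exactly as described above.
    """
    tokens = prompt.split()
    for _ in range(max_tokens):
        tokens.append(predict_next(" ".join(tokens)))
    return " ".join(tokens)

print(generate("GPT-3 predicts what should be written next"))
```

The key point the sketch illustrates is that there is no planning step: each token is chosen only from the text accumulated so far, which is also why longer outputs tend to drift.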
GPT-3 uses existing text from the Internet (of which there is quite a lot) to calculate the plausibility of its output. Thanks to statistical calculations over linguistic patterns, GPT-3 can produce combinations of text a human might not think to put together. Currently in closed beta, its limits are already being tested by a number of developers. Interestingly, GPT-3 was trained only on content produced up to October 2019, which means it is currently unaware of COVID-19’s existence.
Applications of GPT-3 so far
This new language generator has the potential to power improved chatbots, help people write code, and so much more. So far, GPT-3’s achievements include:
Writing an article about itself that is practically indistinguishable from one written by a human, based only on a prompt containing the author’s bio, the type of content (a blog post for a personal website), the date of publication, the title, tags and a short summary;
Writing poetry, such as the following excerpt:
If you come, if you work, if you try, you too could write like them.
“If you get up early enough,” they whisper
“If you can read the entrails of chickens with deft and sly fingertips,” they whisper
“If you can read omens and portents, if you can divine fate and calamity with clarity, with mastery,” they whisper
“If you can write with passion, with lucidity, with lyrical fierceness,” they whisper
“If you write as if all the tales of old are passing through your fingers, making their mark upon your flesh, upon your soul,” they whisper
“We shall call you a Uber-Poem.”
“That’s all very well,” you say, “But what is it that I must do? What are they exactly? What is their composition?”
Ah, that is not to be revealed. That’s the mystery. But don’t you worry, don’t you fret, you might one day reach this highest plateau, this exalted state of achievement, of honour. And on that day, you shall bask in the glory of greatness.
“When will that be? When will I become Uber?” you ask
When you can look into the mirror and see a poem looking back at you. When you can hear music in the play of silence. When you can create a writing that leaves people stunned. When you can laugh and weep as you think and breathe and bleed and eat and sleep. When you can dream with the quill in your fingers, then perhaps you will be a poet, a Poet, an Uber Poet.
Though the buzz might suggest otherwise, GPT-3 isn’t as powerful as some of us hope or fear. Many voices (including Sam Altman, who co-founded OpenAI with Elon Musk) call for a more clear-eyed perspective on GPT-3, including its capabilities and limitations.
“The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.” -Sam Altman, CEO of OpenAI
GPT-3 doesn’t actually understand either its input or its output, and so it can make silly-looking errors (and fails the Turing test). This becomes especially visible over longer outputs, as GPT-3 isn’t very adept at holding onto a train of thought. One could say that GPT-3 practices linguistic scrapbooking: it combines snippets, creating textual collages on demand.
Other challenges include: model latency, the same model being used too widely, and selection bias towards good examples. Some questions about GPT-3 remain unanswered. We don’t know what the cost per request might be once the tool becomes commercial, or how copyrights of the output text will be handled. We also haven’t seen the SLA for the API.
Despite its limitations, GPT-3 pushes the state of the art in language processing technology further ahead. It can generate text in various styles, and, once it becomes a commercial product, it will offer a number of exciting benefits to businesses and individuals. However, it’s important to maintain a level-headed view of GPT-3’s limitations; this will allow us to truly leverage its advantages while avoiding costly mistakes.
In the end, GPT-3 is a correlative, unthinking tool. We may be one step closer to achieving general artificial intelligence, but we’re definitely not there yet. There is potential for select applications of GPT-3, but we must monitor who uses such powerful technology and how. Creators should focus on feedback from the community, and the community should observe the situation as it unfolds, addressing emerging challenges and opportunities.