OpenAI has unveiled a new model that tests scalable alignment techniques by summarising entire books, producing a tl;dr (too long; didn’t read) for works readers might never finish.
The model works by first summarising small sections of a book and then summarising those summaries into a higher-level summary. It repeats this process – which is what makes it a useful scalable alignment test – until the text is condensed as little or as much as desired.
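The recursion itself is simple enough to illustrate in code. The sketch below shows only the control flow under assumed parameters: `summarise_passage`, the chunk size, and the word limits are hypothetical stand-ins (a trivial truncation replaces the actual model call), not OpenAI’s implementation.

```python
# Minimal sketch of recursive summarisation: split the book into chunks,
# summarise each chunk, then summarise the concatenated summaries, repeating
# until the text is short enough. A trivial truncation stands in for the model.

def summarise_passage(text: str, max_words: int = 50) -> str:
    """Placeholder for a model call; a real system would query a language model."""
    words = text.split()
    return " ".join(words[:max_words])

def chunk(text: str, chunk_words: int = 500) -> list[str]:
    """Split the text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]

def recursive_summarise(text: str, target_words: int = 200) -> str:
    """Summarise chunks, then summarise the joined summaries, until short enough."""
    while len(text.split()) > target_words:
        summaries = [summarise_passage(part) for part in chunk(text)]
        text = " ".join(summaries)
    return text

if __name__ == "__main__":
    book = "word " * 5000  # stand-in for a full book
    print(recursive_summarise(book))
```

Each pass shrinks the text by roughly the ratio of chunk size to summary length, so a human (or a model) only ever has to evaluate short passages rather than the whole book at once.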
You can view the complete steps, along with an example of where you can start and end up, on OpenAI’s website.
To create the model, OpenAI used a combination of reinforcement learning from human feedback and recursive task decomposition. The model was trained on a subset of the predominantly fiction books in GPT-3’s training dataset.
OpenAI assigned two people to read the 40 most popular books (according to Goodreads) published in 2020 and write a summary of each. The participants were then asked to rate one another’s summaries in addition to those generated by the model.
On average, human-written summaries received a 6/7 rating. The model’s summaries received that rating 5 percent of the time and a 5/7 rating 15 percent of the time.
Practical uses
Many readers won’t even have made it this far into this article. Most visitors to publications spend an average of just 15 seconds reading, covering around 20 percent of any single article. That becomes a real problem when those readers nonetheless feel informed on an important topic and end up spreading misinformation.
Social media platforms have started asking users whether they really want to share an article they haven’t opened for context. Using models like the one OpenAI is demonstrating, such platforms could at least offer users a decent summary.
The model was mostly successful, but OpenAI concedes in a paper (PDF) that it occasionally generated inaccurate statements. Humans still generally do a better job, but it’s an impressive showing nonetheless for an automated solution.