AI companies are desperate for data and they’ll go to any length to find it
It’s important to have an idea of the scale of the data that companies working on generative artificial intelligence algorithms require, and some recent articles I’ve come across can really help.
The Verge has this piece, “OpenAI transcribed over a million hours of YouTube videos to train GPT-4", which gives some insight into the level of desperation involved in trying to obtain data when just about anything of value on the internet has already been used in datasets. That desperation is why OpenAI was willing to break YouTube’s rules.
Given the rush to offer more features, the companies involved are taking the “better to ask for forgiveness than permission” route: if they get found out, they’ll cut a deal or pay the fine when their generative algorithms are well-trained.
Getting data is now such a priority that pretty much anything goes, as this article in The New York Times, “Four takeaways on the race to amass data for A.I.”, explains. It includes a visual breakdown of all the data used to train GPT-3: roughly 410 billion tokens gathered by crawlers from all over the internet since 2007, compared with the 3 billion tokens represented by the entirety of Wikipedia. Book scanning contributes a pair of collections of 12 billion and 55 billion tokens, about which the company gives very little detail and which are presumed to be millions of published books; another 19 billion tokens came from Reddit, selected from posts that received three or more positive votes as an indicator of quality.
We have now reached the point where some companies are starting to use synthetic data generated by other algorithms. This is a risky practice, because it can lead to errors that are consolidated throughout the different training and inference processes, but because in theory it offers unlimited potential and will be difficult to regulate, it’s an attractive proposition for some players.
Algorithms generating data to train other algorithms? We’re approaching
Christopher Nolan territory here. In the meantime, companies working on generative algorithms will continue to cut deals with newspapers and any other organization capable of generating data. Machine learning used to be about working carefully with data: obtaining access to archives and eliminating unjustified outliers in order to produce efficient models. Now we are in a full-on phase in which the only thing that matters is that the resulting algorithm seems to be of reasonable quality, without asking too many questions.
OpenAI transcribed over a million hours of YouTube videos to train GPT-4
A New York Times report details the ways big players in AI have tried to expand their data access.
Earlier this week, The Wall Street Journal reported that AI companies were running into a wall when it comes to gathering high-quality training data. Today, The New York Times detailed some of the ways companies have dealt with this. Unsurprisingly, it involves doing things that fall into the hazy gray area of AI copyright law.
The story opens with OpenAI which, desperate for training data, reportedly developed its Whisper audio transcription model to get over the hump, transcribing over a million hours of YouTube videos to train GPT-4, its most advanced large language model. That’s according to The New York Times, which reports that the company knew this was legally questionable but believed it to be fair use. OpenAI president Greg Brockman was personally involved in collecting the videos that were used, the Times writes.
OpenAI spokesperson Lindsay Held told The Verge in an email that the company curates “unique” datasets for each of its models to “help their understanding of the world” and maintain its global research competitiveness. Held added that the company uses “numerous sources including publicly available data and partnerships for non-public data,” and that it’s looking into generating its own synthetic data.
The Times article says that the company exhausted supplies of useful data in 2021, and discussed transcribing YouTube videos, podcasts, and audiobooks after blowing through other resources. By then, it had trained its models on data that included computer code from GitHub, chess move databases, and schoolwork content from Quizlet.
Google spokesperson Matt Bryant told The Verge in an email the company has “seen unconfirmed reports” of OpenAI’s activity, adding that “both our robots.txt files and Terms of Service prohibit unauthorized scraping or downloading of YouTube content,” echoing the company’s terms of use. YouTube CEO Neal Mohan said similar things about the possibility that OpenAI used YouTube to train its Sora video-generating model this week. Bryant said Google takes “technical and legal measures” to prevent such unauthorized use “when we have a clear legal or technical basis to do so.”
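The robots.txt mechanism Bryant refers to is a voluntary convention: a site publishes rules, and a well-behaved crawler checks them before fetching anything. As a minimal sketch, here is how a compliant crawler would honor a blanket disallow using Python’s standard library. The rules and URLs below are hypothetical, not YouTube’s actual robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, loosely modeled on the kind of blanket
# scraping prohibition the article describes (not YouTube's real file).
rules = """
User-agent: *
Disallow: /watch
Disallow: /api
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler asks before fetching; a non-compliant one simply doesn't.
print(parser.can_fetch("MyCrawler", "https://example.com/watch?v=abc123"))  # False
print(parser.can_fetch("MyCrawler", "https://example.com/about"))           # True
```

The catch, as the story makes clear, is that robots.txt is enforced only by the crawler’s own good faith; sites must fall back on terms of service and the “technical and legal measures” Bryant mentions.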
Google also gathered transcripts from YouTube, according to the Times’ sources. Bryant said that the company has trained its models “on some YouTube content, in accordance with our agreements with YouTube creators.”
The Times writes that Google’s legal department asked the company’s privacy team to tweak its policy language to expand what it could do with consumer data, including material from office tools like Google Docs. The new policy was reportedly released intentionally on July 1st to take advantage of the distraction of the Independence Day holiday weekend.
Meta likewise bumped against the limits of good training data availability, and in recordings the Times heard, its AI team discussed its unpermitted use of copyrighted works while working to catch up to OpenAI. The company, after going through “almost every available English-language book, essay, poem and news article on the internet,” apparently considered taking steps like paying for book licenses or even buying a large publisher outright. It was also apparently limited in the ways it could use consumer data by privacy-focused changes it made in the wake of the Cambridge Analytica scandal.
Google, OpenAI, and the broader AI training world are wrestling with quickly evaporating training data for their models, which get better the more data they absorb. The Journal wrote this week that companies may outpace the supply of new content by 2028.
Possible solutions to that problem mentioned by the Journal on Monday include training models on “synthetic” data created by their own models, or so-called “curriculum learning,” which involves feeding models high-quality data in an ordered fashion in hopes that they can make “smarter connections between concepts” using far less information. Neither approach is proven yet. The companies’ other option is using whatever they can find, whether they have permission or not, and based on multiple lawsuits filed in the last year or so, that way is, let’s say, more than a little fraught.