The “Dangerous” OpenAI Text Generator Recreated by Two Researchers
On Thursday, a pair of computer science master’s graduates rolled out an AI text generator based on GPT-2, a program from the Elon Musk-backed OpenAI that the company withheld from full public release, citing concerns over its potential societal impact.
However, the two researchers, Aaron and Vanya, believe that the software does not pose any risk to society, at least not yet. According to Wired, the duo wanted to prove that anyone can develop such software, regardless of their financial resources.
To replicate GPT-2, the duo used $50,000 worth of free cloud computing from Google. They also fed the machine learning software millions of webpages, gathered by following links shared on Reddit.
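This Reddit-based collection method mirrors how OpenAI reportedly built its own WebText corpus: gather outbound links shared on Reddit and keep only those from posts that earned a minimum amount of karma, using upvotes as a rough quality filter. A minimal sketch of that filtering step is below; the karma threshold of 3 follows OpenAI’s published description of WebText, and the sample posts are invented purely for illustration.

```python
# Sketch of a WebText-style quality filter: keep an outbound link only
# when the Reddit post sharing it earned enough karma. The threshold of
# 3 follows OpenAI's description of WebText; the sample data is invented.

MIN_KARMA = 3

def filter_links(posts, min_karma=MIN_KARMA):
    """Return deduplicated URLs from posts at or above the karma cutoff."""
    seen = set()
    kept = []
    for url, karma in posts:
        if karma >= min_karma and url not in seen:
            seen.add(url)
            kept.append(url)
    return kept

sample_posts = [
    ("https://example.com/article-a", 12),  # well-upvoted: kept
    ("https://example.com/article-b", 1),   # below threshold: dropped
    ("https://example.com/article-a", 40),  # duplicate URL: kept once
    ("https://example.com/article-c", 3),   # exactly at threshold: kept
]

print(filter_links(sample_posts))
# → ['https://example.com/article-a', 'https://example.com/article-c']
```

The kept URLs would then be fetched and their text extracted to form the training corpus, which is the expensive, compute-heavy part the $50,000 of cloud credit paid for.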
Just like OpenAI’s GPT-2, the newly created software analyzes the statistical patterns of language and can be applied to many tasks: translation, chatbots, answering questions, and more. However, the most alarming concern among experts has been the generation of synthetic text, and with it, fake news.
David Luan, vice president of engineering at OpenAI, once told Wired, “It could be that someone who has malicious intent would be able to generate high-quality fake news”. Owing to this and other dangers, the OpenAI team decided to withhold the full model, though it did publish a research paper.
There have already been public iterations of GPT-2. In fact, a few people have released language models online based on the OpenAI software. Of course, these are not the full model trained on “8 million web pages”; they are built on the smaller versions OpenAI did release. You can try them out yourself.
While they are fun to play around with, they do not reliably produce logical statements. Wired, which tested both the original GPT-2 and the new model, writes, “Machine learning software picks up the statistical patterns of language, not a true understanding of the world.”
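Wired’s point, that these models capture statistical patterns rather than meaning, can be illustrated with the simplest possible language model. The toy bigram sampler below (the tiny corpus and the seed are invented for illustration, and real models like GPT-2 are vastly more sophisticated) learns only which word tends to follow which, so it produces locally fluent but globally meaningless text:

```python
import random
from collections import defaultdict

# Toy bigram language model: it learns only which word tends to follow
# which, the crudest form of the "statistical patterns of language"
# Wired describes. The tiny corpus below is invented for illustration.
corpus = (
    "the model writes text . the model reads text . "
    "the news reads well . the text reads well ."
).split()

# Count, for each word, the successors observed in the corpus.
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def generate(start, length, rng):
    """Sample a word sequence by repeatedly picking an observed successor."""
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in successors:
            break
        word = rng.choice(successors[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 8, random.Random(0)))
```

Every adjacent word pair in the output was seen in the training text, so the result sounds plausible sentence-fragment by sentence-fragment, yet nothing in the model knows what a “model” or the “news” actually is. GPT-2 operates on the same principle, just with far longer contexts and billions of learned parameters.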