Originally posted March 3, 2019
Not every new technology product hits the shelves.
Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.
And sometimes, a product seems too dangerous to release.
Recently, the research organization OpenAI announced that it would not be releasing the full version of a text generator it developed, out of fear that it could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.
The organization's GPT-2 text generator can produce paragraphs of coherent, continuing text based on a prompt from a human. For example, when given the claim, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spit out a transcript of "his acceptance speech" that read in part:
It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.

Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.
The info is here.