Tuesday, March 19, 2019

AI Tools Could Legitimize "Fake News"


Last month, an A.I. startup backed by sometime A.I. alarmist Elon Musk announced a new artificial intelligence it claimed was too dangerous to release to the public. While “only” a text generator, OpenAI’s GPT-2 was reportedly capable of generating text so freakishly humanlike that it could convince readers it was, in fact, written by a real flesh-and-blood human being. To use GPT-2, a user need only feed it the start of a document; the algorithm then takes over and completes it in a highly convincing manner. Give it the opening paragraphs of a newspaper story, for instance, and it will manufacture “quotes” and assorted other details.
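
To get a feel for how low the barrier is, here is a minimal sketch of that prompt-then-complete workflow, assuming the small GPT-2 model OpenAI did publicly release and the Hugging Face transformers library (both are illustrative assumptions, not part of OpenAI's announcement):

# Load the publicly released small GPT-2 model as a text generator.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Feed it the opening of a "news story"; the model invents the rest,
# including plausible-sounding but entirely fabricated details.
prompt = "Officials confirmed on Tuesday that"
result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])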

Such tools are becoming increasingly common in the world of A.I. — and in the world of fake news, too. The combination of machine intelligence and, perhaps, the distinctly human unintelligence that allows disinformation to spread could prove a dangerous mix. Fortunately, a new A.I. developed by researchers at the MIT-IBM Watson A.I. Lab and Harvard University is here to help. And just like a Terminator designed to hunt other Terminators, this one — called GLTR — is uniquely qualified to spot bot impostors. As its creators explain in a blog post, text generation tools like GPT-2 open up “paths for malicious actors to … generate fake reviews, comments or news articles to influence the public opinion. To prevent this from happening, we need to develop forensic techniques to detect automatically generated text.”
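
GLTR's core trick is statistical: for every word in a document, it asks how highly a language model would have ranked that word given the text before it. Human writers routinely reach for words the model finds surprising, while generators overwhelmingly pick from the model's top guesses. Below is a minimal sketch of that per-token ranking, again assuming GPT-2 loaded through the Hugging Face transformers library (an illustrative choice; GLTR's released tool layers visualization and top-k buckets on top of this idea):

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    # Rank of each actual token within the model's predicted
    # next-token distribution, given the preceding tokens.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(1, ids.size(1)):
        scores = logits[0, pos - 1]   # predictions for the token at `pos`
        actual = ids[0, pos].item()
        rank = int((scores > scores[actual]).sum().item()) + 1
        ranks.append((tokenizer.decode([actual]), rank))
    return ranks

# Text whose tokens mostly have single-digit ranks looks machine-generated.
for token, rank in token_ranks("The suspect was taken into custody on Tuesday."):
    print(repr(token), rank)

Detection then reduces to thresholds: the released GLTR demo colors each token by whether it falls within the model's top 10, top 100, or top 1,000 predictions, making suspiciously predictable passages visible at a glance.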


