After helping more than 750,000 players spot fake news, a game that became an unexpected viral hit is getting a reboot just in time for the midterm elections.
“Factitious” was launched with a simple question from AU Adjunct Professor Maggie Farley to Asst. Professor Bob Hone: “What if we could build a game to see if people could tell if an online story was real or fake?” With a team of AU Game Lab students and experienced professionals, Factitious was developed and released on July 3, 2017. The viral hit was played 139,000 times in its first two days and more than 339,000 times in its first month.
As demand grew, Hone realized the viral hit needed a reboot.
Coming October 1, “Factitious 2018” will have brand new articles, with updates coming every Monday through the midterm elections. For players who need to do more than compete with themselves, there will be a high score list to see which users spot fake news best.
“One of the great results of our success is that we can see players getting better at spotting fake news the more they play,” said Bob Hone. “We want to extend this learning to produce an even bigger effect.”
Like its predecessor, “Factitious 2018” won’t make spotting the real from the fake easy. Players of the original version were fooled by stories from fake sources that appeared to be real, such as “TheMississippiHerald.com.” In the age of fake news and realistic-looking but fake URLs, Hone offers the following tips for spotting fake news in “Factitious 2018” and on the Internet.
“Fake news purveyors are upping their game, so people need to adopt a skeptical view of online news. Is this a well-known source? Does the writer use flamboyant language? (Real news articles don’t.) Is it a singular opinion or a fact-based approach?”
“Factitious 2018” is available for play at: http://factitious.augamestudio.com/#/
Fast Facts About Factitious
- More than 70,000 people played the game on the first day of release.
- To date, players have rated nearly 8 million articles.
- Games are engaging when they’re “appropriately difficult”: not so easy as to be boring and not so hard as to be frustrating. In the testing phase, articles that most users guessed correctly were excluded (too easy); if half of the testers got an article wrong, it was also discarded (too hard or confusing).