A Book App Used AI to ‘Roast’ Its Users. It Went Anti-Woke Instead

Fable, a popular social media app that describes itself as a haven for “bookworms and bingewatchers,” created an AI-powered end-of-year summary feature recapping what books users read in 2024. It was meant to be playful and fun, but some of the recaps took on an oddly combative tone. Writer Danny Groves’ summary, for example, asked if he’s “ever in the mood for a straight, cis white man’s perspective” after labeling him a “diversity devotee.”

Books influencer Tiana Trammell’s summary, meanwhile, ended with the following advice: “Don’t forget to surface for the occasional white author, okay?”

Trammell was flabbergasted, and she soon realized she wasn’t alone after sharing her experience with Fable’s summaries on Threads. “I received multiple messages,” she says, from people whose summaries had inappropriately commented on “disability and sexual orientation.”

Ever since the debut of Spotify Wrapped, annual recap features have become ubiquitous across the internet, providing users with a rundown of how many books and news articles they read, songs they listened to, and workouts they completed. Some companies are now using AI to wholly produce or augment how these metrics are presented. Spotify, for example, now offers an AI-generated podcast where robots analyze your listening history and make guesses about your life based on your tastes. Fable hopped on the trend by using OpenAI’s API to generate summaries of the past 12 months of its users’ reading habits, but it didn’t expect that the AI model would spit out commentary that took on the mien of an anti-woke pundit.

Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. “We are deeply sorry for the hurt caused by some of our Reader Summaries this week,” the company wrote in the caption. “We will do better.”

Groves concurs. “If individualized reader summaries aren’t sustainable because the team is small, I’d rather be without them than confronted with unchecked AI outputs that might offend with testy language or slurs,” he says. “That’s my two cents … assuming Fable is in the mood for a gay, cis Black man’s perspective.”

Generative AI tools already have a lengthy track record of race-related misfires. In 2022, researchers found that OpenAI’s image generator Dall-E had a bad habit of showing nonwhite people when asked to depict “prisoners” and all white people when it showed “CEOs.” Last fall, WIRED reported that a variety of AI search engines surfaced debunked and racist theories about how white people are genetically superior to other races.

Overcorrecting has sometimes become an issue, too: Google’s Gemini was roundly criticized last year when it repeatedly depicted World War II–era Nazis as people of color in a misguided bid for inclusivity. “When I saw confirmation that it was generative AI making those summaries, I wasn’t surprised,” Groves says. “These algorithms are built by programmers who live in a biased society, so of course the machine learning will carry the biases, too—whether conscious or unconscious.”

Source: Wired