AI-Powered Scams Flood Authors’ Inboxes, Exploiting Dreams and Exposing Digital Vulnerabilities

The period leading up to a book’s publication is often described by authors as a crucible of intense anxiety, a fragile equilibrium between fervent hope and profound dread. With the manuscript delivered and little left to do but await critical reception and social media engagement, writers find themselves in a precarious state, oscillating between visions of bestseller lists and the pervasive fear of their work vanishing into the vast, indifferent expanse of the attention economy. This nerve-wracking interval, dubbed "the calm before the calm" by some seasoned authors, is precisely when new, insidious threats are emerging, amplified by the pervasive power of artificial intelligence.

It was during this harrowing lull, preceding the January release of Neptune’s Fortune, a nonfictional nautical tale, that a wave of peculiar emails began to arrive. These solicitations, purportedly from publishing professionals, bore names ranging from the mundane (Dorothy Stratton) to the outlandish (Futa Concept). Each email adhered to a remarkably similar structure: meticulously crafted paragraphs of excessive flattery, often beginning with effusive praise for the book – "‘Neptune’s Fortune is a masterclass in historical adventure and human obsession’" – followed by an offer to dramatically increase its visibility. Promises included sophisticated social media campaigns, podcast placements, a deluge of positive Goodreads reviews, and introductions to burgeoning online book clubs, now considered vital conduits to viral success as traditional media channels experience a decline in influence. While the initial pitches artfully omitted any mention of fees, subsequent correspondence would invariably cite costs ranging from hundreds to thousands of dollars, presented as a small investment for the promise of literary glory.

The Subtle Scent of AI: Unmasking the Digital Deception

Initially, the sheer volume and personalized nature of the flattery were disarming. However, the mention of fees quickly exposed the underlying scam. The emails, upon closer inspection, exuded the cloying, bland fluency characteristic of large language models (LLMs). This distinct "sickly-sweet reek of LLM sycophancy" was a tell-tale sign. Some pitches included links to hastily constructed, rudimentary websites – digital Potemkin villages featuring AI-generated covers of non-existent books by fabricated authors, all supposedly beneficiaries of the sender’s services. The integration of artificial intelligence was evident across every facet of these pitches, extending even to the sender’s profile photos: typically, an image of a seemingly "comely young woman" in an office or coffee shop, often marred by "nightmarish hallucinations" in the background, betraying the use of budget-tier image generators.

The frequency of these emails rapidly escalated, from a few initially to several per week. Their polished, idiomatic language was often sophisticated enough to bypass standard email spam filters. What initially felt like an isolated misfortune of landing on a "sucker list" soon revealed itself as a widespread assault on the literary community. Peers across the industry reported experiencing the same deluge, with some receiving thousands of such messages. While the promised AI productivity boom has yet to broadly impact the global economy, it has undeniably turbocharged this specific brand of small-scale fraud, inundating authors’ inboxes with highly customized, AI-generated "slop."

High-Profile Targets and the Scale of the Onslaught

The onslaught, which commenced in mid-2025, has spared no author, regardless of their stature or success. Patrick Radden Keefe, the acclaimed author of bestsellers like Say Nothing and Empire of Pain, both adapted into successful television series, confirmed the relentless nature of the attacks. "Every morning I wake up to two or three of these emails," he reported. Even literary titans like Dan Brown, whose The Da Vinci Code has sold over 80 million copies worldwide, publicly shared a similar missive on Facebook. The email, addressed to "Hey Dan Brown," lauded his work as "rare to see a story that balances heart, message, and craft the way yours does. It’s the kind of book readers don’t just read: they feel it. That’s special." The pitch then followed predictably: "But here’s the thing: too many incredible books like yours disappear in the noise."

The versatility of this AI-driven technology theoretically allows scammers to target any creative profession. While musicians have also begun to receive similar flattery-forward approaches, albeit to a lesser extent, authors remain a primary target. The unique blend of ego and insecurity inherent in the creative process, coupled with the often-solitary nature of writing, makes authors particularly vulnerable.

A New Twist on an Old Con: AI’s Role in Scaling Fraud

Prior to the widespread adoption of chatbots, elaborate scams targeting authors were not uncommon. A few years ago, a sophisticated operation based in the Philippines allegedly defrauded approximately 800 predominantly self-published authors of a staggering $44 million. These victims were promised film adaptations of their books and charged exorbitant fees for services that never materialized. However, these were often more complex, targeted stings. The advent of free generative AI has fundamentally shifted the scam strategy, transforming it from precision "sniper attacks" into a relentless "hail of machine gun fire."

AI has effectively driven the "cost of bullets" – the time and effort required to identify a mark, research their work, draft a personalized email, and sustain a conversation – down to virtually zero. This allows scammers to operate on an unprecedented scale. Even if the vast majority of these emails fail to yield results, a single successful hit is enough to perpetuate the con. It transforms the operation into a numbers game, easily winnable when the potential pool of targets is limitless.
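This numbers-game logic can be sketched with a few lines of arithmetic. All figures below are purely hypothetical placeholders (the article reports no specific volumes, conversion rates, or fees); they exist only to show why near-zero marginal cost inverts the economics of the con.

```python
# Hypothetical illustration of the scam's expected-value arithmetic.
# Every number here is an assumption for illustration, not a figure
# from the article.

emails_per_day = 50_000    # outreach volume AI makes feasible (assumed)
conversion_rate = 0.0002   # one victim per 5,000 emails (assumed)
average_payment = 800      # dollars extracted per victim (assumed)
cost_per_email = 0.001     # near-zero marginal cost with an LLM (assumed)

daily_revenue = emails_per_day * conversion_rate * average_payment
daily_cost = emails_per_day * cost_per_email
daily_profit = daily_revenue - daily_cost

print(f"Expected daily profit: ${daily_profit:,.0f}")
```

Even at a conversion rate of 0.02 percent, revenue dwarfs the sending cost; when the "cost of bullets" approaches zero, almost any hit rate above zero keeps the operation profitable.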

The Human Cost: Behind the Bots

Determining the exact number of authors who have fallen victim to these AI-powered scams is challenging, as victims often feel a sense of embarrassment or shame. One commenter on Writers Beware, a reputable website dedicated to exposing literary swindles, confessed, "I have been badly scammed by somebody named Moses Fred. No website, no LinkedIn page, no following. I was a total fool. He kept emailing me and telling me he could do wonderful things with my book and I fell for it and I feel like a fool – a fool!" Victoria Strauss, the founder of Writers Beware, noted that she had personally spoken to fewer than twenty authors who admitted to paying the scammers. While this might represent only a fraction of actual victims, given the daily onslaught of emails sent to tens of thousands of professional authors in America, not to mention self-published writers globally, the economics of AI-driven fraud suggest that even a low conversion rate can be highly profitable. The continued increase in these emails indicates their efficacy.

The evangelists of generative AI often speak of its potential to "democratize creativity on an unprecedented scale," promising a limitless artistic bounty by removing barriers for those lacking opportunity or talent. While that future remains elusive, AI has demonstrably removed barriers for scammers, creating new avenues for exploitation.

Investigating the Digital Footprint

Driven by curiosity and a desire to understand the mechanics of the scam, the author decided to engage with a few of these emails. One interaction began with "Nilda Mulan," whose AI-generated profile photo bore a striking resemblance to the Disney character, albeit in a pantsuit. Nilda praised Neptune’s Fortune as "a truly captivating work" and offered to feature it in "The European Book Club," introducing it to "our growing global community." This, naturally, came with a "modest participation fee" to cover "editorial preparation, audience outreach, and dedicated promotional efforts."

Upon expressing interest, the author requested verification of Nilda’s identity, asking for a photo of her holding the book or a newspaper with the current date. The response, delivered within two minutes, was a masterclass in AI-generated deflection: "Thank you for your candid message. I completely understand your caution, and I appreciate your desire to verify legitimacy before moving forward," followed by a polite refusal, citing "internal policies" against sharing photos. This back-and-forth continued, with Nilda’s responses remaining unctuously flattering yet unyielding, effectively mimicking a chatbot programmed to rebuff such inquiries.

In lieu of a photo, Nilda provided a link to The European Book Club’s website, ostensibly to prove its legitimacy. The site featured a list of "recently featured books," including literary classics with glaring misspellings or mangled covers – "Murakami’s Norrvgigian Wood" and García Márquez’s "Ona Hun dr Da de Solitude" – all clear hallmarks of cheaply produced AI art. Hitting a conversational wall, the author ceased responding. Minutes later, Nilda, eager to maintain the engagement, sent an image of herself avidly reading the author’s book, coincidentally wearing the exact same outfit and sitting in the same chair as in her profile photo. When confronted about the garbled text on the book’s spine and back cover, Nilda’s LLM-driven response attributed it to "image quality and compression, which can distort small details."

The Raw Reality: A Glimpse Behind the AI Curtain

To further the investigation, the author sought a face-to-face (or as close as possible) interaction. Another sender, "Nicole Powell," whose profile photo depicted a young, "perfectly lit white woman smiling impishly behind her laptop," had offered promotion via podcasts and newsletters. The author, feigning interest in other opportunities, requested a Zoom call. Surprisingly, Nicole agreed but stated her "assistant," Penny, would attend in her stead. Insisting on speaking with Nicole (the woman in the profile photo), the author was met with another AI-generated deflection: "I completely understand, and I appreciate your preference. Due to my current schedule, I will not be available for calls until further notice, so my assistant is the one who can attend at this time. She is fully briefed and able to address any questions you have."

The Zoom call with Penny revealed a stark reality. The woman on screen, likely the person behind the "Nicole Powell" persona, appeared to be in her early twenties, shabbily dressed, and visibly missing a bottom tooth. The background was a dirty wall, with loud, indistinguishable voices audible. Penny avoided eye contact, speaking haltingly, a stark contrast to the smooth fluency of the AI-generated emails, about an "organic promotion plan" for social media.

This encounter brought into sharp relief the potential human cost behind these scams. Many cyber-scammers are not willing participants but rather victims of human trafficking rings. These operations lure desperate job seekers, including children, from developing countries with false promises, confiscate their travel documents, and force them into fraud operations targeting wealthier nations. Recruits are often subjected to unattainable sales quotas under threat of physical torture. AI, by enabling scammers to cast an exponentially wider net, has likely intensified these quotas. While the author could not confirm Penny’s situation, her apparent helplessness made it more than plausible. (The name "Penny" is a pseudonym to protect her identity.)

Despite "Nicole’s" assurance that Penny was "fully briefed" on the author’s book – which "Nicole" had described as "a gripping blend of historical investigation, maritime adventure, and imperial history" – Penny exhibited no knowledge of the work or its author. The raw, unmediated human vulnerability of Penny, stripped of her AI interpreter, was striking. She seemed utterly lost, a stark contrast to the polished digital persona. This raised questions about who was more credulous: the victims, or Penny, for believing she could succeed without AI’s assistance. Just as over-reliance on ChatGPT for academic tasks threatens literacy, swindlers are becoming so dependent on LLMs that they struggle to operate without them.

During the call, the author discreetly searched Penny’s full name, displayed on her Zoom tile. Both names, of Yoruba origin, are common in Nigeria. When asked about her company’s base, Penny hesitated before stating South Africa, possibly aware of the negative connotations associated with the "Nigerian Prince" scam, despite its complex origins. Victoria Strauss of Writers Beware, through extensive investigation of IP addresses and geolocated social media posts, has traced the primary operation of these AI-driven literary scams to Nigeria. "They’ve become unbelievably numerous and prolific," Strauss commented, attributing this volume directly to AI’s capacity for generating "endless exchanges."

The Vulnerability of the Publishing Landscape

These scams are targeting the book industry at a particularly turbulent juncture. The rise of self-publishing, now often rebranded as "indie publishing," has democratized access but also placed a greater marketing burden on authors. Traditional promotional avenues like newspaper reviews, book tours, and radio appearances are increasingly less effective, with a book's success now heavily reliant on the unpredictable forces of online virality. This landscape is daunting for many authors who lack substantial social media followings, are unfamiliar with digital marketing, or simply prefer to focus on writing.

Christie Hinrichs of Authors Unbound, an advocacy and events agency, highlighted the "mysterious place" that platforms like BookTok represent for many new authors. "How do you even find an entry point in TikTok? How do you go viral? These questions could definitely be a point of insecurity that [the scammers] are taking advantage of." Hinrichs also posited another theory: authors may feel underserved by publishers post-COVID. "Post-Covid we have seen publisher support around new titles withdraw a little bit, and some authors might feel underserved, in which case I could see scammers capitalizing on that fear or misgiving."

Major publishing houses have responded by including warnings about scams impersonating their staff on their websites. However, the rapid evolution of these AI-powered schemes makes it a "whack-a-mole situation." A general counsel for a major publishing house, who requested anonymity, noted the futility of reporting to authorities like the FBI, who often refer victims to the Internet Crime Complaint Center (IC3.gov). The AI literary scam is but one ripple in the vast ocean of global cybercrime, a problem too pervasive for any single national authority to resolve.

The Engine of Longing: Why Authors Fall Prey

All effective scams are built on exploiting fundamental human desires. In the book industry, the powerful yearning for success, recognition, and prestige is a potent fuel. Elyse Graham, a bestselling historian, humorously yet acutely observed that this desire among authors ranks alongside love and "get-rich-quick" schemes as one of the "three things where the engine of longing is so great that it can power an entire scam infrastructure." This sentiment echoes the vulnerabilities exploited by earlier Filipino scammers who dangled phantom Netflix deals before aspiring self-published authors. Graham drew a parallel to the AI-enabled celebrity impersonation scams from Southeast Asia, where individuals, often elderly, are duped into believing they are "in a relationship with Keanu Reeves."

The AI literary scammers also engage in celebrity impersonation, albeit of the literary kind. Some emails purport to be fan letters from "superstar authors" like Margaret Atwood or Rick Riordan. These messages don't immediately pitch services but aim to initiate a conversation, during which the "celebrity" would presumably reveal their secret "organic promotion plans" for conquering bestseller lists. Elena Ferrante, the pseudonymous author of the globally acclaimed Neapolitan novels, is frequently name-checked, perhaps because the mystery of her true identity makes impersonation harder to disprove. (The author's investigation revealed a purported email for Ferrante: [email protected], highlighting the blatant fabrication.) Less prominent but still successful authors, such as bestselling military historian Adam Makos, have also had their identities stolen. Makos, who had already informed his local sheriff's office and plans to contact the FBI, expressed concern beyond financial loss: "It's different when you start playing with someone's emotions – not just their bank account – their emotions, their outlook on humanity. Their belief in the future and their belief in their own potential."

The Death of Truth and the Regulatory Void

Makos’s frustration extended to the lack of governmental response. He believed that instead of pursuing scammers internationally, a more effective solution would be to regulate the U.S.-based AI companies whose technologies enable these operations. However, he expressed skepticism given the prevailing "laissez-faire attitude towards AI" in some political circles. "I think what we’re watching right now in real time, is the death of truth, the death of reality. Where are the safeguards?" he lamented.

Edward Balleisen, a history professor at Duke and author of Fraud: An American History from Barnum to Madoff, explains that scams typically follow a life cycle. As public awareness grows through media reports, people become wiser, rendering the scam less profitable. The "Nigerian Prince" scam, for instance, now primarily ensnares only the most uninformed or gullible. When the "hit rate" drops sufficiently, scammers pivot to new methods. However, AI significantly lowers the threshold for that minimum hit rate, allowing these operations to persist far longer due to the sheer volume of outreach.

The implications of AI’s misuse extend beyond individual financial loss. They challenge the very fabric of trust in digital interactions and raise profound questions about authorship, authenticity, and the future of creative work. Authors, already at the forefront of legal battles against AI companies for copyright infringement over training data, find themselves doubly victimized – first by the unauthorized use of their work, and now by AI-powered schemes attempting to exploit their professional aspirations. The Writers Guild of America (WGA) strike in 2023, which centered on resistance to AI, underscored the widespread concern within creative industries. As the lines between human and machine-generated content blur, the need for robust ethical frameworks and regulatory oversight becomes increasingly urgent to prevent the "death of truth" and safeguard the integrity of human endeavor.
