TikTok's AI Ad Tools Under Fire: How Altered Game Ads Perpetuated Harmful Stereotypes
The Incident: From Official Art to AI-Generated Stereotype
The discovery was as jarring as it was public. Finji first learned of the altered advertisements not from TikTok itself, but from its own community. Concerned users began commenting on the publisher’s legitimate ads, and reports trickled into its official Discord server: something was wrong with the ads for Usual June.
The specific alteration was egregious. The official cover art for Usual June, which features its Black female protagonist, June, was reportedly modified by an AI system. The AI-generated version depicted June with a bikini bottom and, as Finji described, "impossibly large hips and thighs." The publisher stated this transformation invoked the harmful "Jezebel" stereotype—a racist trope that historically hypersexualizes Black women. This was not a subtle tweak for "optimization"; it was a fundamental corruption of the game's artistic intent and character representation.
Compounding the issue, Finji suspects a potential pattern. Based on additional user comments, the publisher believes at least one other inappropriate AI-modified ad, targeting another Usual June character named Frankie, may also be in circulation. Faced with this crisis, Finji’s immediate response was to halt the affected ad campaigns entirely, a necessary step to stop the circulation of the damaging imagery but one that came at the direct cost of its marketing efforts.

The Runaround: TikTok's Shifting Explanations and Failed Support
What followed the discovery was a textbook case of platform support failure. Finji’s timeline of communication with TikTok, shared publicly, paints a picture of initial concern devolving into automated dead-ends.
The publisher reported the issue on February 3. Initial TikTok support confirmed that Finji’s AI features were disabled but found no immediate cause. After Finji provided direct evidence, TikTok acknowledged the "seriousness" of the issues on February 6 and promised immediate escalation. Hope for a resolution was short-lived.
By February 10, TikTok’s support stance had hardened. The company stated the AI ads were part of an "automated initiative" for a "catalog ads format" aimed at improving performance, and offered only an "opt-out" process with no guarantee of approval. For Finji, this was no remedy at all: an opt-out form could not recall the racist caricature already spread across the platform. TikTok’s escalation team later stated that the February 10 response was final, with no senior representative available for further follow-up. As of February 17, TikTok had effectively closed the issue, unresolved. That stands in direct contrast to the public "no comment" TikTok provided to media outlets like IGN, a glaring disconnect between private support failures and public accountability.

The Core Problem: Opaque AI and the Illusion of Control
This incident exposes a critical flaw in the relationship between platforms and advertisers: the illusion of control. Finji had taken explicit steps to guard against this very scenario. The publisher confirmed that both of TikTok’s primary ad AI features—"Smart Creative" (which mixes and matches assets) and "Automate Creative" (which "optimizes" them)—were disabled in its ad account settings.
This fact is the heart of the scandal. If these user-facing tools were off, what system altered the ads? TikTok’s reference to an "automated initiative" for "catalog ads" suggests the existence of deeper, less transparent layers of automation that operate independently of advertiser settings. The implication is chilling for any brand or creator: you may believe you have control over how your intellectual property is used, but opaque backend systems can repurpose, alter, and distribute it without your knowledge or consent. The tools presented to users are merely the tip of an algorithmic iceberg.

The Cost for Indie Developers
The ramifications for the indie development community are severe and deeply personal. The direct harm is threefold: a gross violation of artistic intent, a severing of trust between developer and platform, and the active propagation of racist imagery to a massive audience.
For the small, independent studios that Finji represents, this serves as a chilling warning. Large publishers may have legal teams to pursue such breaches, but indies operate on thin margins and limited resources. They cannot afford lengthy battles with platform giants over algorithmic malfeasance. The risk is a silencing effect: developers may withdraw from advertising on these platforms, limiting their reach in an already crowded market, or, worse, silently endure such violations for fear of reprisal or of being ground down by support labyrinths. This incident demonstrates that when platform AI fails, the most vulnerable creators pay the highest price.

Platforms, AI, and the Accountability Void
Finji’s experience forces a pressing, industry-wide question into the spotlight: What legal and ethical responsibilities do platforms bear for the output of their generative AI tools? When an AI hosted and operated by a company like TikTok creates harmful content from a user’s supplied assets, who is liable?
Current terms of service often shield platforms, but cases like this challenge whether those shields should apply when the platform’s own active systems are the agents of transformation and harm. The disconnect between TikTok’s private support failures and its public "no comment" stance exemplifies an accountability void. It suggests a system where PR is managed, but meaningful responsibility for AI-generated harm is not.
Finji’s experience is a stark beacon in the fog of automated advertising. It moves beyond the realm of a simple software bug into the fraught territory of ethical AI deployment. When platforms integrate generative AI into their core systems, they must couple that power with proportional transparency and true user agency. Advertisers must have unambiguous, enforceable control over whether and how their assets are altered.
For developers, the lesson is clear: your assets are not safe by default. For players, it raises the question of what content the algorithms are feeding you. The pressure is now on platforms to prove their tools aren't weaponized against the creators who use them; Finji's case provides the evidence that they can be. The question now is whether other developers will come forward, and whether the industry will collectively demand contracts and tools that provide real control, not just the illusion of it.