The Ghibli Effect Is Just the Beginning…

Reflections prompted by Luiza Jarovsky’s recent commentary on AI’s legal and ethical challenges, “The Ghibli Effect and AI Copyright.”

What do a whimsical animation filter and global data privacy have in common?
More than you might think.

In recent weeks, social media has been overtaken by images of people rendered in the soft, nostalgic style of Studio Ghibli or Sesame Street. At first glance, it's just another charming AI trend. Playful, harmless, viral. But as AI ethicist Luiza Jarovsky argues in her recent analysis, this “Ghibli Effect” may mark a subtle but significant turning point in the evolution of generative AI: not just in what it creates, but in how it collects (and legally legitimises) the data it learns from.

Jarovsky highlights two key concerns: first, the privacy implications of users voluntarily uploading facial data, and second, the copyright implications of an AI system that can so faithfully reproduce artistic styles.

From a privacy law perspective, when companies scrape personal images from public websites, they typically invoke “legitimate interest” as a lawful basis under regulations such as the EU’s GDPR (Article 6(1)(f)). This basis requires balancing individual rights and implementing safeguards. But when users willingly upload personal images, they effectively provide explicit consent (Article 6(1)(a)), removing the need for such a balancing act. As Jarovsky notes, this “bypass” streamlines the path for OpenAI and similar companies to obtain vast, high-quality training data without the legal friction associated with scraping.

From a copyright standpoint, the situation is just as complex. Generating work “in the style” of a specific artist does not, in most jurisdictions, constitute direct infringement. Style, unlike specific content, is generally not protected. However, when a model can replicate that style so precisely as to be indistinguishable from the original, particularly over a sequence of consistent images, it raises a substantive question: how was that capability acquired? Likely through training on the same works it now mimics. This invites further legal scrutiny, especially in jurisdictions grappling with whether “fair use” applies to training data.

Yet the implications extend far beyond legal doctrine.

This moment represents an evolution in how AI systems are socially trained, not just through data but through culture. Users are not passive subjects scraped from the web. They are willing participants, prompted by design, social proof, and platform virality to contribute their own data. Their expectations shift in the process: play becomes participation, and contribution becomes consent.

From a Leximancer standpoint, this trend raises further questions about authorship, context, and ownership. Tools like ours are designed to augment researcher insight, not overwrite it. But what happens when the broader ecosystem begins to normalise full substitution of human creativity and labour?

What happens when users no longer notice what they’re giving up?

The Ghibli Effect is a case study in what Jarovsky has aptly described as “privacy theatre.” Though the performance appears harmless, it sets the stage for broader, more consequential transformations. Namely:

  • A move toward data voluntarism, where users become primary data sources without fully informed consent

  • A new economic asymmetry, where AI companies obtain rare, high-value training data for free, while creators and users receive no compensation or rights over the output

  • The erosion of authorial integrity, where style can be generated without attribution, and creative labour becomes disassociated from its human origin

This moment is an early indicator of a broader paradigm shift.

In future iterations, we may see users invited to upload their writing style, their voice, or even their cognitive patterns (via writing prompts or video generation) in exchange for a novelty output. AI companies will collect this data not through scraping, but through engagement. Every click, every share, every upload will be another line in a dataset.

What becomes of originality when every style is reproducible?

What becomes of memory and ownership when every idea is remixable?

And perhaps most urgently, what does it mean to contribute to the development of AI systems when we don’t realise we’re participating?

Reflections like Jarovsky’s help us see these developments as part of a broader strategy, one where data acquisition, copyright navigation, and user psychology intersect in ways that reshape the digital commons.

As researchers, technologists, and educators, we must begin articulating what AI should and should not be allowed to do, and on whose terms.

The Ghibli Effect is, in many ways, just the beginning.
