
After Slop: Taste and Craft in the Age of AI
February 11, 2026
Why authorship, discernment and human judgment are becoming cultural infrastructure
Karyna Tatevosyan - Senior Strategist
Slop Recap: How Did We Get Here?
In 2025, slop was named Word of the Year by Merriam-Webster, a response to the surge of low-effort, algorithmically generated content spreading across digital and visual culture. The dictionary pointed to a familiar mix: uncanny AI videos, off-kilter ad imagery, junk AI-written books, propaganda, and fake news wrapped in a sheen of realism.
But slop did not emerge overnight. It was the result of three converging shifts.
1. Production Became Frictionless
Generative tools removed many of the material limits that once shaped visual output. The barriers of time, cost, and skill, the very constraints that created distinctiveness, began to dissolve. Making became easier than deciding.
2. Volume Overtook Intention
When output is rewarded by platforms and pipelines, scale starts to stand in for meaning. Content multiplies, but perspective thins. The system optimises for more, not for better.
3. Authorship Became Blurred
As tools automate aesthetics, language, and visual structure, the trace of the maker fades. Work still circulates, but it feels unheld, as if it could have been made by anyone, or no one.
Historically, slop referred to something made quickly and cheaply to meet a basic need. It did the job, but nothing more. It was not meant to be remembered. That definition now describes a growing share of visual and media output. What surfaced last year was not simply a wave of new technology. It was a creative condition: production without authorship, output without consequence.
When everything is made using the same shortcuts, visual culture stops moving forward. It loops.
The Context Shift for Brands and Their Audiences
The key shift is not technological. It is perceptual. We are moving from a media environment where what we see is assumed to be real, to one where scepticism is the default starting point.
“We’re going to move from assuming what we see is real by default, to starting with scepticism”
Adam Mosseri, Head of Instagram
Trust in images, video, and even written language has softened. Audiences no longer encounter visual content with passive belief, but with quiet doubt. This changes the conditions on both sides of communication.
For audiences, the cognitive load of interpretation has increased. People are constantly, if subtly, assessing whether what they are seeing is constructed, synthetic, exaggerated, or manipulated. The act of viewing now includes a layer of verification.
For brands, this means visual output carries a different weight. It is no longer just about impact or reach, but about whether work feels intentional, coherent, and authored.
In this environment, authorship matters more, not less. Carelessness is easy to recognise. It shows up as:
Flattened visual aesthetics
Generic composition and styling
Language that feels assembled rather than written
Messages that could belong to anyone
You do not need to identify the tool behind a piece of work to sense that no one was meaningfully behind it.
The issue is not that AI was used. The issue is when intention is missing. When visual systems begin to look interchangeable, distinctiveness erodes. And when distinctiveness erodes, trust follows. What is at stake is not simply originality, but credibility. The sense that an image, a film, or a message was made on purpose, by someone making decisions, rather than generated as output.
A Visible Break in Trust
Late in 2025, McDonald’s Netherlands released a fully AI-generated Christmas film. Viewers described it as uncanny and emotionally hollow. The ad was pulled within days.
The backlash was not about the use of AI itself. It was about the absence of visible authorship. The work felt technically complete but humanly unheld.
Moments like this signal a broader shift. Audiences are not rejecting synthetic tools. They are reacting to work that feels like output rather than intention.

This is the emerging divide: not AI versus human, but discernment versus automation.
Taste as Filter
If automation increases what can be produced, taste determines what should be.
Taste is not preference or style. In this environment, it functions as a filtering system. A way of reducing possibility into coherence.
When visual output becomes abundant, discernment becomes scarce. The advantage shifts from production to selection.
Taste operates as:
A consistent point of view across imagery
A logic for what belongs and what does not
A mechanism for refusing the obvious
Automation generates options. Taste eliminates them.
This is what makes taste difficult to automate. It introduces judgment into systems designed for flow, and coherence into environments driven by volume.
Some of the most resilient visual players operate this way. A24 is often cited not because it avoids technology, but because its identity is governed by selection. The name itself signals a specific tone, authorship model, and creative risk profile.
In a landscape of visual abundance, recognisable taste becomes a shortcut to trust.
Craft as Commitment
If taste determines what enters a visual world, craft determines how it stays there.
Traditionally, craft emerged from constraint. Time, skill, material resistance, and the possibility of error shaped the outcome. These limits slowed production and, in doing so, embedded care into the work. Effort was visible. Decisions left traces.
AI removes many of those constraints. Images can be produced instantly. Variations are infinite. Friction disappears by default.
Which is why craft now functions less as a technique and more as a stance.
To apply the logic of craft in an AI-enabled environment is to reintroduce forms of commitment where none are required. It means:
Spending time in refinement rather than stopping at first output
Making visible choices rather than accepting generic resolution
Treating generation as a starting point, not a finish line
Allowing human judgment to interrupt speed
Craft slows the process down, even when the tools are fast. It shifts the question from “Can this be made?” to “Is this worth making, and in this way?” That shift is what separates visual work that holds attention from work that dissolves into the stream.
Craft also makes the maker legible.
In an environment where tools are invisible and outputs can look similar, the trace of decision-making becomes a differentiator. Work feels held. It feels shaped rather than surfaced. That sensation, subtle but perceptible, is closely tied to trust.
This is why craft is not nostalgia for pre-digital making. It is a method for maintaining meaning under conditions of acceleration.
When taste and craft operate together, they create structure around AI rather than allowing AI to define the structure. The technology becomes a component within a human system, not the system itself.
And that is what keeps visual worlds from flattening into sameness.
A Working Framework for AI in Visual Culture and Brand
As AI becomes embedded in visual production, the question is no longer whether it will be used, but under what conditions it produces meaningful work rather than interchangeable output. Across creative and brand environments, a more durable set of principles is emerging.
1. AI Supports Ideas. It Does Not Originate Them.
Generative tools are powerful at expansion, variation, and simulation. They are weak at intent. Work that resonates typically begins with a clear authored direction. A point of view, a narrative logic, or a visual thesis exists before generation starts. AI enters once the idea has shape, not as a substitute for having one. When generation precedes direction, output tends to feel generic because it is shaped by probability, not perspective.
2. AI Must Operate Inside a Defined World
Strong visual systems behave like worlds, not collections of assets. They are governed by internal logic. This includes visual language, casting logic, narrative tone, symbolic codes, and spatial and stylistic boundaries. AI output that ignores these structures introduces drift. The work may be technically impressive, but it feels disconnected. When AI operates inside an established world, it extends that world rather than diluting it. Consistency becomes a sign of authorship.
3. Use AI Where Imagination Is Explicit
The most fragile territory for AI is hyper-real simulation. When synthetic imagery closely mimics documentary or lived reality, audiences can feel misled, even if no deception was intended. Trust erodes when realism is ambiguous. AI is more culturally effective where its constructed nature is clear. This includes abstraction, transformation, speculative or impossible environments, and symbolic or narrative world-building. In these spaces, AI amplifies imagination rather than competing with reality. The work feels expressive rather than deceptive.
4. Keep the Human Signal Legible
In a high-volume visual environment, audiences look for cues of intention, whether consciously or not. Every piece of work should be able to answer: Who made this? Why does it exist? What is it saying, specifically? If those signals are not perceptible, the work risks becoming background noise. It may be seen, but it is not registered. Legibility of intention is increasingly tied to credibility.
The Larger Shift
The defining creative skill of the next decade may not be production. It may be judgment. When images can be generated endlessly, the primary act of authorship becomes selection, shaping, refusal, and refinement. The value shifts from making more to deciding better. This is where discernment outweighs automation. AI will continue to expand what is possible. But the work that endures will be the work where human intention remains visible at the centre, guiding what is made, what is shown, and what is left out.
Because the question audiences are increasingly asking is not, “How was this made?” It is, “Was anyone really there?”