Is Slopaganda the Latest Weapon of War?

In conversation with LOVE, Al Hassan Elwan and Ruba Al-Sweel, the co-creators of POSTPOSTPOST, discuss the rise of AI-generated slop as a tool of wartime propaganda, the Instagram account @thewartoon, and the broader power of internet culture and memes.

WORDS BY SELMA NOURI, IN CONVERSATION WITH AL HASSAN ELWAN AND RUBA AL-SWEEL

In the endless scroll of the feed, it's hard to know what you’ll encounter next. Between photos of people you met once at a bar and Coachella recaps, political messaging and informational clips about weapons slip in almost unnoticed. But they’re not coming from the BBC or CNN. They’re spawning by the thousands, disguised as friendly cartoons. An American Tomahawk looks up at you from the screen with the wide-eyed innocence of a Dory-like character, while an Iranian Sejjil missile smiles and dances as it narrates its role in war. Watching them, you almost forget these aren’t just animated characters but real weapons that have killed millions and continue to inflict lasting harm. But maybe that’s exactly the point?

War hasn’t just been normalised; it's been cartoonised. When instruments of destruction are rendered cute, they become easier to accept or, perhaps, even easier to consume. These videos aren’t confined to obscure corners of the internet or anonymous Reddit users; they are being produced and amplified by governments themselves. Governments have become the new shitposters.

In conversation with LOVE, Al Hassan Elwan and Ruba Al-Sweel, the co-creators of POSTPOSTPOST, a provocative internet publication at the frayed edge of contemporary culture, and self-described “avant-gardeners,” unpack what all of this means. They offer their perspective on the rise of AI-generated slop as a tool of wartime propaganda, the Instagram account @thewartoon, and the broader power of internet culture and memes. As they remind us, “for propaganda to be considered effective, it cannot rely on circulation or ragebait alone. It needs to move the persuasion needle.” So, when does political propaganda actually work? Who really sparked this war of “slop?” And what happens when governments begin shitposting?

What are the origins of Wartoons and other forms of “slopaganda”? What exactly are they?

Wartoons (@thewartoon) is a rapidly growing Instagram and TikTok account that produces educational, AI-generated cartoons about the US and Israel’s war on Iran. Its creator remains anonymous, though they once DM’d me something along the lines of, “Are you excited for episode 2?” after I shared one of their videos a couple of weeks ago with the caption, “This is my Fruit Love Island.” The name, however, feels broad enough that it could refer to the wider ecosystem of AI-generated videos that render the war and current geopolitical events in caricature form. These viral videos have become historical objects, inseparable from war commentary and discourse. The widely shared Lego-themed videos, for instance, recently featured by the BBC and The New Yorker, are produced by Explosive Media, another account that utilises the same medium. A representative from Explosive Media had previously described the operation as an independent media outlet, before publicly acknowledging in a recent BBC interview that the Iranian regime is a “customer.”

Consensus about what “slopaganda” means began forming after its first Urban Dictionary entry in February 2024 and was later expanded on by an academic paper published in March 2025. To me, it is an attempt not just to describe but to explore the nascent relationship between political propaganda and generative AI content.

Are AI slop/memes now an effective form of propaganda?

I hate to risk sounding like a Redditor here, but AI slop and memes are too different, at least to me, to be lumped into one category. Memes, in the broadest sense, have proven to be superbly effective political tools over the past decade. Slop, in the sense of low-effort creative production (hotel paintings, soap operas, gentrified apartment complexes, elevator music), has always been with us. It’s AI slop that is relatively new.

There are also various interpretations of “effectiveness.” If wide circulation is the goal, then AI slop can be very effective, especially with incentivised algorithms. But for propaganda to be considered effective, it cannot rely on circulation or ragebait alone. It needs to move the persuasion needle. This is why I believe “slopaganda” is inherently ineffective, and why I don’t consider Wartoons or the Lego videos slopaganda, or even slop.

Recently, YouTube and Instagram, platforms with a long history of incentivising AI slop, took down Explosive Media’s accounts, a move that is unprecedented for AI accounts with comparable levels of visibility. It underscores a broader systemic conundrum in which unregulated, profit-driven social platforms reward certain behaviours until they are forced to punish speech that begins to threaten them.

In your observation, how has the use of “slop” or low-quality/AI-generated media evolved since the US/Israeli attacks on Iran?

The most crucial development is the public reception of pro-Iran AI-generated content. AI content, which has rightfully become synonymous with “slop,” hardly ever receives such positive feedback or enthusiasm. A common comment on pro-Iran videos is, “There is AI slop, then there is AI CINEMA.” Such praise used to be reserved for very rare AI cat or raccoon videos; now it’s becoming a much more frequent sentiment. Even Fruit Love Island enjoyers still felt somewhat ashamed to admit it, but very few hesitated to unironically share the “One Vengeance for All” pro-Iran AI video. This can partly be attributed to the rapid advancement of the generative tools themselves, which are increasingly capable of producing more captivating and compelling visuals. We’ve certainly come a long way since the infamous Will Smith eating spaghetti video (which was only three years ago).

There is, however, another reason that I believe better explains the success of the pro-Iran AI videos I’ve seen: they articulate and affirm a popular, almost universally held “truth.” Joseph Goebbels, a notorious pioneer of selective truth, argued that for a lie to be effective, it had to be sprinkled with some truth. This requires craft, which is the antithesis of slop. Here’s what Explosive Media (the Lego video creators) said when asked about their process: “Every scene, every frame, every hidden detail, and every idea in our work feels like our own children.” They quoted the Persian proverb سخنی که از دل برآید، بر دل نشیند, which roughly translates to “A word that rises from the heart sits upon the heart.” They said the team hopes that their videos will offer viewers “a glimpse into a different kind of spirit—something more poetic, more human, maybe a bit more gentle.” This hardly suggests “low-effort.” If anything, it points to sincere belief, which stands in stark contrast to the transactional, monetisation-first cynicism typical of slop production pipelines. “Working full time, we can produce a two-minute video in about 24 hours,” their representative said. It’s difficult to view this type of work as slop because no one takes slop this seriously.

This seriousness drives commitment to craftsmanship, so whether the propaganda is true or false, it certainly isn’t sloppy.

What role has the Trump administration played in popularising or amplifying this form of propaganda?

I think it’s important to define slopaganda as weak or ineffective propaganda, rather than slap the label on any type of AI-generated propaganda. Propaganda is very objective-oriented. Its results, whether propagation or influence, are measurable. It can have KPIs. It’s a lot closer to design than art. This is why the Trump administration is the true slopaganda pioneer.

After a year in office, Trump’s net approval had sunk to -19%, and had dropped 42 points among Gen Z specifically, the very audience his slopaganda is largely aimed at. That points to a lack of both effort and commitment to the craft of persuasion itself. Overt agenda-pushing, repeated tone-deaf flops, failed attempts at relatability, and a disregard for pop-cultural nuance all point to poor manipulation skills. While it may have disillusioned a large portion of the public, it has also produced a hardened bubble of self-serving cultists. This is where slopaganda can become truly dangerous.

Adam Morton distinguishes three levels of social organisation that can help us understand the structures of propaganda. At the lowest level are the foot soldiers—those who do things. For them, propaganda provides a reason for doing, especially when the doing involves violence. Think Nazi brownshirts or January 6ers. The next tier up are the bureaucrats—those who authorise things. They normalise and shift Overton windows, but they don’t necessarily need to be persuaded themselves. Big tech algorithm engineers fall into this category. At the top, we have the orchestrators. These are the propagandists who formulate the rhetoric and drive its spread. They only care about the form, the tools, the craft, never the message itself. It doesn't matter if they believe in it themselves. Think Steve Bannon, Tucker Carlson or Vladislav Surkov.

Slopaganda unlocks massive potential for the lowest tier because generative AI’s defining power is its near-infinite scalability. Slop has always been about the endless churn that exploits the lowest common denominator, which is the best strategy for recruiting foot soldiers. Slopaganda floods feeds, reaffirming what millions already believe until a fringe forms and begins to believe they are the “chosen” ones. So while the MAGA base shrinks, more of it becomes prone to doing. This is why the Trump administration’s slopaganda leaned into white nationalist dog whistles and racist imagery. They would rather recruit than convince. The goal is to give purpose to a few rather than convert the many. Slopaganda may be a weaker form of propaganda, but that does not make it any less dangerous.

Another slopaganda case worth noting is the Dilley Meme Team, who describe themselves as “Trump’s Online War Machine.” They could be seen as the Trump administration’s equivalent of Explosive Media, but the comparison also sheds light on how craft and commitment are crucial for effective propaganda. Sure, their bad taste and sixth-grade sense of humour may have repelled many, but their main failure is that they gave the game away. Their founder, Brenden Dilley, has publicly said, "it doesn't have to be true; it just has to go viral," and "I don't give a fuck about being factual"—statements that only a novice, unskilled manipulator would say. Only Trump himself was able to run with the “raw, unfiltered” act because he knew its limits. Even Goebbels, contrary to popular belief, never broke character, never admitted to lying, and remained committed to the bit till his last breath.

Courtesy of Truth Social

What do you think the use of slop or memes as a form of propaganda signals for the future of political communication and public trust? What does it suggest about society more broadly?

AI use for propaganda will certainly intensify, but slop or slopaganda will likely remain ineffective as a tool of persuasion. Effective propaganda requires craft, which, as I've said, is the antithesis of slop. This reinforces my hunch that no matter how entertaining slop may be, it is inherently, as a mode of production, incapable of being transformative or truly memorable. This is why I am unable to categorise some of the pro-Iran videos as slop. It is impossible to view a Wartoon video as an inferior ‘creative’ production compared to a betting-market commercial or a Tubi original. Humans have always produced slop, long before AI. At the end of the day, I’m admittedly a slop enjoyer, but I unfortunately still care about drawing distinctions.

Where do you draw the line between slop, conspiracy theories, and more intellectual memes? In the wake of the genocide in Gaza and the release of the Epstein files, many memes have emerged questioning what was once dismissed as conspiracy. How has internet culture contributed to exposing truths, and how can we preserve that potential without falling into the traps of propaganda?

I often find myself opposed to the framing of propaganda as a dirty word. Propaganda is a medium that is neutral at its core. It’s a language, and it requires fluency. Falsehood and truth are just two colours in its palette. Ancient Greek sophistry got a bad rep for the same reasons, but that just proves we’ve been mad about the wrong thing for thousands of years. Rhetoric is a skill that is unfortunately not obliged to care about truth.

The internet has definitely started a new era of rhetorical weaponisation. Luckily, it still produces a healthy amount of hive-mind scepticism that allows us to expose truths and dismantle decades-old false narratives. The role of memes and online culture in this kind of dismantling has been widely written about, but the key takeaway is that they are just another form of propaganda. Ultimately, the problem with slopaganda is that it is bad propaganda, not evil propaganda. Bad propaganda can still lead to pretty evil consequences. Yet good (as in well-made, not benevolent) propaganda is often the best antidote. Before you disagree, I recommend reading Roland Barthes’ Mythologies (1957) and Keller Easterling’s Medium Design (2018).

Can you describe the work of POSTPOSTPOST? What interests you most about AI slop and memes?

POSTPOSTPOST is a brand that produces publications, films, and provocations at the frayed edge of contemporary culture while simultaneously building an art movement. The Instagram account started as a rant-infested finsta. But it surprisingly amassed an engaged niche following as a post-anonymous, faux-oracle that memes and posts contemporary cultural commentary alongside avant-garde theory—so I had to remove the selfies I had on there.

I’ve written a lot about memes and slop, but I have never thought about what draws me to them. I guess it could be an irrational or immature impulse to make sense of the world I find myself in, with these online artefacts acting as signals that I feel compelled to follow. I have been brainwashed to obsess over the “avant-garde,” so my only way out is often to make fun of it. I always remind my followers that I am not a serious thinker.

What is your goal through POSTPOSTPOST? What impact do you hope to have through your publication/account?

I really don’t want to have any impact at all—at least until I get a foreign passport.

I always try to be a liability for my followers. I’m also never sure if I am doing a bit or if the bit is doing me, which is a nice spot to be in. I am, however, working very seriously, alongside my co-editor Ruba Al-Sweel, on a second volume of the POSTPOSTPOST publication. It will hit the shelves this summer, and it might not be real.