On April 12, 2025, I found myself engulfed in the endless scroll of Reddit, a ritual familiar enough these days to have earned its own name: doomscrolling. Each post felt like another lackluster fragment of a fractured reality, where political ragebait, recycled cat videos, and morally dubious questions such as 'Am I the asshole for divorcing my husband after he killed our two children while drunk and high?' dominated the landscape. To round out the garish mix, there were the tired wojak memes that seemed to mock rather than entertain.

Suddenly, amidst this digital chaos, a post stood out. Its title, 'Anyone else feel like the internet is just broken now?' resonated with an unsettling authenticity. With nearly 6,000 upvotes and hundreds of comments, it captured a palpable sense of despair that felt all too familiar: 'Everything online feels either like an ad, a hustle, or someone desperately trying to go viral. Nothing feels real anymore.'

The sentiment struck a deep chord. I could relate; yet as I delved deeper into the comments, a nagging feeling began to creep in. The post's wording and rhythm seemed almost too perfect, meticulously crafted for maximum relatability. Curiosity got the better of me, and I clicked the poster's username. What I found shocked me: a faceless profile stuffed with karma-farming content, from viral pet clips to recycled feel-good stories to an unending stream of reposted memes. The account churned out multiple threads a day yet never engaged in a single discussion: classic bot behavior.

As I returned to the original post, I noticed a final sentence that caught my eye: 'This was written in 1928. It's incredible how it predicted the moment we're living in today and where we're heading.' Underlined and highlighted in blue, it was a hyperlink, and it led to a suspicious-looking domain: 'rddit.org'. My instincts kicked in. Legitimate platforms like Twitter and Facebook run their own shortened URLs for tracking, and Reddit's is redd.it; I had never once seen it use 'rddit.org'.

Fueled by equal parts curiosity and paranoia, I ran a WHOIS lookup. To my dismay, 'rddit.org' was not owned by Reddit at all; it had been registered anonymously and pointed at a cheap link-shortening service, so there was no telling where it actually led. I knew the risks of clicking dubious links on Reddit, and I clicked it anyway.
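
For anyone who wants to repeat the check, it takes about thirty seconds. Here is a minimal sketch in Python, assuming the standard command-line whois client is installed; the keywords it filters on are just the fields I happened to care about, not an exhaustive list.

```python
# Minimal WHOIS check, assuming the system `whois` client is installed.
import subprocess

def whois_summary(domain: str) -> None:
    # Run the system WHOIS client and capture its text output.
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    # Print only the lines that say who registered the domain and when.
    for line in result.stdout.splitlines():
        if any(key in line.lower() for key in ("registrar:", "registrant", "creation date")):
            print(line.strip())

whois_summary("rddit.org")
```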

Just like that, I was redirected to Amazon, to a listing for a modern illustrated edition of Edward L. Bernays' classic work, 'Propaganda.' The URL bar revealed something unsettling: an affiliate tag, 'manwithhairwe-20.' This was no innocent link; it was a calculated marketing maneuver. The book had thirteen vaguely generic reviews, which only deepened the unease. Digging further, I found that the seller specialized in classic literature freshly 'enhanced' with AI-generated art.
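
The tag is easy to spot once you know where to look: Amazon Associates links carry it as a tag= query parameter. A quick sketch, with a made-up product URL standing in for the real listing:

```python
# Pull the affiliate tag out of an Amazon link (the product ID here is a stand-in).
from urllib.parse import parse_qs, urlparse

url = "https://www.amazon.com/dp/B000000000?tag=manwithhairwe-20"
params = parse_qs(urlparse(url).query)
print(params.get("tag", ["<no affiliate tag>"])[0])  # -> manwithhairwe-20
```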

I sat back in disbelief: an AI-powered bot masquerading as a human, lamenting the rise of AI-powered bots that mimic human emotion for the sole purpose of gaining trust and covertly marketing AI-illustrated books. The implication was staggering: a Trojan horse nestled within the realm of late-stage capitalism, hinting at a cyberpunk dystopia where artificial intelligence weaves a complicated web of deception.

Scrolling back through the comments, I noted that hundreds of users were engaging, many seemingly oblivious to the ruse. One comment stood out, invoking the so-called 'Dead Internet Theory,' which posits that most online interaction is just automated loops of bots talking to one another. The irony was overwhelming. Did anyone realize they were engaging with a bot designed to monetize their empathy? Perhaps, in an even more bizarre twist, many of them were bots too, spiraling endlessly into their algorithmically optimized oblivion.

Sorting the comments by 'controversial,' I stumbled upon a lone, tiny comment buried amid the discussion. Was this commenter the only other surviving human? Or just another bot, designed to provoke more replies, more engagement? My head spun with questions. My browser tabs multiplied as the search for answers spiraled out of control: I dove deeper into conspiracy forums, more WHOIS lookups, and archived Amazon seller profiles. What I was witnessing painted a grim picture.

A bot selling fake empathy to promote fake products through a fabricated sense of community. Was this spiraling paranoia precisely what they wanted? To drive me to the brink of insanity, generating more frantic clicks that would boost engagement metrics? Were they monitoring my every reaction, calibrating tomorrow's psychological operation based on my responses?

In the end, quoting Orwell felt almost cliché: 'If you want a picture of the future, imagine a boot stamping on a human face.' But here I was, living in a world where bots made us sad and angry, all to farm engagement and sell us AI-generated slop, leaving us forever caught in an endless loop of digital despair.