By: Thomas Stahura
LinkedIn banned me. I was running a scraper to enrich a dataset for Ascend and triggered its aggressive bot detection. Frustrating, but a rite of passage for any automation enthusiast. (I was back after 24 hours in the digital penalty box.)
My personal digital hiccup aside, a far more significant disruption is unfolding online, one that sent me down the rabbit hole of the internet's growing bot problem and the serious questions it raises about the future of online interaction.
In recent news, researchers at the University of Zurich secretly deployed AI bots on Reddit over a four-month period to test whether artificial intelligence could sway public opinion on polarizing topics.
The study drew heavy criticism after it came out that the researchers had their AI bots pose as rape victims, Black men opposed to BLM, and workers at a domestic violence shelter. The bots targeted the subreddit r/changemyview and wrote more than 1,700 personalized comments designed to be as persuasive as possible.
The results showed that AI-generated comments were three to six times more effective at changing users' opinions than human-generated ones. And not a single user detected the presence of AI bots in their subreddit.
Reddit’s Chief Legal Officer condemned the research as “deeply wrong on both a moral and legal level,” and the company banned all accounts associated with the University of Zurich. Despite the condemnation, Reddit's data deal with OpenAI suggests the platform is providing the foundation for even more persuasive digital manipulators. And OpenAI itself is considering launching its own social network to feed its data-hungry models.
The dead internet theory is an online conspiracy theory that has been around for years but hit the collective consciousness in the wake of ChatGPT’s launch in late 2022. The internet became "dead," the theory goes, as authentic human engagement was largely replaced by automated, algorithm-driven content and interactions.
After all, Google is built off the backs of thousands of crawlers storing every known site, and other bots have crawled the internet since its birth. Imperva, which only started tracking bots in 2013, clocked them at 38.5% of all internet traffic that year. Bots surged to 59% the following year, then slowly dropped back down to 37.2% in 2019 (the same year human traffic peaked at 62.8%). Since then, bot traffic has been crawling back up, and in 2024 it surpassed human traffic for the first time in a decade. Today, it’s reasonable to assume bots are responsible for more than half of global internet traffic.
But again, this is nothing new. It happened in 2014, and all the largest websites have since built serious defenses around their valuable data. How many captchas have you had to solve? I’ve personally done too many to count, and I still managed to get my LinkedIn suspended for “the use of software that automates activity.”
The central question of the “dead internet” and the AI revolution as a whole is: “Is this time different?”
Yes, in the sense that humanity will remain below 50% of internet traffic for the foreseeable future. But also no, in the sense that human-generated data is, and will always be, the most valuable commodity online. So there exist incentives to protect and foster it, even as the influx of bots is already upon us. LLM-powered agents are actively exploring the web in exponentially growing numbers. Deep research agents visit hundreds of websites with a single query. IDE agents like Cursor and Cline now search the web for documentation. And agents are already booking Airbnbs, hailing Ubers, and ordering pizzas.
These agents can buy things but aren't influenced by ads. They masquerade as real humans but don’t generate authentic human activity. This is a whole new paradigm that websites will have to adapt to or risk losing business to sites that do. Allow the good bots, block the bad ones. Sounds easy enough, but how can you tell? The solution isn’t entirely clear yet, which is how Swiss grad students were able to gaslight thousands of people for science.
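The good bots, at least, can be verified when they identify themselves. Google, for instance, documents a reverse-then-forward DNS check for confirming that a visitor claiming to be Googlebot actually is one. Here's a minimal Python sketch of that idea; the helper name and sample IP are my own illustrations, not anyone's production code:

```python
import socket

def is_verified_googlebot(ip: str) -> bool:
    """Check whether an IP really belongs to Google's crawler.

    A spoofed User-Agent header is trivial to fake; the DNS records are not.
    """
    try:
        # Step 1: reverse DNS. The IP should resolve to a Google crawler hostname.
        hostname, _, _ = socket.gethostbyaddr(ip)
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # Step 2: forward DNS. The hostname should resolve back to the same IP,
        # proving the reverse record wasn't planted by an impostor.
        return socket.gethostbyname(hostname) == ip
    except (socket.herror, socket.gaierror):
        # Either lookup failed, so the visitor can't be verified.
        return False

# A genuine Googlebot IP passes; a scraper spoofing the Googlebot
# User-Agent from a random VPS does not.
print(is_verified_googlebot("66.249.66.1"))
```

The bad bots, of course, don't announce themselves, so a check like this is only half the battle.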
The challenge for startups lies in balancing automation with authenticity. While AI can and should handle repetitive tasks and scale development, startups thrive on genuine connection with their early adopters and customers. Blindly automating every interaction could alienate the very people they need to build a real following.
There are tens of thousands of automated Facebook attention-farm accounts. But I doubt images of shrimp Jesus are influencing people. The real fear is rampant disinformation and targeted persuasion, and it's warranted. I spot fake-seeming YouTube comments all the time, and I'm certain DeepSeek-powered disinformation is rampant on Weibo.
Chris Anderson, the head of TED, put it best during his talk with Sam Altman: “It struck me as ironic that a safety agency might be what we want, yet agency is the very thing that is unsafe.”
I believe there is a way to authenticate agents and build a web that works for both bots and humans alike. I’ll talk more about what that looks like in the next edition.
But if it wasn't clear already, don’t automatically trust everything you see online. The next time LinkedIn sends you a push notification saying “so and so” viewed your profile — they may be a bot in disguise.