<https://www.techdirt.com/2025/08/04/substacks-algorithm-accidentally-reveals-what-we-already-knew-its-the-nazi-bar-now/>
"Back in April 2023, when Substack CEO Chris Best refused to answer basic
questions about whether his platform would allow racist content, I noted that
his evasiveness was essentially hanging out a “Nazis Welcome” sign. By
December, when the company doubled down and explicitly said they’d continue
hosting and monetizing Nazi newsletters, they’d fully embraced their reputation
as the Nazi bar.
Last week, we got a perfect demonstration of what happens when you build your
platform’s reputation around welcoming Nazis: your recommendation algorithms
stop merely tolerating Nazi content and start treating it as content worth
promoting.
As Taylor Lorenz reported on
User Mag’s Patreon account, Substack sent push
notifications to users encouraging them to subscribe to “NatSocToday,” a
newsletter that “describes itself as ‘a weekly newsletter featuring opinions
and news important to the National Socialist and White Nationalist Community.’”
The notification included the newsletter’s swastika logo, leading confused
users to wonder why they were getting Nazi symbols pushed to their phones.
“I had [a swastika] pop up as a notification and I’m like, wtf is this? Why
am I getting this?” one user said. “I was quite alarmed and blocked it.”
Some users speculated that Substack had issued the push alert intentionally
in order to generate engagement or that it was tied to Substack’s recent
fundraising round. Substack is primarily funded by Andreessen Horowitz, a
firm whose founders have pushed extreme far-right rhetoric.
“I thought that Substack was just for diaries and things like that,” a user
who posted about receiving the alert on his Instagram story told User Mag.
“I didn’t realize there was such a prominent presence of the far right on
the app.”
Substack’s response was predictable corporate damage control:
“We discovered an error that caused some people to receive push
notifications they should never have received,” a spokesperson told User
Mag. “In some cases, these notifications were extremely offensive or
disturbing. This was a serious error, and we apologize for the distress it
caused.”
But here’s the thing about algorithmic “errors”—they reveal the underlying
patterns your system has learned. Recommendation algorithms don’t randomly
select content to promote. They surface content based on engagement metrics:
subscribers, likes, comments, and growth patterns. When Nazi content
consistently hits those metrics, the algorithm learns to treat it as successful
content worth promoting to similar users.
There may be some randomness involved, and an algorithm’s output isn’t a
perfect window into how the system was trained, but it at least raises some
serious questions about what Substack’s existing data has taught it that
people will like."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics