Is culture stagnating, or is it just me? Music, film, fashion, and even scientific research all appear to be stuck in a rut, endlessly recycling past trends instead of producing anything original.
Critics like The New Yorker’s Kyle Chayka argue that algorithms—the opaque, automated systems governing what we watch, listen to, and buy—are flattening creativity into predictable patterns. But the problem runs deeper than just homogenized culture. Algorithms now dictate life-altering decisions in criminal justice, housing, employment, and politics, often without accountability. The real question isn’t just whether algorithms are making culture boring—it’s who designs them, what biases they encode, and how they reshape society in ways we barely understand.
Streaming platforms like Spotify and Netflix don’t just recommend content—they manufacture tastes. Ted Gioia points out that the music industry increasingly invests in old catalogs (think Bruce Springsteen and David Bowie) rather than new artists, while Hollywood churns out endless superhero sequels and algorithmically optimized Netflix shows. Even fashion brands replicate the same designs because data shows "that’s what sells."
This isn’t entirely new. Aldous Huxley warned in 1923 that mass-produced entertainment dulls creativity. But today, the gatekeepers—critics, curators, and independent record stores—have often been replaced by black-box algorithms that prioritize engagement over originality. The result? A feedback loop where people consume what the machine feeds them, reinforcing sameness.
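To make the feedback loop concrete, here is a minimal sketch in Python. It is not any real platform's system; the catalog and numbers are invented. It models a recommender that picks songs in proportion to their past play counts, and shows how early popularity compounds.

```python
import random
from collections import Counter

# Hypothetical catalog; every song starts on equal footing.
random.seed(42)
ITEMS = [f"song_{i}" for i in range(100)]
plays = Counter({item: 1 for item in ITEMS})

def recommend() -> str:
    """Pick a song with probability proportional to its play count."""
    items, weights = zip(*plays.items())
    return random.choices(items, weights=weights, k=1)[0]

for _ in range(10_000):
    song = recommend()
    plays[song] += 1  # each listen feeds back into the recommendation weights

top_10_share = sum(n for _, n in plays.most_common(10)) / sum(plays.values())
print(f"Top 10 songs now capture {top_10_share:.0%} of all plays")
```

A rich-get-richer loop emerges with no malice anywhere in the code: the system simply optimizes for what people already consumed, and the catalog narrows.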
Yet blaming algorithms alone misses the bigger picture. Economic forces—skyrocketing rents, corporate consolidation, and dwindling funding for the arts—have gutted independent venues, bookstores, and galleries, leaving tech platforms as the dominant cultural arbiters. Algorithms amplify stagnation but didn’t create it.
The most serious danger comes from unseen algorithms that determine high-impact choices. In Philadelphia, an algorithm developed at the University of Pennsylvania helps set probation conditions. Darnell Gates, a man on probation, only learned from a journalist that his "high-risk" designation came from a software program, and he was left wondering how to fight a predetermined fate.
- Housing & Finance: Mortgage approvals once relied on human judgment; now, risk-assessment algorithms perpetuate "digital redlining," denying loans based on ZIP codes and other biased data.
- Employment & Welfare: The Netherlands used an algorithm to flag welfare fraud—until a court ruled it violated human rights. In the U.S., Walmart’s dynamic pricing software adjusts costs in real time, often prioritizing profit over fairness.
Unlike past injustices (e.g., housing discrimination outlawed in 1968), algorithmic harms lack clear legal remedies. Their inner workings are corporate secrets, making accountability nearly impossible.
Algorithms are not neutral. They reflect the biases of their creators—often white, male, tech-industry engineers—and the flawed data they’re trained on. ProPublica found that a criminal risk-assessment algorithm falsely labeled Black defendants as future criminals at twice the rate of white defendants. Similarly, facial recognition systems misidentify people of color more often, leading to wrongful arrests.
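An audit in the spirit of ProPublica's analysis is straightforward to express in code. The sketch below uses invented records, not their data; it compares false positive rates, meaning defendants flagged "high risk" who did not in fact reoffend, across groups.

```python
from collections import defaultdict

# (group, flagged_high_risk, reoffended): hypothetical records for illustration.
records = [
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", False, True),
    ("white", True,  False), ("white", True,  True),
    ("white", False, False), ("white", False, False), ("white", False, True),
]

false_pos = defaultdict(int)   # flagged high risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in ("black", "white"):
    fpr = false_pos[group] / negatives[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
```

On these toy numbers the false positive rate for one group is double the other's, even though the tool looks "accurate" overall. That is the pattern ProPublica reported: equal-seeming accuracy can hide very unequal errors.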
Governments and corporations argue that algorithms reduce human bias, but in practice they automate and obscure it. In Bristol, England, predictive software identifies "at-risk" youths; critics warn it could reinforce racial profiling. Even when humans oversee the process (as in Bristol's weekly review meetings), the algorithm's recommendations carry undue weight.
People know algorithms manipulate them, yet they feel powerless to resist. Spotify users complain about repetitive playlists but keep listening. Shoppers dislike surge pricing but still buy from Amazon. This resignation, dubbed techno-fatalism, extends to democracy itself. Social media algorithms amplify outrage and misinformation because they drive engagement, distorting politics.
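A toy ranker makes the incentive visible. The posts and scores below are invented; the point is only that an objective built on predicted engagement never asks whether content is true.

```python
# A toy feed ranker (no real platform's code): posts sorted purely by a
# predicted-engagement score. If outrage reliably engages more, it rises
# to the top regardless of accuracy.
posts = [
    {"text": "City council passes budget",           "predicted_engagement": 0.02},
    {"text": "OUTRAGEOUS scandal you won't believe", "predicted_engagement": 0.11},
    {"text": "Fact-check: viral claim is false",     "predicted_engagement": 0.04},
]

feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["text"])
```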
Are we stumbling, as a UN report on digital welfare warned, "zombie-like into a digital-welfare dystopia"? Without transparency laws, we cannot even challenge automated decisions.
But change is possible, though it requires systemic shifts:
1. Regulation: making algorithms transparent and auditable.
2. Public alternatives: supporting non-algorithmic spaces, creative work, and archives.
3. Digital literacy: empowering people to understand algorithms and use them to their advantage.
Unchecked algorithms can reinforce biases, worsen inequality, suppress creativity, and undermine human control. The challenge isn't simply to keep culture from becoming monotonous but to ensure algorithms don't dictate our future without our say.
Recommended Reading:
- Filterworld by Kyle Chayka
- Weapons of Math Destruction by Cathy O'Neil
- The Age of Surveillance Capitalism by Shoshana Zuboff