Bad Blood is the history of Theranos, written by John Carreyrou. John is not just some random journalist-turned-author — he’s the same Wall Street Journal reporter who blew Theranos open like a microwaved egg, with his bombshell yet hilariously understated exposé in 2015:
“Hot Startup Theranos Has Struggled With Its Blood-Test Technology”
(understatement of the year, albeit only in hindsight). It’s a great story, and a fascinating window into the nuts-and-bolts of investigative journalism.
For anyone living under a cinderblock for the past decade, the tl,dr of Theranos:
College drop-out Elizabeth Holmes founds biotech startup Theranos
Theranos claims it has technology which could, from a drop of blood from a finger prick (instead of traditional blood draws), diagnose hundreds of medical conditions from low Vitamin D to Herpes.
(Spoiler: they didn’t, and it couldn’t)
Despite having no working technology, Elizabeth compensates with a closet of black Steve Jobsian turtlenecks and a strangely husky voice, and the company raises hundreds of millions in venture funding at a peak valuation of $10 billion
Theranos fakes it, but forgets the second half, and never makes it. The company collapses, everyone loses their money, and its leaders face criminal trials.
My whole career, I’ve lived in deep tech, and had to deal with the semi-literate catastrophe of “tech news”. I went into the book with as much respect for tech journalism as I have for pond slime, so my prior on the Theranos story was:
“Ambitious young founder drops out of Stanford with good idea. Media builds up young (female) founder into unicorn founder superhero. When technology fizzles out, the founder, unable to admit defeat due to immaturity and media adulation, accidentally digs a grave for herself with well-intentioned but compounding exaggerations. Lies build up until the company collapses. Finito.
While it was technically ‘fraud’, nobody got hurt except investors who didn’t do due diligence, so… so what?”
Well, I was wrong. Theranos — and Theranos was, indisputably, a physical manifestation of Elizabeth Holmes’s psyche — lied from the beginning, and was doing Bad Shit well before Elizabeth became a Young Female Founder icon on the cover of Forbes.
And when I say “Bad Shit”, I mean:
Lying, outright, in the press and to partners, about what technology was being used to run tests.
Completely inventing revenue projections. This is what got them to unicorn status. The lying didn’t come “post-unicorn”
Completely disregarding employee feedback, even when being told outright “these devices are random number generators, but we’re using them to provide clinical results, and should probably stop”
Lying, outright, to the board of directors about basic things, like “our devices are being used in Afghanistan”
Giving patients clinical results based on clearly malfunctioning experimental devices. And like wildly bad results: giving patients potassium readings which classified them as “obviously deceased”.
I don’t want to go too deep into the details. Pretty much every part of the story is equally wild, and you should just read it, if you’re at all interested in reading about biotech trainwrecks.
One of the craziest parts of the story (to me) is how narrowly it happened at all. There were several points in the story where the breakthrough hinged on absolutely tiny connections or revelations — and usually, those connections were tech-enabled.
First, one of the key connections — the one which actually connected the whistleblower to John Carreyrou — was a LinkedIn profile view notification (!):
“While checking his emails a few days later, Fuisz saw a notification from LinkedIn alerting him that someone new had looked up his profile on the site. The viewer’s name—Alan Beam—didn’t ring a bell but his job title got Fuisz’s attention: laboratory director at Theranos. Fuisz sent Beam a message through the site’s InMail feature asking if they could talk on the phone. He thought the odds of getting a response were very low, but it was worth a try. He was in Malibu taking photos with his old Leica camera the next day when a short reply from Beam appeared in his iPhone in-box.”
In case you haven’t logged into LinkedIn recently, that’s the stupid little notification that shows up right before a recruiter tries to connect with you:
This case breaking open hinged on Fuisz being notified that someone had viewed his LinkedIn profile. This connected a whistleblowing former employee with a disgruntled legal rival, who knew a guy who ran a pathology blog. That blogger just happened to know an investigative WSJ reporter.
And that brought down a $10B startup.
It wasn’t the only place where tech-connectivity was critical to breaking the story open. John was able to use Yelp to find doctors to attest to Theranos’s unreliability:
“I had another lead, though, after scanning Yelp to see if anyone had complained about a bad experience with Theranos. Sure enough, a woman who appeared to be a doctor and went by “Natalie M.” had. Yelp has a feature that allows you to send messages to reviewers, so I sent her a note with my contact information. She called me the next day. ”
(This is still a thing, by the way — you can still find irate customers in Phoenix on Yelp dealing with the repercussions of randomized Theranos test results):
There’s the obvious stealth tech too, of course — burner phones, burner emails, email backups, and all the other digital tools which make it impossible to permanently hide internet-connected information in the 21st century.
I don’t mean to imply that the internet (and all the weird stuff we’ve layered on top of the web) made the journalism easy — clearly this story was a grind from start to finish against brutal legal pressure by Theranos. It’s entirely possible John would have broken the story open without all the newly available digital tricks of the trade.
Or, maybe not.
Theranos certainly wouldn’t have lasted forever, one way or another. The technology simply didn’t work. Safeway or Walgreens, once they had rolled out commercial partnerships, would have figured this out… eventually.
But it seems likely it would have lasted long enough to kill a lot of people.
Or: I Thought I was a Programmer but now I’m worried I’m a High Modernist.
Seeing Like a State (SlaS) by James C. Scott is a rallying cry against imperialist high modernism. Imperialist high modernism, in the language of the book, is the thesis that:
Big projects are better,
organized big is the only good big,
formal scientific organization is the only good system, and
it is the duty of elites leading the state to make these projects happen — by force if needed
The thesis sounds vague, but it’s really just big. Scott walks through historical examples to flesh out his thesis:
scientific forestry in eighteenth-century Europe
land reforms / standardization in Europe and beyond
the communist revolution in Russia
agricultural reforms in the USSR and Tanzania
modernist city planning in Paris, Brazil, and India
The conclusion, gruesomely paraphrased, is that “top-down, state-mandated reforms are almost never a win for the average subject/victim of those plans”, for two reasons:
Top-down “reforms” are usually aimed not at optimizing overall resource production, but at optimizing resource extraction by the state.
Example: State-imposed agricultural reforms rarely actually produced more food than peasant agriculture, but they invariably produced more easily countable and taxable food
Top-down order, when it is aimed at improving lives, often misfires by ignoring hyper-local expertise in favor of expansive, dry-labbed formulae and (importantly) academic aesthetics
Example: Rectangular-gridded, mono-cropped, giant farms work in certain Northern European climates, but failed miserably when imposed in tropical climates
Example: Modernist city planning optimized for straight lines, organized districts, and giant apartment complexes to maximize factory production, but at the cost of cities people could actually live in.
However.
Scott, while discussing how Imperial High Modernism has wrought oppression and starvation upon the pre-modern and developing worlds, neglected (in a forgivable oversight) to discuss how first-world Software Engineers have also suffered at the hands of imperial high modernism.
Which is a shame, because the themes in this book align with the most vicious battles fought by corporate software engineering teams. Let this be the missing chapter.
The Imperial High Modernist Cathedral vs The Bazaar
Imperial high modernist urban design optimizes for top-down order and symmetry. High modernist planners had great trust in the aesthetics of design, believing earnestly that optimal function flows from beautiful form.
Or, simpler: “A well-designed city looks beautiful on a blueprint. If it’s ugly from a birds-eye view, it’s a bad city.”
The hallmarks of high modernist urban planning were clean lines, clean distinctions between functions, and giant identical (repeating) structures. Spheres of life were cleanly divided — industry goes here, commerce goes here, houses go here. If this reminds you of autism-spectrum children sorting M&Ms by color before eating them, you get the idea.
Le Corbusier is the face of high modernist architecture, and SlaS focuses on his contributions (so to speak) to the field. While Le Corbusier actualized very few real-world planned cities, he drew a lot of pictures, so we can see his visions of a perfect city:
True to form, the cities were beautiful from the air, or perhaps from spectacularly high vantage points — the cities were designed for blueprints, and state legibility. Wide, open roads, straight lines, and everything in an expected place. Shopping malls in one district, not mixed alongside residences. Vast apartment blocks, with vast open plazas between.
Long story short, these cookie-cutter designs were great for urban planners, and convenient for governments. But they were awful for people.
The reshuffling of populations from living neighborhoods into apartment blocks destroyed social structures
Small neighborhood enterprises — corner stores and cafes — had no place in these grand designs. The “future” was to be grand enterprises, in grand shopping districts.
Individuals had no ownership of the city they lived in. There were no neighborhood committees, no informal social bonds.
Fundamentally, the “city from on high” imposed an order upon how people were supposed to live their lives, not even bothering to first learn how the “masses” were already living; it swept clean the structures, habits, and other social lube that made the “old” city tick.
In the end, the high modernist cities failed, and modern city planning makes an earnest effort to work with the filthy masses, accepting as natural a baseline of disorder and chaos, to help build a city people want to live in.
If this conclusion makes you twitch, you may be a Software Engineer. Because the same aesthetic preferences which turned Le Corbusier’s gears are also the foundation of “good” software architecture; namely:
Good code is pretty code
Good architecture diagrams visually appear organized
Software devs don’t draft cityscapes, but they do draw Lucidchart wireframes. And a “good” service architecture for a web service would look something like this:
We could try to objectively measure the “good” parts of the architecture (a couple of these checks are sketched in code below):
Each service has only a couple clearly defined inputs and outputs
Data flow is (primarily) unidirectional
Each service appears to do “one logical thing”
But software engineers don’t use a checklist to generate first impressions. Often, before even reading the lines, the impression of a good design is:
Yeah, that looks like a decently clean, organized architecture
In contrast, a “messy” architecture… looks like a mess:
We could likewise break down why it’s a mess:
Services don’t have clearly defined roles
The architecture isn’t layered (the user interacts with backend services?)
There are a lot more service calls
Data flow is not unidirectional
But most software architects wouldn’t wade through the details at first glance. The first reaction is:
Why are there f******* lines everywhere??? What do these microservices even do? How does a user even… I don’t care, burn it.
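For fun, a couple of the criteria from the lists above can even be checked mechanically. Below is a toy Python sketch, with entirely hypothetical service names, that flags the two easiest smells: cyclic data flow (the “lines everywhere” problem) and high fan-out (a service that clearly isn’t doing “one logical thing”).

```python
from collections import defaultdict

# Hypothetical service call graph (toy names, not any real system):
# each key is a service, each value is the list of services it calls.
CALLS = {
    "frontend":  ["api"],
    "api":       ["auth", "orders"],
    "orders":    ["payments", "inventory"],
    "payments":  [],
    "inventory": [],
    "auth":      [],
}

def has_cycle(graph):
    """True if any call chain loops back on itself.

    A cycle means data flow isn't unidirectional (the 'lines everywhere'
    smell from the messy diagram)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(lambda: WHITE)

    def visit(node):
        color[node] = GRAY
        for callee in graph.get(node, []):
            if color[callee] == GRAY:            # back-edge: cycle found
                return True
            if color[callee] == WHITE and visit(callee):
                return True
        color[node] = BLACK
        return False

    return any(color[svc] == WHITE and visit(svc) for svc in graph)

def fan_out(graph):
    """How many other services each service calls; a rough proxy for
    'does this service do one logical thing?'."""
    return {svc: len(callees) for svc, callees in graph.items()}

if __name__ == "__main__":
    print("cyclic call graph:", has_cycle(CALLS))    # False: tidy layering
    print("fan-out per service:", fan_out(CALLS))
```

No architect actually runs a script like this before recoiling at a diagram, of course; the gut reaction comes first, which is rather the point.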
In practice, most good engineers are ruthless high modernist fascists. Unlike the proto-statist but good-hearted urban planners of the early 1900s (“workers are dumb meat and need to be corralled like cattle, but I want them to be happy cows!”), we wrench the means of production from our code with blood and iron. Inasmuch as the subjects are electrons, this isn’t a failing of the system — it’s the system delivering.
Where this aesthetic breaks down is when these engineers have to coordinate with other human beings — beings who don’t always share the same vision of a system’s platonic ideals. To a perfectionist architect, outside contributions risk tainting the geometric precision with which a system was crafted.
Eric S. Raymond famously summarized the two models for building collaborative software in his essay (and later, book): The Cathedral and the Bazaar.
Unlike in urban planning, the software Cathedral came first. Every man dies alone, and every programmer codes solo. Corporate, commercial cathedrals were run by a lone God Emperor (or a small team of them), carefully vetting contributions for coherence to a grander plan. The essay summarizes the distinctions better than I can rehash, so I’ll quote at length.
The Cathedral model represents mind-made-matter diktat from above:
I believed that the most important software (operating systems and really large tools like Emacs) needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time.
The grand exception to this pattern was an upstart open-source Operating System you may have heard of — Linux. Linux took a different approach to design, welcoming with open arms external contributions and all the chaos and dissent they brought:
Linus Torvalds’s style of development – release early and often, delegate everything you can, be open to the point of promiscuity – came as a surprise. No quiet, reverent cathedral-building here – rather, the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.
Eric predicted that the challenges of working within the chaos of the Bazaar — the struggle of herding argumentative usenet-connected cats in a common direction — would be vastly outweighed by the individual skills, experience, and contributions of those cats:
I think the future of open-source software will increasingly belong to people who know how to play Linus’ game, people who leave behind the cathedral and embrace the bazaar. This is not to say that individual vision and brilliance will no longer matter; rather, I think that the cutting edge of open-source software will belong to people who start from individual vision and brilliance, then amplify it through the effective construction of voluntary communities of interest.
Eric was right — Linux dominated, and the Bazaar won. In the open-source world, it won so conclusively that we pretty much just speak the language of the bazaar:
“Community contributions” are the defining measure of health for an Open Source project. No contributions implies a dead project.
“Pull Requests” are how outsiders contribute to OSS projects. Public-editable project wikis are totally standard documentation. Debate (usually) happens on public mailing lists, public Slacks, public Discord servers. Radical transparency is the default.
I won’t take this too far — most successful open-source projects remain a labor of love by a core cadre of believers. But very few successful OSS projects reject outside efforts to flesh out the core vision, be it through documentation, code, or self-masochistic user testing.
The ultimate victory of the Bazaar over the Cathedral mirrors the abandonment of high modernist urban planning. But here it was a silent victory; the difference between cities and software is that dying software quietly fades away, while dying cities end up on the evening news and on UNICEF donation mailers. The OSS Bazaar won, and the Cathedral faded away without a bang.
Take that, Le Corbusier!
High Modernist Corporate IT vs Developer Metis
At risk of appropriating the suffering of Soviet peasants, there’s another domain where the impositions of high modernism closely parallel the world of software — the mechanics of software development itself.
First, a definition: Metis is a critical but fuzzy concept in SlaS, so I’ll attempt to define it here. Metis is the on-the-ground, hard-to-codify, adaptive knowledge workers use to “get stuff done”. In the context of farming, it’s:
“I have 30 variants of rice, but I’ll plant the ones suited to a particular amount of rainfall in a particular year in this particular soil, otherwise the rice will die and everyone will starve to death”
Or in the context of a factory, it’s,
“Sure, that machine works, but when it’s raining and the humidity is high, turning it on will short-circuit, arc through your brain, and turn the operator into pulpy organic fertilizer.”
and so forth.
In the context of programming, metis is the tips and tricks that turn a mediocre new graduate into a great (dare I say, 10x) developer. Using ZSH to get git color annotation. Knowing that, “yeah, Lambda is generally cool and great best practice, but since the service is connected to a VPC with fat layers, the bursty traffic is going to lead to horrible cold-start times, customers abandoning you, the company going bankrupt, Sales execs forced to live on the streets catching rats and eating them raw.” Etc.
Trusting developer metis means trusting developers to know which tools and technologies to use, rather than viewing developers as sources of execution independent of the expertise and tools which turned them into good developers.
Corporate IT — especially at large companies — has an infamous fetish for standardization. Prototypical “standardizations” could mean funneling every dev in an organization onto:
the same hardware, running the same OS (“2015 Macbook Airs for everyone”)
the same IDE (“This is a Visual Studio shop”)
an org-wide standard development methodology (“All changes via GitHub PRs, all teams use 2-week scrum sprints”)
org-wide tool choices (“every team will use Terraform V 0.11.0, on AWS”)
If top-down dev tool standardization reminds you of the Holodomor, the Soviet sorta-genocide side-effect of dekulakizing Ukraine, then we’re on the same page.
To be fair, these standardizations are, in the better cases, more defensible than the Soviet agricultural reforms in SlaS. The decisions were (almost always) made by real developers elevated to the role of architect. And not just developers, but really good devs. This is an improvement over the Soviet Union, where Stalin promoted his dog’s favorite groomer to be your district agricultural officer, a man who knew as much about farming as the average farmer knows about vegan dog shampoo.
But even good standards are sticky, and sticky standards leave a dev team trapped in amber. Recruiting into a hyper-standardized org asymptotically approaches “take and hire the best, and boil them down to high-IQ, Ivy+ League developer paste; apply liberally to under-staffed new initiatives”
When tech startups win against these incumbents, it’s by staying nimble in changing times — changing markets, changing technologies, changing consumer preferences.
To phrase “startups vs the enterprise” in the language of Seeing Like a State: nimble teams — especially nimble engineering teams — can take advantage of developer metis to quickly reposition under changing circumstances, while high modernist companies (let’s pick on IBM), like a Soviet collectivist farm, choose to excel at producing standardized money-printing mainframe servers — but only until the weather changes, and the market shifts to the cloud.
Overall
The main thing I struggled with while reading Seeing Like a State is that it’s a book about history. The oppression and policy failures are real, but real in a world distant in both space and time — I could connect more concretely to a discussion of crypto-currency, contemporary public education, or the FDA. Framing software engineering in the language of high modernism helped me ground this book in the world I live in.
Takeaways for my own life? Besides the concrete ones (don’t collectivize Russian peasant farms, avoid monoculture agriculture at all costs), the main takeaway will be to view aesthetic simplicity with a skeptical eye. Aesthetic beauty is a great heuristic which guides us towards scalable designs — until it doesn’t.
And when it doesn’t, a bunch of Russian peasants starve to death.
Blueprint by Nicholas Christakis posits that humans are all fundamentally the same. Except under unusual circumstances, humans build societies full of good people, with instincts inclined towards kindness and cooperation. I read it.
This book is a grab-bag which combines lab-grown sociology (much of it from Nicholas’s own team) with down-and-dirty research about common foundational elements across human societies — both “natural” ones (tribes more-or-less undisturbed by modern society) and “artificial” ones (religious sects and shipwrecked crews).
tl,dr:
First, the book gives a tour of artificial and real communities, and their defining features:
Pre-industrial societies (such as the Hadza in Tanzania)
Hippie communes in the 70s
Utopian communities in the 1800s
Sexual mores in uncommon cultures (non-monogamous or polygynous)
Religious sects (Shakers)
Shipwrecked crews (some successful, some disasters)
Nicholas takes findings from these communities and references them against his own research on human behavior in controlled circumstances (think: Amazon Mechanical Turk, MMORPG-esque games, and other controlled sociological experiments that test human social behavior under variations of the prisoners’ dilemma; a toy sketch of this kind of game appears a few paragraphs below), and against our behavior compared to that of other intelligent primates (chimps and bonobos). From all of this, he comes up with a central theme:
“Humans are all genetically hard-wired to construct certain types of societies, based on certain foundational cultural elements. These societies trend towards “goodness”, with predispositions towards:
Kindness, generosity, and fairness
Monogamy (or light polygyny)
Friendship
Inclination to teach
Leadership
There are differences between people, and possibly across cultures, based on genetic differences, but these distinctions are trivial when measured against the commonalities in the societies we build”
Or, in his own words:
“We should be humble in the face of temptations to engineer society in opposition to our instincts. Fortunately, we do not need to exercise any such authority in order to have a good life. The arc of our evolutionary history is long. But it bends toward goodness.”
It’s an all-encompassing statement. Given the breadth of human experience, it’s a hard one to either negate or endorse without begging a thousand counterexamples.
(This summary comes out sounding like I’m accusing Blueprint of being primarily hand-wavy sociology, which wasn’t intentional. The research and historical errata are fairly hard science. But the conclusion is markedly less concrete than the research behind it.)
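For concreteness, here is roughly what those “controlled circumstances” look like, rendered as a toy Python sketch. This is my own illustration of a networked prisoners’ dilemma with imitation dynamics, not Christakis’s actual experimental protocol; the payoff values and the ring-shaped network are arbitrary assumptions.

```python
import random

# Toy networked prisoners' dilemma; purely illustrative, not the actual
# experimental protocol from the book.
COST, BENEFIT = 1.0, 3.0   # cooperating costs 1 per neighbor, pays each neighbor 3
N_PLAYERS, N_ROUNDS = 30, 10

# Ring network: each player plays against their two neighbors every round.
neighbors = {i: [(i - 1) % N_PLAYERS, (i + 1) % N_PLAYERS] for i in range(N_PLAYERS)}
cooperates = {i: random.random() < 0.5 for i in range(N_PLAYERS)}

for round_num in range(N_ROUNDS):
    payoff = {i: 0.0 for i in range(N_PLAYERS)}
    for i in range(N_PLAYERS):
        if cooperates[i]:
            payoff[i] -= COST * len(neighbors[i])
            for j in neighbors[i]:
                payoff[j] += BENEFIT
    # Imitation dynamics: adopt the strategy of your best-scoring neighbor
    # if that neighbor out-earned you this round.
    best_neighbor = {i: max(neighbors[i], key=lambda j: payoff[j]) for i in range(N_PLAYERS)}
    cooperates = {
        i: cooperates[best_neighbor[i]] if payoff[best_neighbor[i]] > payoff[i] else cooperates[i]
        for i in range(N_PLAYERS)
    }
    print(f"round {round_num}: {sum(cooperates.values())} of {N_PLAYERS} cooperating")
```

With these particular payoffs, clusters of cooperators tend to persist while isolated cooperators get wiped out; experiments like the ones described in the book also ask what happens when players can rewire their social ties.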
To be honest, I had more fun with the world tour — the fun anecdotes like “Hippie urban communes in the 70s actually did fewer drugs than the norm”, or “certain Amazon tribes believe that children are just large balls of semen, and children can have five fathers if the mother sleeps with every dude in the tribe” — than I had any “aha” moments with regards to the actual thesis.
My guess is that the book is a decade before its time — in 2020, we know enough to confidently state that “genes matter”, but are only beginning to get the faintest glimpse of “which genes matter”. Until the biology research catches up with the sociology (I never expected myself to type that), it’s hard to separate out “humans, because of these specific genes, organize ourselves into monogamous or lightly-polygynous societies with altruism, friends, respect for elders, sexual jealousy and love of children” from “any complex society inherently will develop emergent properties like friends, altruism and sexual jealousy”.
I did find one interesting, tangible take-away: the examples in Blueprint suggest a common recurring theme of physical ritual, like ceremonial dances and singing, in successful “artificial” communities.
Obviously, song & dance are a central theme in pretty much every natural community (eg, civilizations which developed over thousands of years) as well, but it’s easier to use artificial communities as a natural experiment, because many of these “new” communities completely failed — we generally don’t get to observe historical cultures fail in real-time.
(to be clear, this was not even slightly a central theme of the book — I’m extrapolating it from the examples he detailed)
In the chapter on ‘Intentional communities’ (that is, constructed societies, a la communes or utopian sects), Nicholas discusses the remarkable success of the Shaker sect. Why remarkable? Because the sect endured, and even grew, for a hundred years, despite some obvious recruiting challenges:
Shakers worked hard, all the time
Shakers didn’t (individually) own possessions
Shakers were utterly, absolutely, celibate
Much of the appeal of the Shaker communities to converts was the camaraderie and, in some ways, the progressive values, like equality between the sexes. But a lot of the success seems to stem from the kinship and closeness engendered by ritual:
“Religious practice involved as many as a dozen meetings per week with distinctive dances and marches.”
Wikipedia adds to this story with contemporary illustrations; here, “Shakers during worship”:
I’m sure that economic and cultural aspects of Shaker communities also attracted converts and retained members, but I have to wonder whether part of the success of Shaker-ism (despite the extreme drawbacks of membership) was due to the closeness engendered by… essentially constant, physical ritual.
The second example was from Ernest Shackleton’s Imperial Trans-Antarctic Expedition. The tl,dr of Shackleton’s expedition is:
28 men were shipwrecked in Antarctica (aboard the Endurance)
For the better part of a year, they were stuck on an ice-bound boat, with no obvious exit plan
There was absolutely no fighting or tension in the crew. Nobody was killed, left to die, or recycled as dinner. In fact, nobody died, at all.
The last point is a remarkable achievement, given the other shipwrecked “societies” described in Blueprint — shipwrecked crews were wont to fall prey to violence, infighting, and occasionally cannibalism. Blueprint, though, quotes survivors as they describe how the crew of the Endurance… endured:
“Strikingly, the men spent a lot of time on organized entertainment, passing the time with soccer matches, theatrical productions, and concerts… On the winter solstice, another special occasion, Hurley reported a string of thirty different “humorous” performances that included cross-dressing and singing. In his journal from the ordeal, Major Thomas Orde-Lees (who later became a pioneer in parachuting) noted: “We had a grand concert of 24 turns including a few new topical songs and so ended one of the happiest days of my life.”
It’s hard to separate cause and effect — a crew already inclined towards murdering each other over scarce Seal-jerky is unlikely to put on a musical production — but it seems likely that the “ritual” entertainment was a reinforcing element of the camaraderie as much as it was an artifact.
It’s hard to conjure up many strong feelings about Blueprint. It’s worth reading for the anecdotes and history, but my main take from the descriptions of “in-progress research” is that in a decade, we’ll be able to actually tie human behavior back to its genetic underpinnings, and won’t have to speculate quite as much.
Blueprint is a good read, but the sequel will (hopefully) prove an even better one.
I recently read (well, absorbed via Audible) The Decadent Society by Ross Douthat. tl,dr (summary, not opinion):
We are stuck in a civilizational rut, and have been there since either the 1980s or early 2000s, depending on how you count.
Technological progress has stalled since the early 2000s. We’ve made no meaningful progress on the “big” initiatives (space exploration, fixing aging, flying cars, or AI) since then.
Culture has not really innovated since the 1980s. New art is derivative and empty, movies are mostly sequels, music is lousy covers, etc.
Politics has entrenched into two static camps bookended by rehashed politics from the 80s (neoliberal free trade vs Soviet central planning and redistribution)
Even religion is fairly stagnant. Splinter sects and utopian communes are creepy and usually turn into weird sex cults, but represent spiritual dynamism. Their decline indicates a stagnation in our attempts to find spiritual meaning in life.
A sustained fertility rate decline in the developed world either indicates, causes, or in an unvirtuous cycle reinforces risk-aversion in both the economic and cultural planes.
In summary: Everything kinda sucks, for a bunch of reasons, and there’s a decent chance we’ll be stuck in the self-stabilizing but boring ShittyFuture™ for a long, long time. The Decadent Society is not an optimistic book, even when it pays lip service to “how we can escape” (spoiler: deus ex deus et machina).
While TDS doesn’t really make any strong claims about how we got into this mess, Douthat suggests that fertility declines, standard-of-living comforts, and the internet act as mechanisms of stasis, holding us in “decadence”. I want to talk about the last one — the internet.
Revisited opinion: the Internet might not actually be a net force for change
My pre-TDS stance on the internet as a force for social change was:
“The internet is a change accelerator, because it massively increases the connection density between individuals. On the bright side, this can accelerate scientific progress, give voice to unpopular but correct opinions, and give everyone a place to feel heard and welcome.
But the dark side of social media is an analog to Slotin and the demon core — Twitter slowly turns the screwdriver, carefully closing the gap between the beryllium tamper and the plutonium core for noble reasons, but sooner or later Dorsey will slip and accidentally fatally dose all onlookers with 10,000 rad(ical)s of Tweet poisoning.
Traditional society (with social pressure, lack of information transmission fidelity, slow communications) acted as a control rod, dampening feedback and suppressing unpopular opinions, for better or for worse, but is irrelevant in 2020. Net-net, the world moves faster when we are all connected.”
TDS disagrees, contesting (paraphrased, only because I can’t grab a quote from Audible):
“No, the internet is a force against social change. Instead of marching in the street, rioting, and performing acts of civil disobedience, the young and angry yell on Twitter and Facebook. Social media is an escape valve, allowing people to cosplay actual social movements by imitating the words and energy of past movements without taking actual risks or leaving home.
But entrenched political structures don’t actually care what people are yelling online, and can at best pay lip service to the words while changing nothing. While a lot of people get angry, nobody actually changes anything.”
The presented evidence is:
The core topics of online debate haven’t really changed since the 1980s. The left is still bookended by socialists and political correctness, and the right bookended by neoliberalism and reactionary religion.
Non-violent protests (marches and sit-ins), while not uncommon, are sanctioned, short, safe, and more akin to parades than true efforts at change. No movement is even close to Martin Luther King Jr.’s March on Washington.
Un-civil acts of disobedience (rioting, unsanctioned protests, bombings, etc) are nearly non-existent, even among radical groups, by historical standards.
(this is a short and summarized list, but the book obviously fleshes these points out in vastly greater and more effective depth)
The last point is at first glance difficult to square with BLM protests, Occupy Wall Street, and occasional May Day riots. Media coverage makes them feel big. But as Ross Douthat points out, in 1969, there were over 3,000 bombings in the United States (!!!), by a variety of fringe and radical groups (ex, the Weather Underground, the New World Liberation Front and the Symbionese Liberation Army). Even the tiniest fraction of this unrest would be a wildly radical departure from protests of the 2020s, and would dominate news cycles for weeks or months.
On the nonviolent side, the Civil Rights and anti-Vietnam-war movements were driven to victory by public demonstrations and mass protests. Popular opinion and voting followed enthusiastic-but-minority protests and acts of nonviolent civil disobedience (ex, Rosa Parks).
Conclusion: activists in the 1960s, 70s and 80s engaged in physical, real-world acts of resistance, in a way the protests of the 2010s do not. Why? Suspect #1 is the internet: would-be activists can now use the internet as a safety-valve for toxic (but fundamentally ineffective) venting. But instead of these voices instigating social change, the voices stay online while the speakers pursue safe, uneventful daily lives.
I’m not 100% converted. The magnifying glass of social media does change behavior in meaningful, conformist ways, and I don’t think we’ve reached the endgame of internet culture.
But put in the context of the radical (or at minimum, society-transforming) movements America experienced every decade until the 2000s, TDS makes a compelling case that the ease of yelling online without leaving home comes at a hidden cost — real-world political change.