The corporate / academic / public AI value gap

There is a huge gap between the benefits of Artificial Intelligence the public is being sold, the benefits of AI which are being marketed to corporate adopters, and the actual motivations of AI researchers.

  • Tech providers pitch AI as a driver of innovation (self-driving cars) and global good (mitigating global warming).  But the B2B case studies presented to corporate clients more often frame AI solutions as better automation, mostly enabling cost reduction (specifically, reducing human-in-the-loop labor).
  • While many AI researchers are motivated by genuine interest in improving the human condition, other motivations diverge — a desire to push the bounds of what we can do, a genuine belief in transhumanism (the desire for AI to replace humanity, or transform it into something entirely unrecognizable), or simply because AI pays bigly.

These drivers — replacing human employment, and perhaps humans themselves — are, to put it mildly, not visions the public has bought into.

But these internal motivations are drowned out by the marketing pitch by which AI is sold to the public: “AI will solve [hunger/the environment/war/global warming]”.  This leaves the people not “in the know” about AI progress — 99% of the population — not even thinking to use democracy to direct AI research towards a world the (average) person actually wants to live in.

This is not particularly fair.

Marketed AI vs Profitable AI

To the public, the tech giants selling AI solutions (Google, Microsoft, and Apple) pitch visions of AI for good.  

The public face of these advertising campaigns is usually brand advertising, perhaps pitching consumers on a software ecosystem (Android, iOS), but rarely selling any specific product.  This makes it easy to sell the public a vision of the future in HDR, backed by inspirational hipster soundtracks.

You all know what I’m talking about — you’ve seen them on TV and in movie theaters — but the imagery is so honed, so heroic, that we should look at the pictures anyway.

Google’s AI will do revolutionary things, like fix farming, improve birthday parties, and help us not walk off cliffs: 

Microsoft’s AI is safe.  You can tell because this man is looking very thoughtfully into the distance:

But if that is not enough to convince you, here is a bird:

Microsoft goes into detail on their “AI for Good” page.  The testimonials highlight the power of AI as applied to:

  • Environmental sustainability (image recognition of land use, wildlife tracking, maximizing farm yields)
  • Healthcare (dredging through data to find diseases)
  • Accessibility (machine translation, text to speech)
  • Humanitarian action and Cultural Heritage preservation

Even the Chinese search engine Baidu, not exactly known for its humanitarian work, has joined the Partnership on AI, a consortium nominally dedicated to developing and deploying only safe, beneficial AI.

The theme among all these public “good AI” initiatives — the sales pitch to the public — is:

“We’re developing advanced AI, but we’re partnering with NGOs, hospitals, and more, to make this AI work for people, not against them.  Look at all the good we can do!”

This isn’t fake.  Microsoft is working with nonprofits, NGOs, and more, to deploy for-the-people AI.  But these applications don’t get us closer to the real question:

“What solutions are normal companies actually deploying with AI-as-a-service cloud technology?”

We can peek behind the curtain at Amazon.  Amazon’s AWS has been for the last decade synonymous with “the cloud”, and still has a full 50% market share.  The bleeding edge of AWS is its plug-and-play machine learning and AI tools: Amazon Forecast (time-series forecasting), Amazon Polly (text to speech), Amazon Rekognition (image and video recognition), Amazon Comprehend (natural language processing), and more.
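To make “plug-and-play” concrete, this is roughly the kind of call involved: a single Amazon Polly request that turns a script into an IVR-ready audio file.  (A minimal sketch of my own, not from any AWS case study; it assumes boto3 is installed and AWS credentials are configured, and the script text and voice are illustrative.)

```python
# A minimal sketch of "plug-and-play" AI: one Polly call replaces a recording
# session. Assumes boto3 is installed and AWS credentials are configured;
# the text and voice are illustrative.
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Thank you for calling. Please enter your account number.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The returned AudioStream is ready to drop into an IVR system.
with open("ivr_greeting.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```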

And Amazon, refreshingly alone among the tech giants, doesn’t even pretend to care why its customers use AI:

“We certainly don’t want to do evil; everything we’ve released to customers to innovate [helps] to lift the bar on what’s actually happening in the industry. It’s really up to the individual organisation how they use that tech”

Amazon sells AI to C-suites, and we know what the hooks are, because the marketing pitches are online.  AWS publishes case studies about how their plug-and-play AI and ML solutions are used by customers. 

We can look at a typical example here, outlining how DXC used AWS’s ML and AI toolkits to improve customer service call center interactions.  Fair warning:  the full read is catastrophically boring — which is to be expected when AI is used not to expand the horizon of what is possible… but instead to excise human labor from work which is already being done:

“DXC has also reduced the lead time to edit and change call flow messaging on its IVR system. With its previous technology, it took two months to make changes to IVR scripts because DXC had to retain a professional voice-over actor and employ a specialized engineer to upload any change. With Amazon Polly, it only takes hours”

Using Amazon Connect, DXC has been able to automate password resets, so the number of calls that get transferred to an agent has dropped by 30–60 percent.

DXC anticipates an internal cost reduction of 30–40 percent as a result of implementing Amazon Connect, thanks to increased automation and improved productivity on each call.

In total, what did DXC do with its deployed AI solution?  AI is being used to:

  • Replace a voice-over actor
  • Eliminate an operations engineer
  • Eliminate customer service agents

There’s nothing evil in streamlining operations.  But because of the split messaging used to sell AI research to the public vs to industry — on one hand, visions of environmental sustainability and medical breakthroughs, and on the other, the mundane breakthrough of applying a scalpel to a call center’s staffing — the public has little insight (other than nagging discomfort) into the automation end-game.

The complete lack of (organized) public anger or federal AI policy — or even an attempt at a policy — speaks to the success of this doublespeak.

Research motivations

So why are actual engineers and researchers building AI solutions?

I could dredge forums and form theories, but I decided to just ask on reddit, in a quick and completely unscientific test.  Feel free to read all the responses — I’ve tried to aggregate them here and distill them into the four main themes.  Weighted by upvotes, here’s the summary:

Preface: none of these are radical new revelations.  They match, in degrees, what you’d find with a more exhaustive dragnet of public statements and blogs, or after liquoring up the academic research mainstream.

Walking down the list:

1. Improving the human condition

The plurality goal is to better the human condition, which is promising.  An archetypal response is a vision of a future without work (or at least, without universal work):

“I believe the fundamental problem of the human race is that pretty much everyone has to work for us to survive.

So I want to work on fixing that.”

It’s not a vision without controversy — it’s an open question whether people can really live fulfilled lives in a world where they aren’t really needed — but at minimum it’s a vision many could get behind, and is at root predicated on a goal of human dignity.

2. It pays

Close behind are crude economics.  Top comment:

“Dat money”

I don’t intend to sound negative — capitalism is the lever which moves the world, and in capitalism, money follows value.  But as shown by AWS, value can come from either revolutionary invention (delivering novel value), or cost excision (delivering cheaper value).

Either direction pays the bills (and the engineers), and few megacorp engineers care to peek behind the curtain at which specific aspect of the AI product delivered to clients is actually footing those bills.

3. Transhumanism

Here’s where the interests of true believers in AI diverge from the mainstream.  Top comment:

“I don’t really care about modern ML solutions, I am only concerned with AGI. Once we understand the mechanisms behind our own intelligence, we move to the next phase in our species’ evolution. It’s the next paradigm shift. Working on anything else wouldn’t be worth it since the amount of value it brings is so vast.”

“I’m in it for the money” is just realism.  “A world without work” and “making cheddar” are motivations which appeal to the mainstream, and are at least comprehensible (if frustrating) to those whose jobs are on the line.

Transhumanism is different.  There’s a prevalent (although possibly not majority) philosophy among AI researchers, practitioners, and enthusiasts that strong (human-level) AI is not a tool for humans, but an end unto itself.  The goal is the creation of a grander intelligence beyond our own:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Or, step-by-step:

  • Humans create AI 1.0 with IQ human + 1
  • AI 1.0 creates AI 2.0, which is slightly smarter
  • AI 2.0 creates AI 3.0, which is WAY smarter
  • AI 3.0 creates AI 4.0, which is incomprehensibly smarter

And whatever comes next… we can’t predict.

This is not a complete summary of transhumanism.  There’s a spectrum of goals, and widespread desire for AI which can integrate with humans — think, nanobots in the brain, neural augmentation, or wholesale digital brain uploads.  But either way — whether the goal is to retrofit or replace humans — the end goal is at minimum a radically transformed concept of humanity.

Given that we live in a world stubbornly resistant to even well-understood technological revolutions — nuclear power, GMOs, and at times even vaccines — it’s fair to say that transhumanism is not a future the average voter is onboard for.

4. Just to see if we can 

And just to round it out, a full 16% of the votes could be summarized (verbatim) as:

“Why not?”

Researchers — and engineers — want to build AI, because building AI is fun.  And there’s nothing unusual about Fun Driven Development.  Most revolutionary science doesn’t come from corporate R&D initiatives; it comes from fanatical, driven graduate students, startups, or bored engineers hacking on side projects.

Exploration for the sake of exploration (or with a thin facade of purpose) is what got us out of trees and into lamborghinis.

But at the end of the day, “for the fun” is an intrinsic motivation akin to “for the money”.  The motivation gives an individual engineer satisfaction and purpose, but doesn’t weigh heavily on the scales when answering “should this research exist?” — in the same way we limit the fun of experimental smallpox varietals, DIY open-heart surgery, and backyard nuclear bombs.

Misaligned motivations

The public has been sold a vision of AI for Good; AI as a deus ex machina for some (or all) of our global crises, now and future:

These initiatives aren’t fake, but they also represent a small fraction of actual real-world AI deployments, many if not most of which focus on selling cost-reductions to large enterprises (implicitly and predominantly, via headcount reductions).

AI researchers and implementers, in plurality, believe in the potential good of AI, but more frequently are in it for the money, to replace (or fundamentally alter) humans, or just for the fun of it.

The public, and their elected governments, can’t make informed AI policy if they are being sold only one side of the picture — with the unsavory facts hidden, and the deployment goals obscured.   These mixed messages are catastrophically unfair to the 99% of humanity not closely following AI developments, but whose lives will be, one way or another, changed by the release of even weak (much less, strong) AI.

GPT-3 is the Elephant, not the Rider

The Righteous Mind by Jonathan Haidt explains the link between our conscious, calculating mind and our subconscious, instinctive mind with a metaphor: The Elephant and the Rider:

  • The rider is our “conscious”, reasoning mind, which uses explainable logic to reason about the world, our own behavior, and our preferences 
  • The elephant is the momentum of pre-trained and pre-wired preferences with which we make “snap” decisions about preferences or morality.

The rider — homo logicus — believes itself to be in control of the elephant, but this is only about 10% true.  In truth, when the rider and elephant disagree about which direction to ride, the elephant almost always wins.   The rider instead spends time making excuses to justify why it really intended to go that direction all along!

Or, non-metaphorically: the vast majority of the time, we use our “thinking” mind to explain and generate justifications for our snap judgements — but only rarely is our thinking mind able to actually override our pre-trained biases and steer us into choices we don’t instinctively want to make.

Occasionally, if it’s a topic we don’t have strong pre-trained preferences about (“What’s your opinion on the gold standard?”), the rider has control — but possibly only until the elephant catches a familiar scent (“The gold standard frees individuals from the control of governmental fiat”) and we fall back to pre-wired beliefs.

Most of the time, the job of the rider (our thinking brain) is to explain why the elephant is walking the direction it is — providing concrete, explainable justifications for beliefs whose real foundation is genetic pre-wiring (“Why are spiders scary?”) or decades of imprinting (“Why is incest bad?”).

But even though the rider isn’t, strictly speaking, in control, it’s the glue which helped us level up from smart apes to quasi-hive organisms with cities, indoor plumbing, and senatorial filibusters.  By nudging our elephants in roughly the right direction once in a while, we can build civilizations and only rarely atomize each other.


Traditional visions of AI — and the AI still envisioned by popular culture — are cold, structured, logic incarnate.

Most early game AIs performed a minimax search when choosing a move, methodically evaluating the search space.  For each candidate move, the AI would calculate the opponent’s best possible counter, then its own best reply to that, recursing as deep as computing power permitted:
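Concretely, the whole idea fits in a few lines.  Below is a minimal, self-contained sketch of minimax; the toy game (take 1 or 2 stones, whoever takes the last stone wins) is my own stand-in for chess or checkers, where a real move generator and a static evaluation function would be swapped in.

```python
# A minimal, self-contained sketch of minimax, using a toy "take 1 or 2 stones,
# whoever takes the last stone wins" game as the search space. A real game AI
# would swap in a real move generator and a static evaluation function.
def minimax(stones, maximizing):
    if stones == 0:
        # No stones left: the player to move has already lost.
        return -1 if maximizing else +1

    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    # Our turn: take the best outcome; opponent's turn: assume the worst for us.
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Evaluate each legal move by assuming the opponent then plays perfectly.
    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: minimax(stones - m, maximizing=False))

print(best_move(4))  # -> 1: leave 3 stones, so the opponent cannot win
```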

This is still the AI portrayed in popular media.  In a positive portrayal, the AI is precise, logical, and (somewhat) useful:

C-3PO : Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1

Han Solo : Never tell me the odds.

In a negative portrayal, AI is cold and calculating, but never pointlessly cruel.  In 2001: A Space Odyssey, opening the pod bay doors would have posed (in the worst case) a potential risk to HAL 9000 itself, and to the mission.  The rational move was to eliminate Dave.

Bowman: Open the pod bay doors, HAL.

HAL 9000: I’m sorry, Dave. I’m afraid I can’t do that.

HAL 9000 was simply playing chess against Dave.

NLP and structured knowledge extraction operated similarly.  Classical NLP techniques were built to extract structured facts from natural-language sentences and store them in query-able knowledge bases:

Decisions made by AI systems which used information extraction techniques were fully explainable, because they were built from explicit extracted facts. 
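As a toy illustration of that pipeline (the sentences and the single regex “extractor” are mine, standing in for far more sophisticated parsers), the knowledge base ends up as a list of explicit, inspectable facts:

```python
# A toy sketch of the classical pipeline: extract explicit facts from sentences,
# store them as triples, and answer queries from the stored facts. The pattern
# and sentences are illustrative, not from any particular system.
import re

sentences = [
    "Paris is the capital of France.",
    "HAL 9000 is the antagonist of 2001: A Space Odyssey.",
]

knowledge_base = []
for s in sentences:
    m = re.match(r"(.+?) is the (.+?) of (.+?)\.$", s)
    if m:
        subject, relation, obj = m.groups()
        knowledge_base.append((subject, relation, obj))

def query(relation, obj):
    # Every answer traces back to an explicit extracted fact: fully explainable.
    return [s for (s, r, o) in knowledge_base if r == relation and o == obj]

print(query("capital", "France"))  # ['Paris']
```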

These visions all cast artificial agents as the elephant riders, making decisions from cold, explicit facts.  Perhaps we first tried to build explainable AI because we preferred to see ourselves as the riders — strictly logical agents in firm control of our legacy animal instincts.


But modern AI is the elephant.

Neural networks have replaced traditional structured AI in almost every real application — in both academia and industry.  These networks are fast, effective, dynamic, easy to train (for enough money), and completely unexplainable.

Neural networks imitate animal cognition by modeling computation as layers of connected neurons, each neuron connected to downstream neurons with varying strength:
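In code, the core idea is strikingly compact: a network is nothing more than arrays of connection strengths plus a nonlinearity.  A minimal numpy sketch (layer sizes and random weights are purely illustrative; a real network would be trained, not random):

```python
# A minimal sketch of "layers of connected neurons" using numpy: each layer is
# a weight matrix (connection strengths) followed by a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
# Connection strengths: 4 inputs -> 8 hidden neurons -> 2 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = np.maximum(0, x @ W1)   # each hidden neuron: weighted sum + ReLU
    return hidden @ W2               # output neurons: weighted sum of hidden layer

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
```

Training nudges the numbers in W1 and W2 until the outputs become useful.  Those adjusted numbers are the entire model.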

There’s a huge amount of active research into how to design more effective neural networks, how to train them more efficiently, and how to build hardware which simulates them most effectively (for example, Google’s Tensor Processing Units).

But none of this research changes the fact that neural networks are (perhaps by design) not explainable — training produces networks which are able to answer questions quickly and often correctly, but the trained network is just a mathematical array of weighted vectors which cannot be meaningfully translated into human language for inspection.  The only way to evaluate the AI is to see what it does.

This is the elephant.  And it is wildly effective.


GPT-3 is the world’s most advanced neural network (developed by OpenAI), and an API backed by GPT-3 was soft-released over the past couple of weeks to high-profile beta users.   GPT-3 has 175 billion trained parameters (by far the largest publicly documented network), and was trained on a wide range of internet-available text sources.

GPT-3 is a predictive model — that is, provide it the first part of a block of text, and it will generate the text which it predicts should come next.  The simplest application of text prediction is writing stories, which GPT-3 excels at (the prompt is in bold, generated text below):

But text prediction is equally applicable to everyday conversation.  GPT-3 can, with prompting, answer everyday questions, and even identify when questions are nonsensical (generated answers at the bottom):

Gwern has generated GPT-3 responses on a wide range of prompts, categorizing where it does well and where it does poorly.  Not every response is impressive, but many are, and the conclusion is that GPT-3 is a huge leap forward from GPT-2 (which used 1.5B parameters, vs GPT-3’s 175B).


GPT-2 and GPT-3 have no model of the world, but that doesn’t stop them from having opinions when prompted.

GPT-2/3 are trained on the internet, and are thus the aggregated voice of anyone who has written an opinion on the internet.  So they are very good at quickly generating judgements and opinions, even though they have absolutely no logical or moral framework backing those judgements.

Huggingface provides a publicly-accessible playground to test GPT-2’s predictions on your own text inputs (GPT-3 is, for now, available only to internet celebrities and VCs).  We can prompt GPT-2 for opinions on a variety of emotionally charged topics, like incest:

abortion:

and other topics likely to provoke an emotional response:

These are elephant responses, generated by volume of training data, not clever logical deduction.  GPT-* has absolutely no model of the world, and no moral framework from which to derive its responses — and yet the responses read as plausibly human.

Because, after all, we are 90% elephant.
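For anyone who wants to poke the elephant themselves, the same sort of generation runs locally via the huggingface transformers library.  A minimal sketch (the prompt and sampling parameters are illustrative, and the GPT-2 weights download on first run):

```python
# A minimal sketch of local GPT-2 text generation with huggingface transformers
# (pip install transformers). Prompt and sampling parameters are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The gold standard is"
completions = generator(prompt, max_length=40, num_return_sequences=3,
                        do_sample=True)

for c in completions:
    print(c["generated_text"])
    print("---")
```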


What does this mean for AI, and for us?

Most people have no idea what modern AI is, and that makes effective oversight of AI research by the public completely impossible.  Media depictions of AI have only shown two plausible futures:

  • Hyper-logical, explainable, “Friendly AI”: Data from Star Trek.  Alien, but because of the absence of emotion
  • Hyper-logical, explainable, “Dangerous AI”: Terminator.  Deadly, but for an explainable reason: the AI is eliminating a threat (us)

These visions are so wildly far from the future we are actually in that the public is less informed for having been shown them.

The AIs we’ll actually interact with tomorrow — on Facebook, Reddit, Twitter, or an MMORPG —  are utterly un-logical.  They are the pure distilled emotions of anyone who has ever voiced their opinions on the internet, amplified a thousandfold (and perhaps filtered for anger or love toward particular targets, like China, Russia, or Haribo Gummy Bears).

If we want the public to have any informed opinion about what, how, and where AI is deployed (and as GPT-3/4/5 seem poised to obviate all creative writing, among other careers, this seems like a reasonable ask), the first step is to start showing them an accurate picture of what Google, Microsoft, and OpenAI have actually built.

And second: if we do ever want to get the AI we saw in Star Trek (but hopefully not Terminator), we need to actually build a model-based, logical elephant rider, and not just the elephant itself — even though that’s much, much harder than downloading 20 billion tweets of training data and throwing them at a trillion-parameter neural network.

Or maybe we should figure out how to do it ourselves, first.

Bad Blood: Theranos, Yelp reviews, and LinkedIn profile views

Bad Blood is the history of Theranos, written by John Carreyrou.  John is not just some random journalist-turned-author — he’s the same Wall Street Journal reporter who blew Theranos open like a microwaved egg, with his bombshell yet hilariously understated exposé in 2015:

“Hot Startup Theranos Has Struggled With Its Blood-Test Technology”

(understatement of the year, albeit only in hindsight).  It’s a great story, and a fascinating window into the nuts-and-bolts of investigative journalism.

For anyone living under a cinderblock for the past decade, the tl,dr of Theranos:

  • College drop-out Elizabeth Holmes founds biotech startup Theranos
  • Theranos claims it has technology which could, from a drop of blood from a finger prick (instead of traditional blood draws), diagnose hundreds of medical conditions from low Vitamin D to Herpes.
  • (Spoiler: they didn’t, and it couldn’t)
  • Despite having no working technology, Elizabeth compensates with a closet of black Steve Jobsian turtlenecks and a strangely husky voice, and the company raises hundreds of millions in venture funding at a peak valuation of $10 billion.
  • Theranos fakes it, but forgets the second half, and never makes it.  The company collapses, everyone loses their money, and its leaders face criminal trials.

My whole career, I’ve lived in deep tech, and had to deal with the semi-literate catastrophe of “tech news”. I went into the book with as much respect for tech journalism as I have for pond slime, so my prior on the Theranos story was:

“Ambitious young founder drops out of Stanford with good idea.  Media builds up young (female) founder into unicorn founder superhero.  When the technology fizzles out, the founder, unable to admit defeat due to immaturity and media adulation, accidentally digs grave for self with well-intentioned but compounding exaggerations.  Lies build up until the company collapses.  Finito.

While it was technically ‘fraud’, nobody got hurt except investors who didn’t do due diligence, so… so what?”

Well, I was wrong.  Theranos — and Theranos was, indisputably, a physical manifestation of Elizabeth Holmes’s psyche — lied from the beginning, and was doing Bad Shit well before Elizabeth became a Young Female Founder icon on the cover of Forbes.

And when I say “Bad Shit”, I mean:

  • Lying, outright, in the press and to partners, about what technology was being used to run tests.
  • Completely inventing revenue projections.  This is what got them to unicorn status.  The lying didn’t come “post-unicorn”
  • Completely disregarding employee feedback, even when being told outright “these devices are random number generators, but we’re using them to provide clinical results, and should probably stop”  
  • Lying, outright, to the board of directors about basic things, like “our devices are being used in Afghanistan”
  • Giving patients clinical results based on clearly malfunctioning experimental devices.  And like wildly bad results.  Giving patients Potassium readings which classified them as “obviously deceased”.

I don’t want to go too deep into the details.  Pretty much every part of the story is equally wild, and you should just read it, if you’re at all interested in reading about biotech trainwrecks.  


One of the craziest parts of the story (to me) is how narrowly it came together at all.  There were several points in the story where the breakthrough hinged on absolutely tiny connections or revelations — and usually, those connections were tech-enabled.

First, one of the key connections — the one which actually connected the whistleblower to John Carreyrou — was a LinkedIn profile view notification (!): 

“While checking his emails a few days later, Fuisz saw a notification from LinkedIn alerting him that someone new had looked up his profile on the site. The viewer’s name—Alan Beam—didn’t ring a bell but his job title got Fuisz’s attention: laboratory director at Theranos. Fuisz sent Beam a message through the site’s InMail feature asking if they could talk on the phone. He thought the odds of getting a response were very low, but it was worth a try. He was in Malibu taking photos with his old Leica camera the next day when a short reply from Beam appeared in his iPhone in-box.”

In case you haven’t logged into LinkedIn recently, that’s the stupid little notification that shows up right before a recruiter tries to connect with you: 

This case breaking open hinged on Fuisz being notified that someone had viewed his LinkedIn profile.  This connected a whistleblower former employee with a disgruntled legal rival, who knew a guy who ran a pathology blog.  That blogger just happened to know an investigative WSJ reporter.    

And that brought down a $10B startup.

It wasn’t the only time tech-connectivity was critical to breaking the case open.  John was able to use Yelp to find doctors to attest to Theranos’s unreliability:

“I had another lead, though, after scanning Yelp to see if anyone had complained about a bad experience with Theranos. Sure enough, a woman who appeared to be a doctor and went by “Natalie M.” had. Yelp has a feature that allows you to send messages to reviewers, so I sent her a note with my contact information. She called me the next day. ”

(This is still a thing, by the way — you can still find irate customers in Phoenix on Yelp dealing with the repercussions of randomized Theranos test results):  

There’s the obvious stealth tech too, of course — burner phones, burner emails, email backups, and all the other digital tools which make it impossible to permanently hide internet-connected information in the 21st century.

I don’t mean to imply that the internet (and all the weird stuff we’ve layered on top of the web) made the journalism easy — clearly this story was a grind from start to finish against brutal legal pressure by Theranos.  It’s entirely possible John would have broken the story open without all the newly available digital tricks of the trade.

Or, maybe not.


Theranos certainly wouldn’t have lasted forever, one way or another.  The technology simply didn’t work. Safeway or Walgreens, once they had rolled out commercial partnerships, would have figured this out… eventually.

But it seems likely it would have lasted long enough to kill a lot of people. 

The Imperial High Modernist Cathedral vs The Bazaar

Or: I Thought I was a Programmer but now I’m worried I’m a High Modernist.

Seeing like a State by James C. Scott is a rallying cry against imperialist high modernism.  Imperialist high modernism, in the language of the book, is the thesis that:

  • Big projects are better,
  • organized big is the only good big,
  • formal scientific organization is the only good system, and
  • it is the duty of elites leading the state to make these projects happen — by force if needed

The thesis sounds vague, but it’s really just big.  Scott walks through historical examples to flesh it out:

  • scientific forestry in emerging-scientific Europe
  • land reforms / standardization in Europe and beyond
  • the communist revolution in Russia
  • agricultural reforms in the USSR and Tanzania
  • modernist city planning in Paris, Brazil, and India

The conclusion, gruesomely paraphrased, is that “top-down, state-mandated reforms are almost never a win for the average subject/victim of those plans”, for two reasons:

  1. Top-down “reforms” are usually aimed not at optimizing overall resource production, but at optimizing resource extraction by the state.

    Example: State-imposed agricultural reforms rarely actually produced more food than peasant agriculture, but they invariably produced more easily countable and taxable food
  2. Top-down order, when it is aimed at improving lives, often misfires by ignoring hyper-local expertise in favor of expansive, dry-labbed formulae and (importantly) academic aesthetics

    Example: Rectangular-gridded, mono-cropped, giant farms work in certain Northern European climates, but failed miserably when imposed in tropical climates

    Example: Modernist city planning optimized for straight lines, organized districts, and giant  apartment complexes to maximize factory production, but at the cost of cities people could actually live in.

However.

Scott, while discussing how Imperial High Modernism has wrought oppression and starvation upon the pre-modern and developing worlds, neglected (in a forgivable oversight) to discuss how first-world Software Engineers have also suffered at the hands of imperial high modernism.

Which is a shame, because the themes in this book align with the most vicious battles fought by corporate software engineering teams.  Let this be the missing chapter.

The Imperial High Modernist Cathedral vs The Bazaar

Imperial high modernist urban design optimizes for top-down order and symmetry.  High modernist planners had great trust in the aesthetics of design, believing earnestly that optimal function flows from beautiful form.   

Or, simpler: “A well-designed city looks beautiful on a blueprint.  If it’s ugly from a birds-eye view, it’s a bad city.”

The hallmarks of high modernist urban planning were clean lines, clean distinctions between functions, and giant identical (repeating) structures.  Spheres of life were cleanly divided — industry goes here, commerce goes here, houses go here.  If this reminds you of autism-spectrum children sorting M&Ms by color before eating them, you get the idea.

Le Corbusier is the face of high modernist architecture, and SlaS focuses on his contributions (so to speak) to the field.  While Le Corbusier actualized very few real-world planned cities, he drew a lot of pictures, so we can see his visions of a perfect city:

True to form, the cities were beautiful from the air, or perhaps from spectacularly high vantage points — the cities were designed for blueprints, and state legibility.  Wide, open roads, straight lines, and everything in an expected place.  Shopping malls in one district, not mixed alongside residences.  Vast apartment blocks, with vast open plazas between.

Long story short, these cookie-cutter designs were great for urban planners, and convenient for governments.  But they were awful for people.

  • The reshuffling of populations from living neighborhoods into apartment blocks destroyed social structures
  • Small neighborhood enterprises — corner stores and cafes — had no place in these grand designs.  The “future” was to be grand enterprises, in grand shopping districts. 
  • Individuals had no ownership of the city they lived in.  There were no neighborhood committees, no informal social bonds.

Fundamentally, the “city from on high” imposed an order upon how people were supposed to live their lives, not even bothering to first learn how the “masses” were already living; the planners swept clean the structures, habits, and other social lube that made the “old” city tick.

In the end, the high modernist cities failed, and modern city planning makes an earnest effort to work with the filthy masses, accepting as natural a baseline of disorder and chaos, to help build a city people want to live in.


If this conclusion makes you twitch, you may be a Software Engineer.  Because the same aesthetic preferences which drove Le Corbusier are also the foundation of “good” software architecture; namely:

  • Good code is pretty code
  • Good architecture diagrams visually appear organized

Software devs don’t draft cityscapes, but they do draw Lucidchart wireframes.  And a “good” service architecture for a web service would look something like this:  

We could try to objectively measure the “good” parts of the architecture:

  • Each service has only a couple clearly defined inputs and outputs
  • Data flow is (primarily) unidirectional
  • Each service appears to do “one logical thing”

But software engineers don’t use a checklist to generate first impressions.  Often before even reading the lines, the impression of a good design is,

Yeah, that looks like a decent, clean, organized architecture

In contrast, a “messy” architecture… looks like a mess:

We could likewise break down why it’s a mess:

  • Services don’t have clearly defined roles
  • The architecture isn’t layered (the user interacts with backend services?)
  • There are a lot more service calls
  • Data flow is not unidirectional

But most software architects wouldn’t wade through the details at first glance.  The first reaction is: 

Why are there f******* lines everywhere???  What do these microservices even do? How does a user even… I don’t care, burn it.

In practice, most good engineers are ruthless high modernist fascists.  Unlike the proto-statist but good-hearted urban planners of the early 1900s (“workers are dumb meat and need to be corralled like cattle, but I want them to be happy cows!”), we wrench the means of production from our code with blood and iron.  Inasmuch as the subjects are electrons, this isn’t a failing of the system — it’s the system delivering.

Where this aesthetic breaks down is when these engineers have to coordinate with other human beings — beings who don’t always share the same vision of a system’s platonic ideals.  To a perfectionist architect, outside contributions risk tainting the geometric precision with which a system was crafted.

Eric S Raymond famously summarized the two models for building collaborative software in his essay (and later, book): The Cathedral and the Bazaar

Unlike in urban planning, the software Cathedral came first.  Every man dies alone, and every programmer codes solo.  Corporate, commercial cathedrals were run by a lone ruthless God Emperor (or a small team of them), carefully vetting contributions for coherence with a grander plan.  The essay summarizes the distinctions better than I can rehash, so I’ll quote at length. 

The Cathedral model represents mind-made-matter diktat from above:

I believed that the most important software (operating systems and really large tools like Emacs) needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time.

The grand exception to this pattern was an upstart open-source Operating System you may have heard of — Linux.  Linux took a different approach to design, welcoming with open arms external contributions and all the chaos and dissent they brought:

Linus Torvalds’s style of development – release early and often, delegate everything you can, be open to the point of promiscuity – came as a surprise. No quiet, reverent cathedral-building here – rather, the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.

Eric predicted that the challenges of working within the chaos of the Bazaar — the struggle of herding argumentative usenet-connected cats in a common direction — would be vastly outweighed by the individual skills, experience, and contributions of those cats: 

I think the future of open-source software will increasingly belong to people who know how to play Linus’ game, people who leave behind the cathedral and embrace the bazaar. This is not to say that individual vision and brilliance will no longer matter; rather, I think that the cutting edge of open-source software will belong to people who start from individual vision and brilliance, then amplify it through the effective construction of voluntary communities of interest.

Eric was right — Linux dominated, and the Bazaar won.  In the open-source world, it won so conclusively that we pretty much just speak the language of the bazaar:

  • “Community contributions” are the defining measure of health for an Open Source project.  No contributions implies a dead project.
  • “Pull Requests” are how outsiders contribute to OSS projects.  Publicly-editable project wikis are totally standard documentation.  Debate (usually) happens on public mailing lists, public Slacks, and public Discord servers.  Radical transparency is the default.

I won’t take this too far — most successful open-source projects remain a labor of love by a core cadre of believers.  But very few successful OSS projects reject outside efforts to flesh out the core vision, be it through documentation, code, or self-masochistic user testing.

The ultimate victory of the Bazaar over the Cathedral mirrors the abandonment of high modernist urban planning.  But here it was a silent victory; the difference between cities and software is that dying software quietly fades away, while dying cities end up on the evening news and on UNICEF donation mailers.  The OSS Bazaar won, but the Cathedral faded away without a bang.

Take that, Le Corbusier!

High Modernist Corporate IT vs Developer Metis

At risk of appropriating the suffering of Soviet peasants, there’s another domain where the impositions of high modernism closely parallel the world of software — in the mechanics of software development itself.

First, a definition: Metis is a critical but fuzzy concept in SlaS, so I’ll attempt to define it here.  Metis is the on-the-ground, hard-to-codify, adaptive knowledge workers use to “get stuff done”.   In the context of farming, it’s:

“I have 30 variants of rice, but I’ll plant the ones suited to a particular amount of rainfall in a particular year in this particular soil, otherwise the rice will die and everyone will starve to death”

Or in the context of a factory, it’s,

“Sure, that machine works, but when it’s raining and the humidity is high, turning it on will short-circuit, arc through your brain, and turn the operator into pulpy organic fertilizer.”

and so forth.  

In the context of programming, metis is the tips and tricks that turn a mediocre new graduate into a great (dare I say, 10x) developer.  Using ZSH to get git color annotation.  Knowing that, “yeah Lambda is generally cool and great best practice, but since the service is connected to a VPC and ships fat layers, the bursty traffic is going to lead to horrible cold-start times, customers abandoning you, the company going bankrupt, Sales execs forced to live on the streets catching rats and eating them raw.”  Etc.

Trusting developer metis means trusting developers to know which tools and technologies to use, rather than viewing developers as interchangeable sources of execution, independent of the expertise and tools which turned them into good developers in the first place.

Corporate IT — especially at large companies— has an infamous fetish for standardization.  Prototypical “standardizations” could mean funneling every dev in an organization onto:

  • the same hardware, running the same OS (“2015 Macbook Airs for everyone”)
  • the same IDE (“This is a Visual Studio shop”)
  • an org-wide standard development methodology (“All changes via GitHub PRs, all teams use 2-week scrum sprints”)
  • org-wide tool choices (“every team will use Terraform V 0.11.0,  on AWS”)

If top-down dev tool standardization reminds you of the Holodomor, the Soviet sorta-genocide side-effect of dekulakizing Ukraine, then we’re on the same page. 

To be fair, these standardizations are, in the better cases, more defensible than the Soviet agricultural reforms in SlaS.  The decisions were (almost always) made by real developers elevated to the role of architect.  And not just developers, but really good devs.  This is an improvement over the Soviet Union, where Stalin promoted his dog’s favorite groomer to be your district agricultural officer, a man who knows as much about farming as the average farmer knows about vegan dog shampoo.

But even good standards are sticky, and sticky standards leave a dev team trapped in amber.  Recruiting into a hyper-standardized org asymptotically approaches “hire the best, boil them down to high-IQ, Ivy+ League developer paste, and apply liberally to under-staffed new initiatives.”

When tech startups win against these incumbents, it’s by staying nimble in changing times — changing markets, changing technologies, changing consumer preferences.  

To phrase “startups vs the enterprise” in the language of Seeing Like a State: nimble teams — especially nimble engineering teams — can take advantage of metis developer talent to quickly reposition under changing circumstances, while high modernist companies (let’s pick on IBM), like a Soviet collectivist farm, choose to excel at producing standardized money-printing mainframe servers — but only until the weather changes, and the market shifts to the cloud.

Overall

The main thing I struggled with while reading Seeing like a State is that it’s a book about history.  The oppression and policy failures are real, but real in a world distant in both space and time —  I could connect much more concretely to a discussion of crypto-currency, contemporary public education, or the FDA.  Framing software engineering in the language of high modernism helped me ground this book in the world I live in.

Takeaways for my own life?  Besides the concrete (don’t collectivize Russian peasant farms; avoid monoculture agriculture at all costs), the main one will be to view aesthetic simplicity with a skeptical eye.  Aesthetic beauty is a great heuristic which guides us towards scalable designs — until it doesn’t.

And when it doesn’t, a bunch of Russian peasants starve to death.

Blueprint

Blueprint by Nicholas Christakis posits that humans are all fundamentally the same. Except under unusual circumstances, humans build societies full of good people, with instincts inclined towards kindness and cooperation. I read it.

This book is a grab-bag which combines lab-grown sociology (much of it from Nicholas’s own team) with down-and-dirty research about common foundational elements across human societies — both “natural” ones (tribes more-or-less undisturbed by modern society) and “artificial” ones (religious sects and shipwrecked crews).

tl,dr: 

First, the book gives a tour of artificial and real communities, and their defining features:

  • Pre-industrial societies (such as the Hadza in Tanzania)
  • Hippie communes in the 70s
  • Utopian communities in the 1800s
  • Sexual mores in uncommon cultures (non-monogamous or polygynous)
  • Religious sects (Shakers)
  • Shipwrecked crews (some successful, some disasters)

Nicholas takes findings from these communities and cross-references them against his own research on human behavior in controlled settings (think Amazon Mechanical Turk, MMORPG-esque games, and other controlled sociological experiments testing social behavior under variations of the prisoner’s dilemma), and against our behavior as compared to other intelligent primates (chimps and bonobos), and comes up with a central theme: 

“Humans are all genetically hard-wired to construct certain types of societies, based on certain foundational cultural elements.  These societies trend towards “goodness”, with predispositions towards:

  • Kindness, generosity, and fairness
  • Monogamy (or light polygyny)
  • Friendship
  • Inclination to teach
  • Leadership

There are differences between people, and possibly across cultures, based on genetic differences, but these distinctions are trivial when measured against the commonalities in the societies we build”

Or, in his own words:

“We should be humble in the face of temptations to engineer society in opposition to our instincts. Fortunately, we do not need to exercise any such authority in order to have a good life. The arc of our evolutionary history is long. But it bends toward goodness.”

It’s an all-encompassing statement. Given the breadth of human experience, it’s a hard one to either negate or endorse without begging a thousand counterexamples.

(This summary comes out sounding like I’m accusing Blueprint of being primarily hand-wavy sociology, which wasn’t intentional.  The research and historical detail are fairly hard science.  But the conclusion is markedly less concrete than the research behind it.)

To be honest, I had more fun with the world tour — the fun anecdotes like “Hippie urban communes in the 70s actually did fewer drugs than the norm”, or “certain Amazon tribes believe that children are just large balls of semen, and children can have five fathers if the mother sleeps with every dude in the tribe” — than I had any “aha” moments with regards to the actual thesis.

My guess is that the book is a decade before its time — in 2020, we know enough to confidently state that “genes matter”, but are only beginning to get the faintest glimpse of “which genes matter”.  Until the biology research catches up with the sociology (I never expected myself to type that), it’s hard to separate “humans, because of these specific genes, organize ourselves into monogamous or lightly-polygynous societies with altruism, friends, respect for elders, sexual jealousy and love of children” from “any complex society inherently will develop emergent properties like friends, altruism and sexual jealousy”.

I did find one interesting, tangible, take-away: the examples in Blueprint suggest a common recurring theme of physical ritual, like ceremonial dances and singing, in successful “artificial” communities.

Obviously, song & dance are a central theme in pretty much every natural community (eg, civilizations which developed over thousands of years) as well, but it’s easier to use artificial communities as a natural experiment, because many of these “new” communities completely failed — we generally don’t get to observe historical cultures fail in real-time.

(to be clear, this was not even slightly a central theme of the book — I’m extrapolating it from the examples he detailed)

In the chapter on ‘Intentional communities’ (that is, constructed societies, a la communes or utopian sects), Nicholas discusses the remarkable success of the Shaker sect.  Why remarkable?  Because the sect endured, and even grew, for a hundred years, despite some obvious recruiting challenges:

  • Shakers worked hard, all the time
  • Shakers didn’t (individually) own possessions
  • Shakers were utterly, absolutely, celibate

Much of the appeal of the Shaker communities to converts was the camaraderie, and the in-some-ways progressive values, like equality between the sexes.  But a lot of the success seems to stem from the kinship and closeness engendered by ritual:

“Religious practice involved as many as a dozen meetings per week with distinctive dances and marches.”

Wikipedia adds to this story with contemporary illustrations; here, “Shakers during worship”:

I’m sure that economic and cultural aspects of Shaker communities also attracted converts and retained members, but I have to wonder whether part of the success of Shaker-ism (despite the extreme drawbacks of membership) was due to the closeness engendered by… essentially constant, physical ritual.

The second example was from Ernest Shackleton’s Imperial Trans-Antarctic Expedition.  The tl,dr of Shackleton’s expedition is:

  1. 28 men were shipwrecked in Antarctica (aboard the Endurance)
  2. For the better part of a year, they were stuck on an ice-bound boat, with no obvious exit plan
  3. There was absolutely no fighting or tension in the crew.  Nobody was killed, left to die, or recycled as dinner.  In fact, nobody died, at all.

(3) is a remarkable achievement, given the other shipwrecked “societies” described in Blueprint — shipwrecked crews were wont to fall prey to violence, infighting, and occasionally cannibalism.  Blueprint quotes survivors describing how the crew of the Endurance… endured:

“Strikingly, the men spent a lot of time on organized entertainment, passing the time with soccer matches, theatrical productions, and concerts… On the winter solstice, another special occasion, Hurley reported a string of thirty different “humorous” performances that included cross-dressing and singing. In his journal from the ordeal, Major Thomas Orde-Lees (who later became a pioneer in parachuting) noted: “We had a grand concert of 24 turns including a few new topical songs and so ended one of the happiest days of my life.”

It’s hard to separate cause and effect — a crew already inclined towards murdering each other over scarce Seal-jerky is unlikely to put on a musical production — but it seems likely that the “ritual” entertainment was a reinforcing element of the camaraderie as much as it was an artifact.


It’s hard to conjure up many strong feelings about Blueprint.  It’s worth reading for the anecdotes and history, but my main take from the descriptions of “in-progress research” is that in a decade, we’ll be able to actually tie human behavior back to its genetic underpinnings, and won’t have to speculate quite as much.

Blueprint is a good read, but the sequel will (hopefully) prove an even better one.

Ethically Consistent Cryonics

John woke.  First wet, then cold.  

“Hello John.  I’m Paul Mmmdcxxvi”

Paul’s face drifted into focus.  

“You froze to death (by the standards of the time) climbing Everest in 2005.  You were chipped out of a glacier last week.  Thanks to recent medical advances, you defrosted with minimal tissue damage.”

Hrrrngh.  “Hrrrrng” 

“It is no longer 2005” Paul helpfully added.  “But I need to explain a few things, to frame the real date.  Take your time.  Take a drink.”  Paul gestured at a bottle on a nearby table.

John sat upright, drank, and after a few minutes, looked around.  

The room was dimly lit, but vast.  Behind Paul, extending as far as John could see, stood row upon row of barrels, stacked upwards into complete darkness.  Nearby barrels misted slightly.  Between rows, up in the air, crawled… something?  Somethings?  

Behind Paul blinked a single standard LCD monitor — the only clear light.

“How are you feeling, John?”  Paul prompted.

“Better.” Surprisingly well, John realized.  “So it’s not 2005.  When is it?”

“You missed a lot, so I’ll need to start where you left off.  I apologize if I’m scattered; please do interrupt”  Paul paused, and started:

“Our civilization had two great breakthroughs in the middle of the 21st century.  

“The first was moral.  The earth struggled with ethics in the early 21st century.  We had made great advances in mathematics and physics, but our moral calculus was stagnant by comparison.   Good people put together moral frameworks like Utilitarianism, concepts like Quality Years of Life saved, and tools to compute the time-decay discounts for future lives.  

“Gradually, a formal field of research emerged — computational ethics.  Many of our best minds went to work, and researchers constructed some truly heroic Google Sheet formulas.   But at the end of the day, they couldn’t avoid the inherent contradictions.”

“Contradictions?” John objected.  “I’m no mathematician, but before I died, I lived a decent life.  I still feel bad about stealing Pokemon cards in 5th grade.”

“Not surprising, for a 20th century mind.   But you were optimizing for observable, personal moral impact.  Computational ethics tried to be scientific about optimizing human morality.  For example: how could you justify eating premium Ostrich burgers, while UNICEF struggled to fundraise for famine relief?”

“Well” John admitted, “I assumed my tax dollars mostly took care of that.”

“Naturally.  And that’s just the tip of the iceberg.  We started with the “easy” optimization, maximizing Quality Years of Life.  It worked well at first; we eliminated Polio, hunger, and male-pattern-baldness.  But we got stuck.  It turned out there was no way to optimize out suicide, but leave BASE jumping as an optional joie de vivre.  Or leave potato chips.  Or leave the fun game where teenagers shoot fireworks at each other.”

John mindlessly scratched at the old burns where his left ear once grew.  “That’s a shame, I had a really fun childhood.”

“But it got even worse.  When we ran the numbers, there was no consistent system which condemned murder but condoned voluntary vasectomies.  The average vasectomy destroyed billions of QALYs, by eliminating children, children of children, their grandchildren…”

Wait, what?  “Well, that’s ridiculous.  One is killing a real person, and one is just eliminating a hypothetical future.  If you evaluate your moral utility as a delta from what you could have done instead, you’re going to go crazy.”

“That’s what we eventually figured out.  Science would have moved much faster if gifted with clear thinkers like you.  So we cut all that ‘possible futures’ stuff out, and settled on a simple axiom.”

“Axiom?”  Is this some geometry thing?  “ I just figure that dying is bad, much worse than the other bad stuff.”

“Exactly.  That’s where we landed: ‘People dying is bad’.  It’s been our guiding law, in the few thousand centuries since.  We’ve rebuilt society accordingly.”

That seems fine, John figured.  Wait.  The “thousand centuries” thing is concerning.  Paul seems like a nice guy, but storytelling isn’t his forte.  “Speaking of which, where is everyone?  Why is it so cold?  And what’s with all the storage tanks?  Why’d you defrost me in an Amazon warehouse?”

“That gets me to the second great breakthrough: cryo-sleep.  

“You were incredibly lucky — most people who fell headfirst and naked into a glacier in 2005 ended up a decomposed mess.  But in the 2030s, we perfected supercooled nitrogen bath immersion, and could reliably freeze freshly-dead bodies, with minimal to no risk.”

“Minimal?”

“Cosmetic tissue loss, and nobody’s going to win any Nobel prizes.  But the point is, we can stop people from fully dying.  Once we figure out how to fix what killed someone, we can defrost them, fix them, and send them back out to enjoy a full life.”

Huh.  “That’s really the dream, then.  Live fast, die young, and …”  

“Well…”

“Well?”

“Eventually, perhaps.  But if you die too fast, nobody can put you back together.  We could save the gunshot victims, and stop some heart attacks.  But you know what they say — all the king’s horses and all the king’s men, can’t really do much, when Humpty’s parachute strings get tangled… so we couldn’t justify letting thrill-seekers skydive.”

Ok, so the future is kind of boring.  I can live with that, I guess. “So what do people do for fun nowadays, if the risky stuff is off the table?”

“I’m getting there.  There was a bigger unsolved problem, a nut we haven’t cracked yet.  I’ve personally been working on it for the last forty years.”

“Oh?”

“Old age.  Even if you eliminate the dumb deaths like snakes, train crashes, and undeployed parachutes, eventually, people just break down.  When a person makes it to the end — around 120 is the best we ever did— we run out of ways to keep them ticking.

“It’s likely fixable.  But it’s possible that we won’t ever be able to reverse aging, only forestall it.   Thermodynamics is a bitch.  So we decided it’s ethically unsound to ever let a person die of old age.  Cryo-sleep isn’t forever, but death from old age might be.   So we head it off.  When someone hits 80, they go into storage, and stay there until we’ve figured out how to stop aging”

That’s neat, but I’m only 30, and I’m also recently defrosted, cold, and hungry.  This doesn’t seem super important.  “I really appreciate the backstory, but I’d prefer the cliff notes.  Is there anyone else who could swap in?”

“I’m getting there.  There isn’t anyone else.”

… fuck?

“We almost figured it out too late — by the late 21st century, the vast majority of our energy infrastructure was dedicated to cryogenics.  The more people who kept breathing, the more people who kept getting older.  When they hit 80, they filled up scarcer and scarcer cryo tanks.  

“We only had a few decades left before we hit Peak Cryo.  And if we run out of places to store people, it’ll be a catastrophe — we’ll have to pick and choose who to defrost and recycle.  We can’t let that happen!

“Obviously we can’t just give up on fixing aging.  Everyone would end up dead!  But it doesn’t make sense to parallelize that research — haven’t you read The Mythical Man-Month?  We couldn’t waste effort — every hour counts.”

Ugh, I get it.  “So you froze everyone.  To stop the clock.”

“Precisely.  Some people were unhappy, but most understood the logic — that it was the only possible choice, given our consistent moral foundation.”

Being dead had benefits.  “I suppose that’s an insane but rational choice.  So how close are we…  you…  to actually solving ‘old age’?”

“Me?  Honestly, I gave it a real effort for a decade, but found the biology all very confusing.  I was just a furniture salesman before this, you know?  I’ve spent the last 25 years mostly organizing my predecessor’s notes, in the hopes I could give you a head start.”

John blinked several times, and then several more.  “Me?”

“Of course, you.  It’s really just dumb luck the excavator-bots dug you up (while freeing up some storage space on the north slope) right before I retired.  You’re the youngest person alive — or whatever — by a solid decade, so you’re next in line.

“It’s totally natural to not feel up to the task.  But don’t sweat it — it’s not a huge setback if you fail.  Once you’ve put in 50 years of the good-old-college-try, you’ll get retired and someone fresh will swap in.”

Uh, yeah, prepare for disappointment.   “And if it takes longer than that?  What if nobody currently ‘alive’ can solve it?”

“Worst case, we have a system for making new people, of course.  It was the first thing we developed.  But we won’t start unless it’s a true catastrophe and we run out of existing people.  Given storage constraints, it’s irresponsible to go around making more people except as a last resort.”

I read some sci-fi back in college.  Surely there’s a less-stupid way out of this.  “What about storing people off-world?  Maybe on the moon?”

“Dangerous.  Here, let me — ah, I have the notes.”  Paul swiveled back to the monitor.   “My predecessor spent a few years on this, and couldn’t figure out how to deal with the potential micrometeorite damage.  But I’d certainly encourage you to take a shot.  Fresh neurons, fresh ideas!”

Well.  “And if I say no?”

“I can tell you’re struggling.  It’s ok.  This is why decisions in the 21st century were so hard.  Your people had no real moral axis!

“If you say no, obviously there are backup plans.  The automation” Paul gestured up at the tangle of arms and wires menacingly crawling between stacks of barrels — cryo-tanks, John realized —  “would simply re-freeze you and put you in the back of the queue.  It would be quite unpleasant, but you have my promise you’d survive.  Someday we’ll all get defrosted and have a good laugh about it.”

Paul slowly rose, his right arm on a cane, waving his left arm as if to hail a cab.  “I don’t like to cut this off so soon.  But I’ve already given you far more time than the norm thanks to the unusual circumstances of your defrosting, and I really shouldn’t linger.  It’s far too dangerous, at my age.”

The closest mess of wires enveloped and lifted Paul as he shook John’s hand.

“Take your time and mull it over.  Just shout, and the help” (gesturing to the nearest arm) “will point you to a shower and a hot meal.  The research can wait until you’re ready.  One nice benefit of our ethical framework is that no decision ever has to be rushed.”

The wires disappeared quietly in the dark.  Uncomfortably alone, John stared at the monitor blinking in front of him.  This is the stupidest possible future.  At least Terminator had an awesome robot uprising.

But at the same time, what have I got to lose?  The future is already stupid, and I certainly can’t make it any worse.  I deserve a shower and breakfast first.  I can take a crack at this for at least a day.  I can try anything for a day.  It can’t be much worse than being properly dead.

And it’s not like a bit of hard work is going to kill me, John admitted, because dying is absolutely not an option anymore.

Peacetime Bureaucracy / Wartime Bureaucracy

Peacetime bureaucracy forces all tests through the CDC.  Wartime bureaucracy allows hospitals to run their own tests.

Peacetime bureaucracy prioritizes food labeling rules over functional supply chains.  Wartime bureaucracy gets food out of fields before it rots.

Peacetime bureaucracy creates ethical review boards.  Wartime bureaucracy allows volunteers to infect themselves to test vaccines.

Peacetime bureaucracy inventories ICU beds.   Wartime bureaucracy builds hospitals.

Peacetime bureaucracy certifies medical-purpose N95 respirators.  Wartime bureaucracy uses construction masks.

Peacetime bureaucracy waits for double-blind, controlled efficacy studies. Wartime bureaucracy tells shoppers to wear masks.  

Peacetime bureaucracy prioritizes fraud prevention.  Wartime bureaucracy writes checks.

Peacetime bureaucracy plays diplomatic games to stay funded.  Wartime bureaucracy takes advice from Taiwan.

Peacetime bureaucracy considers every life sacred.  Wartime bureaucracy balances QALYs saved against the price tag.

Peacetime bureaucracy prioritizes racially sensitive nomenclature.  Wartime bureaucracy stops international flights.

Peacetime bureaucracy requires HIPAA certification for telemedicine.  Wartime bureaucracy lets doctors use Skype.

Peacetime bureaucracy optimizes for eventual correctness.  Wartime bureaucracy treats time as the enemy.

Peacetime bureaucracy optimizes for public support in the next election cycle.  Wartime bureaucracy has a long-term plan.


Investors know the difference between peacetime CEOs and wartime CEOs, and trade them out when times demand change.  How do we build institutions which quickly exchange peacetime bureaucracy for wartime bureaucracy? 

Two months into COVID-19, we’re barely halfway there. Next decade (or next year) there will be a next disaster, war, or pandemic.  When that happens, we need wartime officials ready to act — not peacetime officials reluctantly easing into the role.  These public officials must be able to make hard choices with incomplete information.

We need to learn from COVID-19, but the preparation can’t stop at “stockpile masks and ventilators”. Preparation means having officials ready to eliminate red tape, make new rules, and make hard choices on day 1 — not day 30.

We got lucky this time: a trial run against a pandemic whose victims are (predominantly) the old and sick.  Failing utterly to curb COVID-19 would be an ethical failure, but not a civilizational one.

We’re unlikely to be so lucky next time. The future is full of darker, bloodier pandemics than COVID-19 — both natural ones, and man-made ones. When one strikes (and it will) we need a wartime bureaucracy and a playbook ready, telling us which of the old rules still matter — and which do not.

The Decadent Society: Maybe the internet isn’t actually a force for change

I recently read (well, absorbed via Audible) The Decadent Society by Ross Douthat.  tl;dr (summary, not opinion):

We are stuck in a civilizational rut, and have been there since either the 1980s or early 2000s, depending on how you count.

  • Technological progress has stalled.  We’ve made no meaningful progress on the “big” initiatives (space exploration, fixing aging, flying cars, or AI) since the early 2000s. 
  • Culture has not really innovated since the 1980s.  New art is derivative and empty, movies are mostly sequels, music is lousy covers, etc.
  • Politics has entrenched into two static camps, bookended by rehashed politics from the 80s (neoliberal free trade vs Soviet central planning and redistribution). 
  • Even religion is fairly stagnant.  Splinter sects and utopian communes are creepy and usually turn into weird sex cults, but represent spiritual dynamism.  Their decline indicates a stagnation in our attempts to find spiritual meaning in life. 
  • A sustained fertility rate decline in the developed world either indicates, causes, or in an unvirtuous cycle reinforces risk-aversion in both the economic and cultural planes.

In summary: Everything kinda sucks, for a bunch of reasons, and there’s a decent chance we’ll be stuck in the self-stabilizing but boring ShittyFuture™ for a long, long time. The Decadent Society is not an optimistic book, even when it pays lip service to “how we can escape” (spoiler: deus ex deus et machina).

While TDS doesn’t really make any strong claims about how we got into this mess, Douthat suggests that fertility declines, standard-of-living comforts, and the internet act as mechanisms of stasis, holding us in “decadence”. I want to talk about the last one — the internet.

Revisited opinion: the Internet might not actually be a net force for change

My pre-TDS stance on the internet as a force for social change was:  

“The internet is a change accelerator, because it massively increases the connection density between individuals.  On the bright side, this can accelerate scientific progress, give voice to unpopular but correct opinions, and give everyone a place to feel heard and welcome.

But the dark side of social media is an analog to Slotin and the demon core — Twitter slowly turns the screwdriver, carefully closing the gap between the beryllium hemispheres for noble reasons, but sooner or later Dorsey will slip and accidentally fatally dose all onlookers with 10,000 rad(ical)s of Tweet poisoning.

Traditional society (with social pressure, lack of information transmission fidelity, slow communications) acted as a control rod, dampening feedback and suppressing unpopular opinions, for better or for worse, but is irrelevant in 2020.  Net-net, the world moves faster when we are all connected.”

TDS disagrees, contesting (paraphrased, only because I can’t grab a quote from Audible): 

“No, the internet is a force against social change.   Instead of marching in the street, rioting, and performing acts of civil disobedience, the young and angry yell on Twitter and Facebook.  Social media is an escape valve, allowing people to cosplay actual social movements by imitating the words and energy of past movements without taking actual risks or leaving home. 

But entrenched political structures don’t actually care what people are yelling online, and can at best pay lip service to the words while changing nothing.  While a lot of people get angry, nobody actually changes anything.”

The presented evidence is:

  1. The core topics of online debate haven’t really changed since the 1980s.  The left is still bookended by socialists and political correctness, and the right bookended by neoliberalism and reactionary religion.
  2. Non-violent protests (marches and sit-ins), while not uncommon, are sanctioned, short, safe, and more akin to parades than true efforts at change.  No movement comes close to Martin Luther King Jr.’s March on Washington.
  3. Un-civil acts of disobedience (rioting, unsanctioned protests, bombings, etc) are nearly non-existent, even among radical groups, by historical standards.

(this is a short and summarized list, but the book obviously fleshes these points out in vastly greater and more effective depth)

The last point is at first take difficult to square with BLM protests, Occupy Wall Street, and occasional Mayday riots.  Media coverage makes them feel big.  But as Ross Douthat points out, in 1969, there were over 3,000 bombings in the United States (!!!), by a variety of fringe and radical groups (ex, the Weather Underground, the New World Liberation Front and the Symbionese Liberation Army). Even the tiniest fraction of this unrest would be a wildly radical departure from protests of the 2020s, and would dominate news cycles for weeks or months.

On the nonviolent side, the Civil Rights and anti-Vietnam-war movements were driven to victory by public demonstrations and mass protests.  Popular opinion and voting followed enthusiastic-but-minority protests and acts of nonviolent civil disobedience (ex, Rosa Parks).

Conclusion: activists in the 1960s, 70s and 80s engaged in physical, real-world acts of resistance, in a way the protests of the 2010s did not.  Why?  Suspect #1 is the internet: would-be activists can now use the internet as a safety valve for toxic (but fundamentally ineffective) venting. But instead of these voices instigating social change, the voices stay online while the speakers pursue safe, uneventful daily lives.


I’m not 100% converted.  The magnifying glass of social media does change behavior in meaningful, conformist ways, and I don’t think we’ve reached the endgame of internet culture.

But put in the context of the radical (or at minimum, society-transforming) movements America experienced every decade until the 2000s, TDS makes a compelling case that the ease of yelling online without leaving home comes at a hidden cost — real-world political change.

Reading

It’s easy to think without reading, but also easy to read without thinking.

I’ve started reading nonfiction again.  I have a good reason for stopping: I was stuck halfway through Proofs and Refutations for about a year and a half, and as a committed completionist, I couldn’t start any other books until it was done.  After powering through the dregs of P&R, I know a lot about… proofs, less about polyhedrons, and I’m free to re-engage with educational literature.

It’s easy to read without reflecting, though.  I’d venture that 90% of “consumed content by volume” — especially online content — functions only to:

  1. Reinforce biases, in an (optimistically) intellectual circlejerk
  2. Get the reader frothing mad when they Read Stupid Opinions by Stupid People

I don’t think I’m uniquely bad at “intellectually honest reading” —  but “median human” is a low bar, and not one I’m confident I always clear.  If I’m going to go through the motions of reading brain books, I need a forcing function to ensure the input actually adjusts my priors;  if after having read a book, I haven’t changed my mind about anything, I’m wasting my time on comfortable groupthink.

My forcing function — until I get tired of doing it — will be to write something here.  There may be inadvertent side-effects (like accidentally reviewing the book, although I hope not), but my only commitment is to outline at least one stance, large or small, that the book has changed my mind on.  Or, lacking that, an opinion it forced me to form on a topic I hadn’t bothered to think about.

If I can’t find one updated stance, I’m wasting my time. Committing that stance to writing forces crystallization, and committing that writing to a (marginally) public audience forces me to make the writing not entirely stupid.

I make no commitment to keeping this up, but I waited to write this until I had actually written an un-review, so at least n=1, and by publicly declaring a plan, I can (hopefully) guilt myself into maintaining the habit.

Schrödinger’s Gray Goo

Scott Alexander’s review of The Precipice prompted me to commit to keyboard an idea I play with in my head: (1) the biggest risks to humanity are the ones we can’t observe, because they are too catastrophic to survive, and (2) we do ourselves a disservice by focusing only on preventing the catastrophes we have already observed.

Disclaimer: I Am Not A Physicist, and I’m especially not your physicist.

1. Missing bullet holes

The classic parable of survivorship bias comes from the Royal Air Force during WWII.  The story has been recounted many times:

Back during World War II, the RAF lost a lot of planes to German anti-aircraft fire. So they decided to armor them up. But where to put the armor? The obvious answer was to look at planes that returned from missions, count up all the bullet holes in various places, and then put extra armor in the areas that attracted the most fire.

Obvious but wrong. As Hungarian-born mathematician Abraham Wald explained at the time, if a plane makes it back safely even though it has, say, a bunch of bullet holes in its wings, it means that bullet holes in the wings aren’t very dangerous. What you really want to do is armor up the areas that, on average, don’t have any bullet holes.

Why? Because planes with bullet holes in those places never made it back. That’s why you don’t see any bullet holes there on the ones that do return.

The wings and fuselage look like high-risk areas, on account of being full of bullet holes.  They are not. The engines and cockpit only appear unscathed because they are the weakest link.  
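
The bias is easy to reproduce with a toy simulation.  Here is a minimal sketch, with invented section names and per-hit lethality numbers (nothing below comes from Wald’s actual analysis):

```python
import random

# Toy survivorship-bias simulation: every plane takes hits spread uniformly
# across its sections, but hits to "critical" sections are far more likely to
# down the plane.  We then tally hits only on the planes that made it back --
# the only data the RAF had.
SECTIONS = {"wings": 0.05, "fuselage": 0.05, "engine": 0.60, "cockpit": 0.60}
# value = assumed probability that a single hit to that section downs the plane

random.seed(0)
observed_hits = {s: 0 for s in SECTIONS}

for _ in range(100_000):
    hits = [random.choice(list(SECTIONS)) for _ in range(random.randint(1, 5))]
    survived = all(random.random() > SECTIONS[s] for s in hits)
    if survived:                      # we only ever see the returning planes
        for s in hits:
            observed_hits[s] += 1

print(observed_hits)
# Wings and fuselage dominate the observed bullet holes, even though the fire
# was uniform -- exactly the bias Wald pointed out.
```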

2. Quantum interpretations

The thought-experiment of Schrödinger’s cat explores possible interpretations of quantum theory:

The cat is penned up in a steel chamber, along with the following device: In a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. 

Quantum theory posits that we cannot predict individual atomic decay; the decay is an unknowable quantum event, until observed.  The Copenhagen interpretation of quantum physics declares that the cat’s state collapses only when the chamber is opened — until then, the cat remains both alive and dead.

The many-worlds interpretation declares the opposite — that instead, the universe bifurcates into universes where the particle did not decay (and thus the cat survives)  and those where it did (and thus the cat is dead).

The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wavefunction collapse. This implies that all possible outcomes of quantum measurements are physically realized in some “world” or universe.

The many-worlds interpretation implies that there is a very large—perhaps infinite—number of universes. It is one of many multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realised. 

3. The view from inside the box

The quantum suicide thought-experiment imagines Schrödinger’s experiment from the point of view of the cat.  

By the many-worlds interpretation, in one universe (well, several universes) the cat survives.  In the others, it dies. But a cat never observes universes in which it dies. Any cat that walked out of the box, were it a cat prone to self-reflection, would comment upon its profound luck. 

No matter how likely the particle was to decay — even if the outcome was rigged 100 to 1 — the observed outcome remains the same.  The cat walks out of the box grateful for its good fortune.

4. Our box

Perhaps most dangerously, the cat may conclude that since the atom went so long without decaying, even though all the experts predicted decay, the experts must have used poor models which overestimated the inherent existential risk.

Humans do not internalize observability bias.  It is not a natural concept. We only observe the worlds in which we — as humans — exist to observe the present.  Definitionally, no “humanity-ending threat” has ended humanity.   

My question is: How many extinction-level threats have we avoided not through calculated restraint and precautions (lowering the odds of disaster), but through observability bias?

The space of futures where nanobots are invented is (likely) highly bimodal; if self-replicating nanobots are possible at all, they will (likely) prove a revolutionary leap over biological life.  Thus the “gray goo” existential threat posited by some futurists:

Gray goo (also spelled grey goo) is a hypothetical global catastrophic scenario involving molecular nanotechnology in which out-of-control self-replicating machines consume all biomass on Earth while building more of themselves.

If self-replicating nanobots strictly dominate biological life, we won’t spend long experiencing a gray goo apocalypse.  The reduction of earth into soup would take days, not centuries:

Imagine such a replicator floating in a bottle of chemicals, making copies of itself…the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth
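
The arithmetic checks out, as a minimal back-of-the-envelope sketch shows; the per-replicator mass below is my own rough assumption, not a figure from the quote:

```python
# Back-of-the-envelope check of the replicator doubling math quoted above.
EARTH_MASS_KG = 5.97e24
DOUBLING_TIME_S = 1_000        # one copy per 1,000 seconds, per the quote
REPLICATOR_MASS_KG = 1e-15     # assumed mass per replicator (my guess, not the quote's)

# After 10 hours: 36,000 s / 1,000 s = 36 doublings.
after_ten_hours = 2 ** (10 * 3600 // DOUBLING_TIME_S)
print(f"replicators after 10 hours: {after_ten_hours:,}")  # 68,719,476,736 -> "over 68 billion"

# Doublings needed before the swarm outweighs the Earth, under the assumed mass.
doublings = 0
while (2 ** doublings) * REPLICATOR_MASS_KG < EARTH_MASS_KG:
    doublings += 1
print(f"outweighs the Earth after ~{doublings} doublings "
      f"(~{doublings * DOUBLING_TIME_S / 3600:.0f} hours)")
```

Under that assumed mass, the swarm passes the Earth’s mass after roughly 130 doublings (about a day and a half), consistent with the quote’s “less than two days”.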

Imagine a world in which an antiBill Gates stands with a vial of grey goo in one hand, and in the other a Geiger counter pointed at an oxygen-14 atom — “Schrödinger’s gray goo”.  Our antiBill commits to releasing the gray goo the second the oxygen-14 atom decays and triggers the Geiger counter.

In the Copenhagen interpretation, there’s a resolution.  The earth continues to exist for a minute (oxygen-14 has a half-life of 1.1 minutes), perhaps ten minutes, but sooner or later the atom decays, and the earth is transformed into molecular soup, a giant paperclip, or something far stupider.  This is observed from afar by the one true universe, or perhaps by nobody at all.   No human exists to observe what comes next. [curtains]

In the many-worlds interpretation, no human timeline survives in which the oxygen-14 atom decays. antiBill stands eternal vigil over that oxygen-14 atom: the only atom in the universe for which the standard law of half-life decay does not apply.
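
To put numbers on “sooner or later”: using the half-life cited above, the odds that the atom has not yet decayed fall off exponentially.  A minimal, purely illustrative sketch:

```python
# Probability that a single oxygen-14 atom has not yet decayed after t minutes,
# using the ~1.1-minute half-life cited above: P(t) = 0.5 ** (t / t_half).
T_HALF_MIN = 1.1

def survival_probability(t_minutes: float) -> float:
    return 0.5 ** (t_minutes / T_HALF_MIN)

for t in (1, 10, 60):
    print(f"after {t:>2} min: P(no decay) = {survival_probability(t):.2e}")
# ~5.3e-01 after a minute, ~1.8e-03 after ten, ~3.8e-17 after an hour: the
# branches in which antiBill is still waiting thin out very, very fast.
```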

5. Our world

As a species we focus on preventing and averting (to the extent that we avert anything) the risks we are familiar with:

  • Pandemics
  • War (traditional, bloody)
  • Recessions and depressions
  • Natural disasters — volcanoes, earthquakes, hurricanes 

These are all bad.  As a civilization, we occasionally invest money and time to mitigate the next natural disaster, pandemic, or recession.

But we can agree that while some of these are civilizational risks, none of them are truly species-level risks.  Yet we ignore AI and nanotechnology risks, and to a lesser but real degree, we ignore the threat of nuclear war.  Why though?

  • Nuclear war seems pretty risky
  • Rogue AI seems potentially pretty bad
  • Nanobots and grey goo (to the people who think about this kind of thing) seem awful

The reasoning (to the extent that reasoning is ever given) is: “Well, those seem plausible, but we haven’t seen any real danger yet.  Nobody has died, and we’ve never even had a serious incident”

We do see bullet holes labeled “pandemic”, “earthquake”, “war”, and we reasonably conclude that if we got hit once, we could get hit again.  Even if individual bullet holes in the “recession” wing are survivable, the cost in human suffering is immense, and worth fixing.  Enough recession/bullets may even take down our civilization/plane. 

But maybe we are missing the big risks, because they are too big.  Perhaps there exist vanishingly few timelines with a “minor grey goo incident” which atomizes a million unlucky people.  Perhaps there are no “minor nuclear wars”, “annoying nanobots” or “benevolent general AIs”. Once those problems manifest, we cease to be observers.

Maybe these are our missing bullet holes.

6. So, what?

If this theory makes any sense whatsoever — which is not a given — the obvious followup is that we should make a serious effort to evaluate the probability of Risky Things happening, without requiring priors from historical outcomes. Ex:

  • Calculate the actual odds — given what we know of the fundamentals — that we will in the near-term stumble upon self-replicating nanotechnology
  • Calculate the actual odds — given the state of research — that we will produce a general AI in the near future
  • Calculate the actual odds that a drunk Russian submariner will trip on the wrong cable, vaporize Miami, and start WWLast

To keep things moving, we can nominate Nassim Nicholas Taleb to be the Secretary of Predicting and Preventing Scary Things.  I also don’t mean to exclude any other extinction-level scenarios. I just don’t know any others off the top of my head.  I’m sure other smart people do.

If the calculated odds seem pretty bad, we shouldn’t second guess ourselves — they probably are bad.  These calculations can help us guide, monitor, or halt the development of technologies like nanotech and general AI, not in retrospect, but before they come to fruition.

Maybe the Copenhagen interpretation is correct, and the present/future isn’t particularly dangerous.  Or maybe we’ve just gotten really lucky.  While I’d love for either of these to put this line of thought to bed, I’m not personally enthused about betting the future on it.