The corporate / academic / public AI value gap

There is a huge gap between the benefits of artificial intelligence being sold to the public, the benefits being marketed to corporate adopters, and the actual motivations of AI researchers.

  • Tech providers pitch AI to the public as a driver of innovation (self-driving cars) and global good (mitigating global warming).  But the B2B case studies aimed at corporate clients more often sell AI as better automation, mostly enabling cost reduction (specifically, reducing human-in-the-loop labor).
  • While many AI researchers are motivated by genuine interest in improving the human condition, other motivations diverge: a desire to push the bounds of what we can do, a genuine belief in transhumanism (the desire for AI to replace humanity, or to transform it into something entirely unrecognizable), or simply because AI pays bigly.

These drivers, replacing human employment and perhaps humans themselves, are, to put it mildly, not visions the public has bought into.

But these internal motivations are drowned out by the marketing pitch by which AI is sold to the public: “AI will solve [hunger/the environment/war/global warming]”.  This leaves the people not “in the know” about AI progress (99% of the population) not even thinking to use democracy to direct AI research towards a world the average person actually wants to live in.

This is not particularly fair.

Marketed AI vs Profitable AI

To the public, the tech giants selling AI solutions (Google, Microsoft, and Apple) pitch visions of AI for good.  

The public face of these advertising campaigns is usually brand advertising, perhaps pitching consumers on a software ecosystem (Android, iOS), but rarely selling any specific product.  This makes it easy to sell the public a vision of the future in HDR, backed by inspirational hipster soundtracks.

You all know what I’m talking about — you’ve seen them on TV and in movie theaters — but the imagery is so honed, so heroic, that we should look at the pictures anyway.

Google’s AI will do revolutionary things, like fix farming, improve birthday parties, and help us not walk off cliffs: 

Microsoft’s AI is safe.  You can tell because this man is looking very thoughtfully into the distance:

But if that is not enough to convince you, here is a bird:

Microsoft goes into detail on their “AI for Good” page.  The testimonials highlight the power of AI as applied to:

  • Environmental sustainability (image recognition of land use, wildlife tracking, maximizing farm yields)
  • Healthcare (dredging through data to find diseases)
  • Accessibility (machine translation, text to speech)
  • Humanitarian action and Cultural Heritage preservation

Even the Chinese search engine Baidu, not exactly known for its humanitarian work, has joined the Partnership on AI, the consortium (whose members include OpenAI, Google, and Microsoft) nominally dedicated to developing and deploying only safe, beneficial AI.

The theme among all these public “good AI” initiatives — the sales pitch to the public — is:

“We’re developing advanced AI, but we’re partnering with NGOs, hospitals, and more, to make this AI work for people, not against them.  Look at all the good we can do!”

This isn’t fake.  Microsoft is working with nonprofits, NGOs, and more, to deploy for-the-people AI.  But these applications don’t get us closer to the real question:

“What solutions are normal companies actually deploying with AI-as-a-service cloud technology?”

We can peek behind the curtain at Amazon.  AWS has for the last decade been synonymous with “the cloud”, and still holds a full 50% market share.  The bleeding edge of AWS is its plug-and-play machine learning and AI tools: Amazon Forecast (time-series forecasting), Amazon Polly (text to speech), Amazon Rekognition (image and video recognition), Amazon Comprehend (natural language processing), and more.
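
To illustrate just how plug-and-play these services are, here is a minimal Python sketch (assuming the boto3 SDK is installed and AWS credentials are configured; the sample text and region are placeholder assumptions) that runs sentiment analysis through Amazon Comprehend:

    import boto3

    # Amazon Comprehend client; the region is a placeholder assumption
    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # A single API call stands in for what once required an in-house NLP team
    response = comprehend.detect_sentiment(
        Text="The support call took forever and solved nothing.",
        LanguageCode="en",
    )

    print(response["Sentiment"])       # e.g. "NEGATIVE"
    print(response["SentimentScore"])  # per-class confidence scores

No models, no training, no data pipeline: the barrier to entry for a corporate adopter is a billing account and a dozen lines of code.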

And Amazon, refreshingly alone among the tech giants, doesn’t even pretend to care why its customers use AI:

“We certainly don’t want to do evil; everything we’ve released to customers to innovate [helps] to lift the bar on what’s actually happening in the industry. It’s really up to the individual organisation how they use that tech”

Amazon sells AI to C-suites, and we know what the hooks are, because the marketing pitches are online.  AWS publishes case studies about how their plug-and-play AI and ML solutions are used by customers. 

We can look at a typical example here, outlining how DXC Technology used AWS’s ML and AI toolkits to improve customer service call center interactions.  Fair warning: the full read is catastrophically boring, which is to be expected when AI is used not to expand the horizon of what is possible, but to excise human labor from work that is already being done:

“DXC has also reduced the lead time to edit and change call flow messaging on its IVR system. With its previous technology, it took two months to make changes to IVR scripts because DXC had to retain a professional voice-over actor and employ a specialized engineer to upload any change. With Amazon Polly, it only takes hours”

Using Amazon Connect, DXC has been able to automate password resets, so the number of calls that get transferred to an agent has dropped by 30–60 percent.

DXC anticipates an internal cost reduction of 30–40 percent as a result of implementing Amazon Connect, thanks to increased automation and improved productivity on each call.
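
To make “it only takes hours” concrete, here is a minimal Python sketch of the Polly half of that change (again assuming boto3 and AWS credentials; the prompt text, voice, and filename are placeholder assumptions).  Re-recording an IVR prompt becomes one API call instead of a studio booking and an engineering ticket:

    import boto3

    polly = boto3.client("polly", region_name="us-east-1")

    # Synthesize a new IVR prompt: no voice-over actor, no upload engineer
    response = polly.synthesize_speech(
        Text="To reset your password, press one.",
        OutputFormat="mp3",
        VoiceId="Joanna",  # placeholder voice choice
    )

    # Write the audio stream to the file the IVR system will serve
    with open("ivr_prompt.mp3", "wb") as f:
        f.write(response["AudioStream"].read())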

In total, what did DXC do with its deployed AI solution?  AI is being used to:

  • Replace a voice-over actor
  • Eliminate an operations engineer
  • Eliminate customer service agents

There’s nothing evil in streamlining operations.  But because of the split messaging used to sell AI to the public versus to industry (visions of environmental sustainability and medical breakthroughs on one hand, the mundane business of applying a scalpel to a call center’s staffing on the other), the public has little insight, beyond a nagging discomfort, into the automation end-game.

The complete lack of (organized) public anger or federal AI policy — or even an attempt at a policy — speaks to the success of this doublespeak.

Research motivations

So why are actual engineers and researchers building AI solutions?

I could dredge forums and form theories, but I decided to just ask on Reddit, in a quick and completely unscientific poll.  Feel free to read all the responses; I’ve tried to aggregate them here and distill them, weighted by upvotes, into four main themes.

Preface: none of these are radical new revelations.  They match, in degrees, what you’d find with a more exhaustive dragnet of public statements and blogs, or after liquoring up the academic research mainstream.

Walking down the list:

1. Improving the human condition

The plurality motivation is to better the human condition, which is promising.  An archetypal response is a vision of a future without work (or at least, without universal work):

“I believe the fundamental problem of the human race is that pretty much everyone has to work for us to survive.

So I want to work on fixing that.”

It’s not a vision without controversy (it’s an open question whether people can really live fulfilled lives in a world where they aren’t really needed), but at minimum it’s a vision many could get behind, and one at root predicated on human dignity.

2. It pays

Close behind are crude economics.  Top comment:

“Dat money”

I don’t intend to sound negative: capitalism is the lever that moves the world, and in capitalism, money follows value.  But as shown by AWS, value can come either from revolutionary invention (delivering novel value) or from cost excision (delivering the same value more cheaply).

Either direction pays the bills (and the engineers), and few megacorp engineers care to peek behind the curtain at which specific aspect of the AI product delivered to clients is doing the paying.

3. Transhumanism

Here’s where the interests of true believers in AI diverge from the mainstream.  Top comment:

“I don’t really care about modern ML solutions, I am only concerned with AGI. Once we understand the mechanisms behind our own intelligence, we move to the next phase in our species’ evolution. It’s the next paradigm shift. Working on anything else wouldn’t be worth it since the amount of value it brings is so vast.”

“I’m in it for the money” is just realism.  “A world without work” and “making cheddar” are motivations which appeal to the mainstream, and are at least comprehensible (if frustrating) to those whose jobs are on the line.

Transhumanism is different.  There’s a prevalent (though possibly not majority) philosophy among AI researchers, practitioners, and enthusiasts that strong (human-level) AI is not a tool for humans, but an end unto itself.  The goal is the creation of a grander intelligence beyond our own:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

(I. J. Good, “Speculations Concerning the First Ultraintelligent Machine”, 1965)

Or, step-by-step:

  • Humans create AI 1.0 with IQ human + 1
  • AI 1.0 creates AI 2.0, which is slightly smarter
  • AI 2.0 creates AI 3.0, which is WAY smarter
  • AI 3.0 creates AI 4.0, which is incomprehensibly smarter

And whatever comes next… we can’t predict.
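
As a toy sketch of that compounding loop (in Python; the growth rule and all the numbers are pure invention, not a prediction of real dynamics):

    # Toy model of recursive self-improvement: each generation designs its
    # successor, and design ability scales with the designer's intelligence.
    iq = 101.0  # AI 1.0: human baseline (100) plus 1
    for generation in range(1, 5):
        print(f"AI {generation}.0 designs at IQ {iq:.0f}")
        iq *= iq / 100  # smarter designers yield disproportionately smarter successors

The specific curve doesn’t matter; the point is that once the designer’s output feeds back into the designer, growth is self-accelerating.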

This is not a complete summary of transhumanism.  There’s a spectrum of goals, including a widespread desire for AI that can integrate with humans: think nanobots in the brain, neural augmentation, or wholesale digital brain uploads.  But either way, whether the goal is to retrofit humans or to replace them, the end state is at minimum a radically transformed concept of humanity.

Given that we live in a world stubbornly resistant to even well-understood technological revolutions (nuclear power, GMOs, and at times even vaccines), it’s fair to say that transhumanism is not a future the average voter is on board with.

4. Just to see if we can 

And just to round it out, a full 16% of the votes could be summarized (verbatim) as:

“Why not?”

Researchers and engineers alike want to build AI, because building AI is fun.  And there’s nothing unusual about Fun Driven Development.  Most revolutionary science doesn’t come from corporate R&D initiatives; it comes from fanatical, driven graduate students, startups, or bored engineers hacking on side projects.

Exploration for the sake of exploration (or with a thin facade of purpose) is what got us out of trees and into Lamborghinis.

But at the end of the day, “for the fun” is an intrinsic motivation akin to “for the money”.  It gives one engineer satisfaction and purpose, but it doesn’t weigh heavily on the scales when answering “should this research exist?”, in the same way we limit the fun of experimental smallpox variants, DIY open-heart surgery, and backyard nuclear bombs.

Misaligned motivations

The public has been sold a vision of AI for Good: AI as a deus ex machina for some (or all) of our global crises, now and future.

These initiatives aren’t fake, but they represent only a small fraction of actual real-world AI deployments, many if not most of which focus on selling cost reductions to large enterprises (implicitly and predominantly, via headcount reductions).

A plurality of AI researchers and implementers believe in the potential good of AI, but taken together, more are in it for the money, to replace (or fundamentally alter) humans, or just for the fun of it.

The public, and their elected governments, can’t make informed AI policy if they are being sold only one side of the picture, with the unsavory facts hidden and the deployment goals obscured.  This mixed messaging is catastrophically unfair to the 99% of humanity not closely following AI developments, but whose lives will be, one way or another, changed by the release of even weak (much less strong) AI.
