A Chat About Supremacy

This is the text of the book discussion I led for the Chautauqua Literary & Scientific Circle Alumni Association on June 26, 2025.

Introduction

Last year, I led a discussion about a work of fiction called “The Measure.”

I chose that particular book because it surfaced important questions about life, death, and how humans coexist and treat one another, questions that don’t have easy answers.

Today’s book is NOT fiction, but it is equally intriguing: it, too, poses interesting and important questions that don’t have easy answers.

While last year was a fun thought experiment, the questions that arise from this year’s book REALLY DO need to be answered in a thoughtful way.

The book is Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson, a tech writer at Bloomberg.

Another reason I chose this book is that I’ve worked in tech for most of my life, and I’ve never seen a technology evolve so quickly or create this much excitement and anxiety at the same time.

Survey

During the live talk, I surveyed the audience to see:

  • How many of them had used Gemini.

  • How many had used ChatGPT.

  • How many had heard of Demis Hassabis.

  • And how many had heard of Sam Altman.

As you probably guessed, almost no one had used Gemini or heard of Demis Hassabis, while almost everyone had used ChatGPT and heard of Sam Altman. That perfectly captures the public view of AI right now: it’s the Sam Altman and ChatGPT show.

But the central argument of the book Supremacy is that this is, and always has been, a two-horse race.

To truly grasp what's happening and where we're going, you need to know the other, arguably more foundational, figure in this contest: Demis Hassabis of Google DeepMind.

This book is essentially the story of their rivalry. So, for those who haven't read it, I’ll start with a quick summary.

Book Summary

In November 2022, a webpage appeared online with a simple text box. It was ChatGPT, and it was unlike any app people had used before.

AI was not new in 2022; it has been around in various forms since the 1950s. But the release of ChatGPT was the first time average people like you and me could use AI in a truly interactive way.

In the book, Olson focuses on two of the most important people in AI: Demis Hassabis, Co-Founder & CEO of Google DeepMind, the creators of Gemini, and Sam Altman, Co-Founder & CEO of OpenAI, the creators of ChatGPT.

The book gives you enough biographical information about Hassabis and Altman to understand who they are, what motivates them around AI, and how they got to where they are today: leading two AI labs, backed by tech giants whose power is unprecedented in history, in a rivalry with each other to be the first to create Artificial General Intelligence, or AGI.

It’s a biographical portrait of Hassabis and Altman, as well as the story of Google DeepMind, OpenAI, and the race to AGI.

What Is AI & What Is AGI?

Before I dive into the drama and the questions, I think it would be helpful to define AI and AGI, the latter being the jackpot worth trillions that awaits whoever crosses the finish line first.

Demis Hassabis defines AI as the science of making machines smart.

Essentially, it’s the capability of machines or software to perform tasks that typically require human intelligence.

It’s an umbrella term for many technologies such as machine learning, deep learning and large language models, often referred to as LLMs.

ChatGPT, Gemini and Claude are Large Language Models and are what most people are referring to when they say they’re using AI.

Like many things in the AI ecosystem, Artificial General Intelligence or AGI doesn’t have an agreed-upon definition.

OpenAI defines it as “highly autonomous systems that outperform humans at most economically valuable work.”

Demis, at Google DeepMind, defines it as “the ability to do pretty much any cognitive task that humans can do.”

Basically, machines and software that are as smart as humans.

The level above AGI is Artificial Superintelligence, or ASI: machines and software that are smarter than humans. That sounds a little like science fiction, but not only do most of the AI labs think ASI is possible, it’s their stated goal.

And this is precisely why I think this is an important book for everyone to read, because it’s a glimpse into the lives of two of the most important people leading the charge toward AGI and, ultimately, ASI.

If they achieve their mission, they will have created THE most powerful technology to date, one that could fundamentally change the world: economically, socially, politically. Literally everything.

But, since I don’t have time to talk about “literally everything”, I’ll focus on three themes in the book and some interesting questions around those themes.

The themes are:

  1. Power & influence.

  2. Intelligence.

  3. Work, the economy & meaning.

THEME ONE – Power & Influence

Our first theme is about power and influence.

A thread running through the entire book is the pervasive influence and power of Big Tech monopolies.

When Demis co-founded DeepMind, with the mission of creating AGI, he wanted to protect AI from tech monoliths that "prioritized profit over humanity's well-being".

Then he realized it wasn’t going to happen without a LOT more money and computing power, so he sold DeepMind to Google, while trying, unsuccessfully, to maintain its independence.

On the other side of the pond, Sam Altman and Elon Musk founded OpenAI as a nonprofit “for the good of humanity” because they thought it would be very, very bad for the world if AGI were in the hands of Demis and Google DeepMind.

They, too, eventually realized that they were going to need a TON of money if they were going to beat DeepMind to AGI:

  • So, Elon proposed that Tesla buy OpenAI.

  • Sam’s response was: no, thank you.

  • Elon was given the boot (or he took his boot and his cash elsewhere, depending on how you look at it), and the feud between the two of them is still going strong.

  • And, ultimately, in order to get the cash OpenAI needed, Altman teamed up with Microsoft, another tech giant.

Both AI labs, despite their initial desire for independence, became "bound to big tech" because they needed a huge amount of computing power, data, and talent, along with billions and billions of dollars to build AGI.

And now the race to AGI is being run by companies whose driving motive is an insatiable hunger to grow bigger and bigger and make more and more billions. And, if their bet pays off, trillions and trillions.

With this unprecedented cash cow as motivation, they’ve been "rushing to sell AI tools to the public with virtually no oversight and with far-reaching consequences".

At this point there’s no going back. And layered on top of the race to AGI between the tech giants, we have a race to AGI between the US and China, driven by a fear that whoever gets there first will have the power to annihilate the other.

This is precisely the scenario that led Time magazine to frame this situation as “The ‘Oppenheimer Moment’ That Looms Over Today’s AI Leaders.”

It captures that same sense of a high-stakes race to create a world-altering technology, knowing that the creators—and the world—will have to live with the consequences forever.

So, here is my first set of questions for you, dear reader (or listener, if you are listening to this on YouTube):

  1. Can we create AI that is safe and beneficial to all of society IF it’s being driven by a motive for profit, a drive to be “the first”, and a fear that it will be the end of us if we’re not first?

  2. If it is possible, how might we do it?

  3. If not, what does that mean for society?

I realize these are huge questions, so I invite you to consider even ideas that might seem outrageous.

I also invite you to pause and ponder these questions.

THEME TWO – Intelligence

Our second theme centers on intelligence, because the race to create a machine intelligence that is as smart as or smarter than humans raises the question:

  • What is the nature of "intelligence" and can machines truly possess it in the human sense?

Some people believe large language models are nothing more than math: machines predicting words based on patterns. Their output may sound human, but it is not, because the machines don’t perceive, understand, or feel.
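
To make “predicting words based on patterns” concrete, here is a deliberately toy sketch of my own (not from the book): it “learns” which word tends to follow which in a tiny text, then generates new text purely by pattern-matched prediction. Real LLMs do something vastly more sophisticated, using neural networks trained on huge swaths of the internet, but the spirit is the same.

```python
import random
from collections import defaultdict

# A toy "language model": record which words were observed to
# follow each word in a tiny training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly predicting a plausible next word."""
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:  # no known continuation; stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the rug"
```

Nothing in this sketch perceives, understands, or feels anything; it just continues a pattern. Whether doing that at enormous scale adds up to real intelligence is exactly the question in dispute.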

There are others, some of them engineers working in the AI labs, who think our current level of AI is already showing signs of humanlike understanding, and maybe even consciousness.

So, what do you think?

  • What is the nature of "intelligence”, and can machines possess it?

  • Machines are clearly not human, but if they can imitate our intelligence and emotions such as empathy, what makes us unique? Can our minds, or our essence, be replicated or even improved upon by machines?

  • What do you think it will mean for humans if we succeed in creating machines that are smarter than we are?

Again, I invite you to pause and ponder these questions.

THEME THREE – Work, The Economy & Meaning

The last theme I want to talk about is the impact of AI on society as it relates to work.

The book touches upon the potential impacts of AI on various aspects of society, including the job market.

When asked, most researchers and executives in the AI labs will say something like, “Yes, AI will eliminate some jobs, but it will also create jobs, just like all previous technological advances.”

But not everyone has this “middle of the road, nothing to worry about” attitude.

Recently, in an interview with Axios, Dario Amodei, Co-Founder & CEO of Anthropic, the makers of Claude, predicted that "AI could wipe out half of all entry-level white-collar jobs - and spike unemployment to 10-20% in the next 1 - 5 years."

Demis Hassabis, Co-Founder & CEO of Google DeepMind, disagrees with Dario. He envisions a future where artificial general intelligence transforms not just how we work, but why we work.

He suggests that while AI will displace many current jobs — especially repetitive white-collar roles — it will also enable new types of work built around managing, applying, and creatively collaborating with AI systems.

He points to the possibility of universal AI assistants replacing traditional jobs and anticipates roles emerging that focus more on creativity, exploration, and high-level decision-making.

Meanwhile…

Amazon CEO Andy Jassy, in a recent memo to staff, announced that he expects the rise of generative AI to “reduce” Amazon’s corporate workforce over the coming years as they get efficiency gains from using AI extensively across the company.

And the new AI startup, Mechanize, has the explicit goal of building artificial intelligence tools to automate ALL white-collar jobs “as fast as possible.”

In a New York Times interview, one of Mechanize’s founders is quoted as saying, “Our goal is to fully automate work. We want to get to a fully automated economy and make that happen as fast as possible.”

Everyone has an opinion, but no one really knows what the ratio of jobs lost to jobs created will be.

I also think some of the people who do get laid off won’t have the skills to get rehired.

Programmers are a good example.

Senior programmers become senior-level through experience as entry-level programmers. If AI is doing all the entry-level programming, then we have two problems:

  1. Entry-level programmers can no longer get work, because AI has taken the entry-level jobs and they don’t yet have the skills of a senior-level programmer.

  2. As the senior-level programmers retire, companies are eventually going to run out of senior-level programmers, because they’ve killed off the entry-to-senior pipeline.

This very concern is being discussed in both the programming and legal fields.

And on the flip side of layoffs are soft hiring freezes, driven by two things:

  1. Some companies want to train their current staff to use AI and see how much more they can do without increasing headcount.

  2. Other companies have a “prove you can’t do it with AI before we say yes to new headcount” policy.

And, finally, if AI will eventually be able to do most of the economically valuable work, as OpenAI thinks it will, how can that not lead to mass unemployment?

The idea of “AI being able to do most of the economically valuable work” is presented as a kind of utopia, but the benefits of AI doing all the work will primarily accrue to companies and shareholders, not to human workers, if there are any human workers left.

This creates three big questions in my mind:

  1. How will humans earn money in a society in which machines are doing all the work?

  2. As AI agents replace more and more people, and more and more people no longer have income from jobs, who’s going to have the money to buy the abundance of products that these super-efficient, AI-maximized companies are producing?

  3. Finally, how will humans find meaning and purpose in a culture conditioned to find meaning and value through work, when AI is doing much, if not all, of that work better than they do?

And this is your last invitation to pause and reflect on these questions.

Conclusion

As I come to the end of my talk, I’d like to leave you with a couple of things to think about.

As I said at the beginning of this discussion, I think AI is THE biggest technological advance of my lifetime, and it brings with it both enormous potential and enormous risk.

In the risk category:

  • China and the US want to leverage AI against each other.

  • Some companies are already trying to figure out how AI automation and agents can replace some or all of their human workforce.

  • AI in the hands of criminals puts us all at greater risk of harm.

  • And the concentration of power in AI is unprecedented and growing.

On the upside…

  • AI is already having a positive impact on the lives of those who are putting in the effort to learn how to use it effectively.

  • AI labs like Google DeepMind have created systems like AlphaFold, which is revolutionizing medicine, as well as AI-powered global weather-prediction systems that have enormous potential to save lives.

  • And AI has the potential to help us solve some of the biggest issues we’re facing, such as climate change, and to find cures for diseases that have so far eluded us.

Ezra Klein recently interviewed Ben Buchanan, the former special advisor for AI to the Biden White House.

Klein’s New York Times opinion piece was titled "The Government Knows AGI is Coming."

In the opening of the opinion piece, Klein says…

"If you've been telling yourself this isn't coming, I really think you need to question that. It's not Web3. It's not vaporware.

A lot of what we're talking about is already here right now.

I think we're on the cusp of an era in human history that is unlike any of the eras we have experienced before.

And we're not prepared, in part, because it's not clear what it would mean to prepare.

That's a very important point. We don't know what this will look like, what it will feel like.

We don't know how labor markets will respond.

We don't know which country is going to get there first.

We don't know what it will mean for war.

We don't know what it will mean for peace.

And while there is so much else going on in the world to cover, I do think there's a good chance that when we look back on this era in human history, AI will have been the thing that matters.”

So, what can the average person like you and me do?

  • We can educate ourselves on the basics of AI, the ethical issues surrounding AI, and the potential societal impacts.

  • Then we can contribute our ideas and expertise to coming up with solutions and safeguards.

It might seem like you’re not qualified to participate in such an endeavor, but the engineers and researchers creating AI don’t see it as their job to solve for societal impacts such as job loss.

Those solutions are going to need to come from government, philosophers, ethicists, and people with domain expertise such as educators, doctors, lawyers, financial analysts, etc.

Individually, we might not be the ones to create the solutions, but they will only come from people like us asking the questions and forcing the hard conversations.


YOUR CALL TO ACTION: If you are one who likes sharing (hello extroverts), CLICK HERE and tell me what you thought of this blog post. How do you think AI is going to shape our world?

MJ

Digital Innovation for Nonprofits. We handle the digital heavy lifting—web design, SEO, and AI—so nonprofits can focus on changing the world.

https://qtWebExpert.com