AI's Great Divergence

2026-04-16 20:28:34 • 20:53


Today on the AI Daily Brief, AI's great divergence.

0:05

Before that in the headlines, one of the weirdest AI pivots yet.

0:09

The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.

0:21

Alright friends, quick announcements before we dive in.

0:23

First of all, thank you to today's sponsors KPMG, Blitzy, Zencoder, and Granola.

0:28

To get an ad-free version of the show, go to patreon.com/aidailybrief,

0:32

or you can subscribe on Apple Podcasts.

0:34

If you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai.

0:39

Just as I finished recording this episode, Anthropic dropped Claude Opus 4.7,

0:43

so the show was produced before we got that announcement.

0:46

Come back tomorrow for that episode, but for now, let's talk about that weird pivot.

0:50

Yesterday, AI made waves on Wall Street once again, although the context for it may be the most absurd yet.

0:56

You might remember briefly popular sneaker company Allbirds.

1:00

They were beloved by many in the tech sector and in 2021 when they went public,

1:04

the company was worth over $4 billion.

1:07

Their stock has since cratered 99% and earlier this month,

1:11

they sold their assets and intellectual property for $39 million

1:14

to a holding company called American Exchange Group,

1:17

which is known for acquiring fashion brands like Ed Hardy,

1:20

That left Allbirds as a largely empty shell company.

1:23

A blank canvas, if you will, and on Wednesday,

1:26

the company announced that their next chapter would be,

1:28

drumroll please, an AI neocloud provider.

1:32

They said they would be raising $50 million to fund the pivot

1:35

and would be changing the company name to Newbird AI.

1:38

Now, rebirthing a dying company to chase a hot new trend

1:41

is not nearly as uncommon as you would think.

1:43

In 2017, a beverage company called Long Island Iced Tea

1:46

changed its name to Long Blockchain and saw a huge pop.

1:50

Just kidding, the company was later delisted

1:52

and charges were filed against it for insider trading.

1:54

The crypto industry saw similar plays with Kodak, RadioShack,

1:58

and of course, Enron.

1:59

Now, in the AI domain, more recently,

2:01

a former karaoke machine company announced that they would be releasing

2:04

AI logistics software.

2:06

Cynical though the analysis may be,

2:07

usually these rebrands have very little substance

2:09

beyond pumping the stock, and Allbirds certainly received a solid pump.

2:13

The stock soared by as much as 875% yesterday,

2:16

but whether they can actually do anything,

2:18

most people are fairly dubious on.

2:20

The Wall Street Journal notes that $50 million

2:22

doesn't get you far in the AI race,

2:24

with neoclouds like CoreWeave and Nebius

2:26

planning to spend tens of billions on infrastructure this year.

2:29

Matt Levine sums it up, of course,

2:31

there are two levels of analysis here.

2:33

One is: sure, Allbirds is pivoting its business to AI compute infrastructure.

2:36

That seems like a competitive and capital intensive business

2:38

in which Allbirds has no obvious expertise,

2:40

but whatever, nostalgic fondness for the sneakers,

2:42

maybe it'll work out.

2:43

The other level is that Allbirds is pivoting its stock

2:46

to being an AI meme stock.

2:47

That definitely worked out.

2:49

I would say that that is a story we can safely leave behind,

2:52

and moving to something that is much more relevant,

2:54

which is that OpenAI has updated their agents SDK,

2:57

with a host of new features that make it easier to build enterprise-grade agents.

3:00

The software developer kit now includes

3:02

a native sandbox integration, allowing developers to keep agents

3:05

contained in particular systems and workflows.

3:07

The basic gist here is that the harness is now separated from the compute layer,

3:11

meaning data can live in the sandbox rather than being jammed into context.

3:14

Interestingly, this is not dissimilar from what we talked about

3:17

on our AI engineering show, in terms of Anthropic's managed agents.

3:21

Both companies independently arrived at a similar architectural move,

3:24

Anthropic called it decoupling the brain from the hands,

3:27

while OpenAI called it separating the harness from compute.

3:30

Both however cite the same reasons.

3:32

Security, i.e. credentials shouldn't live where model-generated code runs,

3:36

durability, i.e. losing a sandbox shouldn't kill the session,

3:39

and scale, i.e. spinning up as many sandboxes per agent as needed.
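The separation both companies describe can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's or Anthropic's actual SDK: a harness process keeps credentials and session state, while model-generated code runs in a throwaway sandbox that inherits nothing from the harness's environment.

```python
import os
import subprocess
import sys
import tempfile

class Sandbox:
    """Disposable compute: executes model-generated code in isolation."""
    def __init__(self):
        self.workdir = tempfile.mkdtemp(prefix="agent-sandbox-")

    def run(self, code: str) -> str:
        path = os.path.join(self.workdir, "task.py")
        with open(path, "w") as f:
            f.write(code)
        # env={} means credentials held by the harness never reach this process
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, env={}, cwd=self.workdir, timeout=30,
        )
        return result.stdout

class Harness:
    """The 'brain': holds credentials and orchestration, runs no untrusted code."""
    def __init__(self, api_key: str):
        self.api_key = api_key  # lives here, outside the sandbox

    def execute_step(self, generated_code: str) -> str:
        # Durability: if a sandbox dies, the session survives; just make another.
        return Sandbox().run(generated_code)

harness = Harness(api_key="sk-demo-not-real")
out = harness.execute_step("import os; print(os.environ.get('API_KEY'))")
print(out)  # the sandboxed code cannot see any credentials from the harness
```

The design choice here is the point: because the sandbox is stateless and disposable, losing one doesn't kill the session, and the harness can scope exactly which resources each sandbox gets.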

3:42

The new agents SDK for OpenAI also delivers significant upgrades

3:45

to the built-in harness, improving file access tools,

3:48

as well as adding memory and compaction.

3:50

Overall, the release brings OpenAI's infrastructure closer to the way

3:53

agents need to operate within secure systems.

3:55

Karen Sharma, a member of the product team, said,

3:57

this launch at its core is about taking our existing agents SDK

4:01

and making it so it's compatible with all of these sandbox providers.

4:05

Together with performance upgrades, Sharma said the goal is to allow companies

4:09

to quote, go build these long-horizon agents using our harness

4:12

with whatever infrastructure they have.

4:14

One way to look at this is as another example of the mad dash

4:17

to translate prosumer AI products into enterprise products

4:20

that can conform to security and operational standards.

4:23

Steve Coffee from OpenAI writes,

4:25

this is the direction I'm excited about for agents.

4:28

Open harnesses that give you the flexibility to deploy your agents at scale

4:31

with your own data on your own terms.

4:33

Armand City writes, agents can now run in controlled environments

4:36

where their access to resources, APIs, and data can be scoped precisely.

4:40

This isn't for consumer chatbots.

4:42

This is for enterprise deployments where you need to let an AI loose

4:45

on real systems without letting it break things.

4:48

Now in a very different part of OpenAI's business,

4:50

the information reports that the company is shifting their ad revenue model

4:53

to pay-per-click.

4:54

One of the frustrations with the early version of ChatGPT ads

4:57

was that advertisers complained that they couldn't properly track performance.

5:00

OpenAI's ad data was less developed than Google's or Meta's,

5:03

so advertisers were left guessing as to how their ads were converting.

5:06

OpenAI was also charging a high premium for those who wanted to participate

5:09

in the early trial.

5:10

The Information now reports that OpenAI will charge only when users click on an ad,

5:16

as opposed to charging per view.

5:16

They're also looking at other action-based pricing,

5:18

including charging when a user makes a purchase.

5:20

The goal is to de-risk trying out this new advertising medium

5:23

by having the payment structure better aligned with the outcomes.
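The difference between the two pricing structures is easy to make concrete. The following toy sketch is purely illustrative, with made-up rates and event names (this is not OpenAI's actual billing model): per-view billing charges every impression, while outcome-based billing charges only on clicks or purchases.

```python
# Illustrative event-based ad billing in integer cents (all rates hypothetical).
PER_VIEW_RATE = 2  # old model: charge for every impression shown

OUTCOME_RATES = {  # new model: charge only when the user acts
    "impression": 0,
    "click": 45,
    "purchase": 400,
}

def bill_per_view(events):
    """Charge a flat rate per impression, regardless of results."""
    return sum(PER_VIEW_RATE for e in events if e == "impression")

def bill_per_outcome(events):
    """Charge only on outcome events like clicks and purchases."""
    return sum(OUTCOME_RATES.get(e, 0) for e in events)

events = ["impression"] * 1000 + ["click"] * 3 + ["purchase"]
print(bill_per_view(events))     # 2000 cents, paid whether or not anyone clicked
print(bill_per_outcome(events))  # 535 cents, paid only for the 3 clicks and 1 purchase
```

Under the outcome model, an advertiser whose ads never convert pays nothing, which is exactly the de-risking the shift is aiming for.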

5:26

Moving to a very different topic,

5:28

the Manus investigation is casting a chilling effect

5:31

over China's startup scene as founders are forced to pick a side.

5:34

Earlier this year, reports circulated that the CCP was taking a closer look

5:37

at Meta's acquisition of Manus.

5:39

In particular, there was some suggestion that Manus's relocation to Singapore

5:42

last year was a bid to circumvent Chinese tech export controls.

5:46

In late March, two Manus co-founders met with Chinese officials

5:48

and were informed they would not be allowed to leave the country

5:51

until the investigation concluded.

5:53

According to The Information's China-based reporter Jing Yang,

5:55

this move has spooked Chinese founders

5:57

and neutered hopes of international success.

5:59

Hank Yuan, a founder working on an AI agent company, said,

6:02

If you want to build AI products for markets outside China now,

6:05

you will have to think even more carefully about which markets to target,

6:08

how to structure your business, and whether to raise money

6:10

in Chinese yuan or US dollars.

6:12

He added, all the AI startup founders I know are paying attention to Manus.

6:16

Now, until this point, there had sort of been a tacit truce

6:19

between Beijing and Shenzhen.

6:20

Founders could freely travel to the US to seek funding,

6:23

and there was an implicit understanding that tech success

6:25

mattered more than strict nationalism.

6:27

Now, of course, no official policy exists, so there's no policy to change,

6:30

but it still appears that founders have gotten the message.

6:33

A co-founder of an AI video startup said,

6:35

Originally, we thought we had many options for exits,

6:37

but now the takeaway from Manus is,

6:39

if your startup is acquired by other companies,

6:41

don't get acquired by US companies.

6:43

If you are acquired by Alibaba or Tencent, that's fine.

6:46

Now, interestingly, the result isn't a total halt

6:48

to Chinese founders heading to US markets.

6:50

They seemingly just need to commit to picking a side.

6:53

One Chinese-born founder working in San Francisco, for example,

6:55

said he has now pivoted to hiring devs in Singapore rather than China.

6:58

He commented,

6:59

Having a team in Singapore costs more,

7:01

and the quality isn't as good as having a China team,

7:03

but I still don't want to build a China team.

7:05

It's too risky.

7:07

Which I think is interesting context for our last story,

7:10

which is recent comments from Nvidia founder Jensen Huang

7:13

about the need for dialogue between the US and China.

7:15

Jensen was the latest high-profile guest to appear on the Dwarkesh Podcast this week.

7:19

And in the show, he dug in on why he believes cooperation

7:22

rather than export controls is the right way to navigate the rise of AI in geopolitics.

7:27

Dwarkesh framed the question around a scenario where China gets access to enough advanced chips

7:31

to train a Mythos-level model and can run cyberattacks using millions of agents.

7:35

Huang rejected the premise, commenting,

7:37

Mythos was trained on fairly mundane capacity

7:40

and a fairly mundane amount of it by a fairly exceptional company.

7:43

So the amount of capacity it was trained on is abundantly available in China.

7:46

You first need to realize that chips exist in China.

7:50

Huang went on to explain that China has around half the AI researchers in the world,

7:53

abundant energy, and chip manufacturing that is swiftly ramping up.

7:57

Reframing the question then, Huang asked,

7:59

if you're worried about them, what is the best way to create a safe world?

8:02

Victimizing them, turning them into an enemy likely isn't the best answer.

8:05

He continued,

8:06

they are an adversary.

8:07

We want the United States to win.

8:09

But I think having a dialogue and having research dialogue is probably the safest thing to do.

8:13

This is an area that is glaringly missing because of our current attitude about China as an adversary.

8:17

It is essential that our AI researchers and their AI researchers are actually talking.

8:21

It is essential that we try to both agree on what not to use AI for.

8:25

Now for some, this was just Jensen talking his book,

8:27

as Beth J. Zos put it,

8:29

securing the bag for GPU sales to China.

8:31

But I think Ed Elson's more nuanced take is closer to right.

8:34

He writes that he thought that what Jensen was basically trying to say

8:37

is that the question isn't whether China achieves Mythos Level AI,

8:41

because they will.

8:42

Ed writes it's whether they will use it to try to destroy America.

8:45

Bringing up the nuclear comparison Ed says,

8:47

the same question goes for nukes.

8:49

China has nukes and yet they haven't nuked us.

8:51

Why? Because they don't want to.

8:53

The interview is certainly worth a watch.

8:55

If for no other reason than that Dwarkesh seems to be one of very few people who is actually willing to ask CEOs hard questions.

8:59

But I will say that I don't think it's nearly as contentious and simple as social media is making it out to be.

9:04

Shocker, right?

9:06

If nothing else, it did give us a meme video quote which I will use forever now.

9:09

You're not talking to somebody who woke up a loser.

9:13

And that loser attitude, that loser premise makes no sense to me.

9:18

But with that moment of glory, that's going to do it for today's AI Daily Brief Headlines.

9:21

Next up, the main episode.

9:26

Alright folks, quick pause.

9:28

Here's the uncomfortable truth.

9:29

If your enterprise AI strategy is we bought some tools, you don't actually have a strategy.

9:34

KPMG took the harder route and became their own client zero.

9:38

They embedded AI and agents across the enterprise: how work gets done, how teams collaborate, how decisions move,

9:44

not as a tech initiative but as a total operating model shift.

9:47

And here's the real unlock.

9:49

That shift raised the ceiling on what people could do.

9:51

Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum.

9:56

The outcome was a more capable, more empowered workforce.

9:59

If you want to understand what that actually looks like in the real world, go to www.kpmg.us/ai.

10:06

That's www.kpmg.us/ai.

10:11

Want to accelerate enterprise software development velocity by 5x?

10:16

You need Blitzy, the only autonomous software development platform built for enterprise code bases.

10:20

Your engineers define the project, a new feature, refactor, or greenfield build.

10:24

Blitzy agents first ingest and map your entire code base.

10:27

Then the platform generates a bespoke agent action plan for your team to review and approve.

10:31

Once approved, Blitzy gets to work autonomously generating hundreds of thousands of lines of validated

10:35

end-to-end tested code.

10:37

More than 80% of the work completed in a single run.

10:40

Blitzy isn't just generating code; it's developing software at the speed of compute.

10:44

Your engineers review, refine, and ship.

10:46

This is how Fortune 500 companies are compressing multi-month projects into a single sprint, accelerating engineering velocity by 5x.

10:52

Experience Blitzy firsthand at blitzy.com.

10:55

That's B-L-I-T-Z-Y.com.

10:58

So, coding agents are basically solved at this point.

11:01

They're incredible at writing code.

11:03

Here's the thing nobody talks about.

11:05

Coding is maybe a quarter of an engineer's actual day.

11:08

The rest is stand-ups, stakeholder updates, meeting prep, chasing context across six different tools.

11:13

And it's not just engineers.

11:15

Sales spends more time assembling proposals than selling.

11:17

Finance is manually chasing subscription requests.

11:20

Marketing finds out what shipped two weeks after it merged.

11:22

Zencoder just launched Zenflow Work.

11:25

It takes their orchestration engine, the same one already powering coding agents, and connects it to your daily tools.

11:30

Jira, Gmail, Google Docs, Linear, Calendar, and Notion.

11:33

It runs goal-driven workflows that actually finish.

11:36

Your stand-up brief is written before you sit down.

11:38

Review cycle coming up?

11:39

It pulls six months of tickets and writes the prep doc.

11:41

Now you might be thinking, didn't OpenClaw try to do this?

11:44

It did, but it has come with a whole host of security and functional issues which can take a huge amount of time to resolve.

11:49

Zencoder took a different approach.

11:51

SOC 2 Type 2 certified, curated integrations, a tighter security perimeter, enterprise-grade from day one,

11:57

model agnostic, and works from Slack or Telegram.

12:00

Try it for free at zencoder.ai.

12:02

Today's episode is brought to you by Granola.

12:05

Granola is the AI notepad for people in back-to-back meetings.

12:08

You've probably heard people raving about Granola.

12:10

It's just one of those products that people love to talk about.

12:13

I myself have been using Granola for well over a year now,

12:16

and honestly it's one of the tools that changed the way I work.

12:18

Granola takes meeting notes for you without any intrusive bots joining your calls.

12:22

During or after the call you can chat with your notes, ask Granola to pull out action items,

12:26

help you negotiate, write a follow-up email, or even coach you using recipes which are pre-made prompts.

12:31

Once you try it on a first meeting, it's hard to go without.

12:34

Head to granola.ai/aidaily and use code AIDAILY.

12:38

New users get 100% off for the first three months.

12:42

Again, that's granola.ai/aidaily.

12:45

Welcome back to the AI Daily Brief.

12:49

One of the big themes of the year is the heightened stakes around everything with AI.

12:53

Obviously we're seeing that from a technology perspective as agents come online,

12:56

and then the implication of agents coming online is that it raises the stakes from a work perspective.

13:00

And then of course as the stakes get raised from a work perspective,

13:03

we have the stakes raised on the politics of AI as well.

13:06

And that's even before we get into all of the other AI politics issues,

13:09

even beyond implications for jobs, which are becoming more and more a part of the public discourse.

13:14

Now in all of this raised stakes, part of the impact is greater divides between people

13:18

who sit in different spaces relative to all of these changes.

13:21

And by that I mean everything from the difference between leaders and laggards in the corporate sphere

13:25

to optimists and pessimists in the public sphere.

13:28

And if you look carefully, this great divergence is showing up in all sorts of different places.

13:32

We're looking at two of them today in recent studies that have come out

13:35

with the first being the annual Stanford Artificial Intelligence Index report.

13:39

This annual report comes out of the Stanford HAI, or Center for Human-Centered Artificial Intelligence, and

13:45

is generally seen as a very comprehensive and high-level look

13:48

at the state of AI both internal to the industry as well as where it sits in society.

13:52

And this year tells the divergence story in very clear terms.

13:55

The report itself is massive, something like 420 pages long.

13:59

And all across the headliner topics you see this divergence.

14:02

On their website summary, one of the big themes that they point to is AI experts and the public

14:06

having very different perspectives on the technology's future.

14:09

So let's talk about some of these gaps.

14:11

A representative gap that they point to is the difference in the way that experts

14:14

versus the general public view AI's likely impact on how people do their jobs.

14:18

When asked how AI would impact how people would do their jobs,

14:21

73% of experts expect a positive impact compared with just 23% of the public.

14:27

When expanded out, this gap between experts and the general public shows up all over the place.

14:33

In addition to that gap we just heard about in terms of how people do their jobs,

14:36

the economy more broadly sees a similar gap.

14:39

69% of AI experts say that AI will have a positive impact on the economy over the next 20 years

14:44

compared to just 21% of US adults.

14:47

Medical care is where the general US public is the most optimistic, with 44%

14:51

saying that AI will have a positive impact, but that is still far smaller than the 84% of AI experts who say the same.

14:57

On K-12 education, it's 61% optimism for the experts versus 24% for US adults.

15:02

And pretty much everyone thinks it's going to be bad for elections,

15:05

with just 11% of AI experts saying that AI will have a positive impact on elections,

15:09

which is their closest number to the general US public of whom only 9% think that it will have a positive impact.

15:14

And other parts of the study show pessimism in more acute ways.

15:18

When asked whether AI will create or eliminate jobs, almost a full two-thirds of US adults believe that it will lead to fewer jobs,

15:24

although perhaps surprisingly 39% of AI experts also think that it will lead to fewer jobs.

15:29

Another interesting area of divergence is the gap between formal education for AI and informal education for AI.

15:35

Stanford points out that while over 80% of US high school and college students now use AI for school-related tasks,

15:41

only half of middle and high schools have AI policies in place, and just 6% of teachers say that those policies are clear.

15:48

Basically everyone is getting their AI skills outside of the formal classroom setting,

15:52

and of course reporting them on LinkedIn.

15:54

One area where AI is not diverging is in the performance of top US versus Chinese models.

16:00

In fact, it would be much more accurate to call that a convergence, although we'll have to see if that remains,

16:05

once we actually get Anthropic's Mythos and OpenAI's Spud.

16:08

Staying on AI's performance for a moment, Ethan Mollick has often referred to AI as having a jagged frontier.

16:14

Basically it can be massively good in some things, including really hard things,

16:18

and be just pathetically awful at some other things that it seems like it should be good at at the same time.

16:23

This is actually one of Stanford's big takeaways as well, where AI models can win a gold medal at the International Math Olympiad,

16:29

but not reliably tell time.

16:31

Now this jagged capability frontier can also lead to jagged adoption, especially inside the enterprise,

16:36

as organizations have to individually figure out where AI does and doesn't fit within what they do.

16:42

One important area of divergence that is obviously very top of mind for people,

16:46

Stanford sums up as productivity gains from AI appearing in many of the same fields where entry level employment is starting to decline.

16:52

They write, studies show productivity gains of 14 to 26% in customer support and software development,

16:58

and in areas like software development, where AI's measured productivity gains are clearest,

17:02

US developers ages 22 to 25 saw employment fall nearly 20% from 2024,

17:07

even as the headcount for older developers continues to grow.

17:10

And so here we're seeing not just divergence between productivity gains and employment,

17:13

but actually divergence between different types of employment,

17:16

with early stage employees going one direction and older employees going the other.

17:20

Now if Stanford is showing this story of divergence on the very biggest macro levels,

17:24

AI's great divergence is also very acutely captured at the enterprise level by a new study from PwC.

17:30

The study is PwC's annual AI performance study,

17:34

and the headline stat is that around 75% of AI's economic gains are being captured by just the top fifth of companies.

17:42

This is one of the clearest indicators I've seen yet of the difference between leaders and laggards

17:46

when it comes to corporate AI adoption.

17:48

This comes from a study that interviewed more than 1200 senior executives,

17:52

whom PwC says are primarily at large publicly listed companies.

17:56

And what's really interesting about this study is that the difference between efficiency AI and opportunity AI,

18:01

which we talk about fairly regularly on this show, is on full display.

18:05

And now by way of reminder, efficiency AI is my term for companies that view AI as a way to do the same with less.

18:11

Basically whose primary interest is in having the same amount of output with less resource input.

18:16

Opportunity AI on the other hand is the idea not of doing the same with less,

18:21

but of doing more with the same or way more with a little more.

18:24

Basically that recognizes that the real opportunity with AI is to go harness new opportunities,

18:29

do things that weren't possible before, get into new orthogonal fields, release new products, do more R&D,

18:35

grow towards the future rather than make the present more efficient.

18:38

And boy is that on display in this PwC study: they found that leading organizations were twice as likely to redesign workflows.

18:45

They found that leading companies were approximately two to three times more likely to use AI to identify and pursue growth opportunities and reinvent their business model.

18:55

They sum up the research shows that these top performing companies are not simply deploying more AI tools.

19:01

Instead they are using AI as a catalyst for growth and business reinvention, particularly by pursuing new revenue opportunities created as industries converge,

19:09

while building strong foundations around data governance and trust.

19:13

Now interestingly, one might think that this is all about just using AI for more.

19:17

And certainly that's part of it.

19:19

The companies in their survey that had the best AI driven financial outcomes were twice as likely to be executing multiple tasks within guardrails,

19:25

and about twice as likely to be allowing AI to operate in autonomous self-optimizing ways.

19:29

They were increasing the number of decisions made without human intervention, at almost three times the rate of their peers.

19:35

And yet the story is a combination of automation but also governance.

19:39

These leaders were 1.7 times as likely to have mechanisms such as responsible AI frameworks and one and a half times more likely to have cross functional AI governance boards.

19:47

In addition to doing more with AI, the employees of these leaders are twice as likely to trust AI outputs as those of the laggards.

19:55

Overall, PwC found that the companies that were the most AI-fit in their research

20:03

delivered AI-driven financial performance that was 7.2 times higher than that of other respondents.

20:11

As AI continues to proliferate through society, we're going to continue to see these kinds of divergences.

20:15

In some cases, particularly in the areas of policy, divergence can actually be helpful.

20:19

It can inspire better debate, and if we have the right systems in place, better more considered action.

20:25

In some areas however, the divergence is dangerous.

20:27

Divergence which turns into underperformance can threaten individual employees and organizations as a whole.

20:31

That's going to do it for today's AI Daily Brief.

20:33

Appreciate you listening or watching as always and until next time, peace!