Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing

2026-04-10 11:00:00 • 1:04:06

-

Maybe that's an urgent message from your CEO.

0:02

Or maybe.

0:04

It's a deepfake trying to target your business.

0:06

Doppel is the AI-native social engineering defense platform

0:10

fighting back against impersonation and manipulation.

0:13

As attackers use AI to make their tactics more sophisticated,

0:17

Doppel uses it to fight back.

0:19

From automatically dismantling cross-channel attacks

0:21

to building team resilience and more.

0:23

Doppel, outpacing what's next in social engineering,

0:26

learn more at doppel.com.

0:28

That's d-o-p-p-e-l.com.

0:31

Casey, I got a haircut yesterday.

0:32

Thanks for noticing.

0:33

Kevin, it looks extraordinary.

0:35

Has this ever happened?

0:36

I went into the barber.

0:37

I sat down in the chair.

0:38

He did not ask me what I wanted.

0:40

He just started cutting.

0:42

Has this ever happened to you?

0:43

No, because they know I'm not straight.

0:45

With a straight guy, you don't need to ask them.

0:48

You just get the standard haircut

0:50

the man gets.

0:51

He one-shotted my hair.

0:53

He said, yeah, I've seen this before.

0:55

I know what I'm doing here.

0:57

Where does the barber lock in?

0:58

It's like, okay, let me get out the schematics.

1:00

It's also like you two dead-hers.

1:02


It's not like he knew me.

1:05

See, this is exactly it. The fact that you just go to random barbers

1:10

and will accept whoever happens to be there.

1:12

This is why they can just start cutting your hair.

1:15

Oh, who is, yeah, I don't know this person.

1:17

Yeah, do whatever the hell you want.

1:18

See if I care.

1:19

That is the straight approach to hair.

1:22

But it's working great for you.

1:23

Thank you.

1:24

Appreciate it.

1:27

I'm Kevin Roose, a tech columnist

0:30

at The New York Times.

1:31

I'm Casey Newton from Platformer.

1:33

And this is Hard Fork.

1:34

This week, the dangerous new AI model

1:36

that has cyber security experts on high alert,

1:39

then New Yorker writers Ronan Farrow

1:41

and Andrew Marantz join us

1:43

to discuss their spicy new profile of Sam Altman.

1:46

And finally, it's time for one good thing.

1:49

Although I guess really there are two things in the segment.

1:51

Yeah, we should really rename the segment.

1:53

Okay.

1:56

Okay.

2:04

Casey, we have a big announcement.

2:06

Kevin, what is the announcement?

2:07

We're ending the show.

2:09

No.

2:11

No, you're finally free of America.

2:14

Yes.

2:16

No, on June 10th in San Francisco,

2:19

we are doing the second ever installment of Hard Fork Live.

2:23

It's too fast.

2:24

It's too furious and it's happening.

2:27

I tried to get them to let me call it

2:29

Too Hard to Fork, but they decided that was not appropriate.

2:33

Kevin, where can people get more information

2:35

about Hard Fork Live 2?

2:36

Okay.

2:37

It's happening on June 10th in San Francisco

2:40

at the Blue Shield of California Theater.

2:42

Bigger venue than last year.

2:45

Tickets will be on sale at nytimes.com slash events,

2:49

not today, but next Friday, April 17th.

2:53

So we're giving you a full week to get your act together,

2:56

reach out to all your friends,

2:58

use Meta AI to plan a trip to California,

3:01

use Claude Code to build your scraper bots

3:05

to scoop up all the tickets.

3:06

And on Friday, the 17th, you can buy tickets.

3:09

And we will just say in advance,

3:12

last year the tickets did sell very quickly.

3:13

They did.

3:14

So get in there quickly if you want to go.

3:16

There would be more tickets available,

3:17

but Kevin reserves 50 for quote his team,

3:20

which I don't even know what all these people are doing

3:22

at this point, but they'll be there.

3:24

You say hi to them too.

3:25

So get your tickets next Friday, April 17th at nytimes.com slash events.

3:33

Well, Casey, as you know, on this podcast,

3:36

we have a rule about discussing AI models called

3:40

Ship It or Zip It.

3:41

Ship it or zip it: unless you're actually putting it

3:44

in people's hands, we usually do not want to hear about it.

3:46

Yes, but today we are making an exception

3:49

for the new anthropic model,

3:52

Claude Mythos Preview, which was just announced,

3:56

but not released for reasons that we will talk about.

3:59

But first, since this will be a segment and a show about AI,

4:04

our disclosures: I work for The New York Times,

4:06

which is suing OpenAI, Microsoft, and Perplexity

4:08

over alleged copyright violations.

4:10

And my fiancée works at Anthropic.

4:12

Casey, this is, I want to say like the biggest story

4:16

of the year in AI.

4:17

I know there's been a lot of AI news.

4:19

I know that people are probably saying,

4:21

oh, here they go, talking about another model again.

4:24

I am telling you, this is something that people

4:26

need to be paying attention to because of the implications,

4:29

because of the way it was rolled out,

4:31

and because of the model itself,

4:32

which we will get to all of that.

4:34

But do you agree that this is a big deal?

4:36

Well, you know, when we were talking about the show this week

4:38

and we were kicking around the idea of like,

4:40

hey, exactly how big do we think this is?

4:42

You pointed out that one question people have been asking

4:44

this week is, are we going to have to rewrite all software?

4:48

And I feel like usually when folks are kicking that question

4:50

around, it's a big story.

4:52

Let's just talk through what was actually announced this week.

4:55

So on Tuesday, Anthropic announced that it was starting

5:01

something called Project Glasswing.

5:04

The name Project Glasswing refers to the glasswing butterfly,

5:07

which has transparent wings.

5:09

And so it can hide in plain sight.

5:11

And that is thematically important for reasons

5:13

that we will come back to.

5:14

It's also a delicacy in some countries.

5:16

I've never had glass wing butterfly.

5:18

I've got to try it.

5:20

So notably, they are not releasing this model to the public

5:24

because they claim it is too dangerous to do that.

5:27

Instead, they are giving access to a consortium of tech companies,

5:32

including Cisco and Broadcom, makers of internet infrastructure,

5:36

as well as Microsoft, Apple, Amazon.

5:40

Basically every big tech company that is not OpenAI

5:43

or Meta is getting access to this model.

5:47

But not general access, just access to do defensive

5:51

cybersecurity testing, basically to go out and harden

5:55

their systems and their infrastructure and their software

5:59

before the general public can get its hands on this model.

6:02

So what are some examples of what Mythos was doing

6:05

in training that so alarmed Anthropic

6:08

that it came to this point?

6:09

So Anthropic has been running this model internally

6:12

for several weeks now.

6:14

And they claim that this thing has found vulnerabilities

6:17

in every major operating system and web browser.

6:21

They gave some examples that have already been patched.

6:25

One of them was that this model apparently found

6:28

a 27 year old security flaw in OpenBSD.

6:32

OpenBSD is an open source operating system

6:36

that runs on firewalls and routers.

6:39

It is sort of like a critical security layer on the internet.

6:43

And it was designed specifically to be hard to hack.

6:47

And this model because of its advanced coding

6:50

and reasoning capabilities was able to find this bug

6:53

that 27 years worth of professional security researchers

6:56

had not been able to find.

6:58

What else?

6:59

Another example was that it found a bug

7:03

in a piece of popular open source video software

7:06

called FFmpeg that had, according to Anthropic,

7:10

been scanned for bugs five million times

7:13

by automated security tools without finding

7:16

this critical exploit.

7:19

And that's why it's important to always look

7:20

a five-million-and-first time,

7:22

because you might find something.
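For listeners who want a feel for what those "automated security tools" actually do: a large share of them are fuzzers, programs that hammer a parser with random inputs and flag any input that makes it crash rather than fail cleanly. Here is a toy sketch in Python; the parser and its planted off-by-one bug are invented for illustration and have nothing to do with FFmpeg's actual code.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a planted off-by-one bug (invented for illustration)."""
    if len(data) < 1:
        raise ValueError("too short")
    n = data[0]                # untrusted length field taken from the input
    payload = data[1:]
    if n > len(payload):       # bug: should be >=, so n == len(payload) slips through
        raise ValueError("bad length")
    return payload[n]          # reads one byte past the payload when n == len(payload)

def fuzz(rounds: int = 20_000) -> int:
    """Throw random byte strings at the parser; count unhandled crashes."""
    rng = random.Random(0)     # seeded so the run is reproducible
    crashes = 0
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parse_header(data)
        except ValueError:
            pass               # clean, expected rejection: the parser handled it
        except IndexError:
            crashes += 1       # unhandled crash: a fuzzer would flag this input
    return crashes

print(fuzz())
```

Even this blind, random approach trips the planted bug; real tools layer in coverage feedback and input mutation, and the point being made in the episode is that a capable model can go further still, by reading the code and reasoning about it.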

7:24

Now, Casey, I think for people who are not

7:26

cybersecurity experts, it might be worth sort of

7:30

sketching the context here for like how software works.

7:34

So every piece of software, every operating system,

7:38

every app, every web browser that people use,

7:42

is built on a mixture of tools.

7:45

Some of those tools are proprietary to the companies

7:48

that make the software.

7:49

Some of them are sort of shared open source tools

7:52

that are just in everything.

7:54

Companies will just grab this open source thing

7:56

and plug it into their thing.

7:57

Because that's compatible with everything

7:59

and it'll save you a lot of time and trouble.

8:00

It's already been security tested, sometimes

8:02

by decades' worth of researchers.

8:04

And this is sort of a big piece of kind

8:06

of the foundation layer of the internet

8:08

or these open source software projects.

8:10

What is happening now, according to Anthropic,

8:13

is that they can basically use this model,

8:17

Claude Mythos Preview, to sort of proactively go out

8:21

and find all of the unfound bugs, they call these

8:24

zero-day exploits, with a sort of speed and efficiency

8:28

that no human security research team could do.

8:32

Yeah.

8:33

And, you know, I would say that it can be difficult

8:37

to talk about cybersecurity in a way

8:39

that resonates with people for a couple reasons.

8:41

One is just that cybersecurity as a field

8:45

exists essentially almost entirely to alarm people

8:48

and say, here are a bunch of problems

8:50

and these are really scary.

8:51

You know, I hope that folks in the cybersecurity field

8:54

would not mind me saying, like,

8:55

it is just like kind of an alarmist profession

8:57

and that when I've talked to these people

8:58

over the last 15 years, they've been telling me like,

9:00

look, the entire internet is held together with spit and glue

9:03

and we're very lucky that there hasn't been a catastrophe yet.

9:05

Okay.

9:06

So after all of this news came out,

9:09

I was like, I want to talk to some people

9:11

who are at least not working for Anthropic

9:14

or this consortium to try to give me a gut check

9:17

on how big a deal this is.

9:18

And so I talked to Alex Stamos, who formerly led security

9:21

at Yahoo and then Facebook.

9:23

And Alex said, like, yes, this is a big deal.

9:25

And he was hoping for a long time

9:27

that we would see a consortium come together like this

9:30

because of exactly what you just said, Kevin,

9:33

the intelligence in these machines

9:35

and their ability to work autonomously

9:38

are now great enough that they can chain together

9:40

exploits that human beings either would never see

9:44

would take them a long time to see

9:45

or they would just never get to

9:47

because we're limited in ways that these machines are not.

9:50

So that got my attention.

9:51

Now, we should also talk about like,

9:53

what the strategy is here from Anthropic

9:56

because I think a lot of people see an AI company

9:59

that is known for sort of being alarmist about safety

10:03

say we've created this powerful spooky new model

10:06

and we're not gonna show you

10:07

because it's too powerful and spooky

10:10

as some kind of marketing tactic.

10:12

So I just want to say, like,

10:14

that is not, to my understanding, the case here.

10:17

No, in my mind, it is obvious why,

10:20

like if you're a corporation and you release a tool

10:23

and people with no real technical expertise

10:26

are able to use it and within a few hours

10:28

discover a novel exploit in the Linux kernel

10:31

and then take over other people's machines

10:33

to commit crimes, you might be held liable as a corporation.

10:36

You will get in trouble.

10:38

Like there will be congressional hearings.

10:40

So companies just in their rational self interest

10:43

do not want to sell cyber weapons on the open market.

10:47

Yes, it's also like if this was a marketing strategy

10:50

it is a horrible marketing strategy.

10:53

Like the government already thinks

10:55

you're a bunch of panicky doomers.

10:58

You have a new model that you claim is the most powerful

11:00

model in the world.

11:01

So instead of selling it,

11:04

you give a hundred million dollars of

11:06

Claude credits away to a consortium of companies

11:09

that includes many of your competitors,

11:11

which is what Anthropic is doing.

11:13

That is not how I personally would market a spooky new model

11:16

if I were in the business of marketing spooky new models.

11:19

Yeah, now look, it may be that despite everything

11:21

that we just said, there is still some marketing benefit

11:24

to Anthropic from doing this, right?

11:26

Like we know that they saw a huge increase in their revenue

11:30

after they took that stand against the Pentagon.

11:32

And in that stand,

11:34

They said like we are determined to do things

11:36

in a really safe way.

11:37

It seemed like the business world really liked that.

11:39

And so I could imagine there being a business benefit

11:42

to Anthropic of coming out and saying,

11:43

we have the most powerful model in the world

11:45

and we're not releasing it.

11:46

Like yes, I'm sure that there are plenty of businesses

11:48

that are salivating over the chance to get their hands on it.

11:51

But they can't unless they are part of this consortium.

11:54

So they are at least claiming that they are trying

11:56

to get ahead of what they envision will be a reckoning

12:01

for cybersecurity; that was the word they used.

12:04

And it seems plausible to me that in the next kind of six

12:09

ish months, every major piece of software in the world

12:14

is going to need to be patched, rewritten, and re-released.

12:19

So just an absolutely massive project.

12:22

Let me ask you this.

12:22

You know, Alex Stamos, the security expert that I mentioned,

12:26

told me that he sees essentially like two broad possibilities.

12:31

One is, and this is the good scenario,

12:34

there are a finite number of critical bugs

12:37

and vulnerabilities to be found.

12:39

And that maybe if we all work really, really hard

12:42

over the next six months or however long it turns out to be,

12:45

we will be able to patch those vulnerabilities

12:48

and our infrastructure will remain safe and stable.

12:51

The other possibility is that this model is already good enough

12:54

that it can just simply invent exploits

12:56

that we never would have thought of.

12:58

And so this will essentially just be a really, really big problem

13:01

that potentially just keeps growing in scope

13:03

because maybe eventually you hit some sort of true superintelligence point.

13:07

So I'm curious if you've talked to people

13:10

about what they see the scenarios are

13:11

and if you have any thought as to which of those two is more likely.

13:15

So I think it's possible that they will patch

13:17

this sort of top 1% of critical software, right?

13:20

The stuff that everyone knows is important.

13:22

Your Linux, your very popular open source libraries,

13:27

your routing equipment and networking equipment.

13:30

Like it seems plausible to me that a couple of companies

13:33

with the right resources and the right models

13:36

could like find and fix the worst security vulnerabilities.

13:39

But I also talk to people who are telling me that

13:42

it's not as simple as that because once you get outside

13:45

that kind of top 1% of critical infrastructure,

13:49

there's just a lot of machines that are running on old code, right?

13:53

So it's theoretically possible that all of these fixes

13:57

could be submitted to the people who maintain these software projects.

14:02

But that, A, there aren't enough humans to review

14:05

all of the proposed bugs and fixes

14:09

so that just sort of is a human bottleneck there

14:12

or that there is just a lag in the time between

14:16

when a piece of software is patched

14:19

and when the person running the router

14:21

at the medium-sized business in Tulsa

14:24

decides to update the firmware or install the security patch.

14:28

So people can expect a lot of apps that are asking them

14:32

to update their software or reinstall their software

14:36

over the next few months.

14:38

I've started getting a few of these already.

14:39

Have you started getting these?

14:41

Yeah.

14:42

So I think this is going to be a kind of forced reset

14:46

for the entire cybersecurity industry

14:50

and a very significant event in the history of technology.

14:55

Yeah, well, and just to make it concrete,

14:56

like we are currently at war with Iran

14:58

and Iran is currently hacking our critical infrastructure.

15:01

There's a story in Wired this week about them successfully hacking,

15:05

like water and energy infrastructure.

15:08

Right now they're able to do that without a Mythos-quality model.

15:11

I would be quite nervous about what they could do

15:13

if something like that fell into their hands.

15:15

So this really is not an abstract concern that we're laying out.

15:19

Right.

15:19

And we should talk about this government piece of this

15:22

because one weird characteristic of this moment

15:27

is that this very powerful advanced model

15:32

that Anthropic claims is capable of doing autonomous

15:36

cybersecurity research and attacks

15:39

is also made by a company that the US government

15:42

has spent the last several months trying to kill.

15:45

And has tried to declare Anthropic a supply chain risk.

15:49

They have ordered all federal agencies to stop using Claude.

15:53

And so my understanding is there have been some conversations

15:57

between Anthropic and parts of the national security

16:01

establishment and apparatus about this model.

16:05

But it is also simultaneously true

16:07

that they cannot use this model

16:10

without sort of running afoul of the administration.

16:13

So a private company right here in San Francisco

16:17

currently has a technology that they claim

16:19

is capable of finding critical security vulnerabilities

16:22

in every major operating system and web browser in the world.

16:26

And the US government to my knowledge

16:27

does not have access to this technology.

16:29

Yeah, it does seem like something that like our national security

16:32

infrastructure would want to have access to.

16:35

One more piece on the regulatory front.

16:38

It is crazy to me that model development

16:42

of this scale and seriousness remains essentially

16:45

unregulated in this country.

16:47

Right here you have a private company saying,

16:50

well, we have now created software that can create

16:53

so many different kinds of novel exploits

16:56

that all software might have to be rewritten.

16:58

And they are not really under any kind of regulatory regime.

17:01

And the regulatory regime that the previous administration

17:04

tried to put into place was thrown out by the current one

17:06

because it might harm American competitiveness.

17:09

So I just want to say that makes me really, really uncomfortable.

17:12

I think that if you are making stuff this powerful,

17:14

regulators ought to be paying attention.

17:16

Yeah.

17:17

One interesting sort of historical note that I'll make here

17:21

is like for the past few years at least

17:26

there has not been a significant gap between

17:30

what the AI companies have built internally

17:34

and what the public has access to.

17:35

Yeah.

17:36

Maybe there's a slightly better model

17:38

that the companies are working on

17:39

that they need to spend a few months testing before they release it.

17:42

Or it runs a little faster than the one that you have access to.

17:45

Yeah.

17:46

But there has not been kind of a significant gap.

17:49

Since I think GPT-2, which was in 2019,

17:53

which involved some of the leaders of Anthropic

17:56

who were then at OpenAI,

17:58

who made a decision to hold back this model, GPT-2,

18:02

out of fears that it could be used for things like

18:05

automating propaganda and misinformation.

18:08

Right.

18:09

In reality, it could barely write a limerick.

18:10

Yes.

18:11

You know, they erred on the side of caution.

18:12

They did.

18:13

And they got a lot of crap for that.

18:15

People sort of said, oh, you're using this to hype your models, some of the same stuff

18:18

we're hearing this week about Anthropic.

18:21

And I think in that case,

18:24

they were probably a little over-excited about what this model could do.

18:29

But they wanted to make sure that they weren't wrong.

18:32

And so they held this back.

18:33

And that created a gap of at least a couple months to maybe a year

18:39

between what the average person could see

18:41

and what was happening inside the AI labs.

18:44

That gap is now open again.

18:46

There is now a model that you and I cannot use

18:48

that our listeners cannot use unless they work at one of these companies

18:51

in cybersecurity defense.

18:53

And we only have what the AI companies are claiming.

18:55

And I think that is just a very tenuous situation.

18:59

And I don't like it, but I also understand why...

19:03

I think in this case, this was the right decision.

19:04

Well, what do you mean when you say that it's tenuous then?

19:07

I think as hostile and suspicious as people feel toward the AI industry,

19:12

that only gets worse if they think that there are secrets

19:15

being kept in a basement that they can't access.

19:18

And I think that it creates paranoia and fear.

19:23

I think that it is generally responsible to have transparency

19:28

from the AI companies about how capable their models are.

19:32

And I understand in this case that Anthropic felt like it had to make an exception.

19:36

But I think this gap may be here to stay

19:41

is the thing that I'm wondering about.

19:44

I think it probably is.

19:45

I mean, it's worth saying that Anthropic was founded on the idea

19:50

that if it could build models that were at the state of the art,

19:54

at the frontier, that it could have some influence over that frontier

19:59

and it could guide it to a safer place than it otherwise might have gone.

20:03

To me, the Pentagon fight and now Mythos are examples of that thesis in action, right?

20:10

Where it made the best model and that gives it some room to try to do a little bit of good.

20:16

So blocking domestic surveillance and autonomous weapons for a little while

20:20

or preventing bad actors from getting their hands on tools that could create novel exploits.

20:29

At the same time, in order to do that, they had to build the model in the first place.

20:34

And there is a risk that there is some sort of, I don't know, intellectual property,

20:39

leakage that sort of somehow all of the innovations that they're building

20:44

are going to trickle down into other places.

20:46

And my fear is just that it becomes this sort of self-fulfilling prophecy, right?

20:50

Where we have to build this frontier even though it's dangerous

20:53

and we're going to guide it to this safer place.

20:55

But you know, you did build the thing in the first place.

20:58

So I just like reminding people of that tension because it is not actually inevitable

21:02

that we build these systems.

21:04

And yet we do often act as if that were the case.

21:06

Yeah.

21:07

Last thing, a lot of the people I know who are plugged into the cybersecurity world

21:13

are being asked right now what people should do about their own security.

21:19

If they are worried that models like this will become public,

21:22


Should they be like locking down all their accounts

21:25

and moving their cryptocurrency into cold storage?

21:28

Like what do you think people should be doing in anticipation

21:32

that something like this will become public?

21:34

You know, it's funny.

21:34

I had a friend ask me that just this morning as I was preparing for the podcast.

21:38

And I said, you know, a couple of things.

21:39

Like one to some extent we're just going to have to wait.

21:43

I mean, to the extent that any of what we've just described is good news,

21:47

it is that the defenders appear like they're going to have some runway

21:50

to fix some really bad problems before the bad guys catch up.

21:54

So I think we should give them a little bit of room to see what they can do.

21:59

If it does emerge that there is a similar model that can wreak havoc

22:03

like rest assured, there'll be segments about it on hard fork

22:06

and we'll have some updated guidance.

22:07

But I asked my friend, do you have a password manager

22:11

and do you use the same password for more than one thing?

22:14

And she said, you know, I've never really been able to get one of those

22:18

password managers to work for me.

22:19

And I do sometimes reuse my passwords.

22:21

So I said like, look,

22:23

if you're looking for something that you can do, just make sure that you

22:26

have done your basic online cybersecurity hygiene, you should use a password

22:31

manager. I use 1Password.

22:32

There are many others out there that are just as good.

22:36

Don't use the same password for anything.

22:38

Your passwords should be randomly generated and not, you know, the name of your

22:41

pet or whatever.
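To make Casey's "randomly generated" advice concrete: at its core, what a password manager's generator does is a few lines of standard-library Python. This is a sketch of the general idea, not any particular manager's implementation.

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    # Draw each character independently from a cryptographically secure RNG;
    # the resulting password has no pets' names, words, or patterns to guess.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

The important detail is `secrets` (a cryptographic random source) rather than `random` (a predictable one), and a length long enough that brute force is hopeless; a manager then stores the result so you never have to remember it.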

22:42

And then use multi-factor authentication where you can.

22:45

Right.

22:46

So don't let anybody get into like your Gmail or your banking account

22:49

just by typing in eight letters.

22:51

You should also be using an authenticator app.

22:54

And so those are some of the basic things that I would tell people to do, Kevin.
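For the curious: the six-digit codes from the authenticator apps Casey mentions are usually TOTP (RFC 6238), a code derived from a shared secret and the current 30-second time window. Here is a minimal standard-library sketch; a real app would also base32-decode the secret from the QR code, which is omitted here.

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter, then dynamic truncation (RFC 4226)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, timestamp: int, step: int = 30) -> str:
    # TOTP is just HOTP with the counter set to the number of 30-second
    # steps since the Unix epoch (RFC 6238)
    return hotp(key, timestamp // step)

# RFC 6238's published test key at t=59 seconds yields the code 287082
print(totp(b"12345678901234567890", 59))
```

The server and your phone each compute the same code independently, so entering it proves you hold the secret without the secret ever being sent, which is why a stolen password alone isn't enough.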

22:57

Yeah.

22:58

I am planning to deal with the possibility of a massive cybersecurity breach

23:03

by just sort of selectively dribbling out incriminating things about myself.

23:06

Okay.

23:07

Just sort of trying to get ahead of any hacks that might expose my, you know,

23:11

emails going back decades or anything like that.

23:13

So I'll just say in that spirit, I used to like the Black Eyed Peas.

23:19

And I still do.

23:20

Let's get it started.

23:21

Now that was a critical vulnerability that I just exposed.

23:27

When we come back, we'll talk to New Yorker writers Ronan Farrow and Andrew

23:30

Marantz about their investigation into Sam Altman.

23:33

I also said some stuff about you.

23:35

Oh boy.

23:50

I'm going to get it started.

23:55

Most all-in-one HR systems are a patchwork of disconnected and manual tools.

23:59

Rippling is totally automated.

24:01

If you promote an employee,

24:03

Rippling can automatically handle necessary updates, from payroll taxes and

24:07

provisioning new app permissions to assigning required manager training.

24:10

That's why Rippling is the number one rated human capital management suite on G2,

24:15

TrustRadius, and Gartner.

24:16

If you're ready to run the backbone of your business on one unified platform,

24:20

head to Rippling.com slash hard fork and sign up today.

24:24

That's R-I-P-P-L-I-N-G dot com slash hard fork to sign up.

24:28

Hard Fork is supported by Attio, the AI CRM that knows what's going on.

24:33

Set up in minutes, get powerfully enriched insights and surface context on every

24:38

deal.

24:38

Need to prep for a meeting?

24:39

Done.

24:40

Got a follow-up to write?

24:41

Drafted. Ready to close the deal?

24:43

Just ask Attio, with universal context.

24:46

That's Attio's intelligence layer.

24:47

You can search, update and create with AI across your entire business.

24:52

Ask more from your CRM.

24:53

Ask Attio.

24:54

Try Attio for free by going to attio.com slash hard fork.

24:59

That's ATTIO.com slash hard fork.

25:03

Thousands of businesses from early stage startups to Fortune 500s are choosing to

25:08

build their websites in Framer.

25:10

Changes to your Framer site go live to the web in seconds with one click,

25:14

publish without help from engineering, helping your team reduce dependencies

25:18

and reach escape velocity.

25:20

Learn how you can get more out of your.com from a Framer specialist or get

25:25

started building for free today at Framer.com slash hard fork for 30% off a

25:30

Framer pro annual plan rules and restrictions may apply.

25:35

Well, Casey, the talk of the town in San Francisco this week has been,

25:40

well, there have been two talks of the town.

25:42

One we already covered in our A segment; that was the Claude Mythos news.

25:44

This town conducts multiple conversations at the same time.

25:48

We're amazing at multitasking.

25:51

The other big talker this week has been this big piece in the New Yorker

25:55

about Sam Altman.

25:57

Yes, more than 16,000 words devoted to a question that has come up

26:00

once or twice on Hard Fork, Kevin, which is: can Sam Altman be trusted?

26:05

Yes, the writers on the piece are Ronan Farrow,

26:10

famous for his work on the Harvey Weinstein investigation and others.

26:13

And Andrew Marantz, who is a good friend of mine and a longtime writer at the New

26:18

Yorker, they worked on this piece for a very long time.

26:22

Talk to many, many people in and around Sam's orbit and tried to answer the

26:28

question of like, who is this guy?

26:30

Yeah.

26:30

And also, why does that matter?

26:32

Right?

26:33

We're talking during a week where these systems have arguably experienced a

26:37

step change in what they can do.

26:39

And I think those kind of advances just naturally should draw more scrutiny

26:43

onto the people running these companies.

26:46

What do they know about who they are, how they operate, are they honest with

26:49

each other?

26:49

And this piece offers one of the more comprehensive portraits that we have had

26:54

so far, I would say, on that question.

26:56

You know, Ronan Farrow investigating you has to be one of the scariest experiences.

26:59

You know, you pick up the phone.

27:01

It's like, hi, it's Ronan.

27:02

But it seems hot too, you know? That's what everyone wants, is just a

27:06

really handsome man asking them a lot of questions.

27:09

You know, okay.

27:12

So let's bring in Ronan Farrow and Andrew Marantz.

27:23

Ronan Farrow and Andrew Marantz, welcome to Hard Fork.

27:25

Thank you guys.

27:26

Happy to be here.

27:27

I mean, truly long time first time.

27:29

And in fact, I brought receipts to that effect.

27:33

This is your show, you can take or leave this in the edit, but I wanted to show

27:39

what a devoted, long time fan I am of Hard Fork.

27:42

I know the show well.

27:44

I know you guys like merch and I know you guys like disclosures, but you don't

27:48

have any disclosure merch to my knowledge.

27:51

So I had these made for you.

27:53

Come on.

27:55

One for you.

27:56

One for you.

27:57

I'm going to put it in the mail after we get off.

27:59

But one of them says I work for the New York Times, which is

28:02

suing OpenAI, Microsoft, and Perplexity for alleged copyright violations.

28:06

The other one says, and my fiancée works at Anthropic.

28:09

Oh my gosh.

28:10

That is amazing.

28:11

So it's time-limited.

28:13

It's going to be a time capsule.

28:17

But I mean, made at the print shop in Brooklyn, one of a kind.

28:20

Existing.

28:22

That's incredible.

28:23

You are here.

28:24

And I also, I think I should also mention, I gave you a hat at your wedding.

28:30

And I gave you one at your wedding.

28:32

So I thought we were even. We have a sort of a theme going on here.

28:35

Okay.

28:35

Right.

28:36

Well, and that's also our disclosure, which is that Kevin and I are buds and have known

28:39

each other forever.

28:39

So actually Casey, you can come to me anytime.

28:43

I know you guys like to rib and roast on the show.

28:46

So you can come to me behind the scenes for any roastable Kevin material.

28:49

My dream has been to get the New Yorker to investigate Kevin Roose.

28:52

So you guys really could not have come along at a better time.

28:56

We're on.

28:56

Don't tempt us.

28:58

I'm not picking up the phone.

29:00

Okay.

29:01

Let's talk about this big piece that you both just published in the New Yorker.

29:07

The title of the piece is can Sam Altman be trusted?

29:11

Now, usually there's this sort of folk rule about headlines that end with question marks,

29:16

which is that the answer is always no.

29:19

So I want to put this question to you.

29:21

Can Sam Altman be trusted?

29:23

Well, I think one important thing to note is the piece is really forensic and even.

29:30

And actually to a point where I've been happy to see there's a range of reactions, right?

29:35

There's people who have answered that question in a very severe way and looked at the fact pattern

29:42

that is laid out here and the documentation that's laid out and said, you know,

29:46

this is someone who poses an acute danger and should be kept away from an authority position.

29:51

And then there's people who, I mean, hilariously enough, my mother called me and she's like,

29:56

you know, I kind of like him.

29:57

And so I think that is a true reflection of our intentions.

30:04

In this case, as you might imagine, there is deep consultation with all of the subjects of the

30:08

reporting to really understand their feelings.

30:12

And anytime we thought there was a persuasive argument from Sam or anyone else that, you know,

30:18

something shouldn't make it in or something would be sensationalist, we really carefully discuss

30:21

that editorially.

30:22

So the result is very even and I would say on the question itself,

30:27

what we lay out is something that is remarkable, I'd say, even against the backdrop of

30:33

the culture of mistrust in Silicon Valley where everybody understands and expects, right?

30:38

That being a founder means telling different audiences different things at times to some extent

30:43

where everyone understands that the entire enterprise is built on hype long before

30:48

there is actual actionable, deliverable product.

30:51

Even against that backdrop, there is an extraordinary preponderance of people who emerge from

30:56

interactions with Sam Altman, including close, years-long ones, with really active complaints

31:01

and allegations that he lies repeatedly about things big and small.

31:07

Well, one of my favorites was when you quote him telling you that he wears a gray sweater every

31:12

day to avoid decision fatigue and then he shows up for his next interview in a green sweater.

31:16

That felt like a really sad.

31:17

That was just for you, Casey.

31:19

I was wondering if you were going to catch that.

31:20

I appreciate that eye for fashion that you so rarely get in these tech profiles.

31:24

Andrew was our fashionista in the writer's room.

31:29

But that's the kind of thing where we didn't want to make too much of that,

31:33

because it's like, oh, we caught you in this deep hypocrisy of choosing a green sweater.

31:40

This is consistent with a lot of the things people say throughout the piece and throughout

31:45

the career of Altman and OpenAI is that there isn't this one smoking gun thing where he's caught,

31:52

you know, with his hand in the cookie jar.

31:54

It's this sort of allegedly longer, more subtle accumulation of facts, which my kind of like

32:00

glib and annoying way of describing it is like the fabled memos and documents that were compiled

32:07

that led to him being fired in 2023 and that have kind of dogged him throughout his career.

32:12

They really shouldn't have been like a secret bullet pointed list.

32:16

They should have been a 16,000-word New Yorker piece, because it only really makes sense

32:20

when you like, lay them all out together in narrative form.

32:23

Yeah, I mean, you guys mentioned in your story that there have been sort of these rap sheets

32:27

that have been circulating about Sam inside OpenAI and other parts of the AI industry

32:34

for years. One of them was compiled by Dario Amodei when he worked at OpenAI under Sam Altman.

32:41

One of them you said was maybe circulated by some allies of Elon Musk and people who are

32:48

opposed to OpenAI. So give us some sort of behind the scenes details about what is being said by

32:56

whom and how and to what ends about Sam Altman in Silicon Valley.

33:02

Well, it was really important to us to filter for the obvious competitive incentives out there.

33:09

There are people who are massively incentivized to go after Sam Altman.

33:14

And the reality is that there are very firmly evidence-based critiques,

33:21

many of which are promulgated not just by the rivals, although they're certainly amplified by

33:26

them happily, but also by more neutral figures and people who are just kind of technologists who

33:32

aren't in the fight. And then there is the White Hot Center of the rivalry, the stuff you mentioned

33:39

that I think is in a very different category, which is Elon Musk and other direct competitors

33:45

really amplifying everything they can come up with. And in some cases, we document things that are

33:51

inflated or trumped up or just seem to not be true. So Elon Musk in particular has intermediaries

33:58

circulating some pretty spicy and pretty unsubstantiated material in Silicon Valley. And we

34:05

talk about that. I really appreciated that about the piece because this has become more salient

34:10

over the past year as these rivalries heat up and you hear more and more of these scurrilous

34:15

rumors. And while I do think this winds up being a pretty damning portrait of Sam on the whole,

34:21

you do also point out that in some very real ways, he's the subject of a legitimate smear campaign.

34:26

Yeah. Oh, yeah. I think that's absolutely accurate. And we were trying not to go in, you know,

34:30

with naivete of like, can you believe business titans are being mean to each other? But like,

34:35

the level of this really does seem kind of shocking and unprecedented. And you know, it's kind of

34:42

consistent with people who think of this as like whoever gets the ring first will control the world.

34:47

Like it just seems like all bets are off. And so as a reporter, it's very challenging to be like,

34:52

do you bring up the scurrilous rumors to knock them down? And so we had like months of conversations

34:57

about how best to do that. So there's been a lot of reporting on Sam Altman, especially around the

35:03

board coup a few years ago. Could you maybe give us like the two or three things that you think

35:09

are new and important from your reporting that rise above the rest in terms of people's understanding

35:15

of Sam Altman and open AI. So I think there are things here that put to rest some of the long

35:22

standing rumors, right? I mean, Altman has always said, and Paul Graham at Y Combinator has

35:29

always said he was not pushed out. He left of his own volition. It really seems from our reporting

35:34

that that was not the case. They have talked a lot about their fundraising in the Gulf in the

35:40

Middle East as innocuous, all businesses do this. It really seems from our reporting that the

35:46

relationships that Sam has cultivated with some Emirati and Saudi royals is deeper than was

35:53

previously realized. Ronan, what am I missing? There are several things like this.

35:57

We just didn't really know in full what was in those Ilya Sutskever memos. We didn't really have

36:06

the detailed multiple-sourced, heavily documented accounts of the individual proof points that were

36:12

offered in those memos. We didn't have the contents of those Dario Amodei notes, and we didn't

36:18

have a lot of these people on the record yet. So I think actually in a way that was a disservice,

36:22

not only to Sam's critics, but also to Sam himself, there was a bit of a veil of mystery, and that

36:28

wasn't purely accidental. One of the things we document that's new here is as a condition of

36:34

the exit of the board members who had moved against Sam, and whom he wanted out, they insisted on an

36:40

outside investigation. What happened there is in my view quite extraordinary, which is yes at

36:46

private companies, sometimes reports of this type when a law firm is brought in to restore legitimacy

36:53

can be kept out of writing, often it's to limit liability, and often legal experts say it's a bit

36:59

of a red flag. This is a different kind of case. This isn't just any private company. This is a

37:04

high profile scandal that engulfed Silicon Valley when Sam was fired. And at a non-profit.

37:11

At a 501(c)(3), exactly. So there were stakeholders, not just in the public, but within this company,

37:18

that would be the bare minimum threshold, where senior executives thought, okay, we're going to get

37:24

some kind of at least detailed summary of what this law firm investigation found when they

37:30

invoked it to rubber-stamp Sam coming back. And instead, what happened was an 800-word press release

37:37

that said there had vaguely been a breakdown in trust and offered very few other details.

37:42

And what we reported in this piece for the first time is there wasn't a report. For years,

37:46

people were like, where's the report? Where's the report? There wasn't a report because it was kept

37:49

out of writing. And this is no longer just a speculative supposition. One of the two board members

37:55

Sam helped select to oversee this process just explicitly says, well, a written report was not

38:02

needed, and that's now their line on this. Yeah, I'm glad you brought it up. It was actually my favorite

38:06

detail in the piece because it was something I'd been curious about forever. I mean, the thing that I

38:11

found most interesting from the piece were the people who spoke on the record, or at least gave

38:16

you quotes. Some of them were unattributed about Sam who, I think previously might have supported

38:23

him or at least felt like there was no upside in sort of talking about him in a negative way in

38:28

public. There was a Microsoft executive quoted in your piece as saying that there's a small but

38:33

real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried-level scammer.

38:39

There's another unnamed board member who said, quote, he's unconstrained by truth and said that he

38:46

has quote, an almost sociopathic lack of concern for the consequences that may come from deceiving

38:52

someone. I haven't been on a lot of corporate boards, but I think that it's something that's

38:56

quite rare to hear a board member say about a CEO of a company. I'm just curious, like when you were

39:03

weighing these statements, did you feel like there are people who used to be fans of Sam, who

39:11

have soured on him, or are these people who have really held a grudge against him for a long time?

39:17

The thing that you point out about people changing their tune over time, I think is an integral part

39:24

of what we document in the piece, which is the fact that Sam Altman comes up through this Y

39:29

Combinator world is not incidental. The fact that he has an investment portfolio of, by his own

39:35

estimation, about 400 other tech companies, the fact that he has sat on everyone's board and

39:40

everyone has sat on his board, I think our sort of line about this in the piece is like,

39:45

we spoke to people who are Sam's friends, Sam's enemies, and given the mercenary nature of Silicon

39:50

Valley, some people who have been both. Given that that's the landscape, you are going to have

39:57

people who changed their tune as the wind blows different ways, and that's a lot of how

40:02

Altman's been able to weather a lot of this stuff in the past. One thing that results from that

40:06

spread of opinions is, to your question about evolving takes on Sam, there's definitely a class of

40:14

nuts and bolts investors, prominent people in Silicon Valley who are really pragmatists,

40:19

not just safetists, and who are growth and business oriented, who told us that at the time of Sam's

40:27

firing, the blip, they gave him the benefit of the doubt, and especially because of the factor

40:33

we talked about before, where there just was a dearth of clear information. In that void, a lot of

40:38

prominent people gave him the benefit of the doubt and saw only upside in bringing him back,

40:43

and removing the board that tried to fire him. There are a number of those prominent people in

40:48

that category now who say, I don't know that I would have given him the benefit of the doubt if I

40:54

knew everything then that I now know. It just strikes me though that everyone who digs into this

40:59

winds up coming back with essentially the same story. You know what I mean? There are not like

41:03

17 versions of Sam Altman out there, depending on which reporter calls which different source.

41:08

I feel like we now sort of know the broad outlines of this person's psychology.

41:14

I don't know. I want to challenge that. I do talk to people who are big fans of Sam,

41:21

some of whom work for him, some of whom don't. Clearly, this is a guy who has been able to,

41:26

at various points, lead very important technology projects and rally people behind a vision.

41:32

These people are not mindless sheep. They're critical in discerning and thoughtful people.

41:38

I don't want to seem like I'm taking Sam's side on anything, but I just like, I think that

41:44

there are a lot of people with very strong feelings about Sam Altman. Positive and negative. I

41:49

think the positive side tends to be more like people defending him in private, and the public side

41:56

tends to be more people criticizing him. But I don't know. I guess for Ron and Andrew,

42:01

do you feel like there are vocal supporters who you came across in reporting the story who had

42:08

no direct employment relationship with OpenAI or Sam, or were leading companies that he

42:14

invested in or something who were like, yeah, this guy seems pretty good and smart and talented.

42:19

Yeah, there was an 11-year-old who used ChatGPT to pass sixth grade.

42:22

Oh my god. No, no. There were legit defenders of Sam on a number of these friends who we

42:31

talked to for sure. I think a lot of this has to do with what baseline expectation are you

42:37

starting from. If you think of this as a business and you start from the premise that people who

42:45

run giant successful businesses have to say a lot of different things to a lot of different people,

42:51

why is anyone even, why is this a story? I think though there's a kind of level setting here where

42:59

one of the things you can do when you take a big sort of putting everything in one place narrative

43:04

effort like this is you can start from the beginning and remember what the original pitch was.

43:10

And when you go back to what the original pitch was, the defense of why are you guys being so naive?

43:16

This is a normal competitive business. Like, okay, so when you pitched this as a nonprofit,

43:23

safety-focused research lab that would aggressively comply with all regulation,

43:28

like, were the people who believed that naive to believe it at the time? So that's when the

43:32

defenses start to feel a little more pressured to me. Yeah, also like for what it's worth,

43:38

you know, it's like, oh, you know, is it really a story that this guy's telling different things

43:42

to so many different groups? It's like, that's not like really a story that gets told about

43:45

Satya Nadella. It's not really a story that gets told about Sundar Pichai. It's not really a story

43:49

that gets told about Tim Cook, right? Like, there does seem to be something really unusual here.

43:52

And my question for you guys now that you've sort of spent so much time immersed in this company

43:57

is, what do you think it means for OpenAI? Well, I mean, luckily we have a really robust

44:02

independent tech media to, you know, so I was going to tune into TBPN and see

44:06

what their independent journalistic take on this would be. Do you want to give listeners

44:12

who may not be familiar with what you're talking about some context? I think the day after our

44:19

piece closed, Ronan, or something like late last week, OpenAI acquired TBPN, which is this big

44:25

sort of tech chat show. So that's one aspect of this answer, right? That as OpenAI expands and

44:30

grows, they seem to be sort of buying up more of the press infrastructure to tell their own story.

44:35

Relatedly, by the way, a lot of announcements over there, right? Yeah,

44:39

concentrated around when they knew we were going to be running, and developed in the period

44:45

where we were in these intensive conversations with them. And many of them sort of pointed at

44:51

the topics in the piece, you know, they announced this new safety fellowship that's very

44:55

airy. They announced this new governance plan that's very sort of airy and ethereal.

44:59

But are meant to, I think, you know, occupy space in the conversation on the same topics.

45:04

And look, I mean, everyone Ronan, you should say more about this, but everyone, including

45:09

Altman and the OpenAI execs, we spoke to, recognizes the economic pressures here. I mean,

45:15

I think you guys were there when he said, oh, yeah, it's definitely a bubble and someone's

45:19

going to lose a phenomenal amount of money, right? So even putting sort of the sci-fi

45:24

Skynet stuff aside, you know, the economic pressures are unavoidable. And a lot of it has to do

45:32

with this sort of pitch man rhetoric, the exact thing we're talking about, right? Because these

45:38

things are contingent. It's not like, oh, will it be a bubble or not? It's like, how hyped up will

45:43

the cycle get, and that's a byproduct of how people like Sam go around the world talking about it.

45:48

Yeah. I want to ask sort of a basic question that I think people have probably raised with you,

45:56

which is like, why does it matter who Sam Altman is? If what we are talking about is a technology

46:04

that could have profound implications on national security, the economy, potentially the future

46:11

of humanity, it doesn't seem obvious to a lot of people why it matters who is running these

46:18

companies. Because a very nice person who is very honest and very transparent in all their

46:23

dealings could still release a rogue super intelligence that blows up the world. And a very,

46:32

you know, manipulative person could release a very aligned model. And so what we should be paying

46:37

attention to are the models themselves, not the people running the companies that make the models.

46:42

I'm not saying I believe that, but I'm curious, what do you make of that argument that we are focusing

46:47

too much on the humans and not enough on the technology? We probably both have thoughts on this.

46:54

I think I have two. The first of which is it's worth noting that while reasonable minds could

47:01

perhaps differ on the question you just posed, the answer provided by Sam Altman and the founders of

47:06

OpenAI was very clear, which is actually part of the way the entire enterprise was structured when

47:12

it was founded as a nonprofit: they talked a lot about avoiding an AGI dictatorship. They really

47:18

believed that actually the person who gets there first and has the most power over this technology

47:24

is pivotal. The individual's integrity is formative to the way the technology goes and the way it's

47:32

controlled and the way it's used. The other thought that I have is, in my mind, yours is a valid point,

47:39

and more significant than any of this is the structures around these individuals.

47:46

We have a technology emerging that could really affect us all in all of the existential ways you

47:52

just mentioned. And we don't have the regulatory guardrails to keep an eye on these folks. We are

47:58

completely ceding the power to these individual companies and their whims, the mud fight between them,

48:06

the quality control that each of them has or lacks. I think that to me is the big question.

48:13

And the integrity of an individual figures in that and it's important, but it reveals the

48:20

weaknesses in the system. If you have someone who potentially lies all the time, could in the

48:26

eyes of many critics be a danger, the important thing is to have the structures that account for that.

48:32

You know, there's a great quote that you guys have in the piece from one of his former co-workers who

48:38

talks about how Sam now has this track record of setting up these elaborate guardrails to keep

48:44

him in check and then skillfully navigating around them. And it made me wonder if you had seen

48:50

this piece in the information this week about tensions that are being reported between Sam and

48:56

his chief financial officer, Sarah Friar. She's reportedly expressed doubts that OpenAI will be

49:02

ready for an IPO this year. And according to the story, Sam has noticeably and awkwardly excluded

49:08

her from some conversations related to the company's financial plans and kept her out of some key meetings.

49:13

I read that and I was like, well, this is exactly what you guys are writing about in your piece,

49:17

right? You sort of bring in somebody whose job it is to look over the finances of the entire

49:22

company, get it ready for an IPO, but then for whatever reason, we're going to sort of exclude

49:26

her from some meetings. So anyway, I just sort of feel like we really are seeing the exact pattern

49:32

that you guys are writing about now repeating in real time. Yeah. And I mean, just to agree with

49:37

all of this, I think the thing that Kevin's bringing up about given the power of this,

49:42

why are we focusing on one personality? I think that's very legit. I think that this is way

49:47

beyond one person. This is way beyond one personality. It's not like the point of the piece is,

49:52

Sam shouldn't be an AGI dictator, so Elon should or Demis should or whatever, right? It's to point out

49:58

the fact that we're having a discussion about AGI dictators at all is insane. These guys know it's

50:02

insane. And yet this seems to be the race that they see themselves being in. When he was fired,

50:09

he was brought back in part because I think no one could really imagine an OpenAI without

50:14

Sam Altman. Do you think that's still the case? I don't think it's unimaginable anymore. I think

50:20

that part of reaching the scale that they've reached is that you can have a, you know, Steve Jobs

50:28

figure be replaced by a Tim Cook figure, right? It seems like it's inseparable from reaching

50:33

this scale that that becomes at least a possibility in people's minds, right? Ron, I mean,

50:37

does that strike you that way? Absolutely. I think the landscape has changed substantially over the

50:43

period of time we were reporting this story. The fact that gradually more and more people were

50:49

talking openly about this critique is very telling. We report in the piece that there are periodic

50:57

spasms of senior executives at OpenAI talking about succession. Again, of course, naturally the

51:04

company denies this, but also very interesting that in recent forms of that discussion,

51:11

there has been talk about Fidji Simo being sort of the first potential successor candidate who

51:17

could slot into any ideas of that type that circulate. Between our asking about that and the

51:25

piece coming out, obviously, Simo has now gone on leave for medical reasons. There's a lot of

51:32

reshuffling. We see it in the Sarah Friar case. I think you're right to link it to that quote that's

51:37

in the article about constraints being sidelined. Yet, I think these doubts and questions persist

51:46

and are now much more out in the open. On the leadership question, it just strikes me that for

51:52

somebody who I assume wants to stay CEO for a long time, it's interesting to me that he's hired so

51:57

many former public company CEOs to be his top lieutenants. He has the former CEO of Instacart there.

52:03

He has the former CEO of Nextdoor there. He has the former CEO of Slack there. So, you know,

52:08

that's you're bringing a lot of really sort of sharp and pointy elbows into the room when you do

52:12

something like that. I'm trying to tell Sam that there's danger here.

52:18

Pro tip. If you're listening, Sam, you know, there are people in this piece talking about earlier

52:26

tracks of Sam Altman's career where they feel he was deliberately avoiding that. Actually, part of

52:32

what underpinned the terrible, terrible fumbling of the firing effort was a feeling that Sam had

52:39

kind of stacked the board with, as one former member put it, JV people, you know, certainly if

52:47

we're being more charitable than that, people who were unprepared for the ruthless corporate warfare

52:52

that ensued. And, you know, I think one thing that has accompanied the emergence of this as a

52:59

more openly discussed critique is that there's more people around this company, more stakeholders,

53:06

wanting, you know, professionalizing influences in the mix. I have to ask about one detail that I

53:12

loved in the piece, which is that the first time that Sam Altman and Dario Amodei were scheduled to

53:18

meet, they were going to meet at an Indian restaurant for dinner. This was back in, I think, 2015.

53:24

And Sam texted him and said that his Uber had gotten in a crash and he was going to be 10

53:30

minutes late to dinner. Now, you did not editorialize on that in the piece, but knowing you both, I'm sure that

53:37

you went back through the Uber FOIA requests and found the logs of Sam Altman's Uber ride that night.

53:46

Is it your belief that Sam Altman's Uber actually got in a crash?

53:51

I think we're just going to leave that as non editorialized and let it stand right there by itself.

53:59

We also, I will say, I had this conversation and really liked just presenting that uninflected

54:07

for consideration. Okay, if you are the Uber driver who was driving Sam Altman to dinner with Dario

54:14

Amodei and you are listening to this show, we do want to hear from you. We do want to hear your

54:19

side. Hardfork@nytimes.com. We will get to the bottom of this. Well, it's a great piece.

54:28

People should go, read it. Please do not investigate any other AI companies before my book comes out.

54:36

It was a very stressful week for me. Yeah, why don't you guys take a nice long spring summer break

54:41

before you get back? Yeah, look into some politicians or Hollywood executives or something.

54:45

We'll send you some names. Yeah, luckily it takes us as long to write a piece as it takes you to write a

54:49

book. So I think you'll beat us if we do anything else. There's two of you. It should be faster.

54:56

Totally. Ronan, Andrew, thanks so much for coming. Thanks guys. Thanks guys. Your hats are in the mail.

55:05

When we come back, what our Spanish-language friends would call una cosa buena.

55:09

Did you just Google that? No. You plotted it? Yes. Okay.

55:40

The right technology can strengthen human judgment. That's why Deloitte brings together AI and

55:50

data analytics with multi-disciplinary teams who can help you connect the dots across your enterprise

55:56

from risk to operations to customer needs. So opportunities don't slip by and surprises don't spread

56:03

because the smarter your systems, the sharper your instincts. That's how technology makes people

56:08

better at what they do best. Deloitte together makes progress. Learn more at Deloitte.com slash

56:15

together makes progress. The thing about AI for business, it may not automatically fit the way

56:21

your business works. At IBM, we've seen this firsthand. But by embedding AI across HR, IT and

56:29

procurement processes, we've reduced costs by millions, slashed repetitive tasks, and freed thousands

56:35

of hours for strategic work. Now we're helping companies get smarter by putting AI where it actually

56:40

pays off, deep in the work that moves the business. Let's create smarter business. IBM.

56:47

Big jobs don't need 10 different suppliers. It's time for one partner. For every size, finish,

56:54

and bulk order delivered on your schedule, the Home Depot Pro, it's about time.

57:00

Well, Casey, it's been a pretty heavy show today. So we thought we wanted to end on a positive note

57:10

with our segment called One Good Thing. One good thing, of course, our segment where we each talk

57:18

about one thing that's been tickling our fancy lately. Kevin, why don't you go first this time?

57:24

Okay, Casey, I am in love with this space mission. Yes. The NASA Artemis 2 mission, I have been

57:34

totally and earnestly obsessed. My wife was like, you're sure are talking about this space mission a

57:41

lot. I have been glued to this thing. And I have been filled with a childlike glee and wonder

57:50

that I did not know I still had the capacity to feel. Now, what exactly are they doing on this

57:55

mission? Orbiting the moon? They are going further than any humans have gone from Earth before.

58:03

252,756 miles from Earth. And if you're wondering how many miles is that, well, the New York Times

58:11

had a helpful comparison list. And what do they find? You would need a chain of 2.37 billion of Nathan's

58:19

famous hot dogs to cover the distance that this spacecraft has gone from Earth. That's great.

58:23

Something we can all easily visualize. Thank you for that comparison. Casey, I am learning the things

58:31

that I never expected to learn. I've been watching this with my kid. I have become completely obsessed

58:36

with like concepts and terms that I did not know a week ago, including corona structure.

58:41

The terminator line, which I know you're wondering, that sounds scary. Yeah. It's actually the line

58:47

that separates the sunlight side of the moon from the side that is dark. Oh, I also learned that

58:53

we don't call it the dark side of the moon. That's not the preferred astronomical term.

58:58

Yes. We call it the far side of the moon. The far side of the moon. I am obsessed with all

59:03

of these astronauts. There are four of them up there. Victor, Christina, Jeremy, Reid. This is my

59:08

Mount Rushmore. I love these people who I've never met. They are adorable. They are incredibly

59:15

brave. I think we should go to the moon every single year. I think we should give NASA whatever

59:21

budget it needs, because this has reignited my faith in humanity. Absolutely. I also saw

59:27

somebody on social media was posting that because the mission specialist Christina Koch had

59:32

communicated with Houston's Jenny Gibbons during the mission, this mission actually passed the

59:38

Bechdel test, which you don't often see on these missions. So I thought that was cool. I also,

59:45

somebody pointed out they said, you know, the coolest thing about going on one of these missions,

59:48

Kevin would be leaving Florida at 5,000 miles an hour. So that resonated with me as well.

59:54

Okay. You're more interested in the jokes. I am filled with childlike wonder over here.

59:59

I just think this is the coolest thing imaginable. It is very cool. You know, recently, I had an

1:00:06

opportunity to go stargazing. I'm not sure if you've been stargazing recently. I was up on

1:00:11

Mauna Kea on the island of Hawaii. And we had a really cool telescope there with our guide. And I

1:00:18

got to stare at the face of the moon. And it inspired a childlike sense of wonder in me as well.

1:00:23

But it did not make me want to go there because it looked quite bleak actually. You wouldn't go to

1:00:27

the moon. No, there's no Wi-Fi. Okay. Casey, what is your one good thing this week? Today, Kevin,

1:00:35

I want to talk about the only thing that can compete with the moon when it comes to inspiring

1:00:40

childlike wonder in a person. And that is a weather app. Okay. I'm listening. So recently,

1:00:46

I was reading about these entrepreneurs, Adam Grossman, Josh Reyes, and Dan Abrouton. And they

1:00:51

are the team behind Acme Weather, which you probably have not heard of yet. But I bet you've heard

1:00:56

of Dark Sky. Yes. Dark Sky was by consensus the best weather app on iOS. And while it reigned during

1:01:06

the 2010s, and I'm using reigned in the sort of like non-meteorological sense, the non-meteorological

1:01:12

sense, it would tell you whenever it rained. And now I am using the meteorological sense.

1:01:16

Very good apps. Yeah. This app was bought by Apple in 2020, which was like kind of a head

1:01:21

scratcher. Apple already had a weather app. It was fine. And then Apple sort of integrated some

1:01:26

of its forecasts and some of its other features into its weather app. And then shut Dark Sky down

1:01:31

in 2022. And this made people really sad because I think a lot of us feel myself included like

1:01:36

the Apple weather app has never lived up to what Dark Sky was in its heyday. It's like a prediction

1:01:41

market. It's like, there's, you know, maybe it's going to rain. Exactly. Well, so these guys get

1:01:46

back together and they say, frick it, we're doing weather apps again. And they make Acme Weather.

1:01:54

And so you can download this now for iOS. It is apparently coming later to Android. And I know

1:01:59

what you're thinking, Kevin, which is what could you possibly build in 2026 in a weather app that

1:02:05

could differentiate it from all the other weather apps that are already on the market, right? Yes.

1:02:09

Are you wondering this? I'm wondering this. Well, let me tell you a few things. Number one,

1:02:12

they don't just tell you the weather. They show you a range of possibilities in a line chart.

1:02:17

So most of the time, it'll be like, yeah, it's going to be 63 degrees in San Francisco today.

1:02:22

But every once in a while, there's a lot of volatility in all the different signals that they

1:02:25

use to predict the weather. And then you say, okay, I don't actually know what I'm walking into today.

1:02:30

I better bring a couple of layers. This is the weather app for rationalists and other believers in

1:02:34

Bayesian statistics. Exactly. Some of the other things that this app does, they will send you a push

1:02:40

notification if they think there's going to be lightning in your neighborhood. Okay. They will

1:02:43

also do that when they think a sunset is going to be beautiful wherever you happen to be. Wow.

1:02:49

They'll send you an umbrella reminder if it's going to precipitate in the next 12 hours. And they'll

1:02:53

send you a sunscreen alert when the UV index is high. But I'm saving my last two favorites for

1:02:59

the end. Number one, they will send you an alert when the aurora borealis may be visible where

1:03:05

you are. That's beautiful. I haven't gotten that notification yet. But I wake up every day,

1:03:09

hoping I'm going to get my aurora borealis notification. You've got to go to Scandinavia, maybe, I think.

1:03:13

Number two, and this is just in time for Pride: they will tell you when there is a rainbow in

1:03:18

your neighborhood. Wow. Are you kidding me? This is such a good idea for a weather app. Who does not

1:03:24

want to be sitting at your wage slave job? You haven't been outside in like seven and a half hours

1:03:30

and then Acme Weather tells you, hey, guess what? There's a rainbow in your neighborhood.

1:03:34

You're going to book it outdoors and you are going to behold the majesty of creation. How are they

1:03:38

possibly collecting that data? Well, interestingly, they're taking this Waze-like approach where they're

1:03:44

inviting their community to submit reports. And so if a bunch of people say, hey, rainbow in my

1:03:49

neighborhood, they're going to go ahead and send out a notification. Wow. So now, look, this app does

1:03:54

cost $25 a year. And I know, you know, probably most people out there are perfectly content with the

1:03:59

free weather app on their phone. That is fine for you. But as somebody who loves cool things, new

1:04:04

ideas, and people having fun, I just wanted to shout out Acme Weather, because I think it's a really

1:04:09

cool thing. Now, what is the likelihood that this app will be purchased by Apple and then shut down?

1:04:14

I mean, if that happens, I hope these guys get paid again because somebody has to move the weather

1:04:19

app industry forward. And these are the folks who are doing it. I love that. Like, Grandpa, how

1:04:24

did you make your fortune? Well, I built 17 weather apps that were identical and then sold them

1:04:29

all to Apple. I just also think it's inspiring that at time when some companies are like, we're going

1:04:33

to make a system that is going to force the world to rewrite all software. There are other

1:04:37

guys who are like, what if there's a rainbow in my neighborhood? I want to find out about that.

1:04:42

And those are the people that I want to highlight on today's show, Kevin. Okay. Well, download

1:04:46

Acme weather before the heat death of the universe renders weather irrelevant. And tell us whether you

1:04:52

liked it. That was a good thing. Thank you. Thank you for alerting me to this wonderful rainbow

1:04:57

detector. Well, thank you for alerting me to the existence of the moon. I know you weren't a big

1:05:02

believer in the moon before, but hopefully I've convinced you today. Well, somebody told me

1:05:06

something about a sound stage and maybe the landing was fake, so I've just been curious.

1:05:10

I think, you know, we're the only podcasters who actually believe in the moon.

1:05:13

That's our competitive advantage.

1:05:16

Hard Fork: where we believe that people have been to the moon.

1:05:32

The right technology can strengthen human judgment. That's why Deloitte brings together AI and

1:05:54

data analytics with multi-disciplinary teams who can help you connect the dots across your

1:05:58

enterprise from risk to operations to customer needs. So opportunities don't slip by and surprises

1:06:05

don't spread because the smarter your systems, the sharper your instincts. That's how technology

1:06:11

makes people better at what they do best. Deloitte. Together makes progress. Learn more at deloitte.com

1:06:18

slash together makes progress. So there's a lot of noise about AI, but times too tight for more

1:06:24

promises. So let's talk about results. At IBM, we work with our employees to integrate technology

1:06:29

right into the systems they need. Now, a global workforce of 300,000 can use AI to field their HR

1:06:36

questions, resolving 94% of common questions. Not noise. Proof of how we can help companies get

1:06:42

smarter by putting AI where it actually pays off. Deep in the work that moves the business.

1:06:48

Let's create smarter business. IBM. Big jobs don't need 10 different suppliers.

1:06:55

It's time for one partner. For every size, finish and bulk order delivered on your schedule,

1:07:01

the Home Depot Pro, it's about time.

1:07:06

Before we go, we are saying goodbye this week to our wonderful executive producer,

1:07:13

Jen Poyant. Jen has been with the show for years since almost the very beginning and she's been

1:07:20

a critical force in helping us make the show and conceive the show. So Jen is leaving the New

1:07:27

York Times for a new adventure, but we wanted to just give her a special shout out and say,

1:07:32

thank you from the entire Hard Fork team for all of the amazing work you've done.

1:07:36

It's true. Jen has been a friend and mentor to us both and we will miss her terribly, but she

1:07:41

will always be part of the Hard Fork family, which means she has to bring a dish to the potluck.

1:07:47

Thanks, Jen. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Vierran

1:07:53

Povitch. We're fact-checked by Caitlin Love. Today's show was engineered by Chris Wood.

1:07:59

Our executive producer is Jen Poyant. Original music by Marion Lozano, Diane Wong,

1:08:05

Rowan Niemisto, Alyssa Moxley, and Dan Powell. Video production by Sawyer Roque, Pat Gunther,

1:08:11

Jake Nicol, and Chris Schott. You can watch this full episode on YouTube at youtube.com slash

1:08:17

hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dahlia Haddad. As always, you can email us

1:08:23

at hardfork@nytimes.com. Send us your zero-day critical security vulnerabilities. Actually, please don't.

1:08:39

How much time do you waste searching for things you know you've seen? Little Bird is an AI

1:08:54

assistant that remembers everything on your screen, every doc message and website. It builds a

1:08:59

secure memory of your work, so when you need something, you just ask. Little Bird finds it in

1:09:04

seconds, and when you need to create, it uses what it already knows about your work to help you

1:09:09

start stronger. Stop searching. Start creating. Download Little Bird for free at littlebird.ai.