The Future of Addictive Design + Going Deep at DeepMind + HatGPT

2026-04-03 11:00:00 • 1:09:26

-

Dell PCs with Intel inside are built for the moments you plan and the ones you don't.

0:04

There for those all-night study sessions, the moment you're working from a cafe and realize

0:09

every outlet's taken, the times you're deep in your flow and can't be interrupted by an auto

0:13

update. That's why Dell builds tech that adapts to you. Built with long-lasting battery, so you're

0:19

not scrambling for an outlet and built-in intelligence that makes updates around your schedule, not in the

0:24

middle of it. Find technology built for the way you work at Dell.com slash Dell PCs built for you.

0:35

Now, here was a really interesting situation, Kevin. Did you see this robotaxi outage that left

0:41

passengers stranded on highways in China? No. So this happened in Wuhan recently. I've heard of

0:49

that place before. Did they do anything else? It's not clear to me. I'm not really familiar with

0:53

their game, but apparently there was some sort of technical glitch that caused a number of robot

0:59

taxis owned by the Chinese tech giant Baidu to freeze, trapping some passengers in their vehicles

1:05

for more than an hour. And I just thought, my gosh, what a nightmare. Just imagine you're in your

1:10

robotaxi on the way to a wet market in Wuhan. You have an appointment with a pangolin who's going to

1:15

see if they can transmit anything to you. And then your robotaxi freezes.

1:20

It's a nightmare. It's an absolute nightmare. I think that robotaxi outage is definitely the

1:25

worst thing that's ever come out of Wuhan. Yeah. When it comes to these Baidu robotaxis, my advice,

1:31

buy? Don't. Oh, boy. No, that was the worst thing to come out of...

1:41

I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer.

1:44

This is Hard Fork. This week: social media companies keep losing in court. How will that

1:49

reshape the internet? Then, The Infinity Machine author Sebastian Mallaby joins us to discuss his

1:55

new book on Google DeepMind and Demis Hassabis' quest to build superintelligence. Finally,

2:00

it's been a while. Let's catch up with some HatGPT. I missed you.

2:05

You too.

2:16

Well, Kevin, while we were away, I was riveted by what was going on in the courtrooms in

2:22

Los Angeles and New Mexico related to social media. Yeah, it has been a big week for these social media

2:29

product liability trials that have been going on now for some months. And we actually got some

2:34

verdicts. We did. And in both cases, social media lost. In LA, a jury found that Meta and YouTube

2:41

had been negligent in the way that they designed features that they said were harmful to this plaintiff.

2:47

They have to pay $6 million combined to this plaintiff. And then in New Mexico, the jury said,

2:53

we believe that Meta has violated the state's Unfair Practices Act and has misled consumers about

3:00

the safety of its products and has endangered children. In that case, they are ordering

3:04

Meta to pay $375 million. Yeah, so we've talked a little bit about this series of cases against

3:11

the social media companies. Social media companies, they get sued all the time for all manner of

3:16

different things. I think what caught our eye, and specifically your eye, was the sort of legal theory

3:22

underlying these cases. So talk a little bit about that and what makes this case different from

3:27

other cases that have been brought against the social media companies. Yes, so I would say there

3:31

are kind of two big reasons why these cases are super important. One is that these are what are

3:37

called bellwether cases. Kevin, you ever heard of a bellwether case? These are like cases that

3:41

set precedent for other cases, yeah? Exactly. These are the cases that, if successful, are going to

3:46

open the floodgates for lots of other people to sue under the same theory. The second big reason

3:51

that these cases are really important is that they appear to have opened up a crack in section 230

3:57

of the Communications Decency Act here, which for 30 years has been essentially the foundation

4:04

that the entire internet rests on. It's also a dentist's favorite statute. Yes, that's section

4:10

Tooth-Hurty, if the joke wasn't landing for you. So yes, this is a super important statute. So

4:16

glad you got that. No, the really sad part was I was planning my own section 230 joke.

4:21

Oh, because I just went to the dentist yesterday. No, I didn't have any cavities. So tooth and

4:25

not hurty. Moving on. So section 230, Kevin, you may remember, is the law that says that in most cases,

4:33

these platforms cannot be held liable for what their users post. Yes. So if I went on Facebook and

4:38

I defamed you, which is something I think about doing every day, you could sue me, but you couldn't

4:43

sue Facebook. This is what's been blocking my lawsuits against Facebook over your posts for years.

4:48

That's right. And back in the day, like 30 years ago, this was actually really important because

4:53

there were these small internet forums that were starting up. Some of them got to be bigger size,

4:58

you know, CompuServe, AOL. And inevitably somebody would be mean to another user and they would say,

5:03

I'm not just suing you. I'm suing CompuServe. I'm suing AOL. I'm putting the whole system on trial.

5:08

And a couple of lawmakers got together and they said, this is going to destroy the entire internet.

5:11

Like we need for there to be forums and not have these platforms being held liable for all these

5:16

things. But fast forward to today, and Kevin, would you agree that maybe there are some harms that

5:21

are taking place on the internet that do not consist entirely of people defaming one another on

5:26

Compuserve? Yes. Yeah. And so this is essentially the question that gets asked in this case, right?

5:31

People say, Hey, it seems like we're a pretty long way away from 1996. I'm opening up TikTok.

5:37

I'm opening up Snapchat. And I'm seeing infinite scrolling feeds. I'm seeing auto playing videos.

5:43

I'm a teenager, but I'm getting barraged by push notifications in the middle of the night. And

5:48

that's to say nothing of the recommendation algorithms that might be driving me toward content

5:52

related to eating disorders or other things that are going to make me sad and upset. And so some

5:56

of these people get together with their attorneys. And they say, this actually feels different from

6:01

the thing that section 230 was designed to protect, right? This is not about, oh, I got harmed by

6:07

this particular piece of content. This is about the design of the whole platform, the design feels

6:13

defective. And the really crazy thing about these cases, Kevin, is that juries agreed with these

6:19

plaintiffs for the first time. And they said, we like this theory. We think these products are

6:23

defective. Right. So this is kind of a side door that these lawyers have found around litigating

6:29

on section 230, which they have successfully now shown that at least in these cases can convince

6:36

a jury that it is not about what's on the social network content wise. It's about the actual

6:40

like sort of mechanics and plumbing of the social network that are harmful to people. That's right.

6:45

And we should say that we do expect some appeals here and until those are sort of fully exhausted,

6:53

I can't tell you for certain that this is the moment that the internet changed forever. But there's

6:58

been a lot of commentary over the last week about what it would mean if these cases were upheld

7:02

because it seems like juries are just going to be really, really sympathetic to these claims.

7:06

So before we get into the implications, like can I just ask a couple more questions about these

7:11

actual specific cases? Please. So what are the actual platform mechanics that are being litigated

7:19

over here? Yes. So in the LA case, among the design features that were at issue were the

7:25

so-called beauty filters that can make you look quote unquote more beautiful if you use them,

7:31

infinite scroll, autoplay video, these barrages of push notifications that platforms send.

7:38

And also I would argue more problematically the recommendation algorithms that power the

7:42

platform. And then in the New Mexico case, that was much more about kind of child safety. So

7:49

they were arguing that Instagram in particular had become this playground for predators. It was very

7:56

critical of the fact that Meta offers end-to-end encrypted messaging. And the basic idea was

8:02

Meta falsely advertised that these platforms were safe when in reality children are being harmed

8:06

there all the time. So from what I understand, it was like the case was basically

8:13

taken out of the playbook for going against Big Tobacco or another sort of industry that makes

8:18

harmful products. You say this is harmful and not only is it harmful, but the company that was

8:24

making it knew that it was harmful and either made it more harmful or just released it as planned.

8:30

Anyway, I did see some sort of exhibits that had been shown off at the LA trial, I believe,

8:36

where some employees at Meta were sort of talking on their internal forums about how this stuff is

8:42

so addictive for kids. That seems bad. And I imagine that was persuasive with the jury. But are there

8:50

other instances where the platforms are being sort of taken to court over things that they sort of

8:56

knew were harming people and that they either dialed up the harm in an attempt to spike engagement or

9:02

sort of knowingly released these things to the public? Yeah, so I mean, some of this research has come

9:06

up in other litigation over the years, but I think this has been probably the most damaging case

9:12

that we have seen. You know, the first time I remember reading a lot of these internal studies

9:17

was in the wake of the Frances Haugen revelations a few years back, right? Like Frances Haugen walks out

9:21

the door of Meta and takes a bunch of this internal research with her, winds up sharing it with the

9:25

Wall Street Journal and then eventually a bunch of other reporters, including me. The reason that

9:28

the research mattered a lot here, though, Kevin was again, the plaintiffs are now building this

9:33

very specific case, which is you're building a defective product, right? Before the past couple of

9:38

years, we weren't really using this language. We weren't really adopting this sort of public health

9:42

framing of a way to discuss the harms of social media. Before then, it was just kind of this more

9:46

nebulous like, hmm, like they're studying the effect of Instagram on teen girls and it seems like

9:51

some of these girls are having really bad outcomes, but we didn't really have the framing. Well,

9:54

now we have the framing and we're just saying like, hey, you looked into it. You found that some

9:58

subset of your users are having really bad experiences and you did not change the features

10:02

that mattered. Well, let's talk about the changes. So what would you expect a platform like Instagram

10:08

or Facebook or YouTube to change in the wake of these jury verdicts or are they just going to

10:14

wait till it all shakes out on appeal? I honestly don't know the answer to that question,

10:18

and I think it's a really interesting thing to watch. The question that you just asked is really,

10:22

really controversial actually because much of what these platforms do is just protected under

10:28

the first amendment. And then section 230 also protects a lot of speech, right? And the big debate

10:35

that's like raging in the internet policy community right now is can you separate design from

10:41

content? I want to get your thoughts about this. Right. Is it like the container or is it the

10:45

stuff in the container that is dangerous? Yeah. And there are some people who are saying that no,

10:49

you cannot make that distinction and that effectively all design is content, right? Like if I want

10:55

to send you a push notification, that is my right under the first amendment and you cannot tell me

11:00

that I cannot do that. You cannot tell me that there is a certain limit that I have to place on

11:06

the depth that you can scroll an Instagram like that is protected. But for what it's worth,

11:11

juries are taking the opposite view. They're saying that there are at least some things which seem

11:16

like are just clear mechanical design features and I happen to agree with them. So let's talk about

11:22

this because I think this is maybe a place where you and I disagree or at least where I have some

11:26

misgivings about this theory. So in the case of something like cigarettes, which is a very heavily

11:33

litigated field that I think a lot of this social media litigation has been modeled after.

11:39

There's like an addictive ingredient, right? Nicotine. Everything that you put nicotine in

11:45

becomes more addictive as a result of having nicotine in it. This happens with cigarettes,

11:49

it happens with vapes, it happens with nicotine pouches. If you started putting nicotine in ice cream,

11:55

ice cream sales would go up because nicotine is very addictive. I think the question I have about

12:01

the mechanical addictiveness of these features like infinite scroll, like auto play recommendations

12:09

is that if it followed the same principle as nicotine, then every product that has those would

12:14

become way more popular. And one example I've been thinking about on this is Sora. They sort of

12:21

took the playbook that was working for TikTok and Instagram and they put it onto a new app.

12:26

And the app did not succeed, right? There are other apps that have tried to mimic things like

12:31

the newsfeed that have tried to mimic things like auto play video or recommendation algorithms

12:38

that have not taken off. And so I guess the question in my mind is like if the litigation over

12:44

social media is modeled after the litigation over Big Tobacco, shouldn't there be like some industry

12:51

wide lift as a result of every platform trying to borrow the most addictive features of Facebook

12:57

and Instagram and YouTube? I mean, I hear what you're saying and I think it's an interesting point,

13:02

but I think that internet platforms just work differently than cigarettes, right? Like,

13:06

because you're right, like with nicotine, like nicotine is just addictive. Now there are people

13:10

that smoke cigarettes without getting addicted to them, right? But probably the majority of people do.

13:15

Social media platforms are an imperfect analog to those cigarettes. I believe that platforms need

13:21

to be of a certain scale in order for them to be truly addictive in the way that these

13:26

plaintiffs are now suing about, right? There's something about the fact that there's hundreds of

13:30

millions of people on Instagram and on TikTok creating content that creates that kind of infinite

13:35

supply of things that you might potentially want to watch that is actually able to... But I

13:40

am talking about the stuff in the container, right? Well, I think that there are many ingredients

13:44

that all work together, right? But you're raising a criticism that people are making of this lawsuit.

13:49

Like, effectively, what I hear you saying is you cannot distinguish between the content and the design.

13:52

I'm not sure. I mean, I think I'm open to being persuaded that you can, but to my mind,

13:58

it's like one lesson that you could take from this is that it is very bad to be a popular platform

14:06

that engages these mechanics to keep users coming back. But it's okay to be an obscure platform

14:12

that does it because that's not going to have as much harm. So what's really sort of at issue here

14:17

is the fact that these platforms are very, very good and very, very popular at doing the thing

14:21

that everyone else is trying to copy. Yes. And this is the approach that Europe has taken to

14:26

regulating these platforms, right? They have certain categories. And if you are a very large online

14:31

platform, then you just have more responsibility. That makes intuitive sense to me. I think that

14:35

the bigger and richer and more powerful you are, the more responsibility you have to society,

14:39

right? And so in this particular case, you have companies like Meta, which we know are hiring

14:44

cognitive scientists who are working very hard to figure out all the different ways that they can

14:48

hack your brain to get you to look at Instagram for as long as they possibly can. It is in their

14:53

interest to get you to look at Instagram as long as they possibly can. And right now, there's

14:57

just no brake on that at all in our society, except for this litigation. So I'm so sympathetic to

15:03

these juries that are looking around. They're seeing this, you know, almost completely unregulated

15:07

platform and they're saying something's got to be done. Yeah. So regardless of sort of what our

15:11

thoughts on the overall sort of legal theory here are, like, what do you think the effects are on

15:16

the platforms? If this does get held up on appeal, if these platforms are found liable for millions

15:22

or potentially billions of dollars in damages against all of these people who claim that they were

15:27

harmed by social media, does that mean that they have to, I don't know, go back to like the

15:32

reverse chronological feed of 2008? Does that mean they have to shut off, you know, infinite

15:38

scroll and auto play and recommendations and all these other things? This is where it gets really

15:42

tricky. And this is like maybe the one narrow way in which I'm sympathetic to the platforms,

15:46

which is, okay, the juries have said your product is defective. What juries have not said is,

15:51

here's what an okay product looks like, right? They're saying we don't like this sort of set of

15:56

features, but they're not saying with any specificity like, well, how do we think that these features

16:00

are interacting? What is your actual model of the harm here? And so there is a world where the

16:06

platforms feel like they have to comply and they maybe start picking off some of these features one

16:10

by one, like, okay, if you're like under 16, we'll disable infinite scroll, for example. How much

16:15

benefit does that really have to like the individual teenager who may be struggling? I don't know.

16:19

This of course is why it would be great if Congress could pass some sort of law regulating this,

16:24

but we're now like, I don't know a decade into that project and still not getting very far.

16:28

Yeah, I mean, I think one prediction about how this will change platforms and their behavior is

16:33

that if you start talking about gambling or addictiveness in an internal Meta chat room,

16:40

you just immediately get fired. There's just like a little button on your seat that just presses

16:46

and you get ejected out of the building. It's like, because so much of the incriminating evidence here

16:53

just comes from people like spouting off in work chat rooms about like, oh, it really seems like

16:57

this thing we're doing is dangerous. And like, I have to imagine that if it hasn't happened already,

17:01

they're just going to absolutely crack down on that kind of internal discussion.

17:04

Absolutely. Well, so I want to hear a little bit more about how you think about this,

17:08

because you've talked on this show many times about your own struggles to look at your phone less.

17:12

This is an issue that, you know, at various times you feel like has plagued you. So how are you

17:17

feeling about the addictiveness of these platforms? Like, do you buy the sort of public health framing

17:22

for the way that people are talking about them these days? Or do you think that this is overreach?

17:27

So I need to do some more thinking about the product harm arguments here and whether it makes

17:33

sense to me. I am basically on board with the idea that there should be age gating for social media.

17:40

I am sold on the premise that there is a certain age, whether it's 16 or 18 or 14,

17:49

where sort of the most harmful effects taper off. And I think before that age, it makes total

17:55

sense to age gate or at least give parents a lot more control over what their kids are able to

18:01

do and not on these platforms. I think the addictiveness question is just hard for me because

18:08

I feel like my sort of macro theory on all this stuff is that what is happening to social media

18:15

over time is that the social part is fading away and the media part is rising in the mix.

18:20

And so I think that if you start treating the design and mechanical decisions of these media

18:28

platforms as harmful under the law, it just sort of leads me into a place where I become much less

18:36

certain. Like before any of this existed, there were cliffhangers on TV shows that were designed

18:42

to keep you coming back after the commercial break or to the next week's episode or whatever.

18:48

Those were arguably addictive features. They would keep people coming back. Is that illegal? I would

18:54

say probably it shouldn't be and it's not. So I think there is a certain sense in which the

19:00

closer to that social media moves to something like TV or streaming video, the blurrier of the lines

19:07

in my mind get between the content and the mechanics. What are your thoughts on that?

19:11

Well, I have to disagree. I do think cliffhangers should be illegal because I want to know what

19:14

happened. I don't want to have to wait till the fall to find out if that person is still alive.

19:18

But also I do think that there are some really important differences between like let's say

19:22

YouTube and HBO Max. HBO Max is not going to modify the content of HBO to your individual

19:30

preferences. They're going to go pay some money for a bunch of shows and they're going to hope

19:34

a bunch of people watch them. What the platforms that we're talking about are doing is going to be

19:38

very different. They're looking at across the entire corpus of every video that's ever been

19:42

uploaded to their platform and they're trying to figure out what will keep you personally

19:47

here the longest and we're going to show you that as much as we can. So I just do think that there's

19:51

a kind of categorical difference here. And while I do think people should have broad freedom to

19:56

you know, look at whatever they want. I do think that at a minimum we should probably place an

20:00

age gate on it for the same reason that we don't let 14 year olds walk into bars.

20:04

Right. Unless they're really cool and have a fake ID.

20:07

So talk about the encryption piece, because you wrote a lot about this in your newsletter that I

20:11

didn't quite understand. But what is the encryption debate that's part of these losses?

20:15

Yeah. So you know, here I understand that I'm coming across as being broadly supportive of these

20:19

jury verdicts, which I am, but I do want to acknowledge like this could lead to some really bad

20:24

places. And that's why we need to handle section 230 with care. In the New Mexico case,

20:29

the attorney general argues that a reason that Meta should be considered liable in advertising

20:36

their platform as being safe for children is that it includes encrypted messaging. Right. In fact,

20:42

Meta in March announced that they would discontinue encrypted messaging on Instagram,

20:48

in what I believe was an effort to sort of get ahead of this. What they said was, look,

20:52

if you want to use encrypted messaging, you can use WhatsApp instead. But to me, this would be

20:57

like just a legitimately horrible outcome of all of this is if like every company that now

21:03

offers encrypted messaging either voluntarily decided to stop offering it or was pressured by

21:08

the government to stop offering it because in my view, encryption is a necessary part of privacy

21:13

in a world where people are mostly communicating online. Right. Are you comfortable with all this

21:18

happening in the courts through jury verdicts? This is not my preferred way of addressing this,

21:26

but I think it was inevitable. In part because the tech companies have been so obstinate about making

21:32

meaningful changes to their platforms, right. Like societies across the world have been begging

21:38

these companies for a decade, please do something to make these platforms safer and to make them

21:43

less addictive and to reduce some of the harms. And instead, what we've mostly seen is a series of

21:50

engagement hacks designed to get people to look at them longer. Right. And in the United States,

21:55

where you cannot regulate the content of any of these apps for the most part, you're

22:02

really only left with the design, right. You're really only left with just the raw mechanics of

22:07

the app. So if the social media platforms are upset about the verdict here, I truly believe they

22:12

brought this on themselves. And you asked me about my own experience of screen addiction. And I've

22:17

never been sort of a total screen addict, but I've struggled like I think many, many other people have

22:23

with like how much I'm using my phone, how much I'm using various apps. I have come up with

22:28

convoluted ways of trying to reduce my screen time. You once were six hours late to a hard fork

22:33

taping because you wanted to find out what happened to Chimpanzini Bananini

22:38

on TikTok. I thought we agreed to keep that private. But like never in all my struggles with

22:44

screen time, have I thought to sue the companies that were making the apps that went on my phone.

22:50

And I guess it's different when you're talking about kids, but like there is some part of me that

22:55

just feels like, well, it just feels like an easy way out. You know, blame the platforms. And look,

23:00

I think these platforms absolutely have culpability here. I am not saying that I disagree with these

23:06

jury verdicts. I think that these platforms, especially Meta, have done the research, have found

23:12

the harms and then have shielded them from the public. But I just, I guess I'm thinking about my

23:19

own experience of these addictive platforms being one of like feeling bad about myself

23:26

rather than trying to find someone else to blame. Yes, but you also had the benefit of

23:31

beginning to use these platforms when you were already an adult, right? Like your hippocampus

23:36

was formed. And I think I was on instant messenger from a very early age. Do you really think that

23:41

messaging apps are as addictive and harmful in the same way as TikTok or Instagram? Oh my God,

23:46

take me back to 1999, put me on AOL instant messenger. I could not tear myself away from that thing.

23:52

I had to put up a little away message with, you know, Get Up Kids lyrics on it every time I left the

23:57

computer because it was such a rare event. And I wanted my friends to know that I was away from

24:02

keyboard. Casey, these things were addictive. The kid got up. It's a Get Up Kids joke.

24:10

Yeah, look, I just think that like messaging apps are different from like these,

24:14

these social platforms. And I think, you know, honestly, like I will be curious, you know,

24:18

who knows if Instagram and TikTok will be what they still are in like 10 years maybe when your

24:22

son is ready or wants to use social media. But I just think that it probably just feels very

24:29

different than when you're a parent. Yeah. Well, Casey, are there any new social media apps that

24:34

you're addicted to? It's called Claude. And, wait, I do want to talk about the AI of it all.

24:41

So obviously every discussion on this show has to come back to AI at some point. So I'm curious

24:46

like what effects you think this might have on some of these AI companies because they are also

24:52

trying to create experiences that are engaging, addictive, whatever you want to call it. I can imagine

25:00

some of these, you know, lawsuits that are being brought against the makers of chatbots for harms.

25:06

Like it all feels like it's sort of going to converge at some point. So what's your take on that?

25:11

Yeah. So Pew did a study in 2025 and found that 64% of teens now use AI chatbots, about 3 in 10

25:19

use them daily. That same survey said that the teen use of YouTube, TikTok, and

25:25

Snapchat had remained relatively stable, right? So yes, chatbot usage is growing.

25:31

It has not yet come at the expense of the social platforms. Although, of course, I expect that

25:36

we'll soon see chatbots inside all of those platforms, right? And like these things will all just

25:40

kind of merge together. There's something about these things where they do kind of go hand in hand

25:45

and to your point, like I think that yes, AI chatbots will be the next frontier of this debate because

25:51

in many ways they're much more engaging and I think like will be stickier than even these platforms are.

25:56

Yeah. I mean, it just seems so obvious to me that the platform should be like absolutely begging

26:00

Congress to regulate them because the alternative is like they just get sued into oblivion by a bunch

26:06

of law firms. I mean, absolutely. If I were running one of the big AI labs, I would want to have

26:10

an understanding from Congress of like what do you consider a safe chatbot? Give me a checklist

26:15

that I can follow because I don't want to have to be dealing with this in the next few years.

26:20

Yeah. Casey, what's an addictive engagement mechanism we could use to get people to come back

26:25

after the break? Well, we could study their behavior and weaponize it against them.

26:30

Good idea. When we come back: Sebastian Mallaby, author of the new book The Infinity Machine,

26:36

joins us to talk about Demis Hassabis, Google DeepMind, and the quest for superintelligence.

27:00

Framer is a website builder that turns your .com from a formality into a tool for growth.

27:10

Whether you want to launch a new site, test a few landing pages, or migrate your full .com,

27:15

Framer has programs for startups, scale ups, and large enterprises to make going from idea to

27:20

live site as easy and fast as possible. Learn how you can get more out of your .com from a Framer

27:26

specialist. Or get started building for free today at framer.com slash hard fork for 30% off a

27:31

Framer Pro annual plan. Rules and restrictions may apply. The thing about AI for business is

27:37

it may not automatically fit the way your business works. At IBM, we've seen this first hand.

27:44

But by embedding AI across HR, IT, and procurement processes, we've reduced costs by millions,

27:50

slashed repetitive tasks, and freed thousands of hours for strategic work.

27:55

Now we're helping companies get smarter by putting AI where it actually pays off. Deep in the work

28:00

that moves the business. Let's create smarter business. IBM. I'm Robin and I am excited to

28:08

open my Crossplay app. I'm challenging John, my colleague at the New York Times. Robin played the

28:14

word grunge which has a G which is four points. She got that triple word multiplier. I'm going to

28:20

take facts and make it faxes for 30 points. I might just take another two letter word here with

28:26

woe gets me at 23. I think this will put me back in the lead if my math is mathing. I like to play

28:31

it more from a strategic point of view and see where I can block the other player from scoring high.

28:37

I'm pretty competitive. It's fun to beat friends and co-workers and also get to learn new words.

28:42

Crossplay. The first two player word game from New York Times games. Download it for free today.

28:48

I think he thinks he has us in the bag but I'm not so sure.

28:54

Well Casey, if our listeners read one book about AI this year, it should be mine.

28:59

But if they read two books, the second one should be Sebastian Mallaby's new book, The Infinity

29:07

Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. Tell us about this book,

29:13

Kevin. This book came out this week. It is full of a bunch of new anecdotes and stories about

29:17

the work of DeepMind and the motivations that drive its CEO, Demis Hassabis. Sebastian is a longtime

29:25

journalist. He's a fellow at the Council on Foreign Relations and he's spent a long time with

29:30

Demis and the people close to him and brought us this book about what I think is the AI frontier lab

29:36

that gets the least coverage relative to its importance. Yeah, and look. Demis Hassabis is a

29:41

singular figure. He's been on Hard Fork several times, but Sebastian went really, really deep and

29:48

I think maybe gave us the most fully featured portrait of the man that we've had to date.

29:53

And before we bring him in, because we're going to talk about AI, let's make our disclosures.

29:58

I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity.

30:01

And my fiance works for Anthropic.

30:12

Sebastian Mallaby, welcome to Hard Fork. Great to be with you. So people who listen to our show are

30:17

familiar with Demis Hassabis and DeepMind. He's been on several times. What is something non-obvious

30:23

about Demis that you learned through talking with him through many hours and interviewing many people

30:31

who know him? I mean, I think maybe the spiritual underpinning for his scientific curiosity is interesting.

30:38

You know, there was one time when we were sitting in this London park and talking for a couple of

30:44

hours and he suddenly started saying, you know, when I'm up at two in the morning at my desk by myself

30:50

thinking about science, thinking about computer science, I feel reality is screaming at me,

30:55

staring me in the face, waiting for me to explain it. And he calls it the god of Spinoza. This is

31:01

the 17th century philosopher, Spinoza, who said that to understand nature is getting closer to God's

31:07

creation. And that resonates with Demis. Maybe that's something people don't know. That's interesting.

31:13

I mean, yeah, this has been something that's come up in my own research too is that, you know,

31:17

he grew up going to church, I believe with his mother. And I think unlike a lot of the other AI

31:23

leaders has a way of sort of fusing the science of AI with his own spiritual beliefs. And I know

31:32

some folks have seen his ambition and his many years of competing to build AGI, and have seen

31:40

something suspicious in that, right? Elon Musk has this whole theory about how Demis secretly wants

31:46

to be an evil AI dictator who takes over the world. And I guess I'm curious if in any of your

31:54

reporting with him, you ever saw something that seemed like what Elon Musk was talking about.

31:59

No, I mean, to the contrary, I think this idea that Demis is a, quote, evil genius, which is

32:04

the phrase that Elon used to use, came from the fact that in his video game production days,

32:11

Demis had created a game called Evil Genius. And so maybe it was a joke at first, but, you know,

32:16

really, I got to know Demis extremely well. I spent more than 30 hours with him. You stress

32:21

test people quite deeply as you know, Kevin, when you're writing about them and then you might get

32:26

pushback and legal threats and all that stuff. And he did make me talk to his lawyer once.

32:30

And it wasn't totally easy the whole time, but he was reasonable in the end. And I,

32:35

wait, why did he make you talk to his lawyer? Yeah. He was very mad at the fact that I unearthed

32:40

the whole story about DeepMind trying to spin out of Google between 2016 and 2019. And, you know,

32:46

they retained a whole bunch of advisors, lawyers, bankers, et cetera. They got Reid Hoffman to pledge

32:52

a billion dollars to finance the spin out. They went to see Joe Tsai in Hong Kong, the Alibaba

32:58

co-founder. Anyway, so the lawyer was not amused that I had all these internal documents from

33:04

inside DeepMind, which had been leaked to me, the board presentation that DeepMind gave to Google

33:08

and so forth. And he said, you're not supposed to be writing about this. And I said, well, you know,

33:13

people gave me this stuff and tough. So there were moments of free and frank discussion.

33:18

I have always believed that when a source gives you secret documents, it helps you get closer

33:22

to God's creation. That's what I would have told him. I wanted to ask another question about

33:28

childhood, because Demis told you that he really identified with the boy genius protagonist

33:34

of the novel Ender's Game and of relating to this feeling of being socially isolated by his

33:40

own talent and consumed by a desire to make his mark on the universe. And the reason it struck me

33:45

is that in this novel, Ender believes that he's doing training exercises, but then what he thinks

33:51

is like a test, essentially a video game, accidentally wipes out an alien species. So I wondered if you

33:57

talked with him about like why he relates to that story. And in particular, if there's any relation

34:02

to that and the idea of maybe trying to build a super intelligence. Well, I was astonished. You know,

34:09

this was before my first dinner with him. And it was certainly kind of the vetting process. It was

34:15

the last part of the vetting process where he agreed to give me the access I needed. And he said,

34:19

you know, you got to read this novel before you come and see me. And so I show up. I've read this

34:23

story about a diminutive boy genius who basically saves humanity from aliens. And I'm thinking, does

34:29

he really see himself as saving humanity by doing what he's doing with AI? And even if he thinks that

34:36

why would he be so crazy as to tell me? I mean, surely that's hubristic beyond belief. Why would you

34:41

put that out there? And you know, he made no secret about it. He said, yeah, you know, I feel like

34:46

I identify because this guy put all of his energy and his life into saving humanity. And I feel

34:53

like I'm on a mission like that. And he said, I felt so strongly about this. I gave it to my wife

34:58

to read, thinking that she would understand me better and sympathize with me. And you know what?

35:03

She sympathized with the kid, Ender, but not with me. That's not fair.

35:06

Yeah. I mean, one other character trait that comes up over and over again in reporting about

35:13

Demis and especially in your book is how competitive he is. This is a guy who loves to win. You know,

35:20

he was a child chess prodigy and he won this thing called the Pentamind, you know, five times,

35:26

which is sort of like an all around gaming competition. Do you think that is part of his approach

35:32

to AI? I mean, he's always talking about how he wants to use this to solve scientific mysteries

35:37

and cure diseases, but is some part of it just like this guy loves to win. And this is a really

35:41

big contest. Totally. I mean, that's exactly right. I remember going to see him, you know, when

35:47

ChatGPT was just going viral. And he said, you know, Sebastian, this is war. These guys at OpenAI,

35:54

they've parked the tanks in my front yard. He actually said, park the tanks on my lawn because he's

35:59

English, but yeah, you get it. You bring up the release of ChatGPT, which happens in November,

36:06

2022. And I'd love to hear a little bit more about how Demis reacted to that, because I think

36:12

before that happened, Google really thought they were comfortably in the lead and did not seem to

36:17

be feeling a lot of pressure to release anything. So I'm particularly interested if in hindsight,

36:23

Demis has regrets about the fact that they sort of let Sam Altman beat them to the punch.

36:28

Yeah, I mean, he has an explanation more than a regret. And the explanation is super interesting.

36:33

It's basically that because he studied neuroscience for his PhD, and you've got to remember,

36:37

this is back in 2008, 2009. So nothing worked in AI. So he was starting from scratch. And one of the

36:44

ideas in neuroscience is called action in perception. And this is the idea that to really be

36:51

intelligent, you have to take action in the world. You don't know what it means for something to be

36:56

heavy unless you pick it up. You don't know what gravity is unless you actually drop something.

37:02

And so he had this idea when the transformer paper came out in 2017 and OpenAI was starting to do

37:08

the first GPT in 2018, the second one in 2019, and so forth. He thought, you know, that's not going to work.

37:13

It's not going to take you all the way to powerful intelligence because language is just a system

37:17

of symbols. It's not grounded in the real world. And it's not that he was wrong in the sense that

37:23

now we see world models come back in 2026 as a big area of excitement and research. But back in

37:30

2018, 2019, he was missing the fact that a huge amount of knowledge about how the real world works

37:37

is in fact in language, if you download all the language on the internet. And he missed

37:43

how much you could squeeze out of language as a training set. Yeah, I mean, I want to run a theory

37:48

by you, Sebastian, for your take. But as I've been working on my own book about this sort of

37:54

period at Google and at OpenAI and at DeepMind, it strikes me that there are sort of like two visions

38:00

of what intelligence is that these companies disagree on. And in one vision, it's like intelligence is

38:08

about winning. It's about optimization. It's about a contest between rival intelligences. And that's

38:14

very much like the DeepMind sort of reinforcement learning paradigm, which is like AlphaGo and you

38:21

know, you play a board game a bunch of times and you get better at it a little more every time.

38:26

And then there's this other view, which is sort of the more OpenAI sort of language model scaling

38:31

paradigm, which is like, no, it's about answering questions like being very smart is about

38:36

having the right answer to everything. Does that theory hold water with you that there's something

38:40

like psychological about these two approaches to AI development that actually are rooted in like

38:45

what we think intelligence actually is? Yeah, I would say that the DeepMind special sauce right

38:51

from the beginning was to try to put those two things together. It's interesting, for example, that

38:56

with AlphaGo, the early research on that, Ilya Sutskever contributed to it. And of course, he was,

39:03

you know, the sort of leading practitioner of deep learning, and went on to be OpenAI's chief scientist.

39:09

But at the time, he was working for Google because Google had acquired his company. And so

39:15

the reinforcement learning people in London working for DeepMind collaborated with the deep

39:19

learning people in Mountain View. And that's what produced the AlphaGo breakthrough. So I think

39:25

I think you're right. There are these two strands within AI of reinforcement learning, which I

39:30

would describe as learning through experience interaction with the real world through trial and

39:35

error. And on the other hand, learning through data, and that is the deep learning. And for humans,

39:42

you could think of it as being, you know, you can go to the library and read all the books,

39:46

and that would be deep learning. You're learning from data, from sort of crystallized human knowledge,

39:51

or you can go out there in the real world and learn about stuff by planting your garden and

39:58

whatever, you know. Actually, you can be like Casey, who's never read a book. I'm going to get

40:02

around to it. Learns by trial and error. Yeah. So those are sort of the two approaches here.

40:09

You mentioned earlier this, I don't know if it's fair to call it a plot. It sort of seems like a plot

40:14

that they had at one point after they had gotten acquired by Google to try to spin themselves out.

40:19

I believe they called this Project Mario. I would love to hear a little bit more about how that came

40:24

about and why they didn't go through with it. So what happened was that when they sold DeepMind

40:31

to Google in 2014, they had a rival offer from Facebook and Facebook actually offered them more cash.

40:39

And one of the reasons they said no was that they wanted safety protections around their

40:44

technology. And so they had this deal. There was going to be a safety and ethics board.

40:47

And Google promised that and they went ahead and sold to Google. And they had a first meeting of

40:52

the safety and ethics board in 2015 after the acquisition. And in order to like bind in the other

41:00

people in the space, they got Elon Musk to host the whole safety and ethics board at SpaceX.

41:09

They got Reid Hoffman to show up. And you will notice that these are the characters who either

41:14

founded OpenAI or funded it, those two. So Google wasn't best pleased, as you can imagine.

41:23

I have to say that doesn't seem like a very ethical thing to do. You know, maybe not the people I

41:27

would have put on my ethics board, these characters. But it's a dichotomy, right?

41:31

A dilemma. I mean, either you put people on the board who don't know what they're talking about because

41:37

they're not interested in AI, or they do know about AI, in which case they're going to do their

41:42

own thing because it's too exciting not to. And a fundamental mistake that Demis made in his early

41:47

conceptualization of how AI would be developed was this notion that there would be one single lab

41:53

producing AI on behalf of all humanity. And therefore it could be safe because there'd be no race

41:58

dynamic. And you could take your time and sort of red-team the models before you release them.

42:04

And that's why he brought Musk into the tent. That's why he brought Reid Hoffman into the tent.

42:10

Precisely because he thought we could all be one team together. And so then what happened after

42:14

to answer your question Casey? So what happened after was that having lost that first experiment

42:21

in setting up a safety and ethics oversight board, Google didn't want to do another one. And

42:27

really DeepMind's Project Mario was to try and force them to do more by threatening to

42:33

walk out if they didn't. Why did they call it Project Mario? Was that about the video game?

42:37

Good question. I don't know the answer. Sorry.

42:40

I failed to ask that. It's much better than the alternative, Project Wario, they were working on,

42:45

which was just the evil version of that. So how does Google get them to abandon this plan?

42:51

You know, it's attrition. Sundar Pichai, his personality and his management style,

42:57

comes out quite interestingly in this whole story because, you know, right at the beginning in

43:02

2015 when, you know, the first safety and ethics oversight board fails, the next idea that

43:12

Demis has for how to get some independence and control of the technology is to become a bet, as

43:18

in Alphabet, when they were spinning out Waymo and some of the other side bets they had. And

43:24

Larry Page was cool with this and he was CEO at the time. But then right as these discussions were

43:30

going on, he handed over to Sundar. And Sundar kind of pretended to say, oh yeah, absolutely great idea,

43:37

we should look into it. But really he was just spinning them along and had no intention whatsoever

43:42

of letting Demis spin out because he recognized him as the AI talent that Google was going to need

43:48

in the future. And so essentially there was this long drawn out, you know, delays here and we

43:55

should just look at some more details. And here's another term sheet. And I was given some of these

43:59

term sheets. They're like huge great documents with red lines all over them where, you know, one

44:04

team of lawyers had come back to the other team of lawyers. And you know, basically by 2019,

44:09

everybody was exhausted. It all fizzled out and they just moved on.

44:13

There's been a lot of sort of jostling for independence within DeepMind ever since the earliest

44:21

negotiations about selling to Google. Give us some update on how things are going with them now,

44:27

like, you know, when we talk to them, they present things as being, you know, fairly like hunky-

44:32

dory between everyone, but are there still kind of tensions and fault lines between Google and

44:37

DeepMind? Well, you know, I'll give you sort of what I would regard as somewhere between

44:43

probably true and unconfirmed rumor. Is that all right? Can I do that?

44:48

Oh, please. We love, we love gossip on this podcast. Give us the spilled tea.

44:53

So I'd say that, you know, Sergey Brin is the troublemaker here, that he,

45:00

at one of the Google I/Os, I guess it was a couple of years ago, and the stage was set up for two people

45:06

to be on it. There was the interviewer and there was Demis. And suddenly, Sergey kind of runs onto

45:10

the stage. They have to get a third chair. And then he kind of inserts himself into that conversation.

45:17

And what I hear is that that was the outward symptom of a much deeper tension, where Sergey

45:23

doesn't really like Demis' leadership on this and wants to push back against it.

45:28

And I think it follows from that that the single most important business

45:33

buddy act in all of capitalism today is the one between Sundar Pichai and Demis Hassabis. Because Sundar

45:40

manages the board, manages the sort of high politics of Google and Alphabet so that Demis has

45:46

the space, the resources, the oxygen to go do his science. And without Sundar holding that

45:51

all together, we might be in a different place.

45:56

One area where Demis has changed his mind is about the use of AI in the military. This was a big

46:04

sticking point in the negotiations with Google and Facebook back when they were selling Deep

46:09

Mind. He didn't want their technology to be used for the military. Now, obviously, Google DeepMind

46:14

has one of these Pentagon contracts. They're working with the military. So what do you attribute

46:19

that shift in his thinking to? Is it just kind of the realities of the market or needing to compete

46:25

or what is it? Yeah, I mean, Demis described this to me as, you know, you mature. You get to know the

46:32

real world and all that. One might say, how come you weren't mature when you sold the company

46:36

in the first place? I mean, surely it was predictable. But I think that the real truth of the matter

46:41

is he did not predict it. I mean, it comes back to this singleton idea, which I mentioned before.

46:46

He really thought there would be one lab. And in a scenario where there's only one lab who's

46:51

got the technology, then sure, you can say to the military, you can't have our technology. Go

46:56

away. And the problem today is that we saw with Anthropic just now with the Pentagon, if Anthropic

47:01

tries to draw a red line, you know, OpenAI is in there like a shot and says, hey, Mr. Pentagon,

47:06

what do you need? We've got it for you. Do you worry that Demis' competitive streak or his

47:13

pursuit of science, whatever it is that drives him will compromise his ability to develop something

47:18

like AGI safely? You know, I asked myself that question all the way through my research. And in

47:25

some ways, the question about, can you be a strong, consequential actor in the world and still be good

47:33

is sort of the deep question in the book. And he is somebody who really wants to be good.

47:38

And I think one way of framing this question about is he being good? Will he be good? Can he be good?

47:45

is to say, should he, will he do what Dario did, standing up to the Pentagon about red lines on

47:53

military usage and surveillance? And I don't think he is going to do that. And I think the way he

47:59

would rationalize this would be to say, Look, you've got to pick your moment with this stuff. If

48:03

you make a stand and actually the Pentagon does what the hell it wants anyway, you didn't really make

48:08

the world better. My best shot at making the world better and making AI safer is to go through the

48:16

route, which is the only route that can get us to AI safety. And that is government intervention

48:22

forcing safety rules on all the labs at once. Because otherwise some are safe, some are not safe

48:27

and the ones that are not safe are going to screw it up for everybody. And that's the route that I think

48:31

Demis wants to push. Problem is, you have the Trump administration. They just want to accelerate.

48:36

And so all you can do for now, I think, is to keep this conversation alive with other governments.

48:42

And then maybe when there's a new administration in the US, we could see a conversation.

48:47

You write that Demis used to inform job candidates at DeepMind that if they signed on,

48:54

they should, quote, prepare for a climactic endgame when they might have to disappear into a bunker.

48:59

Why would they have to disappear into a bunker? And do they still tell the job candidates that?

49:06

Yeah. So the idea was when you get very close to AGI and it's super dangerous, you're going to

49:14

A, be subject to potential attack by bad guys who want to steal the technology. And B, you really

49:20

don't want to be distracted by quotidian real world stuff. So you just get into the desert.

49:26

Yeah. That's right. You leave your TikTok and your phone at home, and, well, I think Kevin used to

49:33

lock his phone up in a box, as I recall. That's correct. And so you do a Kevin and you go and you

49:38

really, really focus and you really get the AI right in the last stages. That was sort of Demis'

49:44

vision. And to test whether he really meant it, I was talking with somebody who used to be

49:50

at DeepMind in that period around 2015, 2016 and had now left. And I said, this wasn't really true.

49:56

He didn't really mean it? Yeah, yeah. This guy said to me, if Demis had told me any time I was working

50:02

at DeepMind that I had to take the next flight to Morocco and hide, I would have said I'd been

50:07

given fair warning. Wow. So the bunker is in Morocco, just so everyone knows. Yeah. And I

50:15

said, why Morocco? And he said, well, you know, it's the desert. And you know, the Manhattan

50:21

Project was in the desert. Oh, it's the Oppenheimer syndrome. These guys and their Manhattan Project

50:26

analogies, man. I don't know if they read to the end of that story. It didn't go that well.

50:33

Sebastian, you spent many years writing about hedge funds. And I remember encountering your work

50:37

back when you were writing about hedge funds and hedge fund managers. You're now spending time

50:41

with the new masters of the universe. And I'm curious what, if any observations you have about how

50:46

those two classes of people, the AI leaders and the hedge fund managers are similar or different.

50:53

Well, I would say that the hedge fund guys are playing a game inside a set of fairly

51:00

well understood rules. They're not rethinking humanity. They're not rethinking everything

51:05

about society. They're not changing the way we bring up our kids. They're not changing the

51:09

conception of what it means to be human. Speak for yourself. I'm training my kid to do algorithmic

51:13

arbitrage. He's four. Terrible at it. Down 200% this year. Anyway, sorry, carry on.

51:21

Yeah, no, but I just think that AI is so, so much bigger than some kind of

51:28

event driven arbitrage or whatever you want to talk about with hedge funds.

51:32

Maybe a last question from me. I have a question about the writing of this book and how

51:39

you decided to frame it. It strikes me, Sebastian, that we don't know how AI is going to go.

51:45

We don't know whether AI is going to turn out to, you know, cure a bunch of human disease and

51:50

usher in the utopia or usher in these like far darker scenarios. I think it's clear that you have

51:57

a lot of respect for Demis and the work that he's doing, but there's also this risk that things

52:03

go really, really badly. So I'm curious as you wrote the book, how you approached that tension

52:09

and the sort of not knowing of how history is going to judge this person who you've now gotten

52:14

to know so well. I thought of the book as a book about that tension. In other words, I'm trying to

52:19

do a portrait of somebody who has his hands on the 21st century version of the nuclear material,

52:25

who has that tingling sense of playing with something that could destroy humanity. What's it

52:31

feel like when you're creating that? Can you sleep? How do you live with it? I think I've delivered a

52:38

portrait of somebody who's in that hot seat. Hopefully that remains interesting for some time. It's

52:44

not something that depends on how this AI development story ends.

52:49

Well, Sebastian, thank you so much for coming on. The book is called The Infinity Machine

52:53

and it is out now. Thank you, Kevin. And Casey, thank you.

52:58

Thank you, Sebastian. When we come back, a Game of Hat GPT! It involves snowmen.

53:05

Would you like to build one? Hmm. I don't think so. I saw what happened to Olaf.

53:15

Hi, I'm Solana Pyne. I'm the Director of Video at The New York Times. For years, my team has made

53:31

videos that bring you closer to big news moments. Videos by Times journalists that have the

53:36

expertise to help you understand what's going on. Now, we're bringing those videos to you

53:41

in the Watch tab in The New York Times app. It's a dedicated video feed where you know you can

53:45

trust what you're seeing. All the videos there are free for anyone to watch. You don't have to be a

53:50

subscriber. Download The New York Times app to start watching. All right, Casey. Well, we took a little

53:58

break last week and there's been a lot of tech news, so we feel like we should do a roundup

54:03

and play a round of Hat GPT. Hat GPT, of course, the game where we put recent news stories into a

54:15

hat, draw slips of paper out of the hat, discuss them, and then when one of us gets bored, we say to

54:20

the other, stop generating. And if you can't see us, we're using the Hard Fork hat, official merch.

54:28

And Casey, it appears that these are sold out in The New York Times store. Not that specific hat,

54:34

which was of course a hard fork live exclusive. Yes, this is an exclusive. You can't get this one,

54:39

but you also can't get any of the other ones. Here's the important point. You cannot get a hard

54:43

fork hat anymore, so stop trying. Now, someone did suggest to me the other day that we should make

54:47

hard hats for hard fork, like a yellow construction vibe. Well, we can wear them over to the new studio,

54:53

which is being built for us right now. That's true. Do you think we should make that? Yeah, hard fork

54:56

hard hat. That's a perfect piece of merch. Great. All right. Casey, you go first. All right, Kevin.

55:05

This first story comes to us from 404 Media, an AI agent was banned from creating Wikipedia articles,

55:12

then wrote angry blogs about being banned. I feel like I've heard something like this before.

55:17

So, Kevin, once again, agents are writing blog posts. What do we make of this?

55:22

This would never happen on Grokipedia. No, I think this is just going to be the year that every

55:32

system on the internet that is built on human contribution and review is going to break.

55:38

And it will break not only because of the AI tools, but because people are letting them loose

55:42

onto websites where they are doing things like editing Wikipedia articles and defaming people who

55:47

contribute things to GitHub projects. We heard from Scott Shambaugh about that on a previous

55:53

episode. But I think this is going to be a challenge. I have started talking about the inbox

55:58

apocalypse that is going to hit this year where everything that is normally sort of reviewed and

56:04

bottlenecked by humans is just going to be overwhelmed and flooded with AI submissions.

56:08

Absolutely. I mean, I'm already getting emails now every week from something claiming to be an

56:13

AI agent that says, you know, it's running a company, you know, but it's always sort of like,

56:17

let me know if you want to talk to my human. And I was like, your human better hope I don't catch

56:22

them in a dark alley, because this does not belong in my inbox or frankly anywhere. Yeah.

56:27

I'm getting these too. It's a total scourge. It's somehow even more annoying than the

56:32

like faceless PR spam that you and I get. I should just, to be very clear, say: there's not one thing

56:37

that anyone's agent could do or say to get me to respond to it anyway. So use that information.

56:42

I hope that goes into your training data. Stop generating. All right. Next up.

56:47

This one comes to us from Sean Hollister at The Verge, titled: I met Olaf, the Frozen robot

56:53

who might be the future of Disney parks. Sean reported in mid-March about his interaction with

56:58

a new animatronic Olaf the snowman robot from Frozen. It weighs 33 pounds. It was trained

57:04

with an Nvidia GPU and is controlled by an operator using a Steam Deck. But when it made

57:09

its debut at Disneyland Paris. Well, Casey, something happened. Should we take a look? Let's take a look.

57:17

All right. Olaf, the snowman talking, waving his stick arms. Oh no, no! We lost him.

57:24

Olaf! Oh, the carrot nose falls off. Oh, it's, oh. There's something about the way that he

57:37

very slowly falls onto his back. Oh, no. Yeah. 20 children just got lasting trauma.

57:45

They're going to be talking about this in therapy. Look, what do you expect? Like, of course,

57:49

he was frozen. That's what the whole movie is about. Do you want to kill a snowman?

57:59

Okay. I mean, there is, it's just reliably very funny when you create an animatronic thing for a

58:04

child. And then it is like revealed to be a machine. And it just sort of feels like a Lovecraftian

58:09

horror. Yeah. Something about that transition from like a cutesy cuddly thing to like,

58:14

its eyes are bulging out of its head and the sparks start flying out of the back.

58:18

I'll never forget the day at Chuck E. Cheese as a kid when I learned that the guitar playing

58:22

mouse wasn't real. You know Chuck E. Cheese's full government name, right? What is it?

58:26

You don't know. It's not a joke. It's Charles Entertainment Cheese. Come on.

58:30

There you go. I learn something new every day from you. Stop generating.

58:36

All right. Now it's my turn.

58:40

Well, this, Kevin, is a story about the Claude Code leak. So Kevin, what do you make of this

58:46

Claude code leak? Well, I think it's a big deal in part because the

58:51

agentic sort of coding harness that is around Claude Code is really the special sauce, right?

58:57

The model underlying it is part of what makes Claude Code and other agentic code systems good at

59:03

coding, but it's really all the stuff around it. And that's what leaked. It is not the actual like

59:08

weights or the source code of Opus 4.6 or whatever model people are running inside Claude code.

59:13

It's like the sort of apparatus around it that makes it quite effective. So within hours of this

59:19

leak, there were people who had cloned it and set up their own versions of it. I imagine it's a very

59:25

busy week over at the Anthropic legal department trying to get all the stuff taken down.

59:29

But look, I think this kind of thing was inevitable, maybe not at Anthropic, but like the

59:35

agentic coding tools were all going to get good. They were all going to sort of reverse engineer

59:40

Claude code and figure out what made it better. But I think this probably just accelerated that.

59:44
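[For listeners curious what a "harness" actually is: it's the scaffolding around the model, the system prompt, the tool definitions, and the loop that runs the model's proposed commands and feeds the output back in. Here's a minimal sketch of that loop in TypeScript. It illustrates the general pattern, not Anthropic's actual leaked code; the callModel function is a hypothetical stand-in for whatever chat API you plug in.]

```typescript
import { execSync } from "node:child_process";

type Message = { role: "system" | "user" | "assistant"; content: string };
// Hypothetical stand-in for any chat-completion client you wire in.
type CallModel = (messages: Message[]) => Promise<string>;

const SYSTEM_PROMPT = `You are a coding agent. Reply with exactly one of:
RUN: <shell command>   to inspect or modify the repository
DONE: <summary>        when the task is finished`;

async function runAgent(task: string, callModel: CallModel, maxSteps = 20) {
  const messages: Message[] = [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: task },
  ];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(messages);
    messages.push({ role: "assistant", content: reply });
    if (reply.startsWith("DONE:")) return reply.slice(5).trim();

    let observation = "Reply with RUN: or DONE: only.";
    if (reply.startsWith("RUN:")) {
      // Execute the model's proposed command and capture its output.
      try {
        observation = execSync(reply.slice(4).trim(), {
          encoding: "utf8",
          timeout: 30_000,
        });
      } catch (err) {
        observation = `Command failed: ${err}`; // failures are feedback too
      }
    }
    // Feed the result back so the model can decide its next step.
    messages.push({ role: "user", content: `OBSERVATION:\n${observation}` });
  }
  return "Step limit reached without DONE.";
}
```

[This is also why the leak traveled so fast: if the harness is essentially prompts and plumbing like the above, it can be pointed at a different model by swapping out one function.]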

When I saw this, my first thought was, right now, Kevin Roose is somewhere vibe coding Claude Code

59:48

using the downloaded, leaked Claude Code harness. I have not yet downloaded the leaked Claude

59:54

harness, but I have seen other people sort of taking it and then putting it on top of like an

1:00:00

open source Chinese model or something, sort of Frankensteining their own version of

1:00:05

Claude code that they can run. And I will say the closer I get to my rate limits on Claude,

1:00:10

the more I'm tempted to do something like that. That makes sense. Here's the last thing I'll say.

1:00:14

If Anthropic is looking for a new harness for Claude, they might want to pick one up at Mr. S.

1:00:18

Leather in San Francisco, down in the Folsom district. They've got really nice options down there.

1:00:25

All right, stop generating. Okay, okay. Next up out of the hat.

1:00:32

Oh, this one is good. The AI Fruit drama on TikTok that's too juicy to pass up.

1:00:39

This one says we want to watch a clip from MDC News.

1:00:42

All right, everybody. So tonight we are taking a look at one of the most popular shows circulating

1:00:47

on TikTok that's causing a lot of, let's just say, some juicy drama. Because the stars of the show

1:00:53

are AI-generated fruit. Welcome to Fruit Love Island. Where eight single fruits are about to flirt

1:01:00

and fight. Things get messy fast. The guy I want to couple up with is Benanito.

1:01:07

So this is like sort of a love island style reality show featuring AI generated fruits.

1:01:13

There's a very ripped banana who is attracting attention from the lady fruits.

1:01:19

And it's all very silly. But this is going mega viral. This is the big new trend.

1:01:23

I just watched a banana kiss a pineapple and that's not in the Bible.

1:01:32

Do you think I could win a multimillion dollar jury verdict for being forced to watch that?

1:01:38

I'm calling my lawyer. I think it's a fair question. I'll say this. My mental health did not

1:01:44

improve watching Fruit Love Island. Watch what happens with the passion fruit in season three.

1:01:51

All right. Stop generating. This company is secretly turning your Zoom meetings into AI

1:01:57

podcasts. This one also comes to us from 404 Media. And here's a name for a company: Webinar TV.

1:02:05

Wow. Two great tastes that taste better together. Webinar and TV. Has there ever been a worse word in

1:02:12

the English language than webinar? Not to my knowledge. Apparently this company is secretly

1:02:17

scanning the internet for Zoom meeting links, recording the calls, and turning them into AI

1:02:23

generated podcasts for profit. Kevin. Oh my God. In some cases, people only found out that

1:02:28

their Zoom calls were recorded once Webinar TV reached out to them to say their call was turned

1:02:32

into a podcast in an attempt to promote Webinar TV services. Wow. What is happening? What is happening?

1:02:41

Okay. I want to start by saying, yeah, I am committed to making a podcast with you for the

1:02:46

rest of my life. But if we ever get overtaken on the charts by an AI generated Webinar TV podcast

1:02:52

that's been trained on people's boring ass Zoom meetings, I am leaving this industry.

1:02:56

Here's why this is such great news. I think a lot of podcasters are struggling with the idea that

1:03:01

maybe their podcast, you know, maybe it didn't have a great episode. Maybe they're wondering,

1:03:04

like, is this thing good enough to put out on the internet? Congratulations. Because every

1:03:08

single human-made podcast is better than every single Webinar TV episode that's ever been released.

1:03:14

Yeah. I mean, I'm just like, these have to be the most boring podcasts ever created. Like,

1:03:19

what are you going to talk about? Is it called Action Items? Is it called Circle Back?

1:03:24

What's the title of this podcast? Touch base. A limited eight-part series.

1:03:30

Actually, I heard there's a great series over on Webinar TV right now. It's called,

1:03:35

oh, I think you're on mute. So you may want to check that one out. All right, stop generating.

1:03:45

Next out of the hat, we have North Korean hackers suspected in Axios software tool breach.

1:03:51

This comes to us from Bloomberg and it's about Axios, not the media company.

1:03:56

I actually would prefer to read a story about this from Axios if you have one on hand.

1:03:59

This is a tool, an open source tool widely used to develop software applications.

1:04:05

This has been a big security breach. Hackers were able to breach one of the few accounts that

1:04:10

can release new versions of Axios late on Monday and publish malicious versions. Axios is

1:04:16

downloaded about 80 million times every week. Anyone who has downloaded the malicious version of Axios

1:04:21

could then have their own computer and the data on it stolen by hackers. This is being attributed

1:04:26

to North Korea. Seems really bad. There's a lot of cyber security incidents we'll talk about where

1:04:32

it's like, no personal data was stolen or nothing sensitive was at risk. This is one where it's like,

1:04:38

no, everything was at risk. This is one of the bad ones. If you've been messing around with NPM

1:04:45

over the past week, you probably need to take a look at this. I think this is going to be one of

1:04:50

the biggest stories of the year, just what is happening in cybersecurity right now. I was watching

1:04:56

this YouTube video. If you ever need something to keep you up at night, watch a talk given by this

1:05:02

guy Nicholas Carlini, who's a security researcher at Anthropic, at a cybersecurity conference recently.

1:05:09

It is like the most terrifying conference speech ever given because what he's basically saying is

1:05:15

these AI tools have gotten better than almost any human hacker, any human security expert at

1:05:22

finding vulnerabilities in tools, even tools that have been around for decades like the Linux

1:05:28

kernel. These language models are now finding bugs in them and basically every piece of code that

1:05:35

exists is going to need to be rewritten and substantially hardened because we are facing like

1:05:40

an onslaught of these very sophisticated AI tools that can find every little bug and problem in

1:05:48

them. Well, I am going to watch that talk just as soon as I'm finished watching Fruit Love Island.

1:05:52
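[For anyone acting on the npm warning above: the standard first step after a supply-chain advisory is to check your lockfile, which records every resolved version including transitive installs, against the compromised releases. A minimal sketch in TypeScript; the version numbers are hypothetical placeholders, since the real advisory would list the exact bad releases.]

```typescript
import { readFileSync } from "node:fs";

// Hypothetical values: substitute the versions named in the actual advisory.
const PACKAGE = "axios";
const COMPROMISED = new Set(["1.2.3", "1.2.4"]);

// package-lock.json (lockfile v2/v3) records every resolved install,
// including transitive ones, under "packages", keyed by node_modules path.
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const hits: string[] = [];

for (const [path, info] of Object.entries<{ version?: string }>(
  lock.packages ?? {}
)) {
  if (
    path.endsWith(`node_modules/${PACKAGE}`) &&
    COMPROMISED.has(info.version ?? "")
  ) {
    hits.push(`${path}@${info.version}`);
  }
}

if (hits.length > 0) {
  console.error(`Found compromised ${PACKAGE} installs:\n${hits.join("\n")}`);
  process.exit(1);
}
console.log(`No compromised ${PACKAGE} versions in the lockfile.`);
```

[From there the usual advice applies: pin to a known-good version, reinstall from a clean lockfile, and rotate any credentials the affected machine held.]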

But, you know, the thing that this brought to mind for me, Kevin, was that last week while we were

1:05:58

away, there was this Anthropic leak where someone found a draft of a blog post that said that

1:06:04

Anthropic was delaying the release of its next model so that it could share it with cyber

1:06:10

defenders, basically. To my knowledge, we have not seen something like this happen since GPT-2

1:06:15

in 2019. One of the big labs saying like essentially we're afraid to release this thing because of

1:06:21

what it might, uh, wrought. What is the present tense? Wreak? Yes. Because of what it might

1:06:28

wreak. That's wreak with the W. Yes. Speaking of reeking, take a shower next week. Hey, I was in

1:06:34

a hurry. All right. Stop generating. Okay. You're up. Okay. So this is actually a two-parter, Kevin.

1:06:46

Two stories about OpenAI recently that caught our attention. One, Sora has shut down, which was

1:06:54

a prediction that I made in our year-end episode. Yes, you called this one. This was my low

1:07:01

confidence prediction for the year and it's already come true by March. And then a second story,

1:07:06

which I think actually, crazily enough, is related: OpenAI has apparently shelved its plans to

1:07:12

release the erotic chatbot, or sort of the adult mode that it said it was going to be

1:07:16

bringing soon to ChatGPT in an effort to boost engagement. So Kevin, dying to know what you made

1:07:21

of those two changes. So I think you were smart to predict the end of Sora. I think the, um,

1:07:29

the story with Sora never quite made sense to me. Like it was obviously a very cool piece of

1:07:33

technology. It was devastatingly expensive to run is my understanding. Like generating all those

1:07:41

short videos was like computationally quite pricey. And so I think they are making the decision to

1:07:48

sort of spread their bets a little less and consolidate around like a few projects. One being

1:07:55

enterprise AI, one being coding and sort of automating AI research. But I think they maybe

1:08:02

made a few too many side bets in the past couple of years that they're now seeing were expensive

1:08:07

and diverted resources away from the core. I have to say I was personally really glad to see

1:08:14

both of these changes. Like, the release of this infinite slop feed app last year, and the

1:08:21

company saying that they were going to release this adult mode while they were still having all

1:08:26

of these issues with like psychological problems that some of their users were experiencing as a

1:08:30

result of getting a little too close to their chatbots. I just thought both of those seemed like

1:08:35

really irresponsible moves and just like contrary to what they said their mission was. So I was

1:08:40

actually just really happy to see them say, you know what, we're not doing any of these things

1:08:43

anymore. Like I think that was the right move. Now did they do that out of the goodness of their

1:08:49

heart and some sort of like moral awakening that they had? No, they saw Anthropic which had started

1:08:54

to print money because Claude code was taking off and they said we want to get a piece of that. But

1:08:58

hey, whatever it took, I'm just glad it's happening. Yeah, stop generating.

1:09:04

Last up in the hat, Kalshi announces itself as the safe, regulated prediction market in a new ad

1:09:11

campaign. Kalshi has recently been putting up green ads around DC and I've actually seen them in

1:09:17

San Francisco. The first one says rule number one, Kalshi bans insider trading. The second one

1:09:23

says rule number two, we don't do death markets. Casey, you're up. Rule number three: we'll always

1:09:30

shoot you in the front, never in the back. Who are these people? What? Like these ads are raising a lot

1:09:36

of questions already answered by the ads. Truly, it's just so funny to me. Like, you

1:09:43

know, I went to this prediction markets conference like several years ago. I think you were going to

1:09:48

bring this up. But go ahead. And like people from Kalshi were there, people from Polymarket were

1:09:53

there, people from all these, you know, obscure prediction markets. And it was like 50 people

1:09:58

who were interested in this stuff. And it wasn't legal at the time. And so they were

1:10:03

all using like sort of play money and like workarounds. And there

1:10:08

was no part of me that was like, in three years, this will be the dominant industry in America. And they

1:10:14

will be taking out bus ads to tell people that they don't do death markets. I know. But at the same

1:10:19

time, I keep reading all of these like stories and blog posts that are like, you know, why is this

1:10:24

generation turning to prediction markets? Is this like really the only future they see for themselves?

1:10:28

It's like, no, they used to be illegal. And now they're legal. People love to gamble if you let

1:10:34

them. You are now letting them gamble. So that's why they've hooked this younger generation. Yeah,

1:10:38

you don't think it's because of the information-harnessing potential of the wisdom of the crowds?

1:10:42

I really, I'm still waiting for the wisdom of the crowds on a Kalshi market to improve my life.

1:10:46

Yeah. Well, you're not going to find it when it comes to death or insider trading.

1:10:50

Kalshi rule number four: gambling is bad. That's the ad I dare them to put up.

1:10:59

Let's close the hat. Casey. What's up, yellow hat? That was Hat GPT. What?

1:11:03

Lot going on. Lot going on. Busy week. Busy week. Never a dull day here in Silicon Valley. How so?

1:11:19

Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Viren Povitch. We're

1:11:25

fact checked by Caitlin Love. Today's show is engineered by Chris Wood. Our executive producer is

1:11:30

Jen Poyant. Original music by Elisheba Ittoop, Marion Lozano and Dan Powell. Video production by

1:11:38

Sawyer Roque, Jake Nicol and Chris Schott. You can watch this whole episode on YouTube at youtube.com

1:11:44

slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam and Dahlia Haddad. You can email us at hardfork

1:11:52

at nytimes.com with who you're rooting for to win Fruit Love Island. I've got my eyes on the kiwi.