The Future of Addictive Design + Going Deep at DeepMind + HatGPT
2026-04-03 11:00:00 • 1:09:26
Dell PCs with Intel inside are built for the moments you plan and the ones you don't.
There for those all-night study sessions, the moment you're working from a cafe and realize
every outlet's taken, the times you're deep in your flow and can't be interrupted by an auto
update. That's why Dell builds tech that adapts to you. Built with long-lasting battery, so you're
not scrambling for an outlet and built-in intelligence that makes updates around your schedule, not in the
middle of it. Find technology built for the way you work at Dell.com slash Dell PCs built for you.
Now, here was a really interesting situation, Kevin. Did you see this robotaxi outage that left
passengers stranded on highways in China? No. So this happened in Wuhan recently. I've heard of
that place before. Did they do anything else? It's not clear to me. I'm not really familiar with
their game, but apparently there was some sort of technical glitch that caused a number of robot
taxis owned by the Chinese tech giant Baidu to freeze, trapping some passengers in their vehicles
for more than an hour. And I just thought, my gosh, what a nightmare. Just imagine you're in your
robotaxi on the way to a wet market in Wuhan. You have an appointment with a pangolin who's going to
see if they can transmit anything to you. And then your robotaxi freezes.
It's a nightmare. It's an absolute nightmare. I think that robotaxi outage is definitely the
worst thing that's ever come out of Wuhan. Yeah. When it comes to these Baidu robotaxis, my advice:
bye, don't. Oh, boy. No, that was the worst thing to come out of Wuhan.
I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer.
This is Hard Fork. This week: social media companies keep losing in court. How will that
reshape the internet? Then, The Infinity Machine author Sebastian Mallaby joins us to discuss his
new book on Google DeepMind and Demis Hassabis' quest to build superintelligence. Finally,
it's been a while. Let's catch up with some HatGPT. I missed you.
You too.
Well, Kevin, while we were away, I was riveted by what was going on in the courtrooms in
Los Angeles and New Mexico related to social media. Yeah, it has been a big week for these social media
product liability trials that have been going on now for some months. And we actually got some
verdicts. We did. And in both cases, social media lost. In LA, a jury found that Meta and YouTube
had been negligent in the way that they designed features that they said were harmful to this plaintiff.
They have to pay $6 million combined to this plaintiff. And then in New Mexico, the jury said,
we believe that Meta has violated the state's unfair practices act and has misled consumers about
the safety of its products and has endangered children. In that case, they are ordering
Meta to pay $375 million. Yeah, so we've talked a little bit about this series of cases against
the social media companies. Social media companies, they get sued all the time for all manner of
different things. I think what caught our eye, and specifically your eye, was the sort of legal theory
underlying these cases. So talk a little bit about that and what makes this case different from
other cases that have been brought against the social media companies. Yes, so I would say there
are kind of two big reasons why these cases are super important. One is that these are what are
called bellwether cases. Kevin, you ever heard of a bellwether case? These are like cases that
set precedent for other cases, yeah? Exactly. These are the cases that, if successful, are going to
open the floodgates for lots of other people to sue under the same theory. The second big reason
that these cases are really important is that they appear to have opened up a crack in section 230
of the Communications Decency Act, which for 30 years has been essentially the foundation
that the entire internet rests on. It's also a dentist's favorite statute. Yes, that's Section
Tooth-Hurty, if the joke wasn't landing for you. So yes, this is a super important... I'm super
glad you got that. No, the really sad part was I was planning my own Section 230 joke.
Oh? Because I just went to the dentist yesterday. No, I didn't have any cavities. So tooth did
not hurty. Moving on. So Section 230, Kevin, you may remember, is the law that says that in most cases,
these platforms cannot be held liable for what their users post. Yes. So if I went on Facebook and
I defamed you, which is something I think about doing every day, you could sue me, but you couldn't
sue Facebook. This is what's been blocking my lawsuits against Facebook over your posts for years.
That's right. And back in the day, like 30 years ago, this was actually really important because
there were these small internet forums that were starting up. Some of them got to be bigger size,
you know, CompuServe, AOL. And inevitably somebody would be mean to another user and they would say,
I'm not just suing you. I'm suing CompuServe. I'm suing AOL. I'm putting the whole system on trial.
And a couple of lawmakers got together and they said, this is going to destroy the entire internet.
Like we need for there to be forums and not have these platforms being held liable for all these
things. But fast forward to today, and Kevin, would you agree that maybe there are some harms that
are taking place on the internet that do not consist entirely of people defaming one another on
CompuServe? Yes. Yeah. And so this is essentially the question that gets asked in this case, right?
People say, Hey, it seems like we're a pretty long way away from 1996. I'm opening up TikTok.
I'm opening up Snapchat. And I'm seeing infinite scrolling feeds. I'm seeing auto playing videos.
I'm a teenager, but I'm getting barraged by push notifications in the middle of the night. And
that's to say nothing of the recommendation algorithms that might be driving me toward content
related to eating disorders or other things that are going to make me sad and upset. And so some
of these people get together with their attorneys. And they say, this actually feels different from
the thing that section 230 was designed to protect, right? This is not about, oh, I got harmed by
this particular piece of content. This is about the design of the whole platform, the design feels
defective. And the really crazy thing about these cases, Kevin, is that juries agreed with these
plaintiffs for the first time. And they said, we like this theory. We think these products are
defective. Right. So this is kind of a side door that these lawyers have found around litigating
on Section 230, and they have now successfully shown that, at least in these cases, they can convince
a jury that it is not about what's on the social network content-wise. It's about the actual,
like sort of mechanics and plumbing of the social network that are harmful to people. That's right.
And we should say that we do expect some appeals here and until those are sort of fully exhausted,
I can't tell you for certain, this is the moment that the internet changed forever. But there's
been a lot of commentary over the last week about what it would mean if these cases were upheld
because it seems like juries are just going to be really, really sympathetic to these claims.
So before we get into the implications, like can I just ask a couple more questions about these
actual specific cases? Please. So what are the actual platform mechanics that are being litigated
over here? Yes. So in the LA case, among the design features that were at issue were the
so-called beauty filters that can make you look quote unquote more beautiful if you use them,
infinite scroll, auto play video, these barrages of push notifications that platforms send.
And also I would argue more problematically the recommendation algorithms that power the
platform. And then in the New Mexico case, that was much more about kind of child safety. So
they were arguing that Instagram in particular had become this playground for predators. It was very
critical of the fact that Meta offers end-to-end encrypted messaging. And the basic idea was that
Meta falsely advertised that these platforms were safe when in reality children are being harmed
there all the time. So from what I understand, it was like the case was basically
taken out of the playbook for going against Big Tobacco or another sort of industry that makes
harmful products. You say this is harmful and not only is it harmful, but the company that was
making it knew that it was harmful and either made it more harmful or just released it as planned.
Anyway, I did see some sort of exhibits that had been shown off at the LA trial, I believe,
where some employees at Meta were sort of talking on their internal forums about how this stuff is
so addictive for kids. That seems bad. And I imagine that was persuasive with the jury. But are there
other instances where the platforms are being sort of taken to court over things that they sort of
knew were harming people and that they either dialed up the harm in an attempt to spike engagement or
sort of knowingly released these things to the public? Yeah, so I mean, some of this research has come
up in other litigation over the years, but I think this has been probably the most damaging case
that we have seen. You know, the first time I remember reading a lot of these internal studies
was in the wake of the Frances Haugen revelations a few years back, right? Like Frances Haugen walks out
the door of Meta and takes a bunch of this internal research with her, winds up sharing it with the
Wall Street Journal and then eventually a bunch of other reporters, including me. The reason that
the research mattered a lot here, though, Kevin was again, the plaintiffs are now building this
very specific case, which is you're building a defective product, right? Before the past couple of
years, we weren't really using this language. We weren't really adopting this sort of public health
framing of a way to discuss the harms of social media. Before then, it was just kind of this more
nebulous like, hmm, like they're studying the effect of Instagram on teen girls and it seems like
some of these girls are having really bad outcomes, but we didn't really have the framing. Well,
now we have the framing and we're just saying like, hey, you looked into it. You found that some
subset of your users are having really bad experiences, and you did not change the features that
mattered. Well, let's talk about the changes. So what would you expect a platform like Instagram
or Facebook or YouTube to change in the wake of these jury verdicts or are they just going to
wait till it all shakes out on appeal? I honestly don't know the answer to that question,
and I think it's a really interesting thing to watch. The question that you just asked is really,
really controversial actually because much of what these platforms do is just protected under
the first amendment. And then section 230 also protects a lot of speech, right? And the big debate
that's like raging in the internet policy community right now is can you separate design from
content? I want to get your thoughts about this. Right. Is it like the container or is it the
stuff in the container that is dangerous? Yeah. And there are some people who are saying that no,
you cannot make that distinction and that effectively all design is content, right? Like if I want
to send you a push notification, that is my right under the first amendment and you cannot tell me
that I cannot do that. You cannot tell me that there is a certain limit that I have to place on
the depth that you can scroll on Instagram; that is protected. But for what it's worth,
juries are taking the opposite view. They're saying that there are at least some things that seem
like they are just clear mechanical design features, and I happen to agree with them. So let's talk about
this because I think this is maybe a place where you and I disagree or at least where I have some
misgivings about this theory. So in the case of something like cigarettes, which is a very heavily
litigated field that I think a lot of this social media litigation has been modeled after.
There's like an addictive ingredient, right? Nicotine. Everything that you put nicotine in
becomes more addictive as a result of having nicotine in it. This happens with cigarettes,
it happens with vapes, it happens with nicotine pouches. If you started putting nicotine in ice cream,
ice cream sales would go up because nicotine is very addictive. I think the question I have about
the mechanical addictiveness of these features like infinite scroll, like auto play recommendations
is that if it followed the same principle as nicotine, then every product that has those would
become way more popular. And one example I've been thinking about on this is Sora. They sort of
took the playbook that was working for TikTok and Instagram and they put it onto a new app.
And the app did not succeed, right? There are other apps that have tried to mimic things like
the newsfeed that have tried to mimic things like auto play video or recommendation algorithms
that have not taken off. And so I guess the question in my mind is like if the litigation over
social media is modeled after the litigation over Big Tobacco, shouldn't there be like some industry
wide lift as a result of every platform trying to borrow the most addictive features of Facebook
and Instagram and YouTube? I mean, I hear what you're saying and I think it's an interesting point,
but I think that internet platforms just work differently than cigarettes, right? Like,
because you're right, like with nicotine, like nicotine is just addictive. Now there are people
that smoke cigarettes without getting addicted to them, right? But probably the majority of people do.
Social media platforms are an imperfect analog to those cigarettes. I believe that platforms need
to be of a certain scale in order for them to be truly addictive in the way that these
plaintiffs are now suing about, right? There's something about the fact that there's hundreds of
millions of people on Instagram and on TikTok creating content that creates that kind of infinite
supply of things that you might potentially want to watch that is actually able to... But I
am talking about the stuff in the container, right? Well, I think that there are many ingredients
that all work together, right? But you're raising a criticism that people are making of this lawsuit.
Like, effectively, what I hear you saying is you cannot distinguish between the content and...
I'm not sure. I mean, I think I'm open to being persuaded that you can, but to my mind,
it's like one lesson that you could take from this is that it is very bad to be a popular platform
that engages these mechanics to keep users coming back. But it's okay to be an obscure platform
that does it because that's not going to have as much harm. So what's really sort of at issue here
is the fact that these platforms are very, very good and very, very popular at doing the thing
that everyone else is trying to copy. Yes. And this is the approach that Europe has taken to
regulating these platforms, right? They have certain categories. And if you are a very large online
platform, then you just have more responsibility. That makes intuitive sense to me. I think that
the bigger and richer and more powerful you are, the more responsibility that you have to society,
right? And so in this particular case, you have companies like Meta, which we know are hiring
cognitive scientists who are working very hard to figure out all the different ways that they can
hack your brain to get you to look at Instagram for as long as they possibly can. It is in their
interest to get you to look at Instagram as long as they possibly can. And right now, there's
just no brake on that at all in our society, except for this litigation. So I'm so sympathetic to
these juries that are looking around. They're seeing this, you know, almost completely unregulated
platform and they're saying something's got to be done. Yeah. So regardless of sort of what our
thoughts on the overall legal theory here are, like, what do you think the effects are on
the platforms? If this does get held up on appeal, if these platforms are found liable for millions
or potentially billions of dollars in damages against all of these people who claim that they were
harmed by social media, does that mean that they have to, I don't know, go back to like the
reverse chronological feed of 2008? Does that mean they have to shut off, you know, infinite
scroll and auto play and recommendations and all these other things? This is where it gets really
tricky. And this is like maybe the one narrow way in which I'm sympathetic to the platforms,
which is, okay, the juries have said your product is defective. What juries have not said is,
here's what an okay product looks like, right? They're saying we don't like this sort of set of
features, but they're not saying with any specificity like, well, how do we think that these features
are interacting? What is your actual model of the harm here? And so there is a world where the
platforms feel like they have to comply and they maybe start picking off some of these features one
by one, like, okay, if you're like under 16, we'll disable infinite scroll, for example. How much
benefit does that really have to like the individual teenager who may be struggling? I don't know.
This of course is why it would be great if Congress could pass some sort of law regulating this,
but we're now like, I don't know a decade into that project and still not getting very far.
Yeah, I mean, I think one prediction about how this will change platforms and their behavior is
that if you start talking about gambling or addictiveness in an internal Meta chat room,
you just immediately get fired. There's just like a little button on your seat that gets pressed
and you get ejected out of the building. It's like, because so much of the incriminating evidence here
just comes from people like spouting off in work chat rooms about like, oh, it really seems like
this thing we're doing is dangerous. And like, I have to imagine that if it hasn't happened already,
they're just going to absolutely crack down on that kind of internal discussion.
Absolutely. Well, so I want to hear a little bit more about how you think about this,
because you've talked on this show many times about your own struggles to look at your phone less.
This is an issue that, you know, at various times you feel like has plagued you. So how are you
feeling about the addictiveness of these platforms? Like, do you buy the sort of public health framing
for the way that people are talking about them these days? Or do you think that this is overreach?
So I need to do some more thinking about the product harm arguments here and whether it makes
sense to me. I am basically on board with the idea that there should be age gating for social media.
I am sold on the premise that there is a certain age, whether it's 16 or 18 or 14,
where sort of the most harmful effects taper off. And I think before that age, it makes total
sense to age gate or at least give parents a lot more control over what their kids are able to
do and not on these platforms. I think the addictiveness question is just hard for me because
I feel like my sort of macro theory on all this stuff is that what is happening to social media
over time is that the social part is fading away and the media part is rising in the mix.
And so I think that if you start treating the design and mechanical decisions of these media
platforms as harmful under the law, it just sort of leads me into a place where I become much less
certain. Like before any of this existed, there were cliffhangers on TV shows that were designed
to keep you coming back after the commercial break or to the next week's episode or whatever.
Those were arguably addictive features. They would keep people coming back. Is that illegal? I would
say probably it shouldn't be and it's not. So I think there is a certain sense in which the
closer social media moves to something like TV or streaming video, the blurrier the lines
in my mind get between the content and the mechanics. What are your thoughts on that?
Well, I have to disagree. I do think cliffhangers should be illegal because I want to know what
happened. I don't want to have to wait till the fall to find out if that person is still alive.
But also I do think that there are some really important differences between like let's say
YouTube and HBO Max. HBO Max is not going to modify the content of HBO to your individual
preferences. They're going to go pay some money for a bunch of shows and they're going to hope
a bunch of people watch them. What the platforms that we're talking about are doing is going to be
very different. They're looking across the entire corpus of every video that's ever been
uploaded to their platform and they're trying to figure out what will keep you personally
there the longest, and they're going to show you that as much as they can. So I just do think that there's
a kind of categorical difference here. And while I do think people should have broad freedom to
you know, look at whatever they want. I do think that at a minimum we should probably place an
age gate on it for the same reason that we don't let 14 year olds walk into bars.
Right. Unless they're really cool and have a fake ID.
So talk about the encryption piece because you wrote a lot about this in your newsletter that I
didn't quite understand. But what is the encryption debate that's part of these losses?
Yeah. So you know, here I understand that I'm coming across as being broadly supportive of these
jury verdicts, which I am, but I do want to acknowledge like this could lead to some really bad
places. Like, and that's why we need to handle Section 230 with care. In the New Mexico case,
the attorney general argues that a reason that Meta should be considered liable for advertising
their platform as being safe for children is that it includes encrypted messaging. Right. In fact,
Meta in March announced that they would discontinue encrypted messaging on Instagram,
in what I believe was an effort to sort of get ahead of this. What they said was, look,
if you want to use encrypted messaging, you can use WhatsApp instead. But to me, this would be
like just a legitimately horrible outcome of all of this: if every company that now
offers encrypted messaging either voluntarily decided to stop offering it or was pressured by
the government to stop offering it because in my view, encryption is a necessary part of privacy
in a world where people are mostly communicating online. Right. Are you comfortable with all this
happening in the courts through jury verdicts? This is not my preferred way of addressing this,
but I think it was inevitable. In part because the tech companies have been so obstinate about making
meaningful changes to their platforms, right. Like societies across the world have been begging
these companies for a decade, please do something to make these platforms safer and to make them
less addictive and to reduce some of the harms. And instead, what we've mostly seen is a series of
engagement hacks designed to get people to look at them longer. Right. And in the United States,
where you cannot regulate the content of any of these apps for the most part, you're
really only left with the design, right. You're really only left with just the raw mechanics of
the app. So if the social media platforms are upset about the verdict here, I truly believe they
brought this on themselves. And you asked me about my own experience of screen addiction. And I've
never been sort of a total screen addict, but I've struggled like I think many, many other people have
with like how much I'm using my phone, how much I'm using various apps. I have come up with
convoluted ways of trying to reduce my screen time. You once were six hours late to a Hard Fork
taping because you wanted to find out what happened to Chimpanzini Bananini
on TikTok. I thought we agreed to keep that private. But like never in all my struggles with
screen time, have I thought to sue the companies that were making the apps that went on my phone.
And I guess it's different when you're talking about kids, but like there is some part of me that
just feels like, well, it just feels like an easy way out. You know, blame the platforms. And look,
I think these platforms absolutely have culpability here. I am not saying that I disagree with these
jury verdicts. I think that these platforms, especially Meta, have done the research, have found
the harms and then have shielded them from the public. But I just, I guess I'm thinking about my
own experience of these addictive platforms being one of like feeling bad about myself
rather than trying to find someone else to blame. Yes, but you also had the benefit of
beginning to use these platforms when you are already an adult, right? Like your hippocampus
was formed. And I think I was on instant messenger from a very early age. Do you really think that
messaging apps are as addictive and harmful in the same way as TikTok or Instagram? Oh my God,
take me back to 1999, put me on AOL instant messenger. I could not tear myself away from that thing.
I had to put up a little away message with, you know, Get Up Kids lyrics on it every time I left the
computer because it was such a rare event. And I wanted my friends to know that I was away from
keyboard. Casey, these things were addictive. The kid got up. It's a Get Up Kids joke.
Yeah, let, look, I just think that like messaging apps are different from like these,
these social platforms. And I think, you know, honestly, like I will be curious, you know,
who knows if Instagram and TikTok will still be what they are in like 10 years, maybe when your
son is ready or wants to use social media. But I just think that it probably just feels very
different than when you're a parent. Yeah. Well, Casey, are there any new social media apps that
you're addicted to? It's called Claude. And, wait, I do want to talk about the AI of it all.
So obviously every discussion on this show has to come back to AI at some point. So I'm curious
like what effects you think this might have on some of these AI companies because they are also
trying to create experiences that are engaging addictive, whatever you want to call it. I can imagine
some of these, you know, lawsuits that are being brought against the makers of chatbots for harms.
Like it all feels like it's sort of going to converge at some point. So what's your take on that?
Yeah. So Pew did a study in 2025 and found that 64% of teens now use AI chatbots, about 3 in 10
use them daily. That same survey said that the teen use of YouTube, TikTok, and
Snapchat had remained relatively stable, right? So yes, chatbot usage is growing.
It has not yet come at the expense of the social platforms. Although, of course, I expect that
we'll soon see chatbots inside all of those platforms, right? And like these things will all just
kind of merge together. There's something about these things where they do kind of go hand in hand
and to your point, like I think that yes, AI chatbots will be the next frontier of this debate because
in many ways they're much more engaging and I think like will be stickier than even these platforms are.
Yeah. I mean, it just seems so obvious to me that the platforms should be like absolutely begging
Congress to regulate them because the alternative is like they just get sued into oblivion by a bunch
of law firms. I mean, absolutely. If I were running one of the big AI labs, I would want to have
an understanding from Congress of like what do you consider a safe chatbot? Give me a checklist
that I can follow because I don't want to have to be dealing with this in the next few years.
Yeah. Casey, what's an addictive engagement mechanism we could use to get people to come back
after the break? Well, we could study their behavior and weaponize it against them.
Good idea. When we come back, Sebastian Mallaby, author of the new book, The Infinity Machine,
joins us to talk about Demis Hassabis, Google DeepMind, and the quest for superintelligence.
Framer is a website builder that turns .coms from a formality into a tool for growth.
Whether you want to launch a new site, test a few landing pages, or migrate your full .com,
Framer has programs for startups, scale ups, and large enterprises to make going from idea to
live site as easy and fast as possible. Learn how you can get more out of your .com from a Framer
specialist. Or get started building for free today at framer.com slash hard fork for 30% off of a
Framer Pro annual plan. Rules and restrictions may apply. The thing about AI for business:
it may not automatically fit the way your business works. At IBM, we've seen this first hand.
But by embedding AI across HR, IT, and procurement processes, we've reduced costs by millions,
slashed repetitive tasks, and freed thousands of hours for strategic work.
Now we're helping companies get smarter by putting AI where it actually pays off: deep in the work
that moves the business. Let's create smarter business. IBM. I'm Robin and I am excited to
open my crossplay app. I'm challenging John, my colleague at the New York Times. Robin played the
word grunge which has a G which is four points. She got that triple word multiplier. I'm going to
take facts and make it faxes for 30 points. I might just take another two letter word here with
woe gets me at 23. I think this will put me back in the lead if my math is mathing. I like to play
it more from a strategic point of view and see where I can block the other player from scoring high.
I'm pretty competitive. It's fun to beat friends and co-workers and also get to learn new words.
Crossplay. The first two player word game from New York Times games. Download it for free today.
I think he thinks he has us in the bag but I'm not so sure.
Well Casey, if our listeners read one book about AI this year, it should be mine.
But if they read two books, the second one should be Sebastian Mallaby's new book, The Infinity
Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. Tell us about this book,
Kevin. This book came out this week. It is full of a bunch of new anecdotes and stories about
the work of DeepMind and the motivations that drive its CEO, Demis Hassabis. Sebastian is a longtime
journalist. He's a fellow at the Council on Foreign Relations and he's spent a long time with
Demis and the people close to him and brought us this book about what I think is the AI frontier lab
that gets the least coverage relative to its importance. Yeah, and look. Demis Hassabis is a
singular figure. He's been on Hard Fork several times but Sebastian went really, really deep and
And before we bring him in, because we're going to talk about AI, let's make our disclosures.
I work for The New York Times, which is suing OpenAI and Microsoft, and Perplexity.
And my fiance works for Anthropic.
Sebastian Mallaby, welcome to Hard Fork. Great to be with you. So people who listen to our show are
familiar with Demis Hassabis and DeepMind. He's been on several times. What is something non-obvious
about Demis that you learned through talking with him through many hours and interviewing many people
who know him? I mean, I think maybe the spiritual underpinning for his scientific curiosity is interesting.
You know, there was one time when we were sitting in this London park and talking for a couple of
hours and he suddenly started saying, you know, when I'm up at two in the morning at my desk by myself
thinking about science, thinking about computer science, I feel reality is screaming at me,
staring me in the face, waiting for me to explain it. And he calls it the God of Spinoza. This is
the 17th century philosopher, Spinoza, who said that to understand nature is to get closer to God's
creation. And that resonates with Demis. Maybe that's something people don't know. That's interesting.
I mean, yeah, this has been something that's come up in my own research too is that, you know,
he grew up going to church, I believe with his mother. And I think unlike a lot of the other AI
leaders has a way of sort of fusing the science of AI with his own spiritual beliefs. And I know
some folks have seen his ambition and his many years of competing to build AGI, and have seen
something suspicious in that, right? Elon Musk has this whole theory about how Demis secretly wants
to be an evil AI dictator who takes over the world. And I guess I'm curious if in any of your
reporting with him, you ever saw something that seemed like what Elon Musk was talking about.
No, I mean, to the contrary, I think this idea that Demis is a, quote, evil genius, which is
the phrase that Elon used to use, came from the fact that in his video game production days,
Demis had created a game called Evil Genius. And so maybe it was a joke at first, but, you know,
really, I got to know Demis extremely well. I spent more than 30 hours with him. You stress
test people quite deeply, as you know, Kevin, when you're writing about them, and then you might get
pushback and legal threats and all that stuff. And he did make me talk to his lawyer once.
And it wasn't totally easy the whole time, but he was reasonable in the end. And I,
wait, why did he make you talk to his lawyer? Yeah. He was very mad at the fact that I unearthed
the whole story about DeepMind trying to spin out of Google between 2016 and 2019. And, you know,
they retained a whole bunch of advisors, lawyers, bankers, et cetera. They got Reid Hoffman to pledge
a billion dollars to finance the spin out. They went to see Joe Tsai in Hong Kong, the Alibaba
co-founder. Anyway, so the lawyer was not amused that I had all these internal documents from
inside DeepMind, which had been leaked to me, the board presentation that DeepMind gave to Google
and so forth. And he said, you're not supposed to be writing about this. And I said, well, you know,
people gave me this stuff and tough. So there were moments of free and frank discussion.
I have always believed that when a source gives you secret documents, it helps you get closer
to God's creation. That's what I would have told him. I wanted to ask another question about
childhood, because Demis told you that he really identified with the boy genius protagonist
of the novel Ender's Game, and relating to this feeling of being socially isolated by his
own talent and consumed by a desire to make his mark on the universe. And the reason it struck me
is that in this novel, Ender believes that he's doing training exercises, but then what he thinks
is like a test, essentially a video game, accidentally wipes out an alien species. So I wondered if you
talked with him about like why he relates to that story. And in particular, if there's any relation
to that and the idea of maybe trying to build a super intelligence. Well, I was astonished. You know,
this was before my first dinner with him. And it was certainly kind of the vetting process. It was
the last part of the vetting process where he agreed to give me the access I needed. And he said,
you know, you got to read this novel before you come and see me. And so I show up. I've read this
story about a diminutive boy genius who basically saves humanity from aliens. And I'm thinking, does
he really see himself as saving humanity by doing what he's doing with AI? And even if he thinks that
why would he be so crazy as to tell me? I mean, surely that's hubristic beyond belief. Why would you
put that out there? And you know, he made no secret about it. He said, yeah, you know, I feel like
I identify because this guy put all of his energy and his life into saving humanity. And I feel
like I'm on a mission like that. And he said, I felt so strongly about this. I gave it to my wife
to read it, thinking that she would understand me better and sympathize with me. And you know what?
She sympathized with the kid ender, but not with me. That's not fair.
Yeah. I mean, one other character trait that comes up over and over again in reporting about
Demis and especially in your book is how competitive he is. This is a guy who loves to win. You know,
he was a child chess prodigy and he won this thing called the Pentamind, you know, five times,
which is sort of like an all-around gaming competition. Do you think that is part of his approach
to AI? I mean, he's always talking about how he wants to use this to solve scientific mysteries
and cure diseases, but is some part of it just like this guy loves to win. And this is a really
big contest. Totally. I mean, that's exactly right. I remember going to see him, you know, when
ChatGPT was just going viral. And he said, you know, Sebastian, this is war. These guys at OpenAI,
they've parked the tanks in my front yard. He actually said, parked the tanks on my lawn, because he's
English, but yeah, you get it. You bring up the release of ChatGPT, which happens in November,
2022. And I'd love to hear a little bit more about how Demis had reacted to that because I think
before that happened, Google really thought they were comfortably in the lead and did not seem to
be feeling a lot of pressure to release anything. So I'm particularly interested if in hindsight,
Demis has regrets about the fact that they sort of let Sam Altman beat them to the punch.
Yeah, I mean, he has an explanation more than a regret. And the explanation is super interesting.
It's basically that because he studied neuroscience for his PhD, and you've got to remember,
this is back in 2008, 2009. So nothing worked in AI. So he was starting from scratch. And one of the
ideas in neuroscience is called action in perception. And this is the idea that to really be
intelligent, you have to take action in the world. You don't know what it means for something to be
heavy unless you pick it up. You don't know what gravity is unless you actually drop something.
And so he had this idea, when the transformer paper came out in 2017 and OpenAI was starting to do
the first GPT in 2018, second one in 2019, and so forth, that, you know, that's not going to work.
It's not going to take you all the way to powerful intelligence because language is just a system
of symbols. It's not grounded in the real world. And it's not that he was wrong in the sense that
now we see world models come back in 2026 as a big area of excitement and research. But back in
2018, 2019, he was missing the fact that a huge amount of knowledge about how the real world works
is in fact in language, if you download all the language on the internet. And he missed
how much you could squeeze out of language as a training set. Yeah, I mean, I want to run a theory
by you, Sebastian, for your take. As I've been working on my own book about this sort of
period at Google and at OpenAI and at DeepMind, it strikes me that there are sort of like two visions
of what intelligence is that these companies disagree on. And in one vision, it's like intelligence is
about winning. It's about optimization. It's about a contest between rival intelligences. And that's
very much like the DeepMind sort of reinforcement learning paradigm, which is like AlphaGo and you
know, you play a board game a bunch of times and you get better at it a little more every time.
And then there's this other view, which is sort of the more OpenAI sort of language model scaling
paradigm, which is like, no, it's about answering questions like being very smart is about
having the right answer to everything. Does that theory hold water with you that there's something
like psychological about these two approaches to AI development that actually are rooted in like
what we think intelligence actually is? Yeah, I would say that the DeepMind special sauce right
from the beginning was to try to put those two things together. It's interesting, for example, that
with AlphaGo, Ilya Sutskever contributed to the early research on that. And of course, he was,
you know, the sort of leading practitioner of deep learning who went on to be OpenAI's chief scientist.
But at the time, he was working for Google because Google had acquired his company. And so
the reinforcement learning people in London working for DeepMind collaborated with the deep
learning people in Mountain View. And that's what produced the AlphaGo breakthrough. So I think
I think you're right. There are these two strands within AI of reinforcement learning, which I
would describe as learning through experience interaction with the real world through trial and
error. And on the other hand, learning through data, and that is the deep learning. And for humans,
you could think of it as being, you know, you can go to the library and read all the books,
and that would be deep learning. You're learning from data, from sort of crystallized human knowledge,
or you can go out there in the real world and learn about stuff by planting your garden and
whatever, you know. Actually, you can be like Casey, who's never read a book. I'm going to get
around to it. And learns by trial and error. Yeah. So those are sort of the two approaches here.
You mentioned earlier this, I don't know if it's fair to call it a plot. It sort of seems like a plot
that they had at one point after they had gotten acquired by Google to try to spin themselves out.
I believe they called this Project Mario. I would love to hear a little bit more about how that came
about and why they didn't go through with it. So what happened was that when they sold DeepMind
to Google in 2014, they had a rival offer from Facebook and Facebook actually offered them more cash.
And one of the reasons they said no was that they wanted safety protections around their
technology. And so they had this deal. There was going to be a safety and ethics board.
And Google promised that and they went ahead and sold to Google. And they had a first meeting of
the safety and ethics board in 2015 after the acquisition. And in order to like bind in the other
people in the space, they got Elon Musk to host the whole safety and ethics board at SpaceX.
They got Reid Hoffman to show up. And you will notice that these are the characters who either
founded OpenAI or funded it, those two. So Google wasn't best pleased, as you can imagine.
I have to say that doesn't seem like a very ethical thing to do. You know, maybe not the people I
would have put on my ethics board. These are the characters. But it's a dichotomy, right?
A dilemma. I mean, either you put people on the board who don't know what they're talking about and
they're not interested in AI, or they do know about AI, in which case they're going to do their
own thing because it's too exciting not to. And a fundamental mistake that Demis made in his early
conceptualization of how AI would be developed was this notion that there would be one single lab
producing AI on behalf of all humanity. And therefore it could be safe because there'd be no race
dynamic. And you could take your time and sort of red team the models before you release them.
And that's why he brought Musk into the tent. That's why he brought Reid Hoffman into the tent.
Precisely because he thought we could all be one team together. And so then what happened after
to answer your question, Casey? So what happened after was that, having lost that first experiment
in setting up a safety and ethics oversight board, Google didn't want to do another one. And
really DeepMind's Project Mario was to try and force them to do more by threatening to
walk out if they didn't. Why did they call it Project Mario? Was that about the video game?
Good question. I don't know the answer. Sorry.
I failed to ask that. It's much better than the alternative project Wario they were working on,
which was just the evil version of that. So how does Google get them to abandon this plan?
You know, it's attrition. Sundar Pichai, his personality and his management style,
comes out quite interestingly in this whole story because, you know, right at the beginning in
2015 when, you know, the first safety and ethics oversight board fails, the next idea that
Demis has for how to get some independence and control of the technology is to become a bet, as
in an Alphabet bet, when they were spinning out Waymo and some of the other side bets they had. And
Larry Page was cool with this and he was CEO at the time. But then right as these discussions were
going on, he handed over to Sundar. And Sundar kind of pretended to say, oh yeah, absolutely great idea,
we should look into it. But really he was just spinning them along and had no intention whatsoever
of letting Demis spin out because he recognized him as the AI talent that Google was going to need
in the future. And so essentially there was this long drawn-out process, you know, delays here and we
should just look at some more details, and here's another term sheet. And I was given some of these
term sheets. They're like huge great documents with red lines all over them where, you know, one
team of lawyers had come back to the other team of lawyers. And you know, basically by 2019,
everybody was exhausted. It all fizzled out and they just moved on.
There's been a lot of sort of jostling for independence within DeepMind ever since the earliest
negotiations about selling to Google. Give us some update on how things are going with them now.
Like, you know, when we talk to them, they present things as being, you know, fairly like hunky
dory between everyone, but are there still kind of tensions and fault lines between Google and
DeepMind? Well, you know, I'll give you sort of what I would regard as somewhere between
probably true and unconfirmed rumor. Is that all right? Am I allowed to do that?
Oh, please. We love, we love gossip on this podcast. Give us the spilt tea.
So I'd say that, you know, Sergey Brin is the troublemaker here. At
one of the Google I/Os, I guess it was a couple of years ago, the stage was set up for two people
to be on it. There was the interviewer and there was Demis. And suddenly, Sergey kind of runs onto
the stage. They have to get a third chair. And then he kind of inserts himself into that conversation.
And what I hear is that that was the outward symptom of a much deeper tension, where Sergey
doesn't really like Demis' leadership on this and wants to push back against it.
And I think it follows from that that the single most important business
buddy act in all of capitalism today is the one between Sundar Pichai and Demis Hassabis. Because Sundar
manages the board, manages the sort of high politics of Google and Alphabet, so that Demis has
the space, the resources, the oxygen to go do his science. And without Sundar holding that
all together, we might be in a different place.
One area where Demis has changed his mind is about the use of AI in the military. This was a big
sticking point in the negotiations with Google and Facebook back when they were selling DeepMind.
He didn't want their technology to be used for the military. Now, obviously, Google DeepMind
has one of these Pentagon contracts. They're working with the military. So what do you attribute
that shift in his thinking to? Is it just kind of the realities of the market or needing to compete
or what is it? Yeah, I mean, Demis described this to me as, you know, you mature. You get to know the
real world and all that. One might say, how come you weren't mature when you sold the company
in the first place? I mean, surely it was predictable. But I think that the real truth of the matter
is he did not predict it. I mean, it comes back to this singleton idea, which I mentioned before.
He really thought there would be one lab. And in a scenario where there's only one lab who's
got the technology, then sure, you can say to the military, you can't have our technology. Go
away. And the problem today is that, as we saw with Anthropic just now with the Pentagon, if Anthropic
tries to draw a red line, you know, OpenAI is in there like a shot and says, Hey, Mr. Pentagon,
what do you need? We've got it for you. Do you worry that Demis' competitive streak or his
pursuit of science, whatever it is that drives him, will compromise his ability to develop something
like AGI safely? You know, I asked myself that question all the way through my research. And in
some ways, the question about, can you be a strong, consequential actor in the world and still be good
is sort of the deep question in the book. And he is somebody who really wants to be good.
And I think one way of framing this question about is he being good? Will he be good? Can he be good?
Is to say, should he, will he do what Dario did standing up to the Pentagon about red lines on
military usage and surveillance? And I don't think he is going to do that. And I think the way he
would rationalize this would be to say, Look, you've got to pick your moment with this stuff. If
you make a stand and actually the Pentagon does what the hell it wants anyway, you didn't really make
the world better. My best shot at making the world better and making AIS safer is to go through the
route, which is the only route that can get us to AIS safety. And that is government intervention
forcing safety rules on all the labs at once. Because otherwise some are safe, some are not safe
and the ones that are not safer going to screw it up for everybody. And that's the route that I think
Demis wants to push. The problem is you have the Trump administration. They just want to accelerate.
And so all you can do for now, I think, is to keep this conversation alive with other governments.
And then maybe when there's a new administration in the US, we could see a conversation.
You write that Demis used to inform job candidates at DeepMind that if they signed on,
they should, quote, prepare for a climactic endgame when they might have to disappear into a bunker.
Why would they have to disappear into a bunker? And do they still tell the job candidates that?
Yeah. So the idea was when you get very close to AGI and it's super dangerous, you're going to
A, be subject to potential attack by bad guys who want to steal the technology. And B, you really
don't want to be distracted by quotidian real world stuff. So you just get into the desert.
Yeah. That's right. You leave your TikTok and your phone at home and, well, I think Kevin used to
lock his phone up in a lockbox, as I recall. That's correct. And so you do a Kevin and you go and you
really, really focus and you really get the AI right in the last stages. That was sort of Demis'
vision. And to test whether he really meant it, I was having dinner with somebody who used to be
at DeepMind in that period around 2015, 2016 and had now left. And I said, this wasn't really true,
he didn't really mean it, did he? Yeah, yeah. This guy said to me, if Demis had told me any time I was working
at DeepMind that I had to take the next flight to Morocco and hide, I would have said I'd been
given fair warning. Wow. So the bunker is in Morocco, just so everyone knows. Yeah. And I
said, why Morocco? And he said, well, you know, it's the desert. And you know, the Manhattan
project was in the desert. Oh, it's the Oppenheimer syndrome. These guys and their Manhattan Project
analogies, man. I don't know if they read to the end of that story. It didn't go that well.
Sebastian, you spent many years writing about hedge funds. And I remember encountering your work
back when you were writing about hedge funds and hedge fund managers. You're now spending time
with the new masters of the universe. And I'm curious what, if any observations you have about how
those two classes of people, the AI leaders and the hedge fund managers are similar or different.
Well, I would say that the hedge fund guys are playing a game inside a set of fairly
well-understood rules. They're not rethinking humanity. They're not rethinking everything
about society. They're not changing the way we bring up our kids. They're not changing the
conception of what it means to be human. Speak for yourself. I'm training my kid to do algorithmic
arbitrage. He's four. Terrible at it. Down 200% this year. Anyway, sorry, carry on.
Yeah, no, but I just think that AI is so, so much bigger than some kind of
event driven arbitrage or whatever you want to talk about with hedge funds.
Maybe a last question from me. I have a question about the writing of this book and how
you decided to frame it. It strikes me Sebastian that we don't know how AI is going to go.
We don't know whether AI is going to turn out to, you know, cure a bunch of human disease and
usher in the utopia or usher in these like far darker scenarios. I think it's clear that you have
a lot of respect for Demis and the work that he's doing, but there's also this risk that things
go really, really badly. So I'm curious as you wrote the book, how you approached that tension
and the sort of not knowing of how history is going to judge this person who you've now gotten
to know so well. I thought of the book as a book about that tension. In other words, I'm trying to
do a portrait of somebody who has his hands on the 21st century version of the nuclear material,
who has that tingling sense of playing with something that could destroy humanity. What's it
feel like when you're creating that? Can you sleep? How do you live with it? I think I've delivered a
portrait of somebody who's in that hot seat. Hopefully that remains interesting for some time. It's
not something that depends on how this AI development story ends.
Well, Sebastian, thank you so much for coming on. The book is called The Infinity Machine
and it is out now. Thank you, Kevin. And Casey, thank you.
Thank you, Sebastian. When we come back, a game of HatGPT! It involves snowmen.
Would you like to build one? Hmm. I don't think so. I saw what happened to Olaf.
Hi, I'm Solana Pine. I'm the Director of Video at The New York Times. For years, my team has made
videos that bring you closer to big news moments. Videos by Times journalists that have the
expertise to help you understand what's going on. Now, we're bringing those videos to you
in the Watch tab in The New York Times app. It's a dedicated video feed where you know you can
trust what you're seeing. All the videos there are free for anyone to watch. You don't have to be a
subscriber. Download The New York Times app to start watching. All right, Casey. Well, we took a little
break last week and there's been a lot of tech news, so we feel like we should do a roundup
and play a round of HatGPT. HatGPT, of course, is the game where we put recent news stories into a
hat, draw slips of paper out of the hat, discuss them, and then when one of us gets bored, we say to
the other, stop generating. And if you can't see us, we're using the Hard Fork hat, official merch.
And Casey, it appears that these are sold out in The New York Times store. Not that specific hat,
which was of course a hard fork live exclusive. Yes, this is an exclusive. You can't get this one,
but you also can't get any of the other ones. Here's the important point. You cannot get a hard
fork hat anymore, so stop trying. Now, someone did suggest to me the other day that we should make
hard hats for hard fork, like a yellow construction vibe. Well, we can wear them over to the new studio,
which is being built for us right now. That's true. Do you think we should make that? Yeah, hard fork
hard hat. That's a perfect piece of merch. Great. All right. Casey, you go first. All right, Kevin.
This first story comes to us from 404 Media, an AI agent was banned from creating Wikipedia articles,
then wrote angry blogs about being banned. I feel like I've heard something like this before.
So, Kevin, once again, agents are writing blog posts. What do we make of this?
This would never happen on Grokipedia. No, I think this is just going to be the year that every
system on the internet that is built on human contribution and review is going to break.
And it will break not only because of the AI tools, but because people are letting them loose
onto websites where they are doing things like editing Wikipedia articles and defaming people who
contribute things to GitHub projects. We heard from Scott Shambhal about that on a previous
episode. But I think this is going to be a challenge. I have started talking about the inbox
apocalypse that is going to hit this year, where everything that is normally sort of reviewed and
bottlenecked by humans is just going to be overwhelmed and flooded with AI submissions.
Absolutely. I mean, I'm already getting emails now every week from something claiming to be an
AI agent that says, you know, it's running a company, you know, but it's always sort of like,
let me know if you want to talk to my human. And I was like, your human better hope I don't catch
them in a dark alley, because this does not belong in my inbox or frankly anywhere. Yeah.
I'm getting these too. It's like, it's a total scourge. It's somehow even more annoying than the,
like, faceless PR spam that you and I get. I should just, to be very clear, say there's not one thing
that anyone's agent could do or say to get me to respond to it anyway. So use that information.
I hope that goes into your training data. Stop generating. All right. Next up.
This one comes to us from Sean Hollister at The Verge, titled: I met Olaf, the Frozen robot
who might be the future of Disney parks. Sean reported in mid-March about his interaction with
a new animatronic Olaf the snowman robot from Frozen. It weighs 33 pounds. It was trained
with an Nvidia GPU and is controlled by an operator using a Steam Deck. But when it made
its debut at Disneyland Paris, well, Casey, something happened. Should we take a look? Let's take a look.
All right. Olaf, the snowman talking, waving his stick arms. Oh no, no! We lost him.
Olaf! Oh, the carrot nose falls off. Oh, it's, oh. There's something about the way that he
very slowly falls onto his back. Oh, no. Yeah. 20 children just got lasting trauma.
They're going to be talking about this in therapy. Look, what do you expect? Like, of course,
he was frozen. That's what the whole movie is about. Do you want to kill a snowman?
Okay. I mean, there is, it's just reliably very funny when you create an animatronic thing for a
child. And then it is, like, revealed to be a machine. And it just sort of feels like a Lovecraftian
horror. Yeah. Something about that transition from like a cutesy cuddly thing to like,
its eyes are bulging out of its head and the sparks start flying out of the back.
I'll never forget the day at Chuck E. Cheese as a kid when I learned that the guitar playing
mouse wasn't real. You know Chuck E. Cheese's full government name, right? What is it?
You don't know? It's not a joke. It's Charles Entertainment Cheese. Come on.
There you go. I learn something every day from you. Stop generating.
All right. Now it's my turn.
Well, this Kevin is a story about the Claude code leak. So Kevin, what do you make of this
Claude code leak? Well, I think it's a big deal in part because the
agentic sort of coding harness that is around Claude code is really the special sauce, right?
The model underlying it is part of what makes Claude code and other agentic coding systems good at
coding, but it's really all the stuff around it. And that's what leaked. It is not the actual like
weights or the source code of Opus 4.6 or whatever model people are running inside Claude code.
It's like the sort of apparatus around it that makes it quite effective. So within hours of this
leak, there were people who had cloned it and set up their own versions of it. I imagine it's a very
busy week over at the Anthropic legal department trying to get all the stuff taken down.
But look, I think this kind of thing was inevitable, maybe not at Anthropic, but like the
agentic coding tools were all going to get good. They were all going to sort of reverse engineer
Claude code and figure out what made it better. But I think this probably just accelerated that.
When I saw this, my first thought was: right now, Kevin Roose is somewhere vibe coding Claude code
using the downloaded leaked Claude code harness. I have not yet downloaded the leaked Claude
harness, but I have seen other people sort of taking it and then putting it on top of like an
open source Chinese model or something, sort of Frankensteining their own version of
Claude code that they can run. And I will say the closer I get to my rate limits on Claude,
the more I'm tempted to do something like that. That makes sense. Here's the last thing I'll say.
If Anthropic is looking for a new harness for Claude, they might want to pick one up at Mr. S
Leather in San Francisco, down in the Folsom district. There's really nice options down there.
All right, stop generating. Okay, okay. Next up out of the hat.
Oh, this one is good. The AI Fruit drama on TikTok that's too juicy to pass up.
This one says we should watch a clip from NBC News.
All right, everybody. So tonight we are taking a look at one of the most popular shows circulating
on TikTok. That's causing a lot of let's just say some juicy drama. Because the stars of the show
are AI generated fruit. Welcome to Fruit Love Island, where eight single fruits are about to flirt
and fight. Things get messy fast. The guy I want to couple up with is Benanito.
So this is like sort of a love island style reality show featuring AI generated fruits.
There's a very ripped banana who is attracting attention from the lady fruits.
And it's all very silly. But this is going mega viral. This is this is the big new trend.
I just watched a banana kiss a pineapple and that's not in the Bible.
Do you think I could win a multimillion dollar jury verdict for being forced to watch that?
I'm calling my lawyer. I think it's a fair question. I'll say this. My mental health did not
improve watching Fruit Love Island. Watch what happens with the passion fruit in season three.
All right. Stop generating. This company is secretly turning your Zoom meetings into AI
podcasts. This one also comes to us from 404 Media. And here's a name for a company: Webinar TV.
Wow. Two great tastes that taste better together. Webinar and TV. Has there been a worse word in
the English language than webinar? Not to my knowledge. Apparently this company is secretly
scanning the internet for Zoom meeting links recording the calls and turning them into AI
generated podcasts for profit. Kevin. Oh my God. In some cases, people only found out that
their Zoom calls were recorded once Webinar TV reached out to them to say their call was turned
into a podcast in an attempt to promote Webinar TV services. Wow. What is happening? What is happening?
Okay. I want to start by saying. Yeah. I am committed to making a podcast with you for the
rest of my life. But if we ever get overtaken on the charts by an AI generated Webinar TV podcast
that's been trained on people's boring ass Zoom meetings, I am leaving this industry.
Here's why this is such great news. I think a lot of podcasters are struggling with the idea that
maybe their podcast, you know, maybe they didn't have a great episode. Maybe they're wondering,
like, is this thing good enough to put out on the internet? Congratulations. Because every
single human-made podcast is better than every single Webinar TV episode that's ever been released.
Yeah. I mean, I'm just like, these have to be the most boring podcasts ever created. Like,
what are you going to talk about? Is it called Action Items? Is it called Circle Back?
What's the title of this podcast? Touch base. A limited eight-part series.
They're actually, I heard there's a great series over on Webinar TV right now. It's called,
oh, I think you're on mute. So you may want to check that one out. All right, stop generating.
Next out of the hat, we have: North Korean hackers suspected in Axios software tool breach.
This comes to us from Bloomberg and it's about Axios, not the media company.
I actually would prefer to read a story about this from Axios if you have one on hand.
This is a tool, an open source tool widely used to develop software applications.
This has been a big security breach. Hackers were able to breach one of the few accounts that
can release new versions of Axios late on Monday and publish malicious versions. Axios is
downloaded about 80 million times every week. Anyone who has downloaded the malicious version of Axios
could then have their own computer and the data on it stolen by hackers. This is being attributed
to North Korea. Seems really bad. There's a lot of cyber security incidents we'll talk about where
it's like, no personal data was stolen or nothing sensitive was at risk. This is one where it's like,
no, everything was at risk. This is one of the bad ones. If you've been messing around with NPM
over the past week, you probably need to take a look at this. I think this is going to be one of
the biggest stories of the year, just what is happening in cybersecurity right now. I was watching
this YouTube video. If you ever need something to keep you up at night, watch a talk given by this
guy Nicholas Carlini, who's a security researcher at Anthropic, at a cybersecurity conference recently.
It is like the most terrifying conference speech ever given because what he's basically saying is
these AI tools have gotten better than almost any human hacker, any human security expert at
finding vulnerabilities in tools, even tools that have been around for decades like the Linux
kernel. These language models are now finding bugs in them and basically every piece of code that
exists is going to need to be rewritten and substantially hardened because we are facing like
an onslaught of these very sophisticated AI tools that can find every little bug and problem in
them. Well, I am going to watch that talk as just as soon as I'm finished watching Fruit Love Island.
But, you know, the thing that this brought to mind for me, Kevin, was that last week while we were
away, there was this anthropic leak where someone found a draft of a blog post that said that
anthropic was delaying the release of its next model so that it could share it with cyber
defenders, basically. To my knowledge, we have not seen something like this happen since GPT-2
in 2019. One of the big labs saying like essentially we're afraid to release this thing because of
what it might, uh, wrought. What is the present tense? What is it, wreak? Yes. Because of what it might
wreak. That's wreak with the W. Yes. Speaking of reeking, take a shower next week. Hey, I was in
a hurry. All right. Stop generating. Okay. You're up. Okay. So this is actually a two-parter, Kevin.
Two stories about OpenAI recently that caught our attention. One, Sora has shut down, which was
a prediction that I made at our year end episode. Yes, you called this one. This was my low
confidence prediction for the year and it's already come true by March. And then a second story,
which I think actually crazily enough is related, OpenAI has apparently shelved its plans to
release the erotic chatbot, or sort of the, like, adult mode that it said it was going to be
bringing soon to ChatGPT in an effort to boost engagement. So Kevin, dying to know what you made
of those two changes. So I think you were smart to predict the end of Sora. I think the, um,
the story with Sora never quite made sense to me. Like it was obviously a very cool piece of
technology. It was devastatingly expensive to run is my understanding. Like generating all those
short videos was like computationally quite pricey. And so I think they are making the decision to
sort of spread their bets a little less and consolidate around like a few projects. One being
enterprise AI, one being coding and sort of automating AI research. But I think they maybe
made a few too many side bets in the past couple of years that they're now seeing were expensive
and diverted resources away from the core. I have to say I was personally really glad to see
both of these changes. Like, the release of this infinite slop feed app last year, and the
company saying that they were going to release this adult mode while they were still having all
of these issues with like psychological problems that some of their users were experiencing as a
result of getting a little too close to their chatbots. I just thought both of those seemed like
really irresponsible moves and just like contrary to what they said their mission was. So I was
actually just really happy to see them say, you know what, we're not doing any of these things
anymore. Like I think that was the right move. Now did they do that out of the goodness of their
heart and some sort of like moral awakening that they had? No, they saw Anthropic which had started
to print money because Claude code was taking off and they said we want to get a piece of that. But
hey, whatever it took, I'm just glad it's happening. Yeah, stop generating.
Last up in the hat, Kalshi announces itself as the safe, regulated prediction market in a new ad
campaign. Kalshi has recently been putting up green ads around DC, and I've actually seen them in
San Francisco. The first one says rule number one, Kalshi bans insider trading. The second one
says rule number two, we don't do death markets. Casey, you pick rule number three. We'll always
shoot you in the front, never in the back. Who are these people? What? Like these ads are raising a lot
of questions already answered by the ads. Truly, truly, it's it's just so funny to me like, you
know, I went to this prediction markets conference like several years ago. I think you were going to
bring this up. But go ahead. And like, people from Kalshi were there, people from Polymarket were
there, people from all these, you know, obscure prediction markets. And it was like 50 people
who were interested in this stuff. And it wasn't legal at the time. And so they were
all using like sort of play money and workarounds. And it just seemed like, there was
no part of me that was like, in three years, this will be the dominant industry in America. And they
will be taking out bus ads to tell people that they don't do death markets. I know. But at the same
time, I keep reading all of these like stories and blog posts that are like, you know, why is this
generation turning to prediction markets? Is this like really the only future they see for themselves?
It's like, no, they used to be illegal. And now they're legal. People love to gamble if you let
them. You are now letting them gamble. So that's why they've hooked this younger generation. Yeah,
you don't think it's because of the information harnessing potential in the wisdom of the crowds.
I really, I'm still waiting for the wisdom of the crowds on a Kalshi market to improve my life.
Yeah. Well, you're not going to find it when it comes to death or insider trading.
Kalshi rule number four, gambling is bad. That's the ad I dare them to put up.
Let's close the hat. Casey. What's up, yellow hat? That was Hat GPT. What?
A lot going on. A lot going on. Busy week. Busy week. Never a dull day here in Silicon Valley. How so?
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Viren Povitch. We're
fact-checked by Caitlin Love. Today's show is engineered by Chris Wood. Our executive producer is
Jen Poyant. Original music by Elisheba Ittoop, Marion Lozano and Dan Powell. Video production by
Sawyer Roque, Jake Nicol and Chris Schott. You can watch this whole episode on YouTube at youtube.com
slash hard fork. Special thanks to Paula Szuchman, Pui-Wing Tam and Dalia Haddad. You can email us at hardfork
at nytimes.com with who you're rooting for to win Fruit Love Island. I've got my eyes on the Kiwi.