Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing
2026-04-10 11:00:00 • 1:04:06
Maybe that's an urgent message from your CEO.
Or maybe.
It's a deep fake trying to target your business.
Doppel is the AI native social engineering defense platform
fighting back against impersonation and manipulation.
As attackers use AI to make their tactics more sophisticated,
Doppel uses it to fight back.
From automatically dismantling cross-channel attacks
to building team resilience and more.
Doppel, outpacing what's next in social engineering,
learn more at doppel.com.
That's d-o-p-p-e-l.com.
Casey, I got a haircut yesterday.
Thanks for noticing.
Kevin, it looks extraordinary.
Has this ever happened?
I went into the barber.
I sat down in the chair.
He did not ask me what I wanted.
He just started cutting.
Has this ever happened to you?
No, because they know I'm not straight.
With a straight guy, you don't need to ask them.
You just get the standard haircut that a man gets.
He one-shotted my hair.
He said, yeah, I've seen this before.
I know what I'm doing here.
Where's the barber lock-in?
It's like, okay, let me get out the schematics.
It's also like, you two had never met?
It's not like he knew me.
See, this is exactly it, the fact that you just go to random barbers
and will accept whoever happens to be there.
This is why they can just start cutting your hair.
Oh, who is, yeah, I don't know this person.
Yeah, do whatever the hell you want.
See if I care.
That is the straight approach to hair.
But it's working great for you.
Thank you.
Appreciate it.
I'm Kevin Roose, a tech columnist
at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the dangerous new AI model
that has cyber security experts on high alert,
then New Yorker writers Ronan Farrow
and Andrew Marantz join us
to discuss their spicy new profile of Sam Altman.
And finally, it's time for one good thing.
Although I guess really there are two things in the segment.
Yeah, we should really rename the segment.
Okay.
Okay.
Casey, we have a big announcement.
Kevin, what is the announcement?
We're ending the show.
No.
No. You're finally free, America.
Yes.
No, on June 10th in San Francisco,
we are doing the second ever installment of Hard Fork Live.
It's too fast.
It's too furious and it's happening.
I tried to get them to let me call it
2 Hard 2 Fork, but they decided that was not appropriate.
Kevin, where can people get more information
about Hard Fork Live too?
Okay.
It's happening on June 10th in San Francisco
at the Blue Shield of California Theater.
Bigger venue than last year.
Tickets will be on sale at nytimes.com slash events,
not today, but next Friday, April 17th.
So we're giving you a full week to get your act together
reach out to all your friends,
use meta AI to plan a trip to California,
use Claude Code to build your scraper bots
to scoop up all the tickets.
And on Friday, the 17th, you can buy tickets.
And we will just say in advance,
last year the tickets did sell very quickly.
They did.
So get in there quickly if you want to go.
There would be more tickets available,
but Kevin reserves 50 for quote his team,
which I don't even know what all these people are doing
at this point, but they'll be there.
You say that like you didn't do it too.
So get your tickets next Friday, April 17th at nytimes.com slash events.
Well, Casey, as you know, on this podcast,
we have a rule about discussing AI models called
Ship It or Zip It.
Ship it or zip it: unless you're actually putting it
in people's hands, we usually do not want to hear about it.
Yes, but today we are making an exception
for the new Anthropic model,
Claude Mythos Preview, which was just announced
but not released, for reasons that we will talk about.
But first, since this will be a segment and a show about AI,
our disclosures: I work for the New York Times,
which is suing OpenAI, Microsoft, and Perplexity
over alleged copyright violations.
And my fiancée works at Anthropic.
Casey, this is, I want to say like the biggest story
of the year in AI.
I know there's been a lot of AI news.
I know that people are probably saying,
oh, here they go, talking about another model again.
I am telling you, this is something that people
need to be paying attention to because of the implications,
because of the way it was rolled out,
and because of the model itself,
which we will get to all of that.
But do you agree that this is a big deal?
Well, you know, when we were talking about the show this week
and we were kicking around the idea of like,
hey, exactly how big do we think this is?
You pointed out that one question people have been asking
this week is, are we going to have to rewrite all software?
And I feel like usually when folks are kicking that question
around, it's a big story.
Let's just talk through what was actually announced this week.
So on Tuesday, Anthropic announced that it was starting
something called Project Glasswing.
The name Project Glasswing refers to the glasswing butterfly,
which has transparent wings.
And so it can hide in plain sight.
And that is thematically important for reasons
that we will come back to.
It's also a delicacy in some countries.
I've never had glass wing butterfly.
I've got to try it.
So notably, they are not releasing this model to the public
because they claim it is too dangerous to do that.
Instead, they are giving access to a consortium of tech companies,
including Cisco and Broadcom, makers of internet infrastructure,
as well as Microsoft, Apple, and Amazon.
Basically every big tech company that is not OpenAI
or Meta is getting access to this model.
But not general access, just access to do defensive
cybersecurity testing, basically to go out and harden
their systems and their infrastructure and their software
before the general public can get its hands on this model.
So what are some examples of what Mythos was doing
in training that so alarmed anthropic
that it came to this point?
So Anthropic has been running this model internally
for several weeks now.
And they claim that this thing has found vulnerabilities
in every major operating system and web browser.
They gave some examples that have already been patched.
One of them was that this model apparently found
a 27 year old security flaw in OpenBSD.
OpenBSD is an open source operating system
that runs on firewalls and routers.
It is sort of like a critical security layer on the internet.
And it was designed specifically to be hard to hack.
And this model because of its advanced coding
and reasoning capabilities was able to find this bug
that 27 years worth of professional security researchers
had not been able to find.
What else?
Another example was that it found a bug
in a piece of popular open source video software
called FFmpeg that had, according to Anthropic,
been scanned for bugs five million times
by automated security tools without finding
this critical exploit.
And that's why it's important to always look
a five-million-and-first time,
because you might find something.
Now, Casey, I think for people who are not
cybersecurity experts, it might be worth sort of
sketching the context here for like how software works.
So every piece of software, every operating system,
every app, every web browser that people use,
is built on a mixture of tools.
Some of those tools are proprietary to the companies
that make the software.
Some of them are sort of shared open source tools
that are just in everything.
Companies will just grab this open source thing
and plug it into their thing.
Because it's compatible with everything,
and it'll save you a lot of time and trouble.
It's already been security tested,
sometimes by decades of researchers.
And these open source software projects
are sort of a big piece
of the foundation layer of the internet.
What is happening now, according to Anthropic,
is that they can basically use this model,
Claude Mythos Preview, to sort of proactively go out
and find all of the unfound bugs, what they call
zero-day exploits, with a sort of speed and efficiency
that no human security research team could match.
Yeah.
And, you know, I would say that it can be difficult
to talk about cybersecurity in a way
that resonates with people, for a couple of reasons.
One is just that cybersecurity as a field
exists essentially almost entirely to alarm people
and say, here are a bunch of problems
and these are really scary.
You know, I hope that folks in the cybersecurity field
would not mind me saying, like,
it is just like kind of an alarmist profession
and that when I've talked to these people
over the last 15 years, they've been telling me, like,
look, the entire internet is held together with spit and glue
and we're very lucky that there hasn't been a catastrophe yet.
Okay.
So after all of this news came out,
I was like, I want to talk to some people
who are at least not working for Anthropic
or this consortium to try to give me a gut check
on how big a deal this is.
And so I talked to Alex Stamos, who formerly led security
at Yahoo and then Facebook.
And Alex said, like, yes, this is a big deal.
And he was hoping for a long time
that we would see a consortium come together like this
because of exactly what you just said, Kevin,
the intelligence in these machines
and their ability to work autonomously
are now great enough that they can chain together
exploits that human beings either would never see
would take them a long time to see
or they would just never get to
because we're limited in ways that these machines are not.
So that got my attention.
Now, we should also talk about like,
what the strategy is here from Anthropic,
because I think a lot of people see an AI company
that is known for sort of being alarmist about safety
say we've created this powerful spooky new model
and we're not gonna show you
because it's too powerful and spooky
as some kind of marketing tactic.
So I think we should just say, like,
that is not, to my understanding, the case here.
No, in my mind, it is obvious why,
like if you're a corporation and you release a tool
and people with no real technical expertise
are able to use it and within a few hours
discover a novel exploit in the Linux kernel
and then take over other people's machines
to commit crimes, you might be held liable as a corporation.
You will get in trouble.
Like there will be congressional hearings.
So companies just in their rational self interest
do not want to sell cyber weapons on the open market.
Yes, it's also like if this was a marketing strategy
it is a horrible marketing strategy.
Like the government already thinks
you're a bunch of panicky doomers.
You have a new model that you claim is the most powerful
model in the world.
So instead of selling it,
you give a hundred million dollars of
Claude credits away to a consortium of companies
that includes many of your competitors,
which is what Anthropic is doing.
That is not how I personally would market a spooky new model
if I were in the business of marketing spooky new models.
Yeah, now look, it may be that despite everything
that we just said, there is still some marketing benefit
to Anthropic from doing this, right?
Like we know that they saw a huge increase in their revenue
after they took that stand against the Pentagon.
And in that stand,
they said, like, we are determined to do things
in a really safe way.
It seemed like the business world really liked that.
And so I could imagine there being a business benefit
to Anthropic of coming out and saying,
we have the most powerful model in the world
and we're not releasing it.
Like yes, I'm sure that there are plenty of businesses
that are salivating over the chance to get their hands on it.
But they can't unless they are part of this consortium.
So they are at least claiming that they are trying
to get ahead of what they envision will be a reckoning,
which was the word they used, for cybersecurity.
And it seems plausible to me that in the next kind of six
ish months, every major piece of software in the world
is going to need to be patched, rewritten, and re-released.
So just an absolutely massive project.
Let me ask you this.
You know, Alex Stamos, the security expert that I mentioned,
told me that he sees essentially like two broad possibilities.
One is, and this is the good scenario,
there are a finite number of critical bugs
and vulnerabilities to be found.
And that maybe if we all work really, really hard
over the next six months or however long it turns out to be,
we will be able to patch those vulnerabilities
and our infrastructure will remain safe and stable.
The other possibility is that this model is already good enough
that it can just simply invent exploits
that we never would have thought of.
And so this will essentially just be a really, really big problem
that potentially just keeps growing in scope
because maybe eventually you hit some sort of true superintelligence point.
So I'm curious if you've talked to people
about what they see the scenarios are
and if you have any thought as to which of those two is more likely.
So I think it's possible that they will patch
this sort of top 1% of critical software, right?
The stuff that everyone knows is important.
Your Linux, your very popular open source libraries,
your routing equipment and networking equipment.
Like it seems plausible to me that a couple of companies
with the right resources and the right models
could like find and fix the worst security vulnerabilities.
But I also talk to people who are telling me that
it's not as simple as that because once you get outside
that kind of top 1% of critical infrastructure,
there's just a lot of machines that are running on old code, right?
So it's theoretically possible that all of these fixes
could be submitted to the people who maintain these software projects.
But that, A, there aren't enough humans to review
all of the proposed bugs and fixes,
so there's just sort of a human bottleneck there,
or that there is just a lag in the time between
when a piece of software is patched
and when the person running the router
at the medium-sized business in Tulsa
decides to update the firmware or install the security patch.
So people can expect a lot of apps that are asking them
to update their software or reinstall their software
over the next few months.
I've started getting a few of these already.
Have you started getting these?
Yeah.
So I think this is going to be a kind of forced reset
for the entire cybersecurity industry
and a very significant event in the history of technology.
Yeah, well, and just to make it concrete,
like we are currently at war with Iran
and Iran is currently hacking our critical infrastructure.
There's a story in Wired this week about them successfully hacking,
like water and energy infrastructure.
Right now, they're able to do that without a Mythos-quality model.
I would be quite nervous about what they could do
if something like that fell into their hands.
So this really is not an abstract concern that we're laying out.
Right.
And we should talk about the government piece of this,
because one weird characteristic of this moment
is that the maker of this very powerful advanced model,
which Anthropic claims is capable of doing autonomous
cybersecurity research and attacks,
is also a company that the US government
has spent the last several months trying to kill.
It has tried to declare Anthropic a supply chain risk.
It has ordered all federal agencies to stop using Claude.
And so my understanding is there have been some conversations
between Anthropic and parts of the national security
establishment and apparatus about this model.
But it is also simultaneously true
that they cannot use this model
without sort of running afoul of the administration.
So a private company right here in San Francisco
currently has a technology that they claim
is capable of finding critical security vulnerabilities
in every major operating system and web browser in the world.
And the US government to my knowledge
does not have access to this technology.
Yeah, it does seem like something that like our national security
infrastructure would want to have access to.
One more piece on the regulatory front.
It is crazy to me that model development
of this scale and seriousness remains essentially
unregulated in this country.
Right here you have a private company saying,
well, we have now created software that can create
so many different kinds of novel exploits
that all software might have to be rewritten.
And they are not really under any kind of regulatory regime.
And the regulatory regime that the previous administration
tried to put into place was thrown out by the current one
because it might harm American competitiveness.
So I just want to say that makes me really, really uncomfortable.
I think that if you are making stuff this powerful,
regulators ought to be paying attention.
Yeah.
One interesting sort of historical note that I'll make here
is like for the past few years at least
there has not been a significant gap between
what the AI companies have built internally
and what the public has access to.
Yeah.
Maybe there's a slightly better model
that the companies are working on
that they need to spend a few months testing before they release it.
Or it runs a little faster than the one that you have access to.
Yeah.
But there has not been kind of a significant gap.
Since I think GPT-2, which was in 2019,
which involved some of the leaders of Anthropic
who were then at OpenAI,
who made a decision to hold back this model, GPT-2,
out of fears that it could be used for things like
automating propaganda and misinformation.
Right.
In reality, it could barely write a limerick.
Yes.
You know, they erred on the side of caution.
They did.
And they got a lot of crap for that.
People sort of said, oh, you're just using this for hype, some of the same stuff
we're hearing this week about Anthropic.
And I think in that case,
they were probably a little over-excited about what this model could do.
But they wanted to make sure that they weren't wrong.
And so they held this back.
And that created a gap of at least a couple months to maybe a year
between what the average person could see
and what was happening inside the AI labs.
That gap is now open again.
There is now a model that you and I cannot use
that our listeners cannot use unless they work at one of these companies
doing cybersecurity defense.
And we only have what the AI companies are claiming about it.
And I think that is just a very tenuous situation.
And I don't like it, but I also understand why...
I think in this case, this was the right decision.
Well, what do you mean when you say that it's tenuous then?
I think as hostile and suspicious as people feel toward the AI industry,
that only gets worse if they think that there are secrets
being kept in a basement that they can't access.
And I think that it creates paranoia and fear.
I think that it is generally responsible to have transparency
from the AI companies about how capable their models are.
And I understand in this case that Anthropic felt like it had to make an exception.
But I think this gap may be here to stay
is the thing that I'm wondering about.
I think it probably is.
I mean, it's worth saying that Anthropic was founded on the idea
that if it could build models that were at the state of the art,
at the frontier, that it could have some influence over that frontier
and it could guide it to a safer place than it otherwise might have gone.
To me, the Pentagon fight and now Mythos are examples of that thesis in action, right?
Where it made the best model and that gives it some room to try to do a little bit of good.
So blocking domestic surveillance and autonomous weapons for a little while
or preventing bad actors from getting their hands on tools that could create novel exploits.
At the same time, in order to do that, they had to build the model in the first place.
And there is a risk that there is some sort of, I don't know, intellectual property
leakage, that somehow all of the innovations that they're building
are going to trickle down into other places.
And my fear is just that it becomes this sort of self-fulfilling prophecy, right?
Where we have to build this frontier even though it's dangerous
and we're going to guide it to this safer place.
But you know, you did build the thing in the first place.
So I just like reminding people of that tension because it is not actually inevitable
that we build these systems.
And yet we do often act as if that were the case.
Yeah.
Last thing, a lot of the people I know who are plugged into the cybersecurity world
are being asked right now what people should do about their own security.
If they are worried that models like this will become public,
should they be, like, locking down all their accounts
and moving their cryptocurrency into cold storage?
Like what do you think people should be doing in anticipation
that something like this will become public?
You know, it's funny.
I had a friend ask me that just this morning as I was preparing for the podcast.
And I said, you know, a couple of things.
Like one to some extent we're just going to have to wait.
I mean, to the extent that any of what we've just described is good news,
it is that the defenders appear like they're going to have some runway
to fix some really bad problems before the bad guys catch up.
So I think we should give them a little bit of room to see what they can do.
If it does emerge that there is a similar model that can wreak havoc
like rest assured, there'll be segments about it on hard fork
and we'll have some updated guidance.
But I asked my friend, do you have a password manager
and do you reuse passwords?
And she said, you know, I've never really been able to get one of those
password managers to work for me.
And I do sometimes reuse my passwords.
So I said like, look,
if you're looking for something that you can do, just make sure that you
have done your basic online cybersecurity hygiene, you should use a password
manager. I use 1Password.
There are many others out there that are just as good.
Don't use the same password for more than one thing.
Your passwords should be randomly generated and not, you know, the name of your
pet or whatever.
And then use multi factor authentication where you can.
Right.
So don't let anybody get into like your Gmail or your banking account
just by typing in eight letters.
You should also be using an authenticator app.
And so those are some of the basic things that I would tell people to do, Kevin.
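[A note from the editors, not part of the show: a password manager will generate random passwords for you, but the "randomly generated, not the name of your pet" advice above can be sketched in a few lines of Python using the standard library's `secrets` module. The function name and the 20-character length are our own illustrative choices.]

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses a cryptographically strong random source,
    # unlike the random module, which is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces a fresh, unpredictable password.
print(generate_password())
```
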
Yeah.
I am planning to deal with the possibility of a massive cybersecurity breach
by just sort of selectively dribbling out incriminating things about myself.
Okay.
Just sort of trying to get ahead of any hacks that might expose my, you know,
emails going back decades or anything like that.
So I'll just say, in that spirit: I used to like the Black Eyed Peas.
And I still do.
Let's get it started.
Now that was a critical vulnerability that I just exposed.
When we come back, we'll talk to New Yorker writers Ronan Farrow and Andrew
Marantz about their investigation into Sam Altman.
I also said some stuff about you.
Oh boy.
I'm going to get it started.
Most all-in-one HR systems are a patchwork of disconnected and manual tools.
Rippling is totally automated.
If you promote an employee,
Rippling can automatically handle necessary updates, from payroll taxes and
provisioning new app permissions to assigning required manager training.
That's why Rippling is the number one rated human capital management suite on G2,
TrustRadius, and Gartner.
If you're ready to run the backbone of your business on one unified platform,
head to rippling.com slash hard fork and sign up today.
That's R-I-P-P-L-I-N-G dot com slash hard fork to sign up.
Hard Fork is supported by Attio, the AI CRM that knows what's going on.
Set up in minutes, get powerfully enriched insights and surface context on every
deal.
Need to prep for a meeting?
Done.
Got a follow-up to write?
Drafted. Ready to close this deal?
Just ask Attio.
With Universal Context, Attio's intelligence layer,
you can search, update, and create with AI across your entire business.
Ask more from your CRM.
Ask Adio.
Try Attio for free by going to attio.com slash hard fork.
That's ATTIO.com slash hard fork.
Thousands of businesses from early stage startups to Fortune 500s are choosing to
build their websites in Framer.
Changes to your Framer site go live to the web in seconds with one click,
publish without help from engineering, helping your team reduce dependencies
and reach escape velocity.
Learn how you can get more out of your .com from a Framer specialist, or get
started building for free today at framer.com slash hard fork for 30% off a
Framer Pro annual plan. Rules and restrictions may apply.
Well, Casey, the talk of the town in San Francisco this week has been,
well, there have been two talks of the town.
One we already covered in our A block; that was Claude Mythos.
This town conducts multiple conversations at the same time.
We're amazing multitaskers.
The other big talker this week has been this big piece in the New Yorker
about Sam Altman.
Yes, more than 16,000 words devoted to a question that has come up,
what, at least twice on Hard Fork, Kevin, which is: can Sam Altman be trusted?
Yes, the writers on the piece are Ronan Farrow,
famous for his work on the Harvey Weinstein investigation and others,
and Andrew Marantz, who is a good friend of mine and a longtime writer at the New
Yorker, they worked on this piece for a very long time.
Talk to many, many people in and around Sam's orbit and tried to answer the
question of like, who is this guy?
Yeah.
And also, why does that matter?
Right?
We're talking during a week where these systems have arguably experienced a
step change in what they can do.
And I think those kind of advances just naturally should draw more scrutiny
onto the people running these companies.
What do they know about who they are, how they operate, are they honest with
each other?
And this piece offers one of the more comprehensive portraits that we have had
so far, I would say, on that question.
You know, being investigated by Ronan Farrow has to be one of the scariest experiences.
You know, you pick up the phone,
and it's like, hi, it's Ronan.
But it also seems hot, too, you know? That's what everyone wants: just a
really handsome man asking them a lot of questions.
You know, okay.
So let's bring in Ronan Farrow and Andrew Marantz.
Ronan Farrow and Andrew Marantz, welcome to Hard Fork.
Thank you guys.
Happy to be here.
I mean, truly long time first time.
And in fact, I brought receipts to that effect.
This is your show, so you can take or leave this in the edit, but I wanted to show
what a devoted, long time fan I am of Hard Fork.
I know the show well.
I know you guys like merch and I know you guys like disclosures, but you don't
have any disclosure merch to my knowledge.
So I had these made for you.
Come on.
One for you.
One for you.
I'm going to put it in the mail after we get off.
But one, one of them says, I work for the New York Times, which is
suing OpenAI, Microsoft, and Perplexity for alleged copyright violations.
The other one says, and my fiancée works at Anthropic.
Oh my gosh.
That is amazing.
So it's, I think, time-limited.
It's going to be a time capsule.
But I mean, made at the print shop in Brooklyn, one of a kind.
Existing.
That's incredible.
You are here.
And I should also mention, I gave you a hat at your wedding.
And I gave you one at yours.
So I thought we were even. We have a sort of a theme going on here.
Okay.
Right.
Well, and that's also our disclosure, which is that Kevin and I are buds and have known
each other forever.
So actually Casey, you can come to me anytime.
I know you guys like to rib and roast on the show.
So you can come to me behind the scenes for any roastable Kevin material.
My dream has been to get the New Yorker to investigate Kevin Roose.
So you guys really could not have come along at a better time.
We're on it. Don't tempt us.
Don't attempt us.
I'm not picking up the phone.
Okay.
Let's talk about this big piece that you both just published in the New Yorker.
The title of the piece is can Sam Altman be trusted?
Now, usually there's this sort of folk rule about headlines that end with question marks,
which is that the answer is always no.
So I want to put this question to you.
Can Sam Altman be trusted?
Well, I think one important thing to note is the piece is really forensic and even.
And actually to a point where I've been happy to see there's a range of reactions, right?
There's people who have answered that question in a very severe way and looked at the fact pattern
that is laid out here and the documentation that's laid out and said, you know,
this is someone who poses an acute danger and should be kept away from an authority position.
And then there's people who, I mean, hilariously enough, my mother called me and she's like,
you know, I kind of like him.
And so I think that is a true reflection of our intentions.
In this case, as you might imagine, there is deep consultation with all of the subjects of the
reporting to really understand their feelings.
And anytime we thought there was a persuasive argument from Sam or anyone else that, you know,
something shouldn't make it in or something would be sensationalist, we really carefully discuss
that editorially.
So the result is very even and I would say on the question itself,
what we lay out is something that is remarkable, I'd say, even against the backdrop of
the culture of mistrust in Silicon Valley where everybody understands and expects, right?
That being a founder means telling different audiences different things at times to some extent
where everyone understands that the entire enterprise is built on hype long before
there is an actual, actionable, deliverable product.
Even against that backdrop, there is an extraordinary preponderance of people who emerge from
interactions with Sam Altman, including close years long ones with really active complaints
and allegations that he lies repeatedly about things big and small.
Well, one of my favorites was when you quote him telling you that he wears a gray sweater every
day to avoid decision fatigue and then he shows up for his next interview in a green sweater.
That felt like a real gotcha.
That was just for you, Casey.
I was wondering if you were going to catch that.
I appreciate that eye for fashion that you so rarely get in these tech profiles.
Andrew was our fashionista in the writer's room.
But that's the kind of thing where we didn't want to make too much of that,
because it's like, oh, we caught you in this deep hypocrisy of choosing a green sweater.
This is consistent with a lot of the things people say throughout the piece and throughout
the career of Altman and OpenAI is that there isn't this one smoking gun thing where he's caught,
you know, with his hand in the cookie jar.
It's this sort of allegedly longer, more subtle accumulation of facts, which my kind of like
glib and annoying way of describing it is like the fabled memos and documents that were compiled
that led to him being fired in 2023 and that have kind of dogged him throughout his career.
They really shouldn't have been, like, a secret bullet-pointed list.
They should have been a 16,000-word New Yorker piece, because it only really makes sense
when you, like, lay them all out together in narrative form.
Yeah, I mean, you guys mentioned in your story that there have been sort of these rap sheets
that have been circulating about Sam inside OpenAI and other parts of the AI industry
for years. One of them was compiled by Dario Amodei when he worked at OpenAI under Sam Altman.
One of them you said was maybe circulated by some allies of Elon Musk and people who are
opposed to OpenAI. So give us some sort of behind the scenes details about what is being said by
whom and how and to what ends about Sam Altman in Silicon Valley.
Well, it was really important to us to filter for the obvious competitive incentives out there.
There are people who are massively incentivized to go after Sam Altman.
And the reality is that there are very firmly evidence-based critiques,
many of which are promulgated not just by the rivals, although they're certainly amplified by
them happily, but also by more neutral figures and people who are just kind of technologists who
aren't in the fight. And then there is the white-hot center of the rivalry, the stuff you mentioned
that I think is in a very different category, which is Elon Musk and other direct competitors
really amplifying everything they can come up with. And in some cases, we document things that are
inflated or trumped up or just seem to not be true. So Elon Musk in particular has intermediaries
circulating some pretty spicy and pretty unsubstantiated material in Silicon Valley. And we
talk about that. I really appreciated that about the piece because this has become more salient
over the past year as these rivalries heat up and you hear more and more of these scurrilous
rumors. And while I do think this winds up being a pretty damning portrait of Sam on the whole,
you do also point out that in some very real ways, he's the subject of a legitimate smear campaign.
Yeah. Oh, yeah. I think that's absolutely accurate. And we were trying not to go in, you know,
with naivete of like, can you believe business titans are being mean to each other? But like,
the level of this really does seem kind of shocking and unprecedented. And you know, it's kind of
consistent with people who think of this as like whoever gets the ring first will control the world.
Like it just seems like all bets are off. And so as a reporter, it's very challenging to be like,
do you bring up the scurrilous rumors to knock them down? And so we had like months of conversations
about how best to do that. So there's been a lot of reporting on Sam Altman, especially around
the board drama a few years ago. Could you maybe give us, like, the two or three things that you think
are new and important from your reporting that rise above the rest in terms of people's understanding
of Sam Altman and OpenAI. So I think there are things here that put to rest some of the long-standing
rumors, right? I mean, Altman has always said, and Paul Graham at Y Combinator has
always said he was not pushed out. He left of his own volition. It really seems from our reporting
that that was not the case. They have talked a lot about their fundraising in the Gulf in the
Middle East as innocuous, all businesses do this. It really seems from our reporting that the
relationships that Sam has cultivated with some Emirati and Saudi royals are deeper than was
previously realized. Ronan, what am I missing? There are several things like this.
We just didn't really know in full what was in those Ilya Sutskever memos. We didn't really have
the detailed multiple-sourced, heavily documented accounts of the individual proof points that were
offered in those memos. We didn't have the contents of those Dario Amodei notes, and we didn't
have a lot of these people on the record yet. So I think actually in a way that was a disservice,
not only to Sam's critics, but also to Sam himself, there was a bit of a veil of mystery, and that
wasn't purely accidental. One of the things we document that's new here is, as a condition of
the exit of the board members who had moved against Sam, whom he wanted out, they insisted on an
outside investigation. What happened there is in my view quite extraordinary, which is: yes, at
private companies, reports of this type, when a law firm is brought in to restore legitimacy,
can sometimes be kept out of writing; often it's to limit liability, and legal experts often say it's a bit
of a red flag. This is a different kind of case. This isn't just any private company. This is a
high profile scandal that engulfed Silicon Valley when Sam was fired. And at a non-profit.
At a 501(c)(3), exactly. So there were stakeholders, not just in the public but within this company,
for whom the bare minimum threshold, senior executives thought, was: okay, we're going to get
some kind of at least detailed summary of what this law firm investigation found when it was
invoked to rubber-stamp Sam coming back. And instead, what happened was an 800-word press release
that said there had vaguely been a breakdown in trust and offered very few other details.
And what we reported in this piece for the first time is there wasn't a report. For years,
people were like, where's the report? Where's the report? There wasn't a report because it was kept
out of writing. And this is no longer just a speculative supposition. One of the two board members
Sam helped select to oversee this process now just explicitly says, well, a written report was not
needed; that is now their line on this. Yeah, I'm glad you brought it up. It was actually my favorite
detail in the piece because it was something I'd been curious about forever. I mean, the thing that I
found most interesting from the piece were the people who spoke on the record, or at least gave
you quotes. Some of them were unattributed about Sam who, I think previously might have supported
him or at least felt like there was no upside in sort of talking about him in a negative way in
public. There was a Microsoft executive quoted in your piece as saying that there's a small but
real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried-level scammer.
There's another unnamed board member who said, quote, he's unconstrained by truth and said that he
has quote, an almost sociopathic lack of concern for the consequences that may come from deceiving
someone. I haven't been on a lot of corporate boards, but I think that's something that's
quite rare to hear a board member say about a CEO of a company. I'm just curious, like when you were
weighing these statements, did you feel like there are people who used to be fans of Sam, who
have soured on him, or are these people who have really held a grudge against him for a long time?
The thing that you point out about people changing their tune over time, I think is an integral part
of what we document in the piece, which is the fact that Sam Altman comes up through this Y
Combinator world is not incidental. The fact that he has an investment portfolio of, by his own
estimation, about 400 other tech companies, the fact that he has sat on everyone's board and
everyone has sat on his board, I think our sort of line about this in the piece is like,
we spoke to people who are Sam's friends, Sam's enemies, and given the mercenary nature of Silicon
Valley, some people who have been both. Given that that's the landscape, you are going to have
people who changed their tune as the wind blows different ways, and that's a lot of how
Altman's been able to weather a lot of this stuff in the past. One thing that results from that
spread of opinions, to your question about evolving takes on Sam, is there's definitely a class of
nuts-and-bolts investors, prominent people in Silicon Valley who are really pragmatists,
not just safetyists, and who are growth and business oriented, who told us that at the time of Sam's
firing, the blip, they gave him the benefit of the doubt, especially because of the factor
we talked about before, where there just was a dearth of clear information. In that void, a lot of
prominent people gave him the benefit of the doubt and saw only upside in bringing him back,
and removing the board that tried to fire him. There are a number of those prominent people in
that category now who say, I don't know that I would have given him the benefit of the doubt if I
knew everything then that I now know. It just strikes me though that everyone who digs into this
winds up coming back with essentially the same story. You know what I mean? There are not like
17 versions of Sam Altman out there, depending on which reporter calls which different source.
I feel like we now sort of know the broad outlines of this person's psychology.
I don't know. I want to challenge that. I do talk to people who are big fans of Sam,
some of whom work for him, some of whom don't. Clearly, this is a guy who has been able to,
at various points, lead very important technology projects and rally people behind a vision.
These people are not mindless sheep. They're critical in discerning and thoughtful people.
I don't want to seem like I'm taking Sam's side on anything, but I just like, I think that
there are a lot of people with very strong feelings about Sam Altman. Positive and negative. I
think the positive side tends to be more like people defending him in private, and the public side
tends to be more people criticizing him. But I don't know. I guess for Ron and Andrew,
do you feel like there are vocal supporters who you came across in reporting the story who had
no direct employment relationship with OpenAI or Sam, or were leading companies that he
invested in or something who were like, yeah, this guy seems pretty good and smart and talented.
Yeah, we found an 11-year-old who used ChatGPT to pass sixth grade.
Oh my god. No, no. There were legit defenders of Sam on a number of these friends who we
talked to for sure. I think a lot of this has to do with what baseline expectation are you
starting from. If you think of this as a business and you start from the premise that people who
run giant successful businesses have to say a lot of different things to a lot of different people,
why is anyone even, why is this a story? I think though there's a kind of level setting here where
one of the things you can do when you take a big sort of putting everything in one place narrative
effort like this is you can start from the beginning and remember what the original pitch was.
And when you go back to what the original pitch was, the defense of why are you guys being so naive?
This is a normal competitive business. Like, okay, so when you pitch this as a nonprofit,
safety-focused research lab that would aggressively comply with all regulation,
like, were the people who believed that naive to believe it at the time? So that's when the
defenses start to feel a little more pressured to me. Yeah, also like for what it's worth,
you know, it's like, oh, you know, is it really a story that this guy's telling different things
to so many different groups? It's like, that's not really a story that gets told about
Satya Nadella. It's not really a story that gets told about Sundar Pichai. It's not really a story
that gets told about Tim Cook, right? Like, there does seem to be something really unusual here.
And my question for you guys now that you've sort of spent so much time immersed in this company
is, what do you think it means for OpenAI? Well, I mean, luckily we have a really robust
independent tech media to, you know... so I was going to tune into TBPN and see
what their independent journalistic take on this would be. Do you want to give listeners
who may not be familiar with what you're talking about some context? I think the day after our
piece closed, Ronan, or something like late last week, OpenAI acquired TBPN, which is this big
sort of tech chat show. So that's one aspect of this answer, right? That as OpenAI expands and
grows, they seem to be sort of buying up more of the press infrastructure to tell their own story.
Relatedly, by the way, a lot of announcements over there, right? Yeah,
concentrated around when they knew we were going to be running, and developed in the period
where we were in these intensive conversations with them. And many of them sort of pointed at
the topics in the piece. You know, they announced this new safety fellowship, they announced
this new governance plan. It's all very sort of airy and ethereal,
but meant to, I think, you know, occupy space in the conversation on the same topics.
And look, I mean, everyone Ronan, you should say more about this, but everyone, including
Altman and the OpenAI execs, we spoke to, recognizes the economic pressures here. I mean,
I think you guys were there when he said, oh, yeah, it's definitely a bubble and someone's
going to lose a phenomenal amount of money, right? So even putting sort of the sci-fi
SkyNet stuff aside, you know, the economic pressures are unavoidable. And a lot of it has to do
with this sort of pitchman rhetoric, the exact thing we're talking about, right? Because these
things are contingent. It's not like, oh, will it be a bubble or not? It's like, how hyped up
the cycle gets is a byproduct of how people like Sam go around the world talking about it.
Yeah. I want to ask sort of a basic question that I think people have probably raised with you,
which is like, why does it matter who Sam Altman is? If what we are talking about is a technology
that could have profound implications on national security, the economy, potentially the future
of humanity, it doesn't seem obvious to a lot of people why it matters who is running these
companies. Because a very nice person who is very honest and very transparent in all their
dealings could still release a rogue super intelligence that blows up the world. And a very,
you know, manipulative person could release a very aligned model. And so what we should be paying
attention to are the models themselves, not the people running the companies that make the models.
I'm not saying I believe that, but I'm curious, what do you make of that argument that we are focusing
too much on the humans and not enough on the technology? We probably both have thoughts on this.
I think I have two. The first of which is it's worth noting that while reasonable minds could
perhaps differ on the question you just posed, the answer provided by Sam Altman and the founders of
OpenAI was very clear, which is actually part of the way the entire enterprise was structured when
it was founded as a nonprofit was they talked a lot about avoiding an AGI dictatorship. They really
believed that actually the person who gets there first and has the most power over this technology
is pivotal. The individual integrity is formative to the way the technology goes and the way it's
controlled and the way it's used. The other thought that I have is, in my mind, you raise a valid point,
and more significant than any of this is the structures around these individuals.
We have a technology emerging that could really affect us all in all of the existential ways you
just mentioned. And we don't have the regulatory guardrails to keep an eye on these folks. We are
completely ceding the power to these individual companies and their whims, the mud fight between them,
the quality control that each of them has or lacks. I think that to me is the big question.
And the integrity of an individual figures in that and it's important, but it reveals the
weaknesses in the system. If you have someone who potentially lies all the time, who could, in the
eyes of many critics, be a danger, the important thing is to have the structures that account for that.
There's a great quote that you guys have in the piece from one of his former co-workers who
talks about how Sam now has this track record of setting up these elaborate guardrails to keep
him in check and then skillfully navigating around them. And it made me wonder if you had seen
this piece in the information this week about tensions that are being reported between Sam and
his chief financial officer, Sarah Friar. She's reportedly expressed doubts that OpenAI will be
ready for an IPO this year. And according to the story, Sam has noticeably and awkwardly excluded
her from some conversations related to the company's financial plans and kept her out of some key meetings.
I read that and I was like, well, this is exactly what you guys are writing about in your piece,
right? You sort of bring in somebody whose job it is to look over the finances of the entire
company, get it ready for an IPO, but then for whatever reason, we're going to sort of exclude
her from some meetings. So anyway, I just sort of feel like we really are seeing the exact pattern
that you guys are writing about now repeating in real time. Yeah. And I mean, just to agree with
all of this, I think the thing that Kevin's bringing up about given the power of this,
why are we focusing on one personality? I think that's very legit. I think that this is way
beyond one person. This is way beyond one personality. It's not like the point of the piece is,
Sam shouldn't be an AGI dictator, so Elon should, or Demis should, or whatever, right? It's to point out
the fact that we're having a discussion about AGI dictators at all is insane. These guys know it's
insane. And yet this seems to be the race that they see themselves being in. When he was fired,
he was brought back in part because I think no one could really imagine an open AI without
Sam Altman. Do you think that's still the case? I don't think it's unimaginable anymore. I think
that part of reaching the scale that they've reached is that you can have a, you know, Steve Jobs
figure be replaced by a Tim Cook figure, right? It seems like it's inseparable from reaching
this scale that that becomes at least a possibility in people's minds, right? Ron, I mean,
does that strike you that way? Absolutely. I think the landscape has changed substantially over the
period of time. We were reporting this story. The fact that gradually more and more people were
talking openly about this critique is very telling. We report in the piece that there are periodic
spasms of senior executives at open AI talking about succession again, of course, naturally the
company denies this, but also very interesting that in recent forms of that discussion,
there has been talk about Fidji Simo being sort of the first potential successor candidate who
could slot into any ideas of that type that circulate. Between our asking about that and the
piece coming out, obviously, Simo has now gone on leave for medical reasons. There's a lot of
reshuffling. We see it in the Sarah Friar case. I think you're right to link it to that quote that's
in the article about constraints being sidelined. Yet, I think these doubts and questions persist
and are now much more out in the open. On the leadership question, it just strikes me that for
somebody who I assume wants to stay CEO for a long time, it's interesting to me that he's hired so
many former public company CEOs to be his top lieutenants. He has the former CEO of Instacart there.
He has the former CEO of Nextdoor there. He has the former CEO of Slack there. So, you know,
that's you're bringing a lot of really sort of sharp and pointy elbows into the room when you do
something like that. I'm trying to tell Sam that there's danger here.
Pro tip. If you're listening, Sam, you know, there are people in this piece talking about earlier
parts of Sam Altman's career where they feel he was deliberately avoiding that. Actually, part of
what underpinned the terrible, terrible fumbling of the firing effort was a feeling that Sam had
kind of stacked the board with as one former member put it, JV people, you know, certainly if
we're being more charitable than that, people who were unprepared for the ruthless corporate warfare
that ensued. And, you know, I think one thing that has accompanied the emergence of this as a
more openly discussed critique is that there's more people around this company, more stakeholders,
wanting, you know, professionalizing influences in the mix. I have to ask about one detail that I
loved in the piece, which is that the first time that Sam Altman and Dario Amodei were scheduled to
meet, they were going to meet at an Indian restaurant for dinner. This was back in, I think, 2015.
And Sam texted him and said that his Uber had gotten in a crash and he was going to be 10
minutes late to dinner. Now, you did not editorialize on that in the piece, but knowing you both, I'm sure that
you went back through the Uber FOIA requests and found the logs of Sam Altman's Uber ride that night.
Is it your belief that Sam Altman's Uber actually got in a crash?
I think we're just going to leave that as non editorialized and let it stand right there by itself.
We also, I will say, I had this conversation and really liked just presenting that uninflected
for consideration. Okay, if you are the Uber driver who was driving Sam Altman to dinner with Dario
Amodei and you are listening to this show, we do want to hear from you. We do want to hear your
side. hardfork@nytimes.com. We will get to the bottom of this. Well, it's a great piece.
People should go, read it. Please do not investigate any other AI companies before my book comes out.
It was a very stressful week for me. Yeah, why don't you guys take a nice long spring summer break
before you get back? Yeah, look into some politicians or Hollywood executives or something.
We'll send you some names. Yeah, luckily it takes us as long to write a piece as it takes you to write a
book. So I think it'll be a while before we do anything else. There's two of you. It should be faster.
Totally. Ronan, Andrew, thanks so much for coming. Thanks guys. Thanks guys. Your hats are in the mail.
When we come back, what our Spanish-language friends would call una cosa buena.
Did you just Google that? No. You prompted it? Yes. Okay.
The right technology can strengthen human judgment. That's why Deloitte brings together AI and
data analytics with multi-disciplinary teams who can help you connect the dots across your enterprise
from risk to operations to customer needs. So opportunities don't slip by and surprises don't spread
because the smarter your systems, the sharper your instincts. That's how technology makes people
better at what they do best. Deloitte together makes progress. Learn more at Deloitte.com slash
together makes progress. The thing about AI for business, it may not automatically fit the way
your business works. At IBM, we've seen this firsthand. But by embedding AI across HR, IT and
procurement processes, we've reduced costs by millions, slashed repetitive tasks, and freed thousands
of hours for strategic work. Now we're helping companies get smarter by putting AI where it actually
pays off. Deep in the work that moves the business. Let's create smarter business. IBM.
Big jobs don't need 10 different suppliers. It's time for one partner. For every size, finish,
and bulk order delivered on your schedule, the Home Depot Pro, it's about time.
Well, Casey, it's been a pretty heavy show today. So we thought we wanted to end on a positive note
with our segment called One Good Thing. One good thing, of course, our segment where we each talk
about one thing that's been tickling our fancy lately. Kevin, why don't you go first this time?
Okay, Casey, I am in love with this space mission. Yes. The NASA Artemis 2 mission, I have been
totally and earnestly obsessed. My wife was like, you sure are talking about this space mission a
lot. I have been glued to this thing. And I have been filled with a childlike glee and wonder
that I did not know I still had the capacity to feel. Now, what exactly are they doing on this
mission? Orbiting the moon? They are going further than any humans have gone from Earth before.
252,756 miles from Earth. And if you're wondering how many miles is that, well, the New York Times
had a helpful comparison list. And what do they find? You would need a chain of 2.37 billion of Nathan's
famous hot dogs to cover the distance that this spacecraft has gone from Earth. That's great.
Something we can all easily visualize. Thank you for that comparison. Casey, I am learning the things
that I never expected to learn. I've been watching this with my kid. I have become completely obsessed
with like concepts and terms that I did not know a week ago, including corona structure.
The terminator line, which I know you're wondering, that sounds scary. Yeah. It's actually the line
that separates the sunlight side of the moon from the side that is dark. Oh, I also learned that
we don't call it the dark side of the moon. That's not the preferred astronomical term.
Yes. We call it the far side of the moon. The far side of the moon. I am obsessed with all
of these astronauts. There are four of them up there. Victor, Christina, Jeremy, Reid. This is my
Mount Rushmore. I love these people who I've never met. They are adorable. They are incredibly
brave. I think we should go to the moon every single year. I think we should give NASA whatever
budget it needs to do because this has reignited my faith in humanity. Absolutely. I also saw
somebody on social media was posting that because the mission specialist Christina Koch had
communicated with Houston's Jenni Gibbons during the mission, this mission actually passed the
Bechdel test, which you don't often see on these missions. So I thought that was cool. I also,
somebody pointed out they said, you know, the coolest thing about going on one of these missions,
Kevin would be leaving Florida at 5,000 miles an hour. So that resonated with me as well.
Okay. You're more interested in the jokes. I am filled with childlike wonder over here.
I just think this is the coolest thing imaginable. It is very cool. You know, recently, I had an
opportunity to go stargazing. I'm not sure if you've been stargazing recently. I was up on
Maunakea on the island of Hawaii. And we had a really cool telescope there with our guide. And I
got to stare at the face of the moon. And it inspired a childlike sense of wonder in me as well.
But it did not make me want to go there because it looked quite bleak actually. You wouldn't go to
the moon. No, there's no Wi-Fi. Okay. Casey, what is your one good thing this week? Today, Kevin,
I want to talk about the only thing that can compete with the moon when it comes to inspiring
childlike wonder in a person. And that is a weather app. Okay. I'm listening. So recently,
I was reading about these entrepreneurs, Adam Grossman, Josh Reyes, and Dan Abrouton. And they
are the team behind Acme Weather, which you probably have not heard of yet. But I bet you've heard
of Dark Sky. Yes. Dark Sky was by consensus the best weather app on iOS. And while it reigned during
the 2010s, and I'm using reign in the sort of like non-meteorological sense,
it would tell you whenever it rained. And now I am using the meteorological sense.
Very good apps. Yeah. This app was bought by Apple in 2020, which was like kind of a head
scratcher. Apple already had a weather app. It was fine. And then Apple sort of integrated some
of its forecasts and some of its other features into its weather app. And then shut Dark Sky down
in 2022. And this made people really sad, because I think a lot of us feel, myself included, like
the Apple weather app has never lived up to what Dark Sky was in its heyday. It's like a prediction
market. It's like, there's, you know, maybe it's going to rain. Exactly. Well, so these guys get
back together and they say, frick it, we're doing weather apps again. And they make Acme Weather.
And so you can download this now for iOS. It is apparently coming later to Android. And I know
what you're thinking, Kevin, which is what could you possibly build in 2026 in a weather app that
could differentiate it from all the other weather apps that are already on the market, right? Yes.
Are you wondering this? I'm wondering this. Well, let me tell you a few things. Number one,
they don't just tell you the weather. They show you a range of possibilities in a line chart.
So most of the time, it'll be like, yeah, it's going to be 63 degrees in San Francisco today.
But every once in a while, there's a lot of volatility in all the different signals that they
use to predict the weather. And then you say, okay, I don't actually know what I'm walking into today.
I better bring a couple of layers. This is the weather app for rationalists and other believers in
Bayesian statistics. Exactly. Some of the other things that this app does, they will send you a push
notification if they think there's going to be lightning in your neighborhood. Okay. They will
also do that when they think a sunset is going to be beautiful wherever you happen to be. Wow.
They'll send you an umbrella reminder if it's going to precipitate in the next 12 hours. And they'll
send you a sunscreen alert when the UV index is high. But I've saved my last two favorites for
the end. Number one, they will send you an alert when the aurora borealis may be visible where
you are. That's beautiful. I haven't gotten that notification yet. But I wake up every day
hoping I'm going to get my aurora borealis notification. You've got to go to Scandinavia maybe, I think.
Number two, and this is just in time for Pride. They will tell you when there is a rainbow in
your neighborhood. Wow. Are you kidding me? This is such a good idea for a weather app. Who does not
want to be sitting at your wage-slave job? You haven't been outside in like seven and a half hours,
and then Acme Weather tells you, hey, guess what? There's a rainbow in your neighborhood.
You're going to book it outdoors and you are going to behold the majesty of creation. How are they
possibly collecting that data? Well, interestingly, they're taking this Waze-like approach where they're
inviting their community to submit reports. And so if a bunch of people say, hey, rainbow in my
neighborhood, they're going to go out and send out a notification. Wow. So now, look, this app does
cost $25 a year. And I know, you know, probably most people out there are perfectly content with the
free weather app on their phone. That is fine for you. But as somebody who loves cool things, new
ideas, people having fun, I just wanted to shout out Acme Weather, because I think it's a really
cool thing. Now, what is the likelihood that this app will be purchased by Apple and then shut down?
I mean, if that happens, I hope these guys get paid again because somebody has to move the weather
app industry forward. And these are the folks who are doing it. I love that. Like, Grandpa, how
did you make your fortune? Well, I built 17 weather apps that were identical and then sold them
all to Apple. I just also think it's inspiring that at time when some companies are like, we're going
to make a system that is going to force the world to rewrite all software. There are other
guys who are like, what if there's a rainbow in my neighborhood? I want to find out about that.
And those are the people that I want to highlight on today's show, Kevin. Okay. Well, download
Acme Weather before the heat death of the universe renders weather irrelevant. And tell us whether you
liked it. That was a good thing. Thank you. Thank you for alerting me to this wonderful rainbow
detector. Well, thank you for alerting me to the existence of the moon. I know you weren't a big
believer in the moon before, but hopefully I've convinced you today. Well, somebody told me
something about a sound stage and maybe the landing was fake, so I've just been curious.
I think, you know, we're the only podcasters who actually believe in the moon?
That's our competitive advantage.
Hard Fork: where we believe that people have been to the moon.
So there's a lot of noise about AI, but time's too tight for more
promises. So let's talk about results. At IBM, we work with our employees to integrate technology
right into the systems they need. Now, a global workforce of 300,000 can use AI to field their HR
questions, resolving 94% of common questions. Not noise. Proof of how we can help companies get
smarter by putting AI where it actually pays off. Deep in the work that moves the business.
Let's create smarter business. IBM. Big jobs don't need 10 different suppliers.
It's time for one partner. For every size, finish and bulk order delivered on your schedule,
The Home Depot Pro. It's about time.
Before we go, we are saying goodbye this week to our wonderful executive producer,
Jen Poyant. Jen has been with the show for years since almost the very beginning and she's been
a critical force in helping us make the show and conceive the show. So Jen is leaving the New
York Times for a new adventure, but we wanted to just give her a special shout out and say,
thank you from the entire Hard Fork team for all of the amazing work you've done.
It's true. Jen has been a friend and mentor to us both and we will miss her terribly, but she
will always be part of the Hard Fork family, which means she has to bring a dish to the potluck.
Thanks, Jen. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Vierran
Povitch. We're fact-checked by Caitlin Love. Today's show was engineered by Chris Wood.
Our executive producer is Jen Poyant, original music by Marion Lozano, Diane Wong,
Rowan Niemisto, Alyssa Moxley, and Dan Powell. Video production by Sawyer Roké, Pat Gunther,
Jake Nichol, and Chris Schott. You can watch this full episode on YouTube at youtube.com slash
hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. As always, you can email us
at hardfork@nytimes.com. Send us your zero-day critical security vulnerabilities. Actually, please don't.
How much time do you waste searching for things you know you've seen? Little Bird is an AI
assistant that remembers everything on your screen, every doc, message, and website. It builds a
secure memory of your work, so when you need something, you just ask. Little Bird finds it in
seconds, and when you need to create, it uses what it already knows about your work to help you
start stronger. Stop searching. Start creating. Download Little Bird for free at Little Bird.AI.