Sam Altman's Big Little Lies

2026-04-11 07:30:00 • 55:56

-

Offline is brought to you by IndyCloud. April's funny.

0:03

Half the internet is talking about spring cleaning.

0:05

The other half is already planning their 4/20.

0:08

Wow. That's where IndyCloud fits in.

0:10

IndyCloud is your fully legal online cannabis dispensary for gummies,

0:14

exotic flower, premium pre-rolls, and zero-sugar THC sodas.

0:18

A clean, alcohol-free way to relax without throwing off tomorrow.

0:22

Everything available is federally legal, hemp THC,

0:25

lab tested, and shipped discreetly to your door.

0:27

And this month, new customers get 40% off all month long

0:31

with their biggest sale of the year.

0:32

Sleep gummies for nights that actually restore you, zero-sugar THC sodas

0:36

for social plans without alcohol.

0:38

Premium pre-rolls for intentional wind downs.

0:41

And $70 ounces for consistency that feels sustainable.

0:45

Boy.

0:46

We love IndyCloud.

0:47

Yeah, it's great.

0:48

It's great to have a wind down.

0:50

I like an intentional wind down.

0:52

I love them. You know it's intentional because I take the gummy.

0:54

Yeah, listen, honestly, in a pinch,

0:56

I'll take an unintentional wind down.

0:58

I just want to wind down.

0:59

I want to get down.

1:00

Why I want to wind down.

1:01

I'm up. I want to be wound down.

1:04

And that's what IndyCloud can do for you.

1:06

That's what it can do for you.

1:07

If you're 21 or older and a new customer,

1:10

go to IndyCloud.co.

1:11

That's dot-co.

1:13

Not .com.

1:14

And use code offline for 40% off your first order.

1:17

That's IndyCloud.co.

1:19

Offline.

1:20

That's IndyCloud.co.

1:22

Code OFFLINE for 40% off all month long, shipped discreetly to your door.

1:26

Plus free shipping on orders over $50 and $30 in free gifts

1:30

on qualifying orders.

1:31

Don't forget to fill out the quick survey

1:32

when you order to support this show.

1:34

As always, please enjoy responsibly and mega thanks

1:37

to IndyCloud for supporting your 4/20 plans this year.

1:41

At Arizona State University,

1:43

we're bringing world-class education

1:44

from our globally acclaimed faculty to you.

1:47

Earn your degree from the nation's most innovative university.

1:50

Online.

1:51

That's a degree better.

1:52

Learn more at ASUOnline.asu.edu.

1:56

Yo, hey, if you're thinking about a career change,

1:58

but not sure where to start, this is Jay Cruz.

2:01

American Career College offers 13 healthcare training

2:03

programs for people just like you looking to move

2:05

into healthcare.

2:06

Recognized by USA Today as one of America's top vocational schools

2:10

in 2025, ACC helps students train for new careers,

2:13

offering multiple programs like vocational nursing

2:15

and medical assistant that can be completed in a year or less.

2:18

Financial aid and scholarships are available for those who qualify.

2:21

It's easy. Just go to ACC-future.com.

2:24

That's ACC-future.com. Get started now.

2:27

ACC cannot guarantee employment.

2:30

Another person who told us that this is probably a bubble

2:32

is Sam Altman, who has said multiple times

2:37

that he thinks it's a bubble and that someone is going

2:39

to lose a phenomenal amount of money.

2:40

I believe that's a direct quote.

2:42

So yeah, I worry about the potential for a bubble here.

2:47

I'm Jon Favreau, and you just heard from today's guest,

2:51

The New Yorker's Andrew Marantz.

2:53

Andrew, along with a fellow New Yorker journalist

2:55

you may have heard of, Ronan Farrow,

2:57

just published an incredibly expansive investigation

3:00

about one of the most important figures in tech,

3:02

Sam Altman, the CEO of OpenAI.

3:05

Over the course of hundreds of interviews,

3:07

including over a dozen with Altman himself,

3:09

Andrew and Ronan unveiled a picture of a leader

3:11

who is widely distrusted by the people

3:13

who worked with him closely and who tells people

3:15

exactly what they want to hear, whether or not it's true,

3:18

just like the AI model he created.

3:21

Andrew and Ronan raise the question:

3:23

can the man behind the most influential

3:25

artificial intelligence company in the world

3:27

who's going full steam ahead on a potentially

3:30

civilization-destroying technology actually be trusted?

3:34

I'm sorry to say the answer will not make you feel better.

3:37

I talked with Andrew about the contradictory narratives

3:40

coming out of OpenAI.

3:44

Why this is so much more complicated than good guys

3:46

versus bad guys, and how Altman's resolve to go founder mode

3:50

means he may be headed down the same well-traveled path

3:52

as many tech oligarchs before him.

3:55

We'll get into that conversation in a moment,

3:57

but before we do, please consider becoming

3:59

a crooked media subscriber if you haven't already

4:01

so that you don't miss out on any of the great content

4:03

we're putting out for our friends of the pod.

4:05

Subscribers get our new extra episode of Pod Save America

4:07

called Pod Save America: Only Friends.

4:09

Other subscriber-only shows like Polercoaster

4:11

with Dan Pfeiffer, access to all of our excellent

4:13

Substack newsletters like Pod Save America: Open Tabs,

4:15

ad-free episodes of all your favorite Crooked pods,

4:18

and you get to feel good about supporting one of the few

4:20

independent, proudly pro-democracy media outlets

4:23

left in Trump's America.

4:25

So head to crooked.com slash friends and subscribe.

4:28

Here's Andrew Marantz.

4:30

Andrew, welcome back to Offline.

4:40

Thank you.

4:41

Always a pleasure.

4:42

I want to talk to you about your big Sam Altman piece

4:45

in The New Yorker that you wrote with Ronan Farrow.

4:48

You and Ronan spent 18 months reporting this piece.

4:51

You sat down with Sam Altman, I think, more than a dozen times.

4:54

You get access to hundreds of pages of internal memos, documents.

4:59

And, you know, on one level, it's a story about the internal drama

5:04

of a company where people no longer trust the guy who runs it,

5:07

to the point where multiple people described Altman to you,

5:10

unprompted, as a, quote, "sociopath."

5:14

But this also happens to be one of a tiny number of companies

5:19

building a civilization-changing and possibly civilization-destroying

5:23

technology.

5:25

So I guess my first question is, after spending 18 months on this,

5:29

what if anything changed for you personally in terms of your perspective

5:34

on AI and the people building AI?

5:37

Yeah.

5:38

I mean, this is a really critical backdrop for this, right?

5:41

Because, you know, all people who are at a certain echelon of power

5:46

and wealth deserve serious scrutiny.

5:49

But I don't think I would have been that interested

5:52

in this level of individual scrutiny for someone who, you know,

5:56

was the CEO of a really big, you know,

6:00

A transportation company?

6:01

Yeah, exactly.

6:02

Like, or a shoe company.

6:04

I mean, this matters because of the structural impacts of AI specifically.

6:08

And so there's a lot we can get into about Sam Altman,

6:12

the person, the personality, the persona.

6:15

But the reason this matters at all is because I think AI really matters

6:19

and I think I see a lot of people who are worried and scared

6:25

and therefore want to put their heads in the sand and say,

6:28

it's a parlor trick, it's a trick of the light,

6:31

it's not real, it's hitting a wall, it's stochastic parrots,

6:34

it's whatever.

6:35

I don't think that is tenable anymore.

6:38

Like, I just don't think we can sit this one out as a society.

6:42

And so I think we need to bring serious scrutiny to bear on the people

6:47

who are building it and on just like knowing what the thing is

6:51

to the extent that anyone knows, including the people who are building it,

6:54

because this is not like a news cycle that you can just sit out.

6:59

Like, AI is part of, you know, weaponry at the highest levels of the military.

7:04

It's part of surveillance.

7:05

It's part of basic transportation infrastructure and weather prediction.

7:09

It's, you know, liquefying our brains with slop.

7:13

It's contributing to what experts call human enfeeblement,

7:16

which is basically like the more you outsource to LLMs,

7:19

the less you're able to think and write and perceive the world.

7:22

So like, these things are happening, whether or not you think that you should

7:26

spend time worrying about the more sci-fi scenarios where it kills us all.

7:30

And by the way, we can get to this, but I think the sci-fi scenarios

7:33

where it kills us all are also worth worrying about.

7:36

Yeah.

7:37

Did you leave the reporting more alarmed about what,

7:41

where we're headed?

7:43

I did.

7:44

I did.

7:45

And this is not just, again, this is not just an OpenAI thing or a Sam Altman thing.

7:51

I think before I really started reporting on AI in earnest,

7:55

I kind of thought, you know, of course, like nerds are going to nerd.

8:00

And like, you know, sci-fi people are going to sci-fi.

8:03

And like, yeah, everyone has some apocalyptic fantasy about how their generation

8:07

will be the last one ever on Earth.

8:10

And there's definitely truth to that.

8:12

I mean, there are these narrative things that, you know, in the nuclear age,

8:16

we get Dr. Strangelove and, you know, now in the age of AI,

8:19

we get AI dystopian fantasies.

8:21

And it's even weirder than that because the AIs are trained on data that includes dystopian sci-fi.

8:27

So they themselves start spitting it out sometimes.

8:30

Yeah.

8:31

So I'm not sitting here and saying like, the Skynet scenarios are likely.

8:36

But the more I looked at this stuff, the more I kind of understood what the arguments are

8:41

from the people who are really worried.

8:43

And they were not all arguments that I could immediately refute.

8:47

And so I think the fact that you now have members of Congress on the left and the right,

8:52

you know, saying, let's take these nerds kind of more seriously than we did,

8:57

it's not incidental.

8:58

I think it's because they're actually listening to the substance of the arguments for the first time.

9:03

And even though the arguments might be hypothetical and even though they might be technical,

9:07

they're not ones that you can just immediately bat down without giving them serious thought

9:11

and without actually trying to regulate or control our way out of it.

9:15

Yeah. And the other thing is we talk a lot about the technology itself,

9:18

but you can't divorce the technology itself from the people who are building it

9:23

and then the people who are in charge of it and the people who may or may not regulate it in the future.

9:28

Right.

9:29

I would place my money on may not.

9:31

Right. It seems like the entire, the governance structure of AI in the broadest sense,

9:37

not just from actual governments and politics, but from what's happening at these companies,

9:40

it seems critical here, which is what your piece gets into with regard to Sam.

9:44

So let's just, I just want to get into a few of the bigger revelations in the piece.

9:50

I thought one of the more damning revelations is what happened with the allegedly independent investigation

9:56

of Sam Altman after the board fired him in 2023 for essentially lying to them.

10:01

And so Altman sort of engineers his own return a few days later.

10:06

And one of the conditions of his return is this outside investigation led by WilmerHale,

10:11

which is the same law firm that investigated Enron.

10:14

A few months later, OpenAI announces that the investigation has cleared Altman,

10:19

but there's no written report.

10:21

Nothing's made public.

10:23

That's it. And a board member told you this could prompt a need for another investigation.

10:28

Has anyone reached out to you guys since the publication, anyone in the Delaware or California AG's offices,

10:35

or do you think there's an appetite for a real investigation now, or do you think that that chapter is closed?

10:40

Yeah. I mean, we, I think, really nailed down in our reporting for the first time that there was never a written report

10:47

because it appears that a report was never written.

10:50

And it seems from all of our reporting that that was intentional, that the goal seemed to be to clear Altman,

10:59

or at least that if that was where it was heading, a lot of sources told us like,

11:04

well, then why should we create a paper trail that could create complications for us

11:09

if where we're heading is to exonerate him?

11:12

And this gets to sort of one of the persistent patterns that comes up in the reporting of this piece,

11:19

which is, you know, everyone knows that Sam Altman was fired in late 2023, and everyone knows that he came back.

11:26

What people didn't know before we got our hands on all these documents,

11:32

and by people, I mean, not just the general public, but like Microsoft executives, like investors, OpenAI employees,

11:39

there was a ton of confusion at the time of like, why is this person being fired?

11:44

Like, "What did Ilya see?" became the meme around Silicon Valley.

11:48

Because Ilya Sutskever was a co-founder and member of the OpenAI board,

11:52

who kind of became the swing vote in the firing.

11:55

And we have now reviewed a lot of documentation, including the full memos that Ilya Sutskever sent to the board,

12:03

backing up why he thought Altman should be fired, lots of other notes that were kept by Dario Amodei,

12:09

and other employees, also some employees who have left and have gotten out of the game,

12:13

are not part of rival companies, but who are just sort of concerned citizens or whistleblowers.

12:19

And what it all redounds to is basically like, if it had been one really simple smoking gun that you could have put in a tweet,

12:27

we would know about it by now.

12:29

Right? The reason that this remains mysterious on some level is that it wasn't one thing.

12:35

It wasn't like Ilya walked in on him strangling a bunch of baby kittens and was like, you know, this guy needs to go, right?

12:41

Normally, when you fire a CEO, it's because of a pretty clear bright line pattern of behavior.

12:47

In this case, what we document, and the reason the piece took such a long and meticulous process,

12:53

is it's kind of this accumulation of small details where people feel that he's telling mutually contradictory stories

13:00

to different sets of people both inside and outside the company.

13:03

He's telling people what they want to hear.

13:05

You know, these are the allegations that one hears, and honestly, any one of them in isolation,

13:11

you might kind of think like, okay, a CEO who tells people what they want to hear,

13:15

like, is that a fireable offense?

13:17

And it's only over kind of the accumulation of these details that it starts to add up to something.

13:22

Well, and also alarmingly, it seems from your piece and from everything we've seen that since he has returned,

13:29

none of that has really changed.

13:31

None of the complaints or concerns about him have really gone away.

13:35

He hasn't changed. He's still sort of doing the same thing.

13:37

Yeah, I mean, if anything, one thing we document in the piece is that he's sort of gone more into what's called founder mode in Silicon Valley,

13:45

which is like, yeah, it's my company, and you know, I'm not going to be as much of a people pleaser anymore.

13:51

You know, when we talked to him, and we actually, you know, did talk to him extensively,

13:55

he did kind of cop to this and say, yeah, you know, at certain times in the past,

13:59

I've been sort of too much of a people pleaser, and I've been too conflict-averse.

14:03

And he said, I'm going to work on being less conflict-averse in the future.

14:07

So if anything, it's sort of more control at the top, which I think it's important to point out,

14:13

like this is directly flying in the face of the way that OpenAI specifically was pitched from the beginning.

14:20

Right.

14:21

There's a way of looking at this that's like, again, wow, so crazy that a CEO has control of his own company.

14:27

Like, how naive could you guys be?

14:29

I think for people who are not inundated with this stuff, it's important to start from the beginning,

14:35

and to remember or recognize the ostensible purpose of OpenAI.

14:40

The reason that Sam Altman said it needed to exist was as a counterweight to the big evil megacorporation Google,

14:49

because AI was such a powerful technology that it couldn't be left to the profit motive to develop and deploy.

14:55

It had to be in the hands of a small safety-focused nonprofit research lab,

15:01

which was what OpenAI was supposed to be at the beginning,

15:04

because it could only be built slowly, cautiously, with aggressive support for maximum regulation,

15:11

and that to do it quickly, to do it in a race dynamic,

15:14

would be potentially devastating, or could potentially destroy or kill everyone on Earth.

15:19

That was the pitch.

15:21

And then they just decided, well, we're going to actually have a for-profit company.

15:25

That did happen, actually: while we were working on the story, they made the final conversion.

15:31

And speaking of Delaware and California, this was challenged in both of those states,

15:35

because their original articles of incorporation, their original binding fiduciary duty,

15:40

was as a nonprofit to benefit all of humanity.

15:44

And you can say those are sort of airy words, and all tech companies sort of say some version of don't be evil, right?

15:52

But they really said, and their employees, to a large extent, really believed,

15:59

that the whole purpose was to be different.

16:01

They had all these different Byzantine corporate structures where they were, at first, totally a nonprofit,

16:07

and then they were a capped profit owned by a nonprofit, and the board of the nonprofit had exclusive control.

16:14

And they also had this charter where they said, if someone else is developing a safe version of AI before we do,

16:21

we should merge and assist with that project.

16:24

Like we should merge our resources into the safe AI project, even if that happens to be at Google or at the US government.

16:31

So they were saying these things that no normal company in the history of capitalism would ever rationally say,

16:38

but that's because they weren't supposed to be a normal company.

16:41

What did Sam Altman say to you guys about that shift?

16:47

So we had several conversations about this, and one of the things that comes up is, you know,

16:53

we didn't realize how much money we would need to get this off the ground.

16:58

Like we knew we would need money. Basically, I mean, Sam didn't say it to us in these words, but what's clear from talking to him and from reviewing the documentation is,

17:08

his initial pitch in May of 2015 is to Elon Musk, who was then merely the hundredth richest person in the world, and not the single richest person.

17:16

And he says, because AI is so dangerous, and because Google is doing it and Google is the bad guy, we need to start a Manhattan project for AI.

17:25

And we might need up to a billion dollars to do it. Fast forward to now, their most recent round of funding alone was 122 billion.

17:34

And we kept having to update that in the piece because we would write in the piece their most recent round of funding alone was 40 billion.

17:41

And then by the time the piece went to revision, they had done another head-spinning round.

17:45

Like the numbers here are literally like impossible for a human to conceive of.

17:51

And so, to answer your question, the story that Sam tells is that, yes, we thought we could be this little David-versus-Goliath safety lab, but we just didn't realize how compute-intensive and how cost-intensive the project would be.

18:05

And there's truth to that. This stuff, you know, it gets smarter, apparently, the more data and training you feed it.

18:12

And that's really expensive. And you need to build these massive data centers. They suck up a lot of power. You need to site them somewhere.

18:19

So these are all like infrastructure challenges that were not foreseen at the beginning of this.

18:24

But it doesn't fully explain how aggressive, and how long-standing according to a lot of private records, the intent to ditch the nonprofit structure actually was.

18:37

Offline is brought to you by DeleteMe. DeleteMe makes it easy, quick, and safe to remove your personal data online at a time when surveillance and data breaches are common enough to make everyone vulnerable.

18:51

It's easier than ever to find personal information about people online. Having your address, phone number, and family members' names hanging out on the internet can have actual consequences in the real world.

19:00

More and more, online partisans and nefarious actors will find this data and use it to target political rivals, civil servants, and even outspoken citizens posting their opinions online.

19:11

With DeleteMe, you can protect your personal privacy or the privacy of your business from doxing attacks before sensitive information can be exploited.

19:18

The New York Times' Wirecutter has named DeleteMe their top pick for data-removal services.

19:23

For someone with an active online presence, privacy is important; there's way too much out there about yourself.

19:30

If you're online a lot, there's probably more info about yourself and people you know than you even imagine.

19:36

Have you ever been a victim of identity theft, harassment, or doxing? If you haven't, you probably know someone who has. DeleteMe can help.

19:42

Take control of your data and keep your private life private by signing up for DeleteMe, now at a special discount for our listeners.

19:47

Get 20% off your DeleteMe plan when you go to joindeleteme.com/offline and use promo code OFFLINE at checkout.

19:53

The only way to get 20% off is to go to joindeleteme.com/offline and enter code OFFLINE at checkout. That's joindeleteme.com/offline, code OFFLINE. Offline is brought to you by OneSkin.

20:04

You've probably heard us talk about OneSkin for their best-selling skincare, but now they're bringing that same longevity science to address hair loss with their scalp serum, OS-01 HAIR.

20:13

Spring can bring an increase in seasonal hair shedding (it happens all the time), and changes in routine can trigger stress-related hair loss at any time of year.

20:21

That's right, yikes. OneSkin's OS-01 HAIR serum is formulated to address those concerns at the source, powered by their proprietary OS-01 peptide.

20:29

This scalp treatment targets the hair follicles to support an environment where hair can feel thicker, fuller, and more resilient. Best of all, OS-01 HAIR is drug-free, delivering effective results without any harsh side effects. Experience the difference of a peptide-driven approach to scalp health.

20:42

And see why users are prioritizing OS-01 HAIR in their daily routines. Born from over 10 years of longevity research,

20:49

OneSkin's OS-01 peptide is proven to target the cells that cause the visible signs of aging, so your scalp and your hair stay healthy now and as you age. For a limited time, try OneSkin with 15% off using code OFFLINE at oneskin.co/offline. That's 15% off at oneskin.co with code OFFLINE. After you purchase, they'll ask you where you heard about them.

21:10

Please support our show and tell them we sent you.

21:15

Still, the countries plan you report on is pretty incredible. Greg Brockman, the president of OpenAI, allegedly proposed that they play Russia and China and the US against each other, basically starting a bidding war for advanced AI.

21:30

Brockman half denies this.

21:33

Yeah, so I was going to say: A, how confident are you in the reporting, and B, what does it tell you about how the founders actually thought about humanity benefiting from this technology? So actually, we feel really confident in the reporting. You know, it's funny, like, I think people really are right to be skeptical about any of these industry stories, and especially to be on the lookout for, you know, competitors

22:02

trying to sling dirt at each other and sort of launder it through the press. There are several parts of this story where we really, really try to put pressure on things that seem like they, you know, are

22:14

flinging, you know, mud at OpenAI so that a competitor like Google or Anthropic or xAI can, you know, benefit from that, and we go to great lengths in the story to kind of tease those apart and try to be fair.

22:27

Something like this countries plan is not that; everyone in the room basically agrees that some version of this happened, and they kind of just recall it differently. Now, to be clear, we are talking about hypotheticals, right? We're not talking about a scenario where they did sell it. But basically everyone in the piece

22:48

agrees that some version of a countries plan happened. And basically, I mean, people should go read the piece, but basically, in the early days of OpenAI, they are all talking about this mission: how, when they achieve the most powerful advanced AI ever, and it's kind of the most powerful invention since electricity, they need it to benefit humanity rather than destroy humanity.

23:15

How will they do it? What does that mean in practice? And they're kind of bouncing around ideas, like in a, you know, in a conference room with a whiteboard. And they actually hired someone whose entire job was to make a game plan for, like, okay, how did they do it with nukes? Well, they had this whole thing called the Baruch Plan, and, you know, let's write up a whole proposal about what a Baruch Plan for AI would look like, right? And the allegation is that over time this kind of non-zero-sum, non-competitive vision kind of

23:45

morphs into a fundraising pitch, basically, and then it morphs into, well, what if we, like, sold it to world governments? Now, Greg Brockman denies that that was the idea. He says it was actually, like, something less scary than that. But nobody denies that this took place at all. These are the kinds of things that were being

24:04

batted around, and apparently they were also pitching to outside investors, at least one investor. So these things sound crazy on their face, because they kind of are. But

24:17

it's also like, this is how they were talking about it at the time. This wasn't just a public, you know, rhetorical display. This wasn't just, like, what they put in commercials. This is how they talked about it among themselves. You know, there will be an AGI dictatorship, and whoever gets there first, you know, can

24:34

control the ring, and so on. I mean, these were like routine metaphors that they used in their private correspondence. On the countries plan thing,

24:42

Greg Brockman does say, we were never going to auction this off to evil world powers. So his story is that there was a more collaborative effort that he was envisioning. But these are all different versions of the way people remember the same set of discussions.

24:59

So there was an argument for the countries plan that is not, like, diabolical and just about, you know, playing these countries off each other to make money?

25:08

There were several iterations of it. What we were told is that there could have been a version where it was, like, trying to make it mutually assured destruction, so that everyone had an equivalent arsenal, so that nobody blew each other up.

25:23

I think people who deeply study nuclear deterrence would find some flaws in that analogy, but this is how it was talked about, right? They want to give everyone the nukes. Exactly, exactly. I mean, I think, we're pro nuclear proliferation. Exactly, exactly. We like the proliferation. Yeah. Okay. I mean, you know, they wouldn't be the first people in history. I mean, the thing is, like, in this story, as with all these stories, you don't find people who are sitting there twirling their mustache and saying how they'd be evil today.

25:52

What they saw themselves as trying to do, and this is Sam Altman, Greg Brockman, like, I do believe, based on the body of evidence, they were trying to find a way to be the good guy. And I think about the story that you tell yourself if you think that you are in this world-historical position. I mean, remember,

26:11

these are people who routinely compare themselves to Robert Oppenheimer and all the characters in The Making of the Atomic Bomb, and they sort of say, like, okay, who are you? He's Edward Teller, I'm Oppenheimer, who are you going to be? Right? So if you think, and not for no reason, that that's your role in future history books, then you have to come up with a way to be not villainous, in a way that

26:41

is also realistic, and that also wins the race before the bad guys win the race. And so then it does become a kind of Manhattan Project thing, right? Why would you build an atom bomb? Well, you would do it if the bad guys are going to do it first. Yeah. And I mean, I think Sam acknowledges this to people in your piece, which is, I think, from the outside,

27:00

you're like, oh, these rich people just want more money, right? Well, they're all rich, and yeah, of course money is a driving motivation for a lot of people, for all people in business.

27:09

But I think what people sometimes miss is how power, and not even power in the sense of, like, again, twirling your mustache, but influence, and this notion, this great-man theory, where they think, like, yes, this is going to be legacy-defining, and I'm in history, and so I must control this, because other people are bad, and if I control this, it's good. And maybe they don't think to themselves that they're going down the bad path, but when you believe that you

27:38

are the only person who can do something, and then you just keep getting more and more control, it's going to lead to bad outcomes historically, right? And it's going to lead to race dynamics, which was another thing that OpenAI set out to avoid, ostensibly, from the beginning. On the sort of foreign entanglements: one line in your piece I keep going back to is the former OpenAI executive saying, quote, we're building portals from which we're genuinely summoning aliens, and that Altman has now placed one of those portals in the middle of the world.

28:07

So national security officials in your reporting are clearly alarmed about this, as I think they should be.

28:15

Altman's foreign financial entanglements are compared to Jared Kushner's.

28:21

Can you talk about why this alarmed so many people? My reaction was, like, how is this not a bigger story in Washington?

28:29

Oh, I mean Altman's foreign entanglements were compared to Jared Kushner's in the process of him trying to get a security clearance or or at least considering getting a security clearance when it emerged that members of royal families from I guess it was the UAE in that case we're giving him very expensive cars as personal gifts so yeah there is a level of foreign entanglement here that is at the very least eyebrow raising look at the

28:59

whole story of these companies and their involvement with the government, and with intel agencies and national security agencies, could totally have been its own piece. I mean, there's a lot of really, really rich, suggestive reporting there.

29:15

So OpenAI was started under the Obama administration, goes through Trump one, goes through Biden, goes through Trump two. What you see and what you hear from talking to officials from these administrations is,

29:28

because the allegation about Sam Altman is that he mirrors back what people want to hear, what you often hear from government officials is that when the prevailing winds are toward regulation and toward export controls on sensitive chips and things like that,

29:50

you know, there would be some push and pull, and there would be some tension, the way there often is with industry. But broadly, a lot of the people we spoke to felt, at least under the Biden administration,

29:59

yeah, I mean, you know, OpenAI is pro regulation. And then we have a quote from someone basically saying, as soon as Trump got reelected, he said, okay, well, now the shackles are off and I don't have to play that game anymore. You know, that was the perception of these government officials.

30:15

And then what you see is, on the first full day of the second Trump administration, this big announcement that OpenAI will do the biggest build of data centers in history with the support of the Trump administration. And then you see Sam Altman, who had been a stalwart donor to Democrats and Democratic PACs, suddenly saying Trump is such a refreshing change, it's so great to have a pro-business president.

30:39

Do you think his political views actually evolved, or does it seem more like opportunism? It seems, and we have people in the piece saying this, like,

30:50

what he wants to do is win the AI race.

30:54

So his actions and rhetoric seem consistent with what he thinks will best achieve that. And this is something that you see

31:04

in closed-door meetings with government officials, this is something you see in public testimony before Congress, this is something you see in his

31:11

interviews. I mean, one ability that people point to, and this is, you know, coming from many, many interviews.

31:19

It seems like he was particularly well suited to.

31:25

Sort of meet a particular historical juncture where you know it's 2015.

31:31

We've just gone through the techlash. Social media executives have had this really blustery approach of, you know, if you regulate us, you're a Luddite and you're

31:42

ceding the future to China. And so Altman comes to the public with a very different pitch and says, actually, please regulate us.

31:50

What we're doing is so dangerous that if you don't regulate us, you and everyone you love will die. He goes before Congress and says,

31:56

I urge you to do more. And we have in the piece Senator John Kennedy, not usually charmed by tech CEOs, saying, oh, could you please, like,

32:07

write the regulation for us, basically.

32:10

At the same time he's making a pitch to his own employees and recruits.

32:17

The engineers who are so terrified of the power of this technology that they themselves don't want to build it.

32:22

at least not until it's proven to be safe. And he's saying to them, I'm really one of you, I really am so concerned about these safety things that I need you involved, because you alone can build it safely.

32:38

And then according to the reporting we have from you know investors.

32:42

he goes and, you know, does a pitch deck and says, let's accelerate this, and it'll be really profitable for industries.

32:51

Again, it's like, I don't want to be overly shocked by the fact that, you know, a CEO makes different pitches to different people, but the level of

33:00

difference, and the level of existential stakes being invoked here, is really unusual. And that's also something that happens from one

33:07

Presidential administration to the next.

33:09

Offline is brought to you by 3 Day Blinds. At this point, we can shop for groceries, furniture, and even cars from home, so why is blind shopping still stuck in the Stone Age?

33:23

That's why you need to check out 3 Day Blinds. There's a better way to buy blinds, shades, shutters, and drapery, and it's called 3 Day Blinds. They are the leading manufacturer of high-quality custom window treatments in the US, and right now,

33:33

if you use my URL, 3DayBlinds.com/offline, they're running a buy one, get one 50% off deal. 3 Day Blinds has local, professionally trained design consultants who have an average of 10-plus years of experience.

33:44

They provide expert guidance on the right blinds for you in the comfort of your home. Just set up an appointment and you'll get a free, no-obligation quote the same day. Not very handy? The expert team at 3 Day Blinds handles all the heavy lifting. They design, measure, and install, so you can sit back, relax, and leave it to the pros. I love 3 Day Blinds. I've been using them for years and years, even before they were a sponsor.

34:03

They're great. They come to your house, you tell them what you want for blinds, they give you a whole bunch of options, then they help you pick them out and they help you install them. It's all very easy, and the blinds themselves are just very high quality. 3 Day Blinds has been in business for over 45 years, and they have helped over two million people get the window treatments of their dreams, so they're a brand you can trust. Right now, get quality window treatments that fit your budget with 3 Day Blinds. Head to 3DayBlinds.com/offline for their buy one, get one 50% off deal on custom blinds, shades, shutters,

34:33

and drapery. For a free, no-charge, no-obligation consultation, just head to 3DayBlinds.com/offline. One last time, that's buy one, get one 50% off when you head to the number 3, D-A-Y,

34:44

Blinds.com/offline.

34:47


34:59

America's first pledge was freedom.

35:01

Jeep still carries that fighting spirit. With the Jeep Declaration of Deals, we're pledging allegiance to the American people with great deals on the luxurious Grand Cherokee, with available three-row seating, premium craftsmanship, and tech that turns every drive into an adventure, or the Jeep Wrangler, with legendary 4x4 capability and open-air freedom.

35:17

Because freedom deserves a vehicle built to carry it. Jeep, there's only one. Hurry in to your local dealer for the Jeep Declaration of Deals.

35:24

Jeep has won more awards over its lifetime than any other SUV brand. Jeep and the Jeep grille are registered trademarks of FCA US LLC.

35:30

Yo, hey, if you're thinking about a career change but not sure where to start, this is Jay Cruz. American Career College offers 13 healthcare training programs for people just like you looking to move into healthcare.

35:40

Recognized by USA Today as one of America's top vocational schools in 2025, ACC helps students train for new careers, offering multiple programs like vocational nursing and medical assistant that can be completed in a year or less. Financial aid and scholarships are available for those who qualify.

35:55

It's easy. Just go to ACCfuture.com. That's ACCfuture.com. Get started now. ACC cannot guarantee employment.

36:03

I want to ask about Altman's involvement in the battle between Anthropic and the Defense Department. So Hegseth blacklists

36:13

Anthropic as a supply chain risk because the company wouldn't drop its prohibitions on autonomous weapons and domestic surveillance. Hundreds of OpenAI and Google employees sort of sign a letter defending them.

36:25

Meanwhile, as you guys report, Altman has been negotiating with the Pentagon for at least two days while signing an internal memo claiming OpenAI shared Anthropic's ethical boundaries.

36:38

Emil Michael, the Defense Department official who had previously been, I guess, Travis Kalanick's right-hand man at Uber, says on the record, I called Sam and he was willing to jump.

36:50

Is there a less cynical reading of that? Or is that just the reading?

36:56

I would say the less cynical reading of it is something we talked about before, which is that people don't think of themselves as being the bad guys. People think of themselves as doing the best job they can to be the good guys in a tough set of circumstances. So I think what Sam's defenders would say, and we talked to multiple Altman defenders, Altman loyalists, people who stayed at the company for a long time, people outside the company,

37:21

I think what a defender would say about this Pentagon interlude is, okay, he saw that, you know, the relationship between the Pentagon and Anthropic was fraying, and he wanted to come in and get those contracts so that someone worse couldn't get them. Probably someone worse would be Elon in that scenario.

37:40

So that's the most defensible version. And I think Sam Altman has said publicly, look, you know, this two-hundred-million-dollar contract that we got from the Pentagon, that's peanuts to us. It wasn't really worth the PR hit for me to do that. I only did it because I was trying to help. Now, people can believe that or disbelieve it. Maybe it's just that he's such an instinctive dealmaker that he couldn't leave a deal unmade when he saw an opportunity.

38:06

Maybe he believes in Anthropic's red lines, and maybe he believes that he has gotten a better deal. We don't know, because they haven't made the contract public. They've just sort of said, like, the government says they won't do mass surveillance and we believe them, but we'll see. I mean, again, one of the benefits of putting all this together in a big, long New Yorker piece is you can really see the evolution from the start

38:36

of the OpenAI dream until now. And I think if you could put someone who was one of the co-founders, or one of the early employees from 2015, into a time machine and say, we're swooping in to get the autonomous drone contract with the Department of War,

38:52

They would find that a little surprising based on the original pitch.

38:56

Yeah, I mean, reading the piece is just like watching the train come down the track and nothing stopping it.

39:02

So, speaking of Anthropic, the day after your piece dropped, the company announced it's withholding its newest model, Mythos, from public release because they believe its cyberattack capabilities are too dangerous.

39:15

Meanwhile, Sam Altman just told Axios this week that AI-enabled cyberattacks are, quote, totally possible within the next year. Your piece reports on an OpenAI representative who literally asked you, what do you mean by existential safety? That's not a thing.

39:32

What do you make of Anthropic's decision, and how do you compare it to what is currently happening at OpenAI?

39:40

Yeah, just to clarify, you know, I've seen some people sort of saying about the existential safety thing, was that like a gotcha journalist question, where the question was worded in a confusing way? And I should just say, we put it multiple ways, multiple times, because there's a difference when you say safety.

40:01

Sometimes that means, like, user safety, user privacy, making sure people don't get doxxed, or, you know, making sure that the chatbots don't say naughty words or whatever. And then there's existential safety, which is making sure that the thing doesn't literally kill all of us. Which, again, I didn't invent that as a fear. Like, OpenAI told me to be afraid of that.

40:19

And that was just not something that this representative had ever heard of, apparently. Look, the thing with Anthropic is tricky, because on the one hand,

40:33

This is apparently the first instance we've seen of a company being asked to do something and saying no we won't do it because that violates our ethical principles and therefore putting itself into a really perilous position as a business.

40:47

on the other hand, it's not like Anthropic is really acting like, you know, an AI safety lab nonprofit either. I mean, they were only in that position because they were the classified system of choice at the Pentagon to begin with, and they've made many, many other compromises. I mean, they're also raising money in the Middle East. So I think it's this very complicated game theory dynamic where everybody thinks, or wants to think, we're doing the best we can,

41:17

and we're between a rock and a hard place. But it's not like Anthropic is acting super unblemished by their own lights either. I mean, the whole idea behind OpenAI, and then Anthropic subsequent to that, the sort of pristine rhetorical idea, right, is we're going to incentivize a race to the top so we don't have a race to the bottom. And I don't see anyone racing to the top. I see a lot of racing to the bottom, or somewhat slowing down the race to the bottom.

41:43

Yeah, and this is something that I've come to think is key to understanding this whole thing, as I've, you know, interviewed people in these companies and done a lot of shows on this. It's like, we still think in terms of characters and villains and good guys and bad guys, but there's a larger structural issue here. Which is, yes, Anthropic can seem right now like they're doing their best, and maybe they're the best of the bunch. Obviously, I don't feel like Elon's running a tight ship over there at xAI.

42:13

And reading your piece about Sam Altman's OpenAI, that doesn't seem so great either. But it's not that these are just individuals who have, like, personal moral failings or, you know, this profit motive above all else. There is a larger system here where, if you have a

42:30

competitive environment, both within this country and globally, where all of these different

42:35

companies and all of these different individuals are racing to build this technology, within

42:39

a capitalist system, this is what's going to happen.

42:43

Absolutely.

42:44

And to be fair to all the crazy hypothetical scenarios we were talking about with the

42:49

company's plan, this is something they foresaw and to at least some extent theoretically

42:55

tried to avoid.

42:56

The question is, A, was it ever avoidable and B, how hard did they try to avoid it?

43:01

But it is definitely true that there are structural things at play here that are more important

43:08

than any of the individual personalities.

43:09

And I would not want people to come away from this piece thinking, okay, Sam Altman should

43:15

not be an AGI dictator, so clearly someone else should.

43:19

That's not the point here.

43:21

The point is, it is crazy that we're having a conversation about AGI dictators at all.

43:25

And it's crazy that that's not a super crazy thing to worry about.

43:29

Well, so that brings us to regulation because one way to deal with the systemic incentives

43:35

is to actually pass legislation, rules, regulations.

43:39

A few hours after your piece was published, OpenAI just happened to release a 13-page

43:44

policy blueprint calling for a new deal for the AI era: taxing capital, a public wealth

43:49

fund, a four-day work week, which one AI expert, Anton, called, quote, comms work

43:55

to provide cover for regulatory nihilism.

43:59

How are you reading the timing?

44:01

Do you think your story had anything to do with it?

44:04

Yeah. And they also hired a ghost hologram of FDR to roll it out.

44:10

No, look, again, I don't know what's in anybody's heart or mind, but

44:16

it definitely came out.

44:18

the day our story came out. And they also acquired this tech talk show, TBPN. Meanwhile,

44:25

while we were closing the piece, they had a few interviews lined up that seemed thematically

44:31

related to the themes of our piece.

44:33

Look, I mean, it is the absence of a coherent regulatory regime that makes the PR battle

44:42

so intense to some extent, because if there were clear rules of the road, you could talk

44:48

about who's playing by the rules.

44:51

If everyone agreed on what to do technically to keep these systems safe, you could have

44:57

a purely technical or technological conversation.

45:01

But in the absence of those things, to some extent, it becomes a PR battle.

45:05

So you see these companies engaging more and more in a PR battle.

45:09

And one thing that people consistently say about Sam Altman is he's an incredibly gifted

45:14

pitchman.

45:15

And so the fact that he's given different pitches to different groups over time, you

45:20

know, you could say that's a feature not a bug, depending on your perspective on it.

45:25

Anyone who's played around with this stuff knows that they have certain kind of built-in

45:29

tendencies and tics and traits.

45:31

And one of them that we talk about in the piece is sycophancy, which is this problem that

45:35

the models can't stop telling you what you want to hear.

45:39

And that could be a feature or a bug, depending on what your goal is.

45:44

And so if you can't stop telling people what they want to hear, you might not always

45:50

arrive at the most blunt, true answer, but it could be a compelling or appealing answer.

45:58

Keeps you on the platform.

45:59

It sure does.

46:00

It sure does.

46:01

And, you know, I'm not here to say that I know what the right regulation can or should

46:06

be.

46:07

I mean, to the extent that we are summoning aliens out of portals, like that's a very

46:11

hard thing to regulate.

46:15

But I do know that the regulations that OpenAI claimed to support, they no longer seem

46:22

to support.

46:23

And in fact, we have reporting showing that they were kind of going behind the scenes to

46:26

try to scuttle that very kind of regulation.

46:29

And like asking people to call Nancy Pelosi and Gavin Newsom to get it scuttled.

46:34

So we now live in a landscape where, you know, these things are being built.

46:41

And if you are a state politician who wants to introduce a state bill to control it in

46:50

New York or California, you might run for Congress and have a massive super PAC dropping

46:55

money against you because you support AI regulations.

46:58

So that's another kind of way that the ideal scenario as it would play out in an Isaac

47:04

Asimov novel kind of interfaces very uncomfortably with the realities of politics

47:11

under capitalism.

47:12

Well, I noticed that even with the new deal for the AI era that OpenAI and Altman released,

47:19

it is heavy on sort of economic regulations and policy proposals, all of which would require

47:28

the government to deal with taxes.

47:32

And it basically wouldn't really hurt the company that much or stop the company from doing

47:38

what it wants to do.

47:39

And again, this is like last summer now, so it seems like old news.

47:43

But we came very, very close to living in a world where not only was there not robust

47:49

AI regulation, but where there was almost a federal provision mandating a moratorium on

47:55

state regulation.

47:56

Right?

47:57

I mean, remember this.

47:58

Yeah.

47:59

So we almost, and according to the reporting from that time, it was Steve Bannon and Mike

48:05

Davis and other people on the right who were lobbying against that.

48:10

So there's some strange bedfellows stuff going on here.

48:13

But we almost had a situation where not only do we not know how to regulate this new alien

48:19

technology and not only do we not have federal regulation to do it, all we're doing is federally

48:24

banning any regulation at the state level.

48:26

So that's kind of where we almost were and where we are is, okay, now we just don't have

48:32

regulation.

48:33

There's, like, a couple of bills in California and other places, but it's all very

48:37

rudimentary.

48:38

Well, and in the OpenAI policy blueprint thing, the safety section is almost entirely

48:44

voluntary, what they're proposing.

48:46

There's some regulation on economic dislocation, but not really anything

48:51

that they seem to be willing to accept on the safety side.

48:54

And look, again, like a lot of this stuff, there really is a good faith argument for and

49:00

against a lot of these regulatory proposals.

49:02

I mean, a lot of people watched this Pentagon thing go down and used that to say, okay, is

49:08

this the government that you want regulating this technology really, you know, so there,

49:12

there really are good faith arguments on all sides.

49:15

It's just when so much of the argument is being driven self-interestedly, it's hard to

49:20

know where the good faith arguments begin and end.

49:23

You report to the companies are preparing for an IPO, the potential trillion dollar valuation.

49:29

One of your sources told you that, in another era, some of the company's

49:32

accounting practices would have been borderline fraudulent.

49:35

A board member told you the company is, quote, levered up financially, in a way that's

49:38

risky and scary right now.

49:40

Do you get the sense that this is a bubble that will pop? And if so, how do you

49:45

think that changes the story you guys told in this piece?

49:48

Another person who told us that this is probably a bubble is Sam Altman, who has said multiple

49:56

times that he thinks it's a bubble and that someone is going to lose a phenomenal amount

49:59

of money.

50:00

I believe that's a direct quote.

50:01

So yeah, I worry about the potential for a bubble here.

50:05

And another thing, look, I mean, for people who are, again, not super read in on the technical

50:11

details and are kind of sitting a lot of this out, one kind of simple binary that often

50:17

gets tossed around is like, is this a bubble or is this like a really useful transformative

50:21

technology?

50:22

And I think it's key to remember that it can be both, right?

50:26

A lot of the biggest bubbles that we've seen are, you know, around the building of the

50:30

Transcontinental Railroad or the laying of fiber optic cable during the telecom boom.

50:35

These are massive infrastructure projects that ended up being really useful and economically

50:40

transformative and also created massive bubbles followed by recessions.

50:44

So you can end up using all that.

50:48

Now a lot of people say it's even worse in the case of the data centers because,

50:52

unlike train tracks or fiber optic cable, these chips depreciate so quickly that, you

50:57

know, basically, you're paying for them, and then three years later they're not

51:01

usable and you have to do the investment raise all over again.

51:04

So it's definitely an overheated moment economically.

51:08

And basically, based on what the experts told us, the only way we come out of it

51:13

without a bubble is if these models just keep leaping and bounding and growing in their

51:19

capabilities year over year and month over month and week over week. And nobody

51:23

knows; that's impossible to predict.

51:25

So you can raise investment based on promises but the technological breakthroughs either happen

51:30

or they don't.

51:31

Yes, so it's either a massive economic bubble that bursts or technology that quickly becomes

51:37

the killer robots that we're all afraid of, or perhaps both.

51:42

It could always be both.

51:43

It could always be both.

51:44

So you spoke to a lot of people who left OpenAI over the concerns that we've talked

51:49

about: Ilya Sutskever, the whole superalignment team.

51:54

These are people who took huge pay cuts to work on what they thought was the most important

51:57

problem in the world.

51:59

Most of them end up leaving in disillusionment.

52:01

Like you said, some are competitors, but some have just left.

52:04

What did you take away from talking to them about that loss, that disillusionment?

52:08

Yeah.

52:09

And this is another area where we were trying to filter really hard for competitor gossip

52:15

and competitor gripes.

52:16

And one of the strange things about this industry is that as soon as they leave one company,

52:23

they go off and raise a billion dollars and start another company.

52:26

So they're all kind of rivals at this point.

52:28

So Ilya Sutskever has his own company now called Safe Superintelligence.

52:33

Dario Amodei obviously has his own company called Anthropic.

52:36

So we were trying to really filter and not just, like, launder people's grievances and

52:40

complaints.

52:41

One thing that does become pretty clear is there were some people who were really close

52:47

to this technology who really, really believed that it could be massively dangerous.

52:53

And so again, this is something that often gets discounted as, oh, this is just an attempt

52:58

at regulatory capture.

53:00

This is just people trying to hype up their product.

53:03

I am here to tell you there were and are people close to this technology who really,

53:08

really think it's dangerous.

53:11

Now, why are they still building it?

53:14

Good question.

53:17

There's kind of a selection bias problem here where the people who are so scared of it

53:21

that they don't build it, they're not in the piece, because they stopped building it.

53:25

Right?

53:26

So you do have this kind of weird game theory problem of you only end up dealing with

53:31

the people who are scared of it and yet continue to be in the race.

53:35

But the scenarios where this thing goes off the rails, there are more of them than I

53:40

realized and they are less far-fetched in some ways than I realized.

53:45

I mean, still far-fetched, but they don't require necessarily for the thing to wake up and

53:53

become sky-net and decide that it hates humanity and destroy us.

53:58

Right?

53:59

I mean, there are many, many other ways that this thing can go wrong.

54:03

You know, I'm actually just going to read you one thing, because I think it's relevant.

54:12

This is a quote from a blog post, superhuman machine intelligence, quote,

54:16

does not have to be the inherently evil sci-fi version to kill us all.

54:20

A more probable scenario is that it simply doesn't care about us much either way,

54:24

but in an effort to accomplish some other goal wipes us out.

54:27

That's a quote from a blog that Sam Altman wrote in 2015.

54:31

And so it's an oopsie that destroys civilization.

54:35

An oopsie.

54:36

You know, some of the best sci-fi stories involve oopsies, but, you know, again, like we made it through

54:42

the nuclear age so far, maybe this week that'll change.

54:46

And we may make it through this too, but it's not to be taken lightly.

54:51

And I think a lot of people take it lightly or ignore it.

54:55

And look, I don't know what's going to happen.

54:57

The people who are building this stuff don't know what's going to happen.

54:59

And I don't know if, to the extent that AGI is meaningful, I don't know if it will arrive

55:04

in six weeks or six months or six years or never.

55:07

But I know enough to be concerned about the power of this stuff.

55:14

And being concerned about the power of it doesn't mean you think it's good or bad or this

55:19

or that person should be in control of it.

55:21

I think it just means taking it as seriously as the people who are building it.

55:25

Yeah, I was going to say, just a final question, because you worked on this for so long.

55:29

What's the response to this piece that would tell you it moved the needle?

55:33

And have you seen any version of it yet?

55:36

Yeah, I mean, I don't go into these things with a like,

55:40

oh, I hope it does this or a kind of activist thing.

55:43

Obviously, even if I wanted to, journalism is not really that powerful.

55:48

But I would like for people to reckon with how serious this could be.

55:54

And again, I'm not here to say like everyone should be a doomer.

55:57

And all I mean is it would be nice if people lived in the timeline that they happen to live in.

56:06

And in the way, you know, and you guys do this with politics all the time,

56:09

dealing with people who don't want to live in a world where we have a president

56:13

who's saber-rattling about taking out all of Iran's bridges and power plants

56:18

for a war that he started for no apparent reason.

56:23

But that's the timeline we do live in.

56:25

And so I think an equivalent of that with the AI stuff,

56:29

you can think that people are, you know, spinning out and, you know,

56:33

getting wrapped up in hype cycles and you can think all that stuff.

56:37

But none of that is mutually exclusive with taking the underlying thing seriously

56:41

and taking some of the concerns seriously.

56:44

Because like it or not, it's here.

56:49

And that's only going to get, as far as I can tell, more powerful.

56:53

Well, glad that you and Ronan took it seriously and wrote this piece.

56:57

Everyone should check it out.

56:59

Andrew Marantz, thanks as always for joining Offline.

57:01

Thank you. Really appreciate it.

57:14

Offline is a crooked media production.

57:16

It's written and hosted by me, Jon Favreau.

57:18

It's produced by Emma Ilek Frank.

57:20

Austin Fisher is our senior producer and Anisha Bannerji is our associate producer.

57:25

Audio support from Charlotte Landis.

57:27

Adrian Hill is our head of news and politics.

57:29

Matt DeGroat is our VP of production.

57:31

Jordan Katz and Kenny Siegel take care of our music.

57:33

Thanks to DeLon, Villain-Wave, Eric Shoot and our digital team

57:36

who film and share our episodes as videos every week.

57:39

Our production staff is proudly unionized

57:41

with the writer's Guild of America East.

57:50

Why stop at one flavor?

58:01

The Rita's Gelati Sunday brings Italian ice, frozen custard and toppings together in

58:06

one cup.

58:07

Find your closest Rita's and order on the app.

58:09

Plus, get a free small ice after downloading and signing up.

58:12

If you're a maintenance supervisor at a manufacturing facility and your machinery isn't working

58:19

right.

58:20

You need to understand what's wrong as soon as possible.

58:23

So when a conveyor motor falters, Grainger offers diagnostic tools like calibration kits and

58:28

multimeters to help you identify and fix the problem.

58:31

With Grainger, you can be confident you have everything you need to keep your facility running

58:36

smoothly.

58:37

Call 1-800-GRAINGER, click grainger.com, or just stop by.

58:40

Grainger, for the ones who get it done.

58:45
