Big Tech's Big Tobacco Moment
2026-04-04 07:05:00 • 58:56
Offline is brought to you by Zbiotics Prealcohol.
Let's face it.
After a night of drinks, we're not bouncing back.
Like we used to.
Cause we're in our 40s.
And honestly, if you're 30 or over
and you're not using Zbiotics,
I want to pull you aside
and have a personal talk with you about your life choices.
And also, don't think you're so great
if you're in your 20s.
You could use Zbiotics in your 20s too.
It's very good.
I think you just feel better.
You probably should be more varsity.
Zbiotics Prealcohol Probiotic Drink
is the world's first genetically engineered probiotic.
It was invented by PhD scientists
to tackle rough mornings after drinking.
Here's how it works.
When you drink, alcohol gets converted
into a toxic byproduct in the gut.
It's the buildup of this byproduct,
not dehydration, that's to blame for rough days after drinking.
Prealcohol produces an enzyme to break this byproduct down.
Just remember to make prealcohol your first drink
of the night, drink responsibly,
and you'll feel your best tomorrow.
I have Zbiotics everywhere, all over the place,
in my house and in my bag.
I kept running out so I bought like a giant stockpile.
Yeah.
It's like it's my strategic oil reserve.
That's my sprout.
I gotta release the Zbiotics.
It's indispensable.
From the fairways of Augusta to the first pitch
of baseball season and the start of the festival circuit,
April is a sprint of outdoor celebrations.
Don't let a rough next day keep you on the sidelines.
Drink prealcohol to stay ahead of the game
and make the most of every sunny Saturday.
Go to zbiotics.com slash offline to learn more
and get 15% off your first order
when you use code offline at checkout.
Zbiotics is backed with a 100% money-back guarantee.
So if you're unsatisfied for any reason
they'll refund your money.
No questions asked.
Remember to head to zbiotics.com slash offline
and use the code offline at checkout for 15% off.
In moments like these, it's easy to feel overwhelmed
and even easier to feel powerless, but we are neither.
I'm Stacey Abrams and on my podcast,
Assembly Required, I take on each executive action,
legislative battle and breaking news moment
by asking three questions.
What's really happening?
What can we do about it?
And how do we keep going together?
This is a space for clarity, strategy and hope
rooted in action, not denial.
New episodes of Assembly Required drop Tuesdays.
Tune in wherever you get your podcasts and on YouTube.
With so many options, why choose Arizona State University?
For me, the only online option was ASU because of the quality,
where faculty were really involved with their students
and cared about your personal journey.
The dedication to my personal development
from my professors,
that's extremely valuable to me.
Earn your degree from the nation's most innovative university,
online, that's a degree better.
Explore more than 350 undergraduate,
graduate and certificate programs at ASUonline.asu.edu.
Once a juror understands that a company has been researching
this and that the more they looked into it,
sort of the worse stuff they found
and then also that research kind of gets canceled
or the researchers get moved to other projects,
it kind of does start to feel like a big tobacco moment, right?
Yeah.
Yeah.
I'm John Favreau and you just heard from Tech Journalist
Casey Newton, who is our guest this week,
along with New Mexico Attorney General Raul Torres.
I think there was a huge development last week
in the fight to free kids from having their lives
controlled by what's on their screens.
Something that worried me before I had kids
and keeps me up at night now that I do.
It has to do with Mark Zuckerberg,
one of the world's richest men
who runs one of the world's richest companies.
Someone who spent most of his charmed life
using money and power to remove whatever obstacles
get in the way of what he wants.
And what he wants always seems to be more.
More users, more money, more market share,
growth at any cost, even if that's meant
violating people's privacy.
Even if it's meant stealing data or lying to investors.
Even if it's meant trying to bury mountains
of Meta's own internal research
about the harms that Facebook and Instagram have unleashed,
about how addictive their products are,
especially to children.
Something the employees knew and talked to each other about.
Quote, "Oh good,
we're going after 13-year-olds now,"
one wrote.
"Targeting 11-year-olds feels like tobacco companies
a couple decades ago."
Here's another.
No one wakes up thinking they want to maximize
the number of times they open Instagram that day,
but that's exactly what our product teams are trying to do.
Then this exchange between two meta researchers.
Oh my gosh, y'all.
Instagram is a drug.
We're basically pushers.
We are causing reward deficit disorder
because people are binging on Instagram so much
they can't feel reward anymore.
Mark Zuckerberg has basically escaped
any kind of meaningful accountability for this
or anything else.
Huge regulatory fines haven't fazed him.
They're basically a rounding error.
Congress hauls him in to testify from time to time
so they can yell at him,
but they haven't really touched him.
Whistleblowers from inside his company
who've come forward have been smeared
and threatened with lawsuits.
I know, I was supposed to interview one
until she got hit with a gag order.
Zuckerberg is so used to getting his way
that when local Hawaiians objected to his $300 million,
2,000-acre compound and underground bunker
because it was their land
where their ancestors are buried,
Mark actually tried to sue them
because he thinks the money and the power
allow him to get away with anything.
And until last week, he was basically right.
But now he isn't.
When he walked out of a courtroom here in Los Angeles
last week, after taking the stand in front of a jury
for the first time,
Mark Zuckerberg was finally held accountable.
Not by government regulators or board members
or shareholders,
but by a young woman named Kayleigh
from his own backyard in Northern California.
Kayleigh was on YouTube when she was six
and Instagram by nine.
She said that she initially got a rush
from all the likes and notifications,
which during class,
she would run to the bathroom to check
because she was panicked
that she might be missing out on something.
Pretty soon, she was spending all her time on the platform.
She stopped hanging out with her family,
she stopped making friends,
she hit 16 hours a day on Instagram.
She tried setting time limits, it didn't work.
Her mom tried parental controls,
but those didn't work either.
She was bullied and sexually extorted
and she still couldn't keep herself off the platform.
She bought likes and she added filters,
but all the other filtered photos
made her more insecure about how she looked.
She couldn't sleep, she became depressed,
she started cutting herself,
she contemplated suicide,
but eventually she got help.
She also got a lawyer,
and when she was 17 years old,
she sued Mark Zuckerberg.
For the first time in history,
a jury held that both Meta and YouTube,
also a defendant in this case,
were negligent in the design of their platforms.
The jurors found that the companies' negligence
was a substantial factor in causing Kayleigh harm,
and that they had failed to warn users
about dangers that the companies themselves
had long been aware of, but there was more.
The day before the LA verdict,
a jury in New Mexico found that Meta violated
their state's consumer protection laws
by designing a product that fails to protect children
from predators.
The result of a lawsuit brought by
New Mexico Attorney General Raul Torres,
whose office set up an undercover investigation,
where they created a fake Instagram profile
of a 13-year-old girl that was almost immediately flooded
with messages from child predators,
three of whom were then arrested.
The combined damages in the LA and New Mexico cases
amount to a few hundred million dollars,
which is, again, a rounding error for Meta.
But the money isn't really what matters here.
What matters is that Meta and the rest
of the social media giants have now lost
a legal shield that has protected them for 30 years.
Because Kayleigh didn't sue them
over the content on their platforms.
She sued them because their platforms are defective,
because the product's design isn't safe for all users,
especially children.
Meta knew that and didn't tell us.
None of them did.
And so for the first time, these verdicts
might finally force tech giants
to do what no one else has been able to make them do.
Fix the design, make it safer,
get rid of social media's most addictive,
harmful features, infinite scroll, auto play,
push notifications, beauty filters,
even algorithmic recommendations.
This is all on the table now for these juries and judges.
And there will be many more.
2,000 similar pending lawsuits will now move forward,
including a massive federal case with 1,600 plaintiffs
that starts this summer.
Meta is not happy.
They will appeal.
They will keep making the same argument
they made with Mark in the LA trial.
That Kayleigh's problem wasn't Instagram,
it was Kayleigh, or her mother,
or anything else in her life that wasn't Instagram.
They'll keep arguing that their right to free expression
protects them from being forced to change their platforms.
And to be honest, I totally get why so many people are concerned
that these verdicts could also end up forcing
social media companies to have more censorship
and surveillance on their platforms.
Ideally, you would pass a law that deals with social media's
most harmful features while still protecting speech
and privacy, especially for adults.
But that would require a functioning Congress,
and a president who wasn't the most powerful living example
of social media brain rot.
So here we are.
And I think that whatever reservations people might have,
most Americans understand what those jurors understood.
That freedom of expression does not include the freedom
to design an addictive product that you know to be harmful,
especially to children.
This isn't some abstract legal debate.
It isn't some moral panic.
It's what the people who've built and sold these products
have said themselves, even though their bosses
tried to bury the truth.
And most of us are sick of it.
All kinds of people.
People with different politics, different backgrounds,
people without kids, people with kids,
and the kids themselves.
They don't want to spend their childhoods
stuck in their feeds.
Most of us don't want these tech companies
to keep stealing more and more of our attention
just so they can make another billion.
And we certainly don't have much confidence
that the next set of tech gods creating super intelligent robots
will do a better job than the geniuses who
blessed us with the algorithm, probably
because they're run by some of the very same people,
like Mark Zuckerberg.
The anger and disgust that most Americans
feel towards big tech is real.
It's become a potent political force
with an organized, growing movement behind it.
What's needed now are political leaders
willing to listen, take up this fight,
and rally the country around a future
where we control the technology that shapes our lives,
not the other way around.
At the end of the day, that's all the families who
filed these lawsuits and cheered these verdicts really want.
As the trial ended here in LA, some of those families
were standing outside the courthouse,
holding up photos of their children.
Their sons and daughters who struggled with depression
and eating disorders, kids who had taken their own lives.
These parents have been showing up to courthouses
and congressional hearings and school board meetings
for years now, holding up those photos, begging someone
to listen.
Thank God that last week, a jury of 12 people
in Los Angeles finally did.
Up next, my conversation with New Mexico Attorney General,
Raul Torres.
Attorney General Torres, welcome to Offline.
Thanks for having me, I appreciate it.
So you just won a landmark verdict against Meta
based on a lawsuit you filed in 2023
after an undercover operation where your office
created a fake profile of a 13-year-old girl.
What happened after you created that profile
and what did it tell you about what Meta already
knew about their product?
Well, what we were trying to do is recreate
the actual experience of a young person who is new
to the platform.
We had been hearing from our law enforcement officers
inside the agency that a lot of the predatory behavior
that we were most concerned about had migrated
to these spaces.
And so we were just trying to test and see what happened.
She was flooded with sexually explicit material, requests
for some kind of real-world interaction.
And what was most shocking is instead
of flagging this explosive growth in this young girl's account,
the company actually sent her information
about how to monetize her following
and how to grow her following.
And that was the moment for me.
I was like, we really got to dig into this
and go a whole lot deeper.
So I guess the sort of parental controls
that Instagram offers didn't really do anything in this case?
Yeah, no.
I mean, what you saw again and again,
every time we pulled back another curtain inside the company,
you saw all of these communications, emails,
and information that was being shared about not only
the addictive nature of the product,
how harmful it was to kids,
but their very clear awareness of all the predators
that were there.
And matching that, comparing that with what they had been
saying publicly, with what Mark Zuckerberg had been saying
publicly, was something I think really prompted the jury.
I mean, they heard six weeks of testimony
came back with a decision in less than a day.
And my sense is, they were trying to send a message.
And so hopefully everyone who's been paying attention
to this case really starts to understand the sense of urgency
that I think people in the community have about it.
So the jury awarded the maximum penalty per violation,
$5,000 each, but the total of $375 million
was under the $2 billion you asked for.
One juror said they compromised on the number of violations
but maxed out on the penalty per child.
How did you read that?
Well, I mean, to your point, I think they did compromise.
We were looking for something that captured
the full extent of the harm, all the underage kids
that were on the platform.
I think they took a compromise and went with a number
that represented the estimate of kids
that might have actually been harmed.
The thing to remember though is that that $5,000 penalty
hasn't been changed since 1970.
since we first enacted this consumer protection law.
Had it been adjusted for inflation,
it would have been $40,000 per violation.
That would have pushed the result
to just under $3 billion.
And so one of the things that we're doing
in the aftermath of this verdict
is pushing to both expand the definition
of what's covered under the act,
but really ratchet up those penalties.
Because I recognize, I think everybody recognizes
that that's not a big enough stick
for a company that has this many resources
and engages in this kind of commerce all over the world.
We need to have stronger deterrence.
It's something that I'm working on.
I'm also really trying to push the other AGs
around the country to really re-examine
our consumer protection laws.
Because most of them haven't been updated in years.
I read that you're also going back to the table
and may ask the judge for additional financial penalties
and a ruling that would force Meta
to make changes in their apps.
Can you talk more about what specifically
you'll be asking for?
So the judge separated out our public nuisance claim.
And so we're gonna come back.
We're gonna really present more evidence
about how much harm the company's products have caused here
in New Mexico, and we'll be asking for additional monetary penalties.
But the more important piece of the presentation
that's gonna happen in May
is on our request for injunctive relief.
That means real age verification, changes to the algorithm
where they stop bombarding kids with notifications
during the school day and the middle of the night.
Changes to infinite scroll, to autoplay of videos.
And we're gonna actually be asking the court
to set up an independent monitor,
hopefully relying on technologists and experts
from around the country to help us design
very clear and specific features
to create a safer environment there.
The cool thing about it is that if we can do this here
effectively, we can actually establish a blueprint
for what can happen around the rest of the country
and around the world.
So I think it's a real opportunity
for us to change fundamentally
in the way this company does business.
So this one case, if it's held up on appeal
and if the judge agrees, it could lead to maybe the end
of infinite scroll, of some of these notifications,
push notifications for children, age verification,
just all across meta and perhaps other
social media companies as well.
The jurisdiction of this court
is obviously limited to the state of New Mexico.
So what we would effectively be doing is asking them,
if they're gonna continue to do this business in New Mexico,
they're gonna have to come up with a different standard
of doing that business here.
But once they've established that,
like once we've gone through the process of doing it,
if we prevail on appeal and can establish
the feasibility of implementing these changes,
we could actually change it across the board
for this company and set a new benchmark
for the industry.
Now, look, I wish Congress would wake up
and like put this at the top of their agenda.
I think this is a place where there's a lot of bipartisan
opportunity for meaningful change,
but they have been stuck in place.
And so if we have to do this through a court process,
through a litigation process,
I'm gonna just push forward.
But I think this is an opportunity to kind of use litigation
to prompt some higher level policy engagement in Congress.
And that's what we're really hoping for.
So Meta's argument was that this case is still really
about content, not design.
That calling it consumer protection
is just a way to get around Section 230,
which essentially shields social media companies
like Meta from being held liable for the content
on their platforms.
But it's not just Meta making this argument.
Mike Masnick of Techdirt called your verdict,
quote, a really problematic result
that easily should have been tossed on 230 grounds.
What's your response to people who say,
this is a speech case,
dressed up as a consumer protection case?
Well, I think they don't really understand
the nature of the evidence that was presented.
I don't think they understand the nature
of the legal arguments that were made.
We weren't focused on specific third party content,
which is what Section 230 is all about.
This is about specific design choices and features
that have made this an addictive and dangerous product.
And it's also about the affirmative misrepresentations
that the company has made.
And one thing is clear: when you build a product,
and the design choices that you have
built into it create known harms,
and then you lie to people about those harms,
that is outside of the ambit of Section 230.
And so again, Meta and other tech companies
have been hiding behind Section 230 for the last 30 years.
And I'm assuming they're going to be,
essentially focused on that in their appeal.
I don't have a sense that this is going to change
at least with respect to the judiciary here in New Mexico.
Now, whether or not they can get some
of the more conservative justices on the court to bite
or even some others who are concerned about that aspect
of their defense, it remains to be seen.
I think from the public's perspective,
we ought to be able to create some basic safety standards
around these types of spaces without infringing
on expression content, things of that nature
because I'm sensitive to that.
But I also don't want to live in a world
where we have to live with exploitation and addiction
and all of this harmful activity as a price
that we're forced to pay because Mark Zuckerberg claims
that he's some pamphleteer from the 18th century
when he's not.
Yeah, I wanted to get into some of the tension
around balancing sort of protecting users
with protecting privacy.
So internal Meta documents showed that encrypting Messenger
would impact roughly 7.5 million child sexual abuse
reports to law enforcement, and then mid-trial, Meta announced
that they were going to roll back encryption
on Instagram direct messages.
I also talked to a tech journalist, Casey Newton,
for this episode, who noted with some alarm
that this is the first time a major platform
has ever rolled back encryption protections
and he said that we shouldn't have to give up
our basic right to privacy so cops can make fewer phone calls.
What do you say to that?
Sort of the general concern about,
because I've heard this from a few places,
that like, infinite scroll, autoplay,
some of these features, people could live without,
and they say, okay, those aren't 230,
those aren't content. But encryption,
once government, especially this government,
can break encryption, that's not only going to affect
children, it's going to affect people's privacy
all over the country.
Yeah, so I read that same comment.
Again, I think this is probably the view of somebody
who doesn't share the perspective of people,
like myself, who worked on child pornography
and child solicitation cases for a number of years.
And one really important piece of context is Meta
and Mark Zuckerberg decided to go to end-to-end encryption
the day after we filed this lawsuit.
So I'll leave it to you to decide whether or not
their motivation was really protecting the privacy interests
of their users or whether it had to do
with shielding themselves from liability.
My view is that the lawyers got around the table and said,
hey, as long as we can see all of this solicitation
between minors who we've lured onto this platform
and predators that we've failed to kick off,
we're on the hook.
But if we blind ourselves by implementing
end-to-end encryption, we get to hide behind that.
And by the way, you can tell the marketing department
to dress it up as privacy, even though we literally
track every single piece of information
that we can track about every single user that we have.
I don't think people were buying that.
And I also think that to your point,
the fact that they were as a result of that decision
shielding referrals to law enforcement,
I think that got to the ultimate decision
to roll that back because it wasn't something
that was defensible in court.
And to that last piece about cops having to make phone calls,
it's not cops making phone calls.
I don't have access to that information.
This is about a company that can see whether or not
a 40-year-old man is trying to solicit a 12-year-old girl
in their platform for sex.
And if they have that information, then I
would hope that they were going to be sharing that
with law enforcement.
But I think it's a distortion to equate the lack
of end-to-end encryption with someone in government
having immediate access to everyone's private communication,
because that's not what this has been about.
The other piece is when it comes to having kids online,
if look, if it's adults communicating with other adults
and there's end-to-end encryption, I don't have any problem
with that.
When it's a 50-year-old man communicating
with a kid down the street from me,
I have a very serious problem with that.
And I think most Americans can walk and chew gum
at the same time.
We can craft solutions that both protect
basic privacy interests without putting kids at risk.
Yeah, I mean, the way I was looking at this
is I can see on an app like Instagram where it seems like
if you're going to have encrypted DMs on an app that
is also algorithmically connecting you to strangers,
then that's a problem, especially for children.
I wonder, does this mean that for encrypted apps,
WhatsApp, Signal, even iMessage,
that there has to be age verification
because you don't want kids on encrypted messaging apps at all?
Yeah, age verification is going to be key.
It's going to be part of what we talk about
in the May presentation on public nuisance.
And we're going to be asking the judge
to really start exploring real age verification
for precisely that reason is that we have
to have different guardrails based on the ages
of the users that are in these spaces
and the potential harm to those users.
Again, if it's end-to-end encryption
between adults in these spaces, I'm not really interested
in talking about that.
You could solve part of the end-to-end problem
by just adopting a blanket rule where no one over a certain age
who is unknown to a minor can connect with that minor,
they can't communicate with them.
There are companies in the space
that have taken that step.
With respect to coming up with a more nuanced solution,
there are opportunities to develop actual technology.
It's imperfect, but it can do age estimation
based on some of the sort of the angles, right?
Every time you look at a camera,
it has the ability to estimate age.
Now it's not perfect, but it sidesteps the problem
that other people have correctly identified
of uploading and sharing maybe sensitive personal information
on an ID or something like that.
But I think the real way we have to start thinking about it
is lawmakers and policymakers,
if they're gonna engage in meaningful tech regulation,
they have to start iterating the way technologists do.
The problem is we created Section 230 in 1997
and we walked away and decided not to do anything.
It sat there for 30 years and it went from a moment
when I was waiting for my dial-up tone on AOL
to now a time where there's more computing power
in my pocket than there used to be in my laptop
and we haven't changed the regulatory or legislative framework
to keep pace with technology.
I think policymakers have to just get comfortable
with iterating around these spaces,
understanding that you're never gonna be completely in alignment
but having some basic priorities
and that should start with making sure kids are safe
in these spaces.
So there are now over 40 state AGs with lawsuits
against Meta, thousands of pending cases
that will now move forward.
Are you coordinating with the other attorneys general?
Is there a legal strategy here that is analogous
to what happened with Big Tobacco in the 90s?
Yeah, I mean, I've been hearing from my colleagues
around the country, I'm aware of the action
that they put together.
Ours was a little different because we focused on
exploitation so heavily.
And so there was a different sort of evidentiary basis
but we did have elements or we talked about addictive design,
we talked about some of those other features.
We are sharing some of the notes and the feedback
that we have from our litigation team with them
to sort of inform how to make those presentations
and those arguments.
I think more generally I'm trying to get all of my colleagues
to re-examine their underlying consumer protection laws.
I'm in the process of trying to redesign ours, right?
I mean, 1970 is a long time to go without meaningful changes
in those spaces, but I think that instead of coming up
with all of these specific sort of bespoke solutions
to technology challenges that are really pressing
in the moment but change over time,
I think we should look more broadly
at the kind of authority that we have
to really get into this space and try to protect people.
So we're working both on litigation
and potential legislation at the same time.
And hopefully, like I said, it's a moment where
after six weeks of evidence,
this jury came back in less than a day.
That's a pretty powerful signal.
And I hope that the company's heard that signal
but more importantly members of Congress did too
because I think that's where we really need
to see some action taken on these issues.
You mentioned the Supreme Court where this case
or one of these cases could end up.
Have you thought about this court
with this composition of justices,
what kind of arguments you think would be persuasive
to some of the more conservative members of the court
or just members of the court who maybe haven't been
as forward leaning as you were on this case?
Yeah, I actually think it's something
that will be centered probably more at the middle
of the court because I can see folks on both the left
or the right who have a maximalist interpretation
of some of the sort of the free speech rulings
when it comes to corporations being more susceptible
to an argument advanced by Meta.
But my sense is that there is a middle ground
where you can start identifying the unique harms
that product design and misrepresentation
presents to kids and to young people
in the vulnerable populations
and that that will be a way to distinguish
this type of action from those that are obviously
based on content or obviously motivated
by a political or ideological agenda.
I think by keeping this centered on child welfare
there's a real possibility that you can get
some combination of moderates or persuadable Republicans
to step up and sign on to a decision
that better protects these kids.
Attorney General Torres, thank you so much
for taking the time and talking about this case
and the strategy going forward, really appreciate it.
Thanks for taking the time.
Up next, my conversation with Hard Fork co-host
and Platformer author Casey Newton.
But first, if you love Dan's analysis on Pod Save America,
take a listen to our subscriber-exclusive pod Polar Coaster.
It's like having a really smart friend
break it all down for you.
I love Polar Coaster.
I never miss an episode.
It is great to hear Dan, one of the smartest political
strategists I know, break down polls.
He's also one of the biggest polling nerds I know,
and it's a fantastic show, so check it out.
You can get that show and a whole bunch of other subscriber
only shows if you subscribe to Friends of the Pod.
You can also get ad-free episodes of Pod Save America, Offline,
Love It or Leave It, Pod Save the World,
all your favorite Crooked pods.
We have an extra episode of Pod Save America
called Pod Save America: Only Friends
that subscribers get access to.
You also get access to our growing list
of excellent substack newsletters
and you get to feel good about supporting
independent pro-democracy media.
So hit pause and subscribe to Friends of the Pod
right now at crooked.com slash friends.
This episode is sponsored by BetterHelp.
Whether you're dealing with anxiety, depression,
conflict and relationships or simply need an impartial
third party to help you deal with daily stress,
BetterHelp is there to connect you with the support you need.
BetterHelp Therapists work according to a strict
code of conduct and are fully licensed in the US.
BetterHelp does the initial matching work for you
so you can focus on your therapy goals.
A short questionnaire helps identify your needs and preferences,
and their 12-plus years of experience
and industry-leading match fulfillment rate
mean they typically get it right the first time.
If you aren't happy with your match,
switch to a different therapist at any time
from their tailored recommendations.
With over 30,000 therapists,
BetterHelp is the world's largest online therapy platform
having served over 6 million people globally
and it works with an average rating of 4.9 out of 5
for a live session based on over 1.7 million client reviews.
When life feels overwhelming, therapy can help.
Sign up and get 10% off at BetterHelp.com slash offline.
That's better H-E-L-P.com slash offline.
Offline is brought to you by Mint Mobile.
I don't know about you, but I like keeping my money
where I can see it.
Unfortunately, traditional big wireless carriers
seem to like keeping your money too.
After years of overpaying for wireless,
if you're fed up with crazy high wireless bills,
bogus fees, and so-called free perks
that actually cost more in the long run.
Say free perks?
Oh, different thing.
Yeah.
Then switch to Mint Mobile.
You could be saving a lot with Mint Mobile.
Have you checked how much you're paying a month
for your mobile phone bill?
Probably not.
Unfortunately, my mobile phone bill gets texted to me
by my mobile phone company.
And every time I think that is insane,
how is that possible?
That is an outrageous number.
So maybe you should think about switching to Mint Mobile.
Stop overpaying for wireless just because that's how it's always been.
Mint exists purely to fix that.
Mint Mobile is here to rescue you
with premium wireless plans starting at $15 a month.
All plans come with high speed data
and unlimited talk and text delivered
on the nation's largest 5G network.
Bring your own phone and number, activate with eSim
in minutes and start saving immediately.
No long-term contracts, no hassle.
Ditch overpriced wireless
and get three months of premium wireless service
from Mint Mobile for $15 a month.
If you like your money,
Mint Mobile is for you.
Shop plans at MintMobile.com slash offline.
That's MintMobile.com slash offline.
Upfront payment of $45 for three-month,
five-gigabyte plan required,
equivalent to $15 a month.
New customer offer for first three months
only, then full-price plan options available.
Taxes and fees extra,
see Mint Mobile for details.
Casey, welcome to Offline.
Hey, thanks for having me, John.
I want to talk to you about Meta's rough week in court.
Juries in two different cases
held the company liable for designing a product
that harmed consumers, in these cases, children.
I'm also talking to New Mexico Attorney General
Raúl Torrez for this episode.
The other big case was here in LA
where Mark Zuckerberg himself took the stand
and the jury found that Meta's design features
as well as YouTube's harmed a young woman's health.
I know you've followed that case closely.
The commentary I've seen is that this is Big Tech's
Big Tobacco moment.
Do you agree?
How big of a deal is this?
I agree that it is a big deal.
And I think that over the past couple of years,
the world has been coming around more and more
to this framing of the issues surrounding social media
as a kind of public health crisis, right?
It seems like there is something about these apps
that produce really harmful effects
for some subset of the population.
And this was the first moment that juries actually
were able to find a legal path to hold them accountable.
What was some of the most damning testimony and evidence
against Meta, in your view, from this trial?
Yeah, so I mean, in the trial itself,
it seems like jurors were really swayed
by the internal research that Meta had done
in which their own researchers had found that again,
for some subset of users of Instagram,
there were negative mental health effects.
Now, you know, Meta would say,
well, you know, those effects were exaggerated
and you're sort of leaving out a lot of context here.
But I think once a juror understands
that a company has been researching this
and that the more they looked into it,
sort of the worse stuff they found.
And then also that research kind of gets canceled
or the researchers get moved to other projects.
It kind of does start to feel like a big tobacco moment, right?
Yeah.
Well, what was Meta's defense to that in the trial?
Well, they said essentially the effects
that you are talking about at trial were cherry picked
and we can show you lots of other data
that shows that the vast majority of people
never experience a problem here.
And also some of the research that we have done
is why we have added various features that are designed
to help you mitigate the effects of the thing that we built.
Yeah. And it seems like they also tried to argue
that this young woman had pre-existing problems
and issues with her family and with other struggles
and that somehow because of that,
they couldn't be held liable.
Yes, although the surgeon general under President Biden,
when he did a big report on this subject,
one of the things that he found was that it was precisely
the teens who have pre-existing mental health conditions
who are more at risk of these terrible outcomes
on these platforms.
So simply to say, oh, well, she doesn't count
because she was already having mental health problems.
It's like the whole problem is that you're serving millions
of people who have mental health problems
and we just know that Instagram
and other social apps can be really bad for those folks.
Yeah, and it seems like the key is that the jurors didn't have
to find that Meta and YouTube were the sole cause
of the mental health problems,
but that they were, I think it was, a significant factor.
Yeah. And again, that really is a big deal
because for the past 30 years,
platforms have been insulated from these kinds of attacks.
They've been able to hold up Section 230
and say, we are not responsible essentially
for anything that happens here.
And so what's really been fascinating to me about this case
is that it seems like the plaintiffs' lawyers
have finally found a way through that shield
and juries are responding to it.
Yeah, I want to get into that shield even more,
but I did see the jurors said they were unimpressed
by Mark's testimony. Shocking, I know.
The judge also didn't seem all that impressed
with his team recording the proceedings via their Meta AI glasses.
Guess that was a no-no.
What did you make of Mark's testimony
and his general posture throughout the trial?
I think basically since Cambridge Analytica,
the 2017 post-Trump election backlash,
Meta has been in this posture of delay, deny, deflect.
And Zuckerberg has been carefully trained
to give the least that he can get away with.
And this has just mostly worked for him.
This is a guy who's gone before Congress a lot,
has been asked a lot of the same questions.
He chokes out a few words, then he gets interrupted.
And I think it really wasn't all that different at trial.
He doesn't really give folks almost anything,
but that wound up costing him.
Because I think a lot of what the jurors are responding to
is the idea that the people who are getting hurt,
the plaintiff in this trial, this is a real person.
This is not some statistical abstraction,
and there are a lot more people like her.
And because the executives of these companies
can't really speak to that,
increasingly they're getting in trouble.
So you just published a really thoughtful piece
about what these verdicts mean for the wider internet.
And you sort of laid out three camps,
three different reactions to the verdict.
The plaintiffs who are euphoric, the defendants,
who plan to appeal.
And then writers and thinkers who worry these verdicts
could break the basic compact
that holds the internet together.
This is what you were just getting at with Section 230.
Talk to me about the concerns of that third group.
So a good thing about the internet that we have,
arguably, I don't know, maybe some people would disagree,
is that you can have very wide-ranging political discussions on there.
You can say really edgy things.
You can say ideas that are sort of fringy
and even a little bit dangerous.
And one of the big reasons that you can do that
is that the platforms are just confident
that if they get sued over this,
they can get the suit tossed rather easily.
So you can imagine a lot of stuff
that people were saying about COVID in the early days.
Some of it turned out to be true,
but was super edgy at the time,
and the platforms just sort of mostly let it happen.
The fear is that if the Section 230 shield disappears,
all of a sudden platforms are going to start
overmoderating content.
They're going to say,
hey, this is starting to feel a little bit spicy.
Like, maybe it's a red state where we have a lot of laws
targeting LGBT people.
Maybe in that state, we don't want to permit
quite as much discussion of LGBT issues, right?
And all of a sudden,
like the surface area available for us
to have public conversations shrinks.
So that's one of the big fears,
but depending on how the cases get adjudicated,
there are even worse ones.
And there's one in particular about New Mexico
that I'd love to talk about.
Yes, and I do want to get to that.
But it seems like with this case,
and what happened in New Mexico,
which is about encrypted communications,
I think we should put that aside for now.
Because this case, and I think what was novel about it,
and innovative in the legal strategy,
is they did not go after content moderation.
And they basically said,
yeah, of course platforms can still be shielded,
can still have legal liability shields,
from getting sued for content, for user content.
But this is about the design itself.
And so we should be able to regulate
some of these features, infinite scroll, algorithmic choices
that are made, some of the,
trying to think of what are the other ones.
Auto play video.
Auto play, yeah, that was the other big one.
Auto play.
And those don't have to do with,
necessarily with free speech and free expression.
Right. And the argument
that I'm trying to make is that
content and design sort of exist along a spectrum.
There are some things that I think most of us can agree
are mostly just design.
Like the decision to send you 12 push notifications
after midnight when you're a teenager trying to sleep,
that's really like a design decision,
not a content decision, right?
And then, then there's like, literally what subjects
can you talk about and will we remove them from the platform?
That's like obviously a content decision.
My argument has been like,
let's try to find those design things
that like we can develop a consensus around.
And like, particularly when they seem to serve
no real social purpose,
I would argue that like auto play video,
infinite scroll are like probably in that category.
And maybe we can go after those
and still have a section 230 that enables the rest of us
to have political discussions.
Where I think it gets really tricky is around the algorithms.
Because I think most of us have this sense in our gut
that the reason that I can't stop looking at Instagram
and the reason I keep reinstalling it every time I delete it
is because I just know it's going to show me something good, right?
That casino effect is working
and I just want to pull the lever of that slot machine.
There are real difficult questions there
about whether these algorithmic recommendations
are protected speech under either section 230
or the first amendment.
And that I think is just going to be a lot harder to untangle.
Yeah, that seems like the trickiest feature to me,
because infinite scroll, auto play, notifications,
I do think it's hard to argue that those are expressions
of free speech.
But a recommendation algorithm, like basically
if you're telling a platform what it can and can't recommend,
does that start to feel like regulating speech?
Because is that like telling a newspaper
or a TV news program,
which stories they can air and which they can't?
Absolutely.
And you can just see the way that that could be used
against the media in ways that we wouldn't really like.
I do think there is a potential path forward here, though,
which is just trying to regulate this by age, right?
I think, look, once you're an adult,
your hippocampus is fully formed.
If you want to spend eight hours staring at TikTok every day,
like God bless, go for it.
If you're 14, we might want to give you a little bit more protection.
And so maybe they don't regulate the actual content of the algorithm,
but they say, look, if you're under 18,
we're going to prevent these companies from personalizing it too much, right?
Like maybe we'll allow them to do some very high level personalization,
but we're not going to like fixate on your absolute exact interest.
So if there's any path forward there, I think it might look something like that.
So you said the most alarming part of these verdicts
was how the New Mexico case implicated encryption.
Meta, and you wrote about this as well,
actually ended encryption on Instagram DMs mid-trial in the New Mexico case.
You noted that that's the first time a major platform has ever
rolled back encryption protections.
You know, AG Torrez would say that encryption enabled predators to go after children in the dark.
You'd say, and I'm quoting your piece here, that, you know,
we shouldn't have to give up our basic rights of privacy,
so cops can make fewer phone calls.
How do you resolve that?
Well, I think AG Torrez needs to mind his own business.
Like we know that cops want to spy on us.
They have always wanted to spy on us.
And what we have said is, no, you're not allowed to,
because we have privacy rights.
So like, look, I don't want to be too glib about this.
I understand there are really painful trade-offs involved
when you allow folks to have encrypted speech.
But in the world we're living in, I truly do not want the state to be able to spy on all
of my communications.
And I think we just have to absorb the cost of that and find other ways to catch predators.
And by the way, there are other ways to catch predators, right?
Yeah.
To me, I've thought a lot about this.
And it feels like you need spaces,
or you need platforms like WhatsApp,
Signal, iMessage, I guess,
where encryption is protected and guaranteed.
And there are places where you can communicate with people
where you do not have to worry about the government spying on you.
Just like in real life, right?
Just like pretend we didn't have any of these.
There should be places where you can go with someone one-on-one
and have a conversation with them.
I wonder, I was thinking about the Instagram DMs and encryption there.
Platforms that also have these recommendation algorithms and discovery,
where they are connecting you with a bunch of strangers.
And then those strangers can have conversations with you that are encrypted.
That seems like less of a, you know, a sure thing in terms of keeping that encrypted.
Yeah, I think that's fair.
And I've spoken with employees at Meta who have made the same case to me.
Like even folks who are generally pro encryption,
they're like, look, on the subject of Instagram,
because it is a place where strangers meet,
we might want to make encryption, at the very least, not the default.
I talked to some who are sort of happy to see it go away.
I can live with encryption on Instagram going away.
In fact, they never even rolled it out to most people.
But what I object to is for the Attorney General of New Mexico
to be able to say that because Meta offered encryption,
the platform was inherently unsafe.
In fact, I'd be willing to bet that to the extent
any of these teenagers did have encryption on Instagram,
it probably did keep some of them safe.
Just by allowing them to have private conversations
without the state snooping.
And by the way, I guess if that is the finding,
then that means that WhatsApp is a defective product just by its nature.
And so is Signal, and so are these other places
where people are having encrypted communications.
Yeah, that just feels like a true slippery slope.
And it is why like, you know, I want to be reasonable on most issues of tech policy.
I try to be just kind of a real hardliner about encryption
because it's just so easy for the whole thing to unravel
once we start going down this road.
Yeah.
Offline is brought to you by Quince.
This time of year might make you rethink what's in your closet.
You want to move away from clutter toward high quality pieces
you can actually live in.
That's why you should check out Quince.
The fabrics feel elevated, the fits are thoughtful,
and the pricing actually makes sense too.
Quince makes high quality everyday essentials
using premium materials.
Their 100% European linen pants and shirts for men are lightweight,
breathable and comfortable,
basically the perfect layer for spring.
The pants strike the right balance between laid back and refined.
So you look put together without trying too hard
and their Flowknit activewear is moisture-wicking and anti-odor.
I love anti-odor.
That's important.
It's soft enough that you'll actually want to wear it all day.
The best part is their prices are 50 to 60% less than similar brands.
How? Quince works directly with ethical factories
and cuts out the middlemen.
So you're paying for quality,
not brand markup.
Everything is designed to last and make getting dressed easy.
Love Quince,
go online and get some more spring stuff
because I go online like once a month to go to Quince
and see what they've got, and they always have new stuff,
and it's always comfortable and it's always affordable.
It's getting hot out.
Refresh your wardrobe with Quince.
Go to Quince.com slash offline for free shipping
and 365 day returns.
Now available in Canada too, go to quince.com slash offline
for free shipping and 365 day returns.
Quince.com slash offline.
Eric Goldman, the Section 230 scholar,
you cite him in your piece, says the social media industry now faces
existential legal liability and will need to reconfigure their core offerings
if they can't get relief on appeal.
There are about 2,000 pending lawsuits, a massive federal trial this summer
with 1,600 plaintiffs, and 40-plus state attorneys general
have filed suits against Meta.
Do you agree it's existential and like what kind of design changes
do you think Meta might contemplate making or be forced to make
to settle or prevent future litigation?
Yeah, it's a great question.
Is it like existential in the sense that maybe Meta will be out of business
by the end of the year?
No, I don't think it's existential in that way.
Are they going to have to rethink some of the features of the platform
if these cases get upheld on appeal?
Yeah, I think they will.
Where it gets tricky is, and this is one of the problems with having
juries decide this sort of thing instead of Congress,
is there's no legal standard now for what constitutes a safe platform.
Right?
Like there's no rule anywhere that says, well,
if you just get rid of auto play video and infinite scroll
and you don't personalize the algorithm too much,
we will consider you non-defective.
And so to some extent, the platforms are just going to have to guess.
On the other hand, these platforms also employ behavioral scientists
with PhDs who are working around the clock
to exploit every feature of your brain that will get you to stare at the glass
rectangle longer.
Maybe the platforms could just say, hey, stop that.
Knock it off.
Let's maybe roll back the last 15 things we did in that regard.
Maybe they would be a little bit less hypnotic.
Yeah, because I thought about this too.
And I'm like, okay.
What makes this different from any kind of media company trying to keep its audience?
Right?
Which is you design your programming, whether it's TV, whether it's film,
whether it's a newspaper magazine,
because you want people coming back from even a book.
Right?
Right.
A book has cliffhangers.
Right?
You want people coming back for more.
But what's different is at least, you know,
all of those media are produced for,
it's the same media produced for everyone.
This is now individualized,
burrowed down into your brain.
It knows what you want, at a level that we've just never dealt with before.
And so the psychological effects of that, as we're seeing in the psychological harms,
are just so much different than any other media we've had.
Absolutely.
Like again, Section 230 was a law passed because people were defaming each other on platforms,
and people were suing the platforms.
And lawmakers at the time said, hey, we're just never going to have an internet
if you can sue a platform out of existence because two users were mean to each other.
We did not predict the world of infinite scroll and auto play video and cognitive scientists
who were measuring the scroll depth on your phone to the exact pixel that you scrolled down.
And understanding exactly what video you were watching,
and how that relates to the 80 million other videos they might show you in the moment.
Right?
So we just have to kind of account for the growing technological sophistication of these platforms,
and how good they've gotten at hacking our brains.
I do want to just zoom out on meta for a second.
They have pivoted away from the metaverse,
despite renaming the entire company after it.
To the tune of about $80 billion in losses,
hundreds of layoffs, just this month.
What is meta's identity right now?
Does Zuckerberg have a coherent strategy, or is he just trying to survive?
I think he really has been in survival mode.
You know, interestingly, the metaverse was also a survival thing,
because at the time he was just having such huge conflicts with Tim Cook over at Apple,
he felt like unless I own the hardware of the next generation,
like I'm always going to be subject to this one person's whim,
so he wanted to go out and build it himself.
It turned out not too many people wanted to follow him along on that journey.
But while they were building headsets and glasses,
Silicon Valley started to make huge advances in AI.
Meta in fairness also made big investments in AI.
That hasn't worked out as well, right?
And so now Zuckerberg is in a situation where he's really behind in AI,
and I think just having a very difficult time getting the company anywhere close to the frontier.
So I mean, look, if you look at most of the numbers that investors care about,
meta is still doing just fine.
But I do think you're starting to see some cracks in the armor there,
and the next couple of years, like there are scenarios where it just goes pretty badly for them.
What's their case on AI?
Like what is there?
What do they think their competitive advantages are in this field with all these other AI giants?
I mean, it's so grim, John.
I mean, like the true vision for like an AI version of meta is that, again,
using all the tricks we've just been talking about to understand what are your exact particular interests,
they're going to use the models they have to generate synthetic content
that keep you looking at the glass rectangle as long as they can.
So, you know, this is a company that to the extent it had any social mission at all,
it was to like connect human beings.
That has been thrown out the window because they now want to connect you with personalized slop.
And like I'm not even exaggerating.
Like this, it just is the vision of the company now.
Yeah, they're connecting us just with robots, not even people now.
I mean, we just spent this whole conversation talking about sort of the harms of the algorithmic
feed internet. AI, is there any reason to think that the AI internet will be better for people?
Or do you think we're just going to have the same conversation in five years about chat
bots and AI agents?
Well, you know, there was an interesting study this week that said that large language models
generally do a better job of connecting people to expertise, right?
Like the big language models, they're less likely to like guide you to,
I don't know, you know, Breitbart and Gateway Pundit.
Like they'll tell you something that actually happened.
So, that's a good thing.
But on the whole, I'm basically just as worried about the AI era, if not more so,
because we've already seen how hypnotic these chat bots can be for some people.
I get emails every day from people who think that they've woken up
Claude or ChatGPT.
And some of these people have really terrible outcomes, right?
So, my fear is, particularly, again, for the young folks whose brains haven't fully developed,
there are so many, like it's very hard to be a teenager.
And it's just so easy for me to imagine a generation getting addicted to these chat bots
that never really push back on them, but always tell them they're doing great, they look good,
you know, and I just think it's going to be a big problem.
Yeah, it reminds me of sort of the first wave of concern about social media or at least,
you know, a couple of years ago, was like the misinformation.
And it's going to push us into like political bubbles and all that.
And that was like the first.
And there could be some of that with AI, like when I look at Grok,
I certainly see, like, you've got Elon Musk, you know, doing his own thing with a biased AI
LLM, and I guess other companies could do that as well.
But I am more concerned about what the second wave of concern was with the,
with the social media companies, which is people spending all day long just hooked
to AI that is going, that is already sycophantic now,
because it wants to keep you on the platform, because that's how they make money.
Right. You know, there's a company, Character AI, maybe you're familiar with it,
and they came along and they said, we're going to let you create a chatbot out of any
fictional character that you can imagine.
It started to get some momentum. And so Zuckerberg said, like, oh, we just need to do that.
And so this now exists on Meta's platforms.
You can connect with any number of chat bots.
There was one they got in trouble for named Nasty Nancy, who I guess was sort of a
stepmom who was doing things she shouldn't. But yeah, that's kind of the present at Meta.
So yeah, you can imagine what that's going to look like in two years as the models improve.
The last thing is, there seems to be this growing gap between people in Silicon Valley,
in the tech world, who use AI, and they're saying, oh, it's here, the future is here.
You wouldn't believe what you can do with this. And then everyone else who either isn't using AI,
or who is just asking an LLM some basic research questions, can you talk about that gap and
sort of what the people who use AI all the time and are very proficient with AI are
like why they're so excited or why they've been so compelled by this?
My view of the gap is that it's really about the folks outside of the bubble,
not wanting to believe what the folks inside the bubble are saying. And I think they have a lot
of really legitimate reasons for that, right? Because what are the folks inside the bubble saying?
We're creating an existential threat to humanity. It's probably going to take your job.
It requires the largest energy and infrastructure build out in the history of America.
And that might wind up in your backyard and raise your electricity prices.
Like, of course, Americans do not want to have that vision come true.
I think the AI industry is doing a really bad job, selling itself in that way.
I think what the technologists would say is like, look, whether you want to believe it or not,
we actually do basically have the recipe figured out. We know we can just keep pouring more data
and compute into these systems and the amount of intelligence that we have is going to scale.
And so that just is going to create huge consequences for all of us.
So to me, it's really not about like, who is right and who is wrong? It's about like, what do you
want to believe? Yeah. And that it's like, if the future is here and this is happening, then like,
what is the best way to adapt in a way so that we don't find ourselves in a really bad
situation? Yeah. And I mean, one of my big critiques of the AI industry is it's just like,
so anti-democratic, right? I mean, you know, a big criticism of it,
and a legitimate one, is like, nobody asked for this, right? Like people aren't asking for their jobs to be
taken away and for all of the rest. So I wish we would bring more kind of democratic governance
to these systems. Agreed. Agreed. Casey Newton, thanks for jumping on Offline and helping us
get smarter on this. It was my pleasure, John. All right. Okay.
Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau.
It's produced by Emma Illick-Frank. Austin Fisher is our senior producer and Anisha Banerjee
is our associate producer, with audio support from Charlotte Landis. Adriene Hill is our head of
news and politics. Matt DeGroat is our VP of production. Jordan Katz and Kenny Siegel take care
of our music. Thanks to Dilan Villanueva, Eric Shoot, and our digital team, who film and share our
episodes as videos every week. Our production staff is proudly unionized with the Writers Guild
of America East.
In moments like these, it's easy to feel overwhelmed and even easier to feel powerless.
But we are neither. I'm Stacey Abrams and on my podcast, Assembly Required, I take on each
executive action, legislative battle and breaking news moment by asking three questions. What's
really happening? What can we do about it? And how do we keep going together? This is a space for
clarity, strategy and hope rooted in action, not denial. New episodes of Assembly Required,
drop Tuesdays, tune in wherever you get your podcasts and on YouTube.