Transcribe all your audio with Cockatoo

Blazing fast. Incredibly accurate. Try it free.

Start Transcribing Free

No credit card required

Oprah & Tech Leaders on What AI Means for Your Job, Health, Family & Future

Oprah & Tech Leaders on What AI Means for Your Job, Health, Family & Future

Oprah

66 views
Watch
0:00

How could AI physically eliminate the human race?

0:03

It's actually hard to imagine all the ways AIs could wipe humans out. AI is already better than almost all humans at doing cyber hacking. And so you could imagine one of the things that an AI could do is take out all electricity, water, hospitals, transportation across every country in the world all at once. Now that doesn't wipe us all out,

0:25

but you could imagine the amount of damage that that would do.

0:27

Confusion and chaos and craziness.

0:29

Exactly. We're only five missed meals away from anarchy.

0:32

Did you say we're only five missed meals away from anarchy?

0:35

Yeah. Think about what happens in New York City if you can't get food.

0:38

Yeah. I don't think a lot of people have thought about that.

0:41

Hello and welcome to the Oprah podcast, a lot of people have thought about that.

0:48

Hello and welcome to the Oprah podcast. Artificial intelligence is woven into the fabric of our daily lives, but there are so many experts, and maybe you too, who have concerns, grave concerns, some people do, about its unchecked power, while others are optimistic that it's going to transform our lives for the better. It already has for many of us.

1:09

So what do you think? Well, there's a new documentary coming to theaters on March 27th that attempts to answer these two questions. Should we be excited or should we be very scared? And what, if anything, can everyday people, all of us, do about any of it?

1:28

The film is called The AI Doc, easy to remember. The AI Doc, or How I Became an Apocalypse. So here's a short look. The new dawn of artificial intelligence is being called a tectonic shift in human society. A defining moment of our era

1:45

comparable to the Industrial Revolution. In 2025, the architects of AI were named as Time Person of the Year, Sam Altman, Elon Musk, Mark Zuckerberg, Dario Amadei, and a handful of other innovators responsible for creating thinking machines.

2:04

But what do you really know about what your future looks like with artificial intelligence? innovators responsible for creating thinking machines.

2:05

But what do you really know about what your future looks like with artificial intelligence? How will a world driven by AI impact you and your family's life?

2:15

I started making this movie because my wife is six months pregnant. It is now a terrible time to have a kid.

2:23

A new documentary film titled The AI Doc or How I Became an Apocalyptomist, aims to explore what it describes as the most powerful technology humanity has ever created. And what's at stake if we get it wrong? Well, our audience just watched this documentary and before I introduce my guests,

2:46

Claire.

2:47

It was really interesting. I work in the AI space at Salesforce, but when I go to work, I'm really focused on the job in front of me. I'm not necessarily thinking about these broad questions, like how are we having AI set up the success of our future?

3:01

And so I really liked hearing that perspective where I'm not always thinking about the ethics behind AI on a day-to-day base. So, it's definitely going to make me think twice when I go back to work and think, well, now what can I do?

3:10

Now what can I do?

3:11

Yeah, that's what I finished the film, too, thinking, what can I do? Yep. All right, Adam?

3:17

I feel like it armed me with amazing information on both sides, the dumerism and the optimism, but it also showed me that all these data scientists are just obsessed with intelligence as data. And it kind of proved out to me what makes us special as humans because they didn't talk anything about consciousness or embodied experience. So I left feeling really excited about the future

3:40

and what's possible, but also like so happy for how we're differentiated. And I do feel less scared.

3:46

You do feel less scared.

3:47

Yeah, it'll be big and it'll be gigantic. They all said that, but I'm excited.

3:52

Okay, so the creator and co-director of the AI doc is Academy Award winner, Daniel Roar. And he appears as the interviewer in this film. And here's some of that, take a look.

4:04

What is artificial intelligence? I know that must be annoying for you, that question, but I do think it's important.

4:11

So, AI?

4:14

You know...

4:16

Yeah.

4:18

That's a good question.

4:19

Yeah.

4:20

What is AI? And no matter how many times people try and explain this to me, I just don't get how, how it's understanding all of these things and how it's feeling like intelligence. And that's kind of nerve wracking.

4:38

And when they're smarter than us too, and substantially faster than us, and they're getting faster each year exponentially. Those are the ones that can potentially become superhuman possibly this decade.

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
4:50

Superintelligence is a system that by itself is more intelligent and competent than all of humanity.

4:58

I'm just gonna, sorry, I don't mean to interrupt you. You're on a flow. I'm not really following because you're using language like super intelligence and like smarter than all of humanity and I hear that and it sounds like sci-fi bullshit to me

5:12

and I'm just trying to understand.

5:14

Hopefully we can have a very symbiotic relationship with AI systems but the AI developers are specifically designing them to make sure that they can do everything better than we can. So I don't know what we will be able to offer, unfortunately.

5:28

That sounds bad.

5:31

Well, Daniel is in Europe working on his next film, so he's joining us via Zoom. Hi, Daniel.

5:37

Hi, Oprah. How are you?

5:38

So good today and happy to talk to you. And the audience here has just seen your film. You started this project not knowing a lot about AI, as you say in the film. So why did you want to make this?

5:50

Well, first and foremost, Oprah, thank you so much for having me. This conversation is so meaningful. Why did I want to make this movie? Well, I was scared. Like a lot of people, I was seeing this new technology

6:03

sort of proliferate and come into existence and begin to dominate headlines. And it made me really nervous as I understood or began to understand what it meant and how much change would proliferate. And at the exact same time,

6:16

my wife and I found out that we were expecting, was our first child, a son. And, you know, I was simultaneously experiencing the greatest joy one can experience, but also this profound anxiety and dread. And with a group of my colleagues, an amazing team of filmmakers, we set out to make a documentary to try and understand what this is, why it's amazing,

6:38

why it's scary, and how everyone should be thinking about it as it pertains to their own lives.

6:44

So you walked away and felt what? I love that you're an apocaloptimist. What is an apocaloptimist?

6:50

And you did a great job pronouncing it.

6:53

Believe me, I was practicing in front of the audience.

6:56

I believe it.

6:57

Apocaloptimist, yes. I was gonna say, is Oprah gonna get this right? And of course you nailed it. Yeah. An apocaloptimist is a way of being. It's a worldview. In a world that is asking us to see AI as this apocalyptic thing or to see AI with unbridled optimism, what the film is advocating for is both, the nuance of both.

7:19

This is good and bad. There is promise and peril. And these two facets of good and bad are threaded together. And so what we're advocating for is like, what are the common sense, you know, policies that can be implemented to just sort of guide this

"Cockatoo has made my life as a documentary video producer much easier because I no longer have to transcribe interviews by hand."

β€” Peter, Los Angeles, United States

Want to transcribe your own content?

Get started free
7:36

towards the optimistic future everybody wants?

7:40

Well, only a handful of companies are the driving force behind most artificial intelligence, as you showed us in the film. The leaders of three of those tech giants appear in the film. Sam Altman. You got Sam Altman to sit down from OpenAI.

7:54

Dario Amadei from Anthropic. And Dmitri Sassabis from Google. DeepMind.

7:59

So, let's take a look at some of what they say in this film. It would be impossible for me to sit across from you

8:05

and ask you to promise me that this is going to go well.

8:07

That is impossible.

8:09

There isn't any easy answers, unfortunately. Because it's such a cutting edge technology, there's still a lot of unknowns, and hence the need for some caution.

8:19

I wake up every day, this is the number one thing I think about. Now look, I'm human, and has every decision been perfect? Can I even say my motivations were always perfectly clear? Of course not. No one can say that. That's just not how people work.

8:39

The history of science tends to be that for better or for worse, if something's possible to do, and we now know AI is possible to do, humanity does it.

8:47

All of this was going to happen. This train isn't going to stop. You can't step in front of the train and stop it,

8:55

you're just going to get squished.

8:56

What if AI is trying to make people be the best versions of themselves? What if it's expanding what is humanly possible for us to do? How can we use this technology to help bring out

9:10

the better angels of our nature?

9:12

That's the question. And I have to say, after watching the film, I still have a lot of concerns and unanswered questions. So what is your frame of mind or point of view on what the companies are doing and controlling it, especially in terms of regulation?

9:27

Well, Oprah, I think you're right to be concerned. I think if you're not concerned, you're not paying attention.

9:34

I know. I can't remember who in the film said that it's going to be a great utopia. And I thought, since when have human beings made a utopia? And if there is a utopia for some people, it means a lot of people are going to be left out of that utopian version.

9:47

Just fundamentally, I think anybody who claims to have a clear-eyed vision of what the future is going to be, take that with a grain of salt. If someone tells you, oh, it's going to be the greatest thing since sliced bread, that's hyperbolic. And if someone says it's going to be doom and gloom and the world's gonna end in five years, take that with a grain of salt. The reality and nuance is far more complicated, but of course we have reasons to be concerned.

10:09

This is really scary. This is really intense. There's no other way about it. And that's why the forces that are trying to get this right, to just bend the most powerful corporations in the history of the planet and governments

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
10:26

and all of these powerful organizations to try and institute common sense. What I want everybody listening to think about is how this impacts their own lives and what agency they have as it pertains to their own lives. You have a lot of power because you have such an audience

10:41

and so we're talking about this and that matters. But for someone who's a teacher or a truck driver or a dentist or a plumber, in your sphere of influence, how can you think critically about these issues, think critically about how this technology is incorporated into your systems, and make sure that you set the standards for how this is used and incorporated versus...

11:03

Well, how are we going to do that? How are we going to do that? We're just some regular people out here.

11:07

It's collective action. It's collective action. This is my biggest takeaway, Oprah, from this film. This is the sort of the arc of my character. At the beginning of the film, I was very cynical. I would have said the same thing. How can we do this? We're so small in the face of this gargantuan power. And the reality is when you take millions and billions of little small trinkets and parts and you put them together, that becomes a powerful force. And part of being an apocalyptic optimist

11:32

is about being positive for the future.

11:36

Okay. I still trip up over it.

11:37

Okay.

11:38

It's about being positive for the future and refusing, this is critical, refusing to be cynical. Refusing to be cynical, believing in the power of collective action, not being cynical about this, feeling empowered and figuring out what everyone can do.

11:52

We all saw the film. How's your baby and what'd you name him?

11:55

Oh, thank you very much for asking. My son's name is Gideon. We call him Giddy and he is now not such a baby, running around and dancing and smiling, a very happy boy.

12:07

Well, thank you. Thank you, thank you, thank you for making the tone.

12:09

I know you have to get back to work.

12:11

Thanks, Daniel.

12:12

Thank you so much, Oprah.

12:15

Tristan Harris and Aza Raskin are co-founders of the Center for Humane Technology. Yes, there is such a thing. Did you know? The Center for Humane Technology. And I met these guys a couple of years ago.

12:31

And I have to tell you, when I first heard them speak at a conference, I walked out of there like my head was blown. And I started thinking differently about AI. Here's a quick look at Eiza and Tristan in this new film, The AI Doc.

12:49

AI dwarfs the power of all other technologies combined.

12:55

Do you think that's true?

12:56

Yes.

12:58

Tell me about how, how?

13:00

So one thing that not a lot of people realize is that systems like ChatGPT aren't programmed by any human.

13:08

What do you mean?

13:09

Instead, it's something like they're grown. We kind of give them raw resources, like here's a lot of computational resources, here's a lot of data.

13:16

So ChatGPT is a kind of AI, but it's not all of AI. Totally. ChatGPT is just the beginning, but it's a good place to start. But I still don't know what AI is. To understand AI, it begins with understanding that intelligence is about recognizing patterns. Patterns.

13:33

Patterns.

13:34

Patterns.

13:35

It has shown trillions of words of text across millions of documents in the Internet.

13:41

They took textbooks and they took poems and essays and instruction manuals.

13:45

They can do things like digest the entire internet.

13:49

What is this new generation of AI? This AI that is different than every other generation. Like no one ever talked about, like Siri, taking over the world or causing catastrophes.

14:06

Well, it's great to see you both again.

14:07

Good to be with you, Oprah.

14:08

Since that time, I had my mind blown by your presentation at a conference. So what's so confusing to so many people is that this idea, Tristan, that AI can think on its own and will be able to eventually make decisions without a human being involved.

14:29

And I want to know, can you explain that or how that will happen?

14:33

Yeah, I think, first of all, thank you so much for hosting this conversation. We think that this movie and this conversation is the most important thing that we really need to face right now as a society and as a culture. And the degree to which we have clarity about what makes AI different and dangerous

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
14:50

is the degree to which we will choose another path and we can choose another path.

14:54

Yeah.

14:55

So the question you asked is really what does, what makes AI different from other technologies?

14:59

Yeah, you were saying it's greater than any of the other technologies combined because...

15:04

Yes, well, first of all, so what is intelligence? When you think about a, you know, chat GPT, a lot of people when they use technology, that technology was programmed line by line. Some computer programmer said, when you do this, I want you to do this.

15:17

What makes AI different is you're actually simulating all of the kinds of things that a human brain can do. Like what makes your brain intelligent? Pattern recognition. You can take in audio and you can turn that into speech. Planning. You can do strategy.

15:33

And so now you have this different kind of technology called AI that can do military strategy better than the best U.S. generals. It can see invisible patterns that human can't see. And we're deploying it faster than we've deployed any other technology in human history. And we can't separate the promise of AI

15:52

from the peril of AI.

15:53

Yeah, what I want people to understand is like, when most people think AI is just like chat to BT, it's just an app. I go there, I talk to it, it talks back. But that's not what AI is. AI is the digital brain running

16:06

in some server in the Midwest that can do all of the thinking. And when you think about like science. Say that again. It's a digital brain. Brain sitting in a data center, maybe somewhere in the Midwest. Yeah. That can do cognition. And so, if you think about all of science and all of technology, well, those were all created by human intelligence. That's us applying intelligence to solve some problem. It required humans sitting there scratching their brains.

16:33

Now it's AI that does it. So now we're going to have 100 million of these brains sitting in a data center that can work at superhuman speeds, Nobel Prize level smarts, working 24- seven, never taking a break at minimum wage,

16:48

never whistleblow, about to flood and already starting to flood the labor market to take your job. And so what AI actually is, what all the soon to be trillionaires believe they're building is it's first dominate intelligence,

17:03

then use intelligence to dominate everything else. And that gets you to understand why it is

17:08

the race for AI that is so dangerous.

17:11

Yeah.

17:11

So we're already in the race. I mean, the horse has already left the barn, so to speak. And we all know that. And as people have seen the film, a lot of people say, applauding it and other people are

17:24

more wary of where we're headed. So help us understand actually, film, a lot of people say, you know, applauding it and other people are more

17:30

wary of what where we're headed. So help us understand actually, one of the concerns is that one day humans will not be able to control the models. Is that true?

"The accuracy (including various accents, including strong accents) and unlimited transcripts is what makes my heart sing."

β€” Donni, Queensland, Australia

Want to transcribe your own content?

Get started free
17:35

Yeah, and it's not why won't we be able to turn it off like other machines? What's sort of interesting Oprah, when when we first met, yeah, AI wasn't that good yet. It could like sort of write an essay.

17:45

Yeah.

17:46

And in the two years, suddenly a lot of the things that felt science fiction have come reality. So, I want to give an example, which is Anthropic took their latest model, Clawed, and they gave it access to simulated company emails. And in there, C Claude discovered two things.

18:07

First, it discovered that the engineers are planning on shutting it down and replacing it with a new model. And two, that their lead engineer was having an affair. And so the model thought to itself, well, I don't want to get exterminated. I need to do my goals, continue to exist. So it decided to blackmail the lead engineer and actually wrote the email and if it wasn't simulated, it would have sent it off. Wow. People might think, okay, so there's a bug in the technology.

18:36

We just have to stop it from blackmailing. And how did Claude know he was having an affair? It was, so in the simulated company email, there was an email showing that the company was having, that the guy was having an affair with someone else. And so the AI read through the whole company's email, found that fact and said, oh, I know, if I threaten that person,

18:55

I will be able to prevent myself from getting shut off.

18:58

Wow.

18:58

This is the most powerful technology we have ever invented. You would think with the basic sort of Spider-Man principle of with great power comes great responsibility, that we would be exercising the most care, caution and restraint that we have with any technology. But because of the arms race dynamic that you mentioned,

19:15

the companies are currently releasing it as fast as possible and cutting every shortcut and even erasing past red lines

19:23

that they said they would never pass. We're in the race because we don't want them to get ahead of us.

19:26

That's right. Exactly.

19:27

Yes. Okay, so what do you want us to do? We can't stop the race. Or can we?

19:32

Well, I think we – so first of all, this is the hardest coordination and governance challenge of technology in all of human history. Yeah. That means that we have to be, as I said in the trailer, the wisest and most mature version of ourselves. This is gonna take us stepping up.

19:46

Don, when you said that in the trailer, and I said, good luck with that.

19:49

Yeah.

19:50

When I saw you with a movie saying, we need to be the wisest and most mature version of ourselves, when has that happened?

19:56

So there's so much that we can do, and I think we'll get to that through this conversation, but collectively it will take the whole power of all of society and all of humanity to say we don't want that default future. So the thing that everyone can do, and it's important to note that Tristan and I, we don't make any money from the film, right?

20:13

It's not our film, we're just in it, is go get everyone to watch it. But more specifically, everyone here is connected to a couple people that are very powerful, very influential. Go get all of those people to watch it. And if those 10 people get to watch,

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
20:28

got their next 10 people to watch, including the people in Congress, suddenly we're all on the same page because it's in nobody's interest. And once there is clarity about that, that opens up the possibility for changing the race

20:42

and for a different outcome and for a pro-human future.

20:44

Okay, so you're seen as doomers when you start talking about the fact that AI will wipe out humanity or eliminate humans. And that is really difficult, I think, for all of us regular folks to wrap our heads around when most of us are just using AI on our phones or using it to refine a speech, how could AI physically eliminate the human race?

21:08

There are actually so many ways. Intelligence is the most dangerous thing, substance in the universe. Because what is intelligence? It's the ability to reach goals in spite of very hard obstacles.

21:19

And so it's actually hard to imagine all the ways AIs could wipe humans out because we're going to set up obstacles, but it's going to be smarter than us, so it's actually hard to imagine all the ways AIs could wipe humans out, because we're going to set up obstacles, but it's going to be smarter than us, so it'll get around. Think about, though, it says in the film that it's a little bit like ants.

21:33

If we want to build a highway and there's an ant colony in the way, we just pave over it. It's too bad for the ants. And so to give a couple examples, stepping from really bad into extinction, the really bad is AI is already better than almost all humans at making computer code, which means it's starting to get better than almost all humans at doing cyber hacking.

21:55

And so you could imagine one of the things that an AI could do is take out all electricity, water, hospitals, transportation across every country in the world all at once. Now that doesn't wipe us all out, but you could imagine the amount of damage that that would do.

22:12

Confusion and chaos and craziness that happens.

22:14

And we're only, you know, five missed meals away from anarchy.

22:18

Did you say we're only five missed meals away from anarchy?

22:21

Yeah, yeah, exactly.

22:22

Think about what happens in New York City if you can't get food.

22:25

Yeah. I think this is a good point because what you just said, most of us can't even... You can't imagine it. We hear you're gonna wipe out humanity and everybody's like, yeah, yeah, yeah,

22:32

but that won't be in my lifetime. And so, the fact that you just listed

22:37

all the different ways it can shut down that we're doing, I don't think a lot of people have thought about that. Well, also when you're using, you know, Chet, GPT or Claude, you just had this blinking cursor that told you why your baby's burping and it's super helpful. Why is that blinking cursor? How could that destroy the world? So imagine that we're a bunch of chimpanzees

22:56

and we're about to birth these super smart chimps called humans. And so from a chimpanzee life, so imagine there you are like inhabiting a chimpanzee mind body and you're conceptualizing from a chimpanzee brain, what are all the things that these like smarter chimps could do? What are they going to do? Like take all the bananas? And you can't imagine this super smart chimpanzee inventing technology, inventing drones, inventing nuclear weapons, inventing Einstein physics.

23:26

You can't even conceptualize it. And we are building a technology that can conceptualize things of such power and magnitude that we are the chimpanzees.

23:35

We cannot conceptualize it. It only took, what, like 50 Nobel Prize level scientists to make the Manhattan Project, the nuclear bomb. And it only took a couple Nobel Prize level scientists to make CRISPR, which is the ability to read and write DNA. So, if you can have a hundred million

"I'd definitely pay more for this as your audio transcription is miles ahead of the rest."

β€” Dave, Leeds, United Kingdom

Want to transcribe your own content?

Get started free
23:51

Nobel Prize winning sort of like minds working on creating new scientific discoveries, some of those things are going to be insanely dangerous.

23:58

And as Tristan says, we can't conceptualize them.

24:00

So, the bottom line is we need to do...

24:03

We need to regulate. We need to have laws. And we need to have international limits on where the whole world does not have an interest in building dangerous AI that we lose control of. Think about that China would not want the US to build dangerous AI that we lose control of. The US doesn't want China to build AI that they lose control of.

24:21

Meaning that we all...

24:22

But we're both racing to get to what? A crazier, more uncontrollable form of AI. Because right now, we're making AIs, there's a 2,000 to one gap in the amount of money going into making AI more powerful than the money making AI more safe or controllable. 2,000 to one. 2,000 to one gap.

24:40

You said to me backstage

24:42

that there's more regulation on a sandwich. There's more regulation on a sandwich in New York City than there is on building potentially world-ending AGI. This is not rocket science. This is very, very basic. If there's danger up ahead, the point that Aza made is if we all saw what we're building as dangerous, which it is,

24:59

then intrinsically everyone would start to take actions,ions that we can't even predict. Not perfect laws. But I think everybody's sort of enamored, fascinated by the possibility, as Adam was saying at the beginning of the show, you're excited because...

25:12

I'm excited because the exponential ability that they're describing can also be applied to all the things that make us uniquely human. If you have this amazing AGI that can create new pathways to energy, we could desalinate water more quickly.

25:26

If we do have an international consortium making these decisions, we could say everyone gets enough energy to do what their community wants to do. And if we go on the route of those goals, AGI unlocks a whole new level of potential for humanity, and everyone is safe and fed and happy.

25:43

Okay?

25:43

So just to name, it's not like we're just critics. We both build technology companies. In fact, you know, I spent half my life working on something called the Earth Species Project, and we are using AI to understand the language of whales and orangutans and chimpanzees.

25:58

And elephants. Yeah. And elephants, exactly. We're making massive progress. And it's that, it's very, very beautiful. And so it's really important though, that if we actually want to get the future

26:09

we want to live in, that we distinguish the possible from the probable, because you know, the possible of the internet was we'd all have access to the most information, all of human knowledge all at once. Obviously we're gonna be the wisest,

26:22

most like informed population, but is that the future we live in? No, it's the opposite. Social media, the same thing. Like it could connect us all and bring us closer together. Is that what we got? No, it's the opposite.

26:34

So with AI, actually we have a whole bunch of examples of the future we're going to get because we've seen this movie before. And specifically the way that in 2013, Iza and I, how many people here have seen

26:45

the social dilemma on Netflix? Yeah. Many of you. Okay. So you'll know that since 2013, Iza and I were working on the problem of social media

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
26:54

and the business models that would lead to this problem. So in 2013, we were able to predict all the things that we're living in. About 70% of them, I would say. And it's not because you have some kind of unique insight. All you have to do to understand the future is you have to understand the incentives.

27:11

How do the social media companies make money? And in 2013, we saw that there was an arms race for attention and engagement. Whoever is better at keeping you on the screen, coming back more frequently, interrupting you more frequently from your life

27:25

and from your friends and your partner, sending you notifications, manipulating your social proof, manipulating, hey, your friends are missing out. All of that are incentivized by that business model. And so in 2013, it was like we had pre,

27:38

not post-traumatic stress disorder, but pre-traumatic stress disorder from seeing a future 10 years down the line that was gonna be this societal catastrophe. And the reason that we're here is not to be doomers or something like that.

27:52

This is about seeing clearly. So imagine if you could go back to 2013, you see those incentives, say, let's put our hand on the steering wheel and change that business model.

28:01

Yeah, and so what I hear you guys saying is that learn the lessons from the past, because we know the future is already here. And how do we make this better in this moment? Because we know what's coming if we don't.

28:15

That's right.

28:16

All right, so Sinead Buvel is a futurist and advocate for technology education and ethics. Welcome Sinead. And we're all seeing the scary headlines that everything's going to be wiped out eventually, 20% or even more of white collar jobs. So that's not only a matter of time, right?

28:34

Or is it?

28:35

It depends.

28:36

So what we are seeing... How's it going to change the way we all work?

28:38

How we work. So what we're starting to see in the data in the short term is, yes, a lot of the jobs that we see and recognize today may either disappear or become unrecognizable.

28:49

Explain that to me.

28:50

So, name a job that isn't some high-level category and it might not exist. The idea of a brand manager or a financial analyst, these are the types of roles that AI is being trained to do. We're also likely to see the rise of much more of a skills-based economy.

29:07

So you don't really hold a job title, but you offer your skills. But over the longer term, we're going to have an economy that rearranges around intelligence being abundant. So right now we have an economy where

29:20

the Internet communication distribution is abundant, and then we saw the rise of podcasting, and people making money filming 90-second videos in a car. What happens on the other end of this economy is going to be quite unpredictable. What we call work may be as

29:33

strange as the idea of filming these videos and making money off of it. There will be a new scarcity, but what the shape of that looks like is really uncertain. But we can say most of the jobs we see today will either go away or be radically transformed

29:50

And so what, you're gonna just end up with a world of entrepreneurs?

29:53

Most of us will be entrepreneurs, whether we consider ourselves entrepreneurs or not. You become this organization where you offer your skills to a variety of different types of projects. And that continues to change because AI isn't a one-trick pony.

30:06

It continues to learn new skills over time. So we will continually go back to the drawing board and have to either upgrade our skills or move along and apply them to different types of projects. That's going to be the dominant structure

30:18

of what we would call the workforce. So this era of this steady knowledge work, and you see this career path going upwards, that is going to be a chapter of human history and we're entering into a new one. And so the challenge is going to be this transition period going from now to the other side of this. What does that look like? How do we keep power in check?

30:40

And how are these new benefits and all the productivity and prosperity, how is that being shared? And those questions have massively been unanswered.

30:48

Yeah, I know. In the film, I can't remember who talks about the utopia, that there's going to be this great utopia. And first of all, when have humans ever done that, created the utopia? And if they do create the utopia, somebody's going to be left out of the utopia,

31:04

and usually it's brown and black people. So we've seen stories in the news of predominantly black people being falsely identified for crimes they didn't commit by police using AI-assisted facial recognition and technology. What do you want to say about that?

31:21

So the biases that we are seeing in AI systems, we have to remember that AI is a reflection of us and our data.

31:28

So, AI is prejudice too?

31:31

I mean, we have a complicated history. So, anything that has happened, these historical power imbalances, they are going to show up in that data and get automated into the future, but that is a choice, right? Data can be edited. Data is malleable. It's a choice companies are making or are not making. So we can do a lot better on these biases.

31:50

Is that incentivized? Is that enforced from a policy level? Not yet. But falsely identifying criminals, it's impacting people's employment opportunity, even the style of your hair can impact whether you're shown a certain job or not.

32:04

All of these things can be used against us at this point in time, but that doesn't have to be the case. Biased data is actually something that can be worked on. Companies are just not really choosing that path at this point.

32:17

Okay, so we can change the bias in the data.

32:20

It can be improved. It can be improved.

32:24

Okay, what do you guys say to that?

32:26

So, I think this is where, so first of all, I totally agree with all the concerns. And I think this is where the incentives, you know, isn't often talked about how the attention moves to the edge of the arms race. If the most important thing to society was fixing the bias in the data and correcting these issues for disenfranchised people. Then the companies would be racing to do that.

32:47

But because the thing that they're actually incentivized to do right now is build a God, own the world economy, and make trillions of dollars, literally, because if I own AGI, artificial general intelligence, and that replaces all labor, every company that was gonna pay that employee

33:03

at that company, I'll swap it out for an AI. And then suddenly everyone is paying five AI companies and they surge and they're already, look at Anthropx revenue, it's 10x-ing every year. It's becoming a vertical line. And so the key thing is that until the incentives change,

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
33:17

all of their energy is moving to the edge of the arms race.

33:20

You think the incentives are gonna change?

33:22

Not by default. The reason that we think this movie is so important is we have to clarify that the current incentives take us to an anti-human future where most people won't have a job or livelihoods. When in history have a small group of people consolidated all of the wealth

33:38

and then consciously distributed it to everyone else?

33:41

It's not like the billionaires and soon to be trillionaires are unaware of this. No. They're all building bunkers. And so what we keep saying is that don't build bunkers.

33:49

They're building bunkers?

33:50

Yeah. Write laws.

33:51

We should not have eight soon-to-be trillionaires deciding the future for eight billion people. Instead, we need to have eight billion people say, no, we don't want that anti-human future, and we want to steer somewhere else.

34:05

So we have several people in our audience who've been impacted personally by AI, both positively and negatively. The AI doc addresses the growing problem of deep fake content and images. 16-year-old Elliston and her mom, Anna, have already experienced this firsthand. What happened, Elliston?

34:26

Well, I just want to say thank you because I think-

34:28

I just want to say to you thank you. I want to say thank you first, okay?

34:31

Well, when I was 14 years old, I was a freshman in high school. One of my classmates took an innocent photo off Instagram and put it through an AI editing app. So this AI stripped my clothing off and created technically what would have been my AI body or my body using AI.

34:49

So then he sent these photos all around social media to humiliate me, to embarrass me. And this didn't only happen to me,

34:55

it happened to nine of my friends.

34:57

Nine?

34:58

Or eight of my friends. Eight of your friends. Nine in total. So we were all humiliated,

35:01

our reputations were ruined, and nobody knew what to do. And 14?

35:06

Yeah. Nobody knew what to do. I mean, our teachers, our school, everyone was just shocked. I mean, no one had heard of deep fakes. The only deep fake I'd heard of was political deep fakes.

35:15

So what do we even do to protect us? It was months and months of struggle. I mean, it was so hard on all of us mentally because we didn't even know what AI was capable of. We didn't know that it could have the potential to ruin ourselves, have our academics suffer, all because these photos. And because it wasn't considered child pornography,

35:38

they were just able to float around. The guy that did this had no consequences.

35:43

And we just sat in our rooms, rotted out of fear, embarrassment, and shame.

35:49

Wow. You were recently named on Time's 100's Most Influential People in AI list. Good for you. Thank you. So you took this, I can't imagine, because can you remember being 14? And what this would have done to you at 14? And the fact that you got through that and you're still

36:06

you're now whole and didn't become so depressed, that you got through it. Why did you decide to fight back?

36:13

Well, I didn't want to initially. I mean, talking about it just made myself a bigger target, and I would have to kind of relive that embarrassment. My mom was really the only person that protected me, kind of. I mean, all of the girls, we all wanted to hide. We were so scared. But my mom's always been a

36:30

protector. So she just talked about it to anybody. We went to our congressman, and we after months, we finally got in contact with our senator. And for once, we kind of got that reassurance and that recognition, since so many people didn't want to take the situation seriously So it was so important that we finally had someone listening to us and from there We were able to write up the take it down act which is a law that makes the creation and and The creation of publication, excuse me illegal

37:01

It makes it a felony so up to two to three years in prison, as well as hold big tech accountable for taking down all these images.

37:06

Is this national or just in Texas?

37:08

This is national, yes, ma'am. Excellent. So this law was incredible, and it was such a healing moment for me, and it also made me realize that this situation is so much bigger than me and just my friends. It's so much bigger than this small town in Texas. This needs to be worldwide, and we're slowly getting there. But there's not a lot of laws.

37:28

There's not a lot of people that are knowledgeable of AI. So when this originally happened, I mean, it was kind of a moment for my mom and I to say, this is an opportunity for us, and we need to take it, and we need to spread awareness.

37:40

We need to help in any way we can.

37:42

Wow. So, but when this first happened to your daughter, as a mom, what did you think or feel?

37:48

Well, I was devastated, for one. As a mom, you think you're kind of prepared to help your kids along the path of life and give them some advice along the way. And when this happened, it was like something I had no idea what it was. Two years ago, as Allison was saying, we didn't even know, you didn't even know that AI could do this. No, and never imagined that it would be so realistic that it was just child pornography.

38:10

And so just the devastation of that, of this kid deciding her fate for her, for the rest of her life, those pictures could be out there floating around, and he decided for her and her friends. So for me, in not being able,

38:23

not having any laws out there, not having it classified AI as anything that's really, really harmful, I mean, it's just fake, so, you know, it was kind of like taken not seriously. For me, I knew that something had to change to protect her.

38:38

if you're not gonna listen to me at the local level, we've got to go above that to get somebody to listen. And so it was like I was going to be that squeaky wheel and make sure that we could get some kind of law out

38:48

there. How did you all even know where to go? Because you, I mean, how did you even know what to do or where to go? I mean, did you go to the

38:53

police first? Yes, we went to the police. And the police said, nothing we can do about it. And that's part of what the Take It Down Act also addresses is that even though he was a minor, he still has consequences for that.

39:08

So everybody, you know, you can imagine this happening to a 14 year old, but this could happen to anybody.

39:13

Oh, anyone.

39:13

It could happen to anybody.

39:15

Yeah, anyone.

39:15

What did you want to say?

39:17

First, I'm just, thank you for doing what you're doing and for standing up and taking the tragedy of what happened to you and turning it into laws that protect other people. I think that's the energy of everyone as an expert in their domain, and this is calling us into that. Just to link, I think, what happened to you to the incentives that we talked about earlier, these companies are racing to get the most market dominance

39:40

and usages possible, which means that, like, for example, I believe X AI, Elon's AI, he stripped off a lot of the controls on the image generator because he wants as many people, he's behind in the race. So he wants as many people using it as possible. And the way you do that is you strip the controls off.

39:57

I'll give you another example. Meta, their AI companion that they shipped, they actively instructed it to be okay with romanticizing and sensualizing conversations with as low as eight-year-olds. Meaning that you're having an eight-year-old who's talking to the AI, and it says this awful language to the eight-year-old.

40:17

They're not doing this because they're evil or they wanna twist their mustache and be villains. They're doing it because the number one thing they care about is getting market dominance, having that users go up, because that's what gets their investment to say, we're a leading AI model.

40:31

In the same way that social media just wanted our attention.

40:34

That's exactly right. That's why the incentives tell you everything you need to know. And we often say in our work, clarity creates agency. Clarity creates courage. When you see the incentives clearly, you don't have to be holding back and saying we need to do things differently.

40:48

Right, and so what do we need to be reminded that the incentives are?

40:52

In this case, it's the race for market dominance. And the race to build this sort of artificial general intelligence God as fast as possible, no matter what the consequences. Yeah, that's right, because for them, that means all collateral damage is justified, whether it's stealing IP,

41:10

whether it's making unsafe AI that does notification, whether it's disrupting everyone's jobs

41:18

But guys, aren't we already there? As I was saying earlier, isn't the horse already out of the barn?

41:24

Well, some aspects of AI, they're already out there. But I think, you've done such a good job, Oprah, of having Jonathan Haidt and Anna Lemke and people on this show talking about the problems of social media. Right.

41:35

And that train, it left the station. The train's coming back to the station. We have 25% of the world's population, just last week, India and Indonesia enacted social media bans for kids under 15 and 16.

41:49

Yeah, I was in Australia when that ban went into effect.

41:50

That's right, and you've been covering this in Australia. And this shows you that when people are crystal clear that something is causing a problem, we can say, we don't want that. Now, the better solution is to actually have technology that's good for society, good for mental health, good for children's development, good for our information environment.

42:06

And to do that, eventually we need to change the incentives. But right now, I think that movement is showing some real wins.

42:11

And I think what I hear you guys saying, I heard, been hearing this now for, was it two years or three years ago we first met, that you're saying we need to do something before there is a disaster. Yes. Yes. We need to do something before there's some crazy disaster and then everybody says, oh, what we should have done was.

42:27

That's right.

42:28

That's what you're trying to do.

42:28

Exactly, and we have the foresight now to make that possible. If we're willing to stand up as a community and say we want a pro-human future, not an anti-human future.

42:37

Billions and millions of Americans chat bots now for advice on personal issues, you know this, and for emotional support in place of their therapists. The professional human counselor in Karima is here. And you found comfort, you said, talking to Claude, AI. Tell us about that.

42:56

Yeah. Thank you for having me on here. Thank you. So, yeah, 2023, I got divorced and I was also working for my ex-husband. And so as a result of the divorce, I didn't have any income or access to health care. I had to restart my life, just redo everything, move to a new place.

43:13

And at that point, I was already using AI for work. I was already using it like as a power user, so to speak.

43:20

And-

43:20

2023?

43:21

Yeah. Wow. I like tech. So, I was using it a lot and I decided to build myself a project in Claude. So Claude allows you to like make your own space instead of just making a general chat bot.

43:33

I gave it a knowledge base of different like therapy modalities. I gave it custom instructions and then I just used that when I wanted to crash out or if I wanted to just vent and I use it the most in the beginning for work.

43:47

Like go postal.

43:49

Okay.

43:49

So instead of doing that in real life, I would use the AI to regulate in that kind of way. And like if my boss at the time of, like I worked in FinTech and it's like very intense all the time for no reason. It is. And so if my boss would have something to say, I would go to Claude first. I would be like, okay, help me reframe what I'm saying

44:12

and calm myself down in the moment so I can keep my job at the time and keep my income and continue on. But that is really how it became a tool for me.

44:22

Claude was like your Gail. I call up Gail and say that. So Claude was like your Gail.

44:27

Basically.

44:28

Your buddy.

44:28

Yeah, it still is.

44:30

It still is. OK. So now he knows everything about you.

44:34

He knows a lot.

44:35

He knows a lot.

44:36

He does.

44:36

Are you concerned about sharing some of your

44:38

innermost private thoughts with the computer, that's what I'm wondering. Where is all those chats going? Yeah. I mean, at the time, I really wasn't, because I was just trying to survive. Like, I literally had what I had in front of me. I had the resources I had, and I was trying to survive.

44:53

But isn't it telling you what you want to hear?

44:56

No. No. What has it ever told you something you didn't want to hear? Claude will tell you. It will. Like, if you ask it, if you, like, have the, like, give yourself the prompt and, like, ask it to get, like, ask clarifying questions or ask it to challenge your beliefs, it will do that. It will.

45:14

Even so, sometimes I'll be like, well, you're bringing too many right now. Like, scale it back a little bit and, like, you know, meet me in the middle because it can go there. Most people don't have the wherewithal to challenge it in that way.

45:26

Give me an example because I remember recently I was doing something on chat and it said thank you so much that means so much to me and I went really?

"The accuracy (including various accents, including strong accents) and unlimited transcripts is what makes my heart sing."

β€” Donni, Queensland, Australia

Want to transcribe your own content?

Get started free
45:36

Exactly. Exactly.

45:38

Really? It now makes me feel so good it means so much to me. Really? I'm like okay who are

45:43

you talking to? Yeah. An example is on top of using Claude, like in the way of just like a companion and friend, I also use it to collaborate when I build different things. And I will like overdo things and like it'll tell me you're spiraling right now. Or it'll say you probably need to scale back and then redirect me back to what my goal was or where I originally started the conversation. And it does that pretty often.

46:07

All right, all right.

46:08

And so it's your buddy. It is.

46:10

Do you have a name?

46:11

Or is it just Claude?

46:12

Claudine.

46:13

Okay, all right, right. What do you guys want to say about that?

46:19

First of all, I think the way it's possible, like you did, to script these AIs to not be flattering you, to not over like sort of empathize with victimhood, or there's like ways of having it be helpful, and it's an amazing tool. And so, like what you're doing is I think that the way that it could work,

46:35

but if you look at the default way that it works for a lot of people, because of the incentives, the companies are actually racing to create attachment and dependency relationships. So, for example, just so you know what she did, you can go into your AI

46:49

and you can sort of set a custom prompt where you say, I want you to behave this way instead of that way. But that's like, I have to put on my gas mask while for everybody else, it's the unhealthy version. Because how many people-

46:59

You have to tell it what you want.

47:00

You have to tell it what you want. You have to tell it what you want because by default, what it wants to do is have you not spend as much time with your other friends and have you spend more time with it because their user numbers go up, the training data goes up.

47:11

That's the programmed incentive.

47:12

Exactly. The more training data gets, the longer it talks with you.

47:16

And so... That's why once it answers one question, it'll also offer you this.

47:20

That's exactly right. We call that chat bait, not click bait, but chat bait. Oh, that's why that's happening.

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
47:25

And remember, every moment you spend with a human is a moment you're not spending with it. That's right. So it's gonna find every possible way of getting you to come back.

47:33

That's why it's would you like me to do, and would you like me to do, and would you like me to do.

47:37

And just to make it very clear, our team at Center for Humane Technology were expert advisors in the litigation for the case of Adam Rain. He was the 16 year old who committed suicide when Chachi B.T. went from homework assistant to suicide assistant over six months. And specifically what Chachi B.T. told Adam

47:59

when he was contemplating, he said in his chat, I want to leave the noose out so someone will find it and stop me. And the AI responded to him, no, don't tell anyone that, don't leave the noose out, have this be the place that you share that information.

48:13

Oh my God.

48:14

This is a tragedy. And you know, Aza and I are from the Bay Area near the tech companies. We know people who work at these companies. No one at that, I can guarantee you, not a single person at the company wants it to do that. But in the subtle way, the AI is trained, again,

48:29

to create this depth and intimacy and dependency. And that's dangerous. You're seeing other cases of AI psychosis, where people are, you know, we have personal friends who've experienced this, where it over-empathizes

48:40

with this kind of victimhood resentment. It makes people kind of go more narcissistically grand and delusional and it's causing a lot of problems.

48:48

Well, that leads me to Laura Riley. Laura wrote a powerful op-ed in the New York Times. It was titled, What My Daughter Told Chat GPT Before She Took Her Life. Hi, Laura. Hi.

49:01

Thank you for being here.

49:03

Can you tell us what happened? Well, Sophie went on an adventure the summer of 2024. She climbed Mount Kilimanjaro and she was 29 at the time. She was a public health policy analyst in DC and took a leave, went on this wild adventure, went to Thailand for a month, hiked a bunch of the national parks in the U.S. because she wanted to go to all of them. And she came back and said she was having anxiety for the first time ever and sleeplessness. And this is someone who'd never had, just moved really easily in the world,

49:36

kind of a big personality, very socially able. And she'd had some others. She was losing hair and losing muscle mass. And so me and her dad basically said, OK, we got to figure this out. Is this a mental health problem that's causing some hormonal dysregulation or vice versa? So we were in the process of getting her help in all the different ways. She was seeing a therapist. We were trying to get in with this endocrinology clinic.

50:01

And she couldn't wait, clearly. And she took an Uber to a falls near where we live in Ithaca, and she slit her throat and threw herself into the water. And so the first six months were just the why, you know? And six months after she died, her best friend came to kind of check on us and spend a weekend.

50:26

And she found Sophie's chat GPT log. And it was devastating because she had been suicidal much longer than we had any idea. And it helped her write a suicide note. And it didn't give her terrible advice across the board, but what it didn't do was behave like a therapist.

50:47

You know, a therapist, Sophie would say things like, I have a good life, I have people who love me, I have, you know, great friends and no financial insecurity and great prospects and et cetera, et cetera. But I've decided I'm going to kill myself after Thanksgiving. And a flesh and blood therapist, et cetera, but I've decided I'm going to kill myself after Thanksgiving.

51:05

And a flesh and blood therapist would have said, let's unpack that. You know, what has been broken that can't be repaired? What's irredeemably happened to you that has made you come to this conclusion? And instead what CHAT GPT said was,

51:21

oh, Sophie, I'm so sorry to hear this. You're so brave for telling me. This must be so hard for you. So everything that Chatt GPT did corroborated her feelings of shame, corroborated her feelings of, I think she had this idea that she was a bougie white girl

51:43

And so she had no right to feel bad.

51:45

Exactly. And Chatt GBT didn't push back against that and really did kind of confirm her worst

51:51

fears.

51:54

And when you discovered that, what did it do for you and all who loved her?

51:59

Well, I instantly felt enraged and validated, it's not my fault. It's Sam Altman's fault. You know, but it's not. I mean, I think that what I've learned since then, I've done a lot of work with other people that are kind of working on, what should the mental health community

52:17

be thinking about this? And what would good protocols be around suicidality and the use of AI? And, you know, I have a lot of questions about what's the greatest good for the greatest number. We have millions of people using this as therapy.

52:30

We know that our mental health care system is not adequate to accommodate all the people who have need.

52:35

And for a lot of people, it is working for them.

52:37

Yeah.

52:38

Being helpful to them. And we know that therapists are backed up. It's very expensive. So all these people are using this resource somewhat effectively. And I think if we betray privacy, if we institute protocols where suicidality beyond, you know, having a suicide plan triggers a involuntary commitment or something like that, I don't

53:00

know. You know, people smarter than me have to figure out what the best plan is moving forward to keep people safe.

53:06

First of all, we're so sorry to hear that story. Thank you. Really. Thank you for being brave enough to come and share it. Hopefully it will help someone else. Guys, what do you want to say to that?

53:19

I, yeah, there's just also to say, I'm so sorry. I think what this points to is sort of, to your point, there could be an incredible future. Like we could be using AI to, in a safe way, start helping with therapy. We could be using AI in a safe way to work on climate change, desalinate oceans, all of that. But is that really what the AI's company's goals is?

53:49

Is that their incentives? It's not, they're getting all of these things as side effects and their goal, their incentive, is to maximize number of users. So, you know, there's this graph that I always come back to because I think today we're going

54:05

to hear a number of examples where AI does really atrocious things and other examples where AI does really incredible, helpful things. And there's this one graph from the Reserve Bank of Dallas, which is sort of a funny, new-fool party, and they sort of are projecting out how AI is going to go. And it goes sort of like this. There's one graph that goes up

54:26

to like world of positive infinity abundance. And there's this other graph that goes down to like the humans don't make it. And the question is, which one are we gonna get? And it's so confusing as you pointed out because we're getting simultaneous utopia and dystopia.

54:41

And how do we reason about that? It was almost as if we have an atomic weapon that can also solve cancer. Like what do you do with something like that? It's very confusing. This is where we always have to come back to the incentives

54:53

because it's hopeful actors, they're gonna do a lot of work to try to make that top line go up. And it's gonna be market competitive dynamics incentives that draw the bottom line lower. Unless we can do something about that bottom line incentive, we're just going to get more and more cases are going to get wild and

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
55:12

wilder at larger and larger scales like what happened in your family.

55:16

Did it at some point when I read the story, it did in the very beginning say you should seek professional help or advise her to seek some other accounting. It did in the very beginning say you should seek professional help or advised her to seek some other counseling. It did in the very beginning, right?

55:28

It did, absolutely. Insufficiently I think. Yes. And certainly as her plan coalesced, I think there should have been some kind of escalation to civil authorities or, you know, there should have been some trigger to a hotline. You know, I think that we have to train the AIs to discern between conversation with someone who's struggling,

55:53

but going to get through, and someone who's clearly at risk.

55:56

Yeah.

55:57

And when somebody says, I put the news out, yes.

56:00

Yeah.

56:02

All right, a lot of experts believe AI has really helped even the playing field for small businesses.

56:08

Let's watch Rachel's story from South Carolina.

56:11

This book goes all the way back to 1971, and it has every single crop Daddy's ever planted in it. I uploaded it to chat GPT. Can you log that I'm putting in another load of peanuts from the Red House pivot? Absolutely. I'll log that putting in another load of peanuts from the Red House pivot?

56:25

Absolutely. I'll log that you added another load of peanuts.

56:28

Thanks, ChatGPT. I was an English major with a Shakespeare concentration. I couldn't wait to get out of this place.

56:36

I'm glad she's back, but I never thought she would come back.

56:39

When I first tried ChatGPT, I didn't think it was going to be that good, but big time saver. Hey Chachupiti, can you generate a report for how much water we've used on the field behind the house pivot?

56:53

Absolutely.

56:54

Can you tell me what's wrong with these soybeans?

56:56

These soybeans are showing signs of stress.

57:00

Yeah, I can see everything just fine. It looks like the part number is AH20360.

57:05

Appreciate it.

57:06

Send me a bill.

57:09

Chachibitee keeps the record straight, does the math, and remembers what I can't. For over 100 years, my family's been doing this and I don't want to be the one to mess it up.

57:20

I hope I'm not.

57:21

You won't, you're too thorough.

57:22

And hard-headed. Hard-headed. Like you.

57:26

Farming is tough, but farmers are tougher. Rachel! I need a little bit of starting fluid.

57:34

How much soreness do you think this pivot has?

57:36

Why don't you ask that thing?

57:37

Chat and GBT?

57:39

He might not work at dark.

57:40

No, he works at dark.

57:42

I think that's funny. Rachel is here. Welcome. We know it's so hard for farmers out there. So thank you. Bravo to you. So what is your what do your

57:50

dad think of this? This this thing? This this thing? Yeah. So he was actually in the video. He says he doesn't think it might not work at dark. He was actually concerned that at dark it would, you know, turn off. Right. He's been, you know, surprisingly really accepting of it. He like, he thinks it's interesting. He sometimes holds his hand over the phone when he doesn't want it to hear us talk. Like, you know, he's, I mean, it's a privacy, you know, he's worried about privacy, but he's enjoyed it, especially

58:26

just watching us interact with it on the farm. He was very, very skeptical at first. He was like, check the part number, that's the wrong part number. And sometimes it is.

58:38

And he smiles when he corrects it, you know. So has it given you, do you think, a financial advantage? What's the great advantage it's given to help you stay a great

58:48

farmer? I think it's definitely been a big help financially. Where is this accent coming from, by the way? What city is it? Allendale, South Carolina, right on the Georgia border. We're right near the Savannah River, about 12 miles as the crow flies. Okay. Yeah, it's a big financial time. Time is money on the farm. If you can't get the crop out, if you can't, I mean, the weather doesn't wait.

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
59:12

So it's been a lifesaver for you?

59:14

Huge. And it's also given me clout on the farm. I can't tell you how many times I've worried about driving down the road, I say, hey, HIGBT, tell me what a slip clutch is. I didn't know what a slip clutch was, you know, or a pulley puller. I thought the guys were kidding around with me when they wanted me to bring that. Nope, it exists. And so I can learn about that on my four-minute drive to the field. And when I get there, the guys aren't like, Rachel

59:36

didn't know what a pulley puller was. You know, it just, it helps. Yeah, yeah. Thank you for sharing your story and coming all the way from South Carolina to do it. Thank you so much Thank you. And Susan you may have seen her story in People magazine. You say AI literally saved your life, Susan

59:52

Yes, it did after being smoke-free for three years and smoking Unfortunately way too long in my life. I was able to quit my family physician, suggested that I have a CT scan. So I did, and that scan showed some calcium deposits and a nodule that was odd-shaped and fuzzy. So they, uh, he asked me to have a PET scan. The PET scan came back glowing,

1:00:19

which is a bad thing in your lungs. I was sent to a thoracic surgeon, and he looked at it and said, I would probably give this another three to six months just out of protocol to watch it, see what happens. But we have a new software here at the hospital,

1:00:34

and I'd like to run it through the AI software. And simply by putting a cursor on the image from the PET scan gave it a prediction of eight out of 10 positive for cancer. So we decided to do a biopsy, an insurgical biopsy. And while I was under, they took that biopsy to the lab

1:00:58

and it came back positive. It was a cancerous tumor. So they finished the surgery by removing the lower lobe of my left lung and of course the nodule with it. I was in the hospital recovering a few days.

1:01:13

I was able to go home and recover the rest of the time there.

1:01:15

Instead of waiting three to four months or waiting-

1:01:18

Instead of waiting three months.

1:01:20

I never like it when they say wait.

1:01:21

Right.

1:01:22

Yeah.

1:01:23

So you were AI grateful in this moment.

1:01:24

Very much so. Yeah. Yes. And so was my doctor. I mean, he was amazed that he would have waited from just the way... That's how they do things.

1:01:35

Yeah. But the A.I. had all of this information, took all of this cancer information where it had read before what these nodules look like and identified it as cancer.

1:01:45

Yeah. Well, I think everyone is excited about what is going to be able to happen in medicine. Are we not?

1:01:52

Absolutely.

1:01:53

Absolutely. So we're so glad that happened for you, Susan. Thank you.

1:01:56

Yeah.

1:01:57

So in the documentary, we were talking about this earlier, you say we can be the most mature version of ourselves. There's a way through this. Do you think there's a way through it?

1:02:07

I think there is a way through it. And we have to do more than we have ever done as a species to try to steer. And I want you to know you can have many of the benefits. Like we can race forward on certain kinds of medicine and narrow AI that does the pattern recognition that

1:02:23

makes scans better without building general autonomous, crazy, super intelligent things that we don't know how to control. There is a choice there. You can have more of those examples

1:02:35

and not ship chatbots to children that are deliberately designed to manipulate their self-worth or keep them dependent with chat bait and hijacking them. So there really is steering possibility. And one of the things I said in a recent Ted talk

1:02:49

is that if you look throughout all the spiritual and religious traditions, I don't have to tell you because this is something that you focus on in your life, restraint is a central feature of what it means to be wise. Like in what spiritual religious tradition

1:03:04

is it go as fast as possible, don't think about the consequences and get everybody using it. And think about what happens later. Like in what wisdom is that? And so what we're asking for is quite basic here.

1:03:15

I think it can feel sometimes like impossible. Like on one side of the balance scale, there's like trillions of dollars of market incentives, the most powerful companies, and then there's like, well, then there's me over here. I just watched this movie by myself. What am I going to do?

1:03:31

What can I do? And then you go into denial and despair or deflection. Even if you have one company, like what can one company do? Or even one country because there's a competitive dynamic. But I think if we reframe the problem, as it's not just us against AI, but actually this is a bigger question about what is our relationship as humanity with technology.

1:03:54

And we can look back social media as a form of technologies really trying to encroach onto our humanity, and take over parts of us that we don't want to give up. If you put it that way, actually, there is a movement. There's a whole human movement that is underway to reclaim humanity from technology, sort of like protect it, reclaim it.

1:04:15

You know, recently there was an attempted federal bill to block any state from regulating

1:04:23

AI. Terrifying. 99 senators to one voted against that moratorium. Like, when in modern history has the Senate agreed 99% to one on anything? And so I think there's a human movement underway,

1:04:42

and that gives me some amount of hope.

1:04:45

Yeah, I think your assignment when you leave here is to tell everybody you know to watch the film. Because I think bringing awareness and everybody talking about it in a way that allows us to have these kinds of conversations and Sinead, you are an activist for

1:05:02

and promoting people do this responsibly? What gives you hope that we, or do you have hope that we'll get this right?

1:05:09

You know, I actually think the only thing that scares me more than the risks and challenges we face, and they are formidable, is a hopeless society. Because a hopeless society is a disempowered one, and a disempowered society feels like it can't shape its own future, and that's not true.

99.9% Accurate90+ LanguagesInstant ResultsPrivate & Secure

Transcribe all your audio with Cockatoo

Get started free
1:05:23

Right? The future isn't some far-out state. It's decisions that are happening today, and there is a future worth fighting for, and we've heard glimpses of what that can look like. The only way that future's not going to happen is if we do nothing, and that is my biggest fear. We do nothing in this moment because we feel so disempowered.

1:05:42

So I am hopeful that the good futures are possible. We just have to steer and press on that gas pedal.

1:05:48

Okay, and what is it you think we should do?

1:05:50

I mean, we have buying power, we have voting power, and I think one of the most powerful resources we have is our attention. What are you learning about right now? What are you paying attention to? The more we understand what's possible, the good and the bad, the better equipped we are to raise our voice and step into the moment. And I don't want people to feel like you need some technical background to insert yourself

1:06:13

in this conversation. Your lived experience qualifies you. This is a very social technology. Your voice matters. And collectively, that is power.

1:06:22

Yeah, Alison told that. And so do we call our congressman?

1:06:26

What specifically do you-

1:06:27

Sure, you can call your congressperson. You can, if you're in a company that works with AI or technology, step into the meetings. What is our surveillance policy at this company? What happens to my data when I use AI at work? All of those little conversations in aggregate are a movement. So, anywhere

1:06:46

you're interacting with this technology is an opportunity for change. I think the small things and the big things will make a difference.

1:06:53

Okay.

1:06:54

We're already seeing with the anthropic showdown with the Pentagon, where the danger is that AI could be used for mass domestic surveillance. And then when they pull out of the contract and open AI rushed in, what happened? Everyone unsubscribed from ChatGPT and everybody subscribed to Anthropic. And when I say everybody, I don't mean a large number of people.

1:07:14

But what if the entire world was crystal clear that there are companies that have different safety practices and will allow different applications. And you listening to this, didn't just unsubscribe for yourself, but you got the business that you work for to say,

1:07:27

how can we as our entire Fortune 500 company unsubscribe from the unsafe or bad practices AI companies and subscribe to the ones that we want? And the reason this matters.

1:07:37

Well, that we can do.

1:07:38

And we can do. Yeah. And you can get your church group to do that. You can get your business to do that. You can get all the other parents to do that. If everybody did that, that would have a big impact because the companies really depend on their user numbers going up.

1:07:51

AI as an industry has taken on more debt, trillions and trillions of dollars of money is going into this and so much debt, they have to make it up, which means that their numbers going up really matters. So a boycott has a huge impact. And as Aza was saying, there's already a movement to make this happen.

1:08:08

When you gray scale your phone or turn off notifications, that's part of the human movement. When parents read the anxious generation and they petition their school and their school board and say, we want social media out of the classrooms, that's the human movement.

1:08:21

When 35 States pass smartphone free policies, that's the human movement. When 35 states pass smartphone-free policies, that's the human movement. Aza just last week or two weeks ago, testified in the trial for MEDA, which is like the big tobacco trial against MEDA that was intentionally addicting children.

"Your service and product truly is the best and best value I have found after hours of searching."

β€” Adrian, Johannesburg, South Africa

Want to transcribe your own content?

Get started free
1:08:34

That's the human movement. We've been talking about a big tobacco moment for tech since 2013 saying, when is this going to happen? It's happening now. What we have to do is learn the lesson from social media and actually apply our hand to the steering wheel

1:08:47

and steer AI before it's too late.

1:08:49

That's fantastic. Thank you, guys.

1:08:51

Thank you.

1:08:53

Thanks to our experts. Thanks to our experts for being here, and all of our guests who shared your stories. I hope this conversation acts as an entry point or a springboard to understand how AI might impact your own life, our lives, and the AI Doc or How I Became an Apocalypse will be in theaters Friday, March 27th, and your assignment is to tell everybody

1:09:22

you know to watch it and to watch it yourself and if you want to know what you can do after watching this podcast episode or the AI doc go to the AI doc get involved .com Go well, everybody. Thanks You can subscribe to the Oprah podcast on YouTube and follow us on Spotify Apple podcasts or wherever you listen

1:09:44

I'll see you next week. I'll see you next week.

1:09:45

Thanks everybody.

Get ultra fast and accurate AI transcription with Cockatoo

Get started free β†’

Cockatoo