Dario Amodei — “We are near the end of the exponential”

Dwarkesh Patel

0:00

So, we talked three years ago. I'm curious, in your view, what has been the biggest update of the last three years? What has been the biggest difference between what it felt like three years ago versus now? Yeah, I would say actually the underlying technology, the exponential of the technology, has gone, broadly speaking, about as I expected it to go.

0:19

I mean, there's like plus or minus, you know, a couple. There's plus or minus a year or two here, there's plus or minus a year or two there. I don't know that I would have predicted the specific direction of code. But actually when I look at the exponential, it is roughly what I expected in terms of the march of the models from like, you know, smart high school student to smart college student to like, you know, beginning to do PhD and professional stuff. And in the case of code reaching beyond that, so, you know, the frontier is a little bit uneven.

0:49

It's roughly what I expected. I will tell you, though, what the most surprising thing has been. The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people within the bubble and outside the bubble talking about just the same tired, old, hot-button political issues

1:17

around us. We're near the end of the exponential. I want to understand what that exponential looks like right now, because the first question I asked you when we recorded three years ago was, what's up with scaling? How does it work?

1:31

I have a similar question now, but I feel like it's a more complicated question, because at least from the public's point of view, three years ago, there were these well-known public trends where across many orders of magnitude of compute, you could see how the loss improves.
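
For reference, the pre-training trend Dwarkesh is alluding to is usually summarized as a power law in compute. A generic illustrative form (the constants are placeholders, not numbers from this conversation) is:

$$ L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha} $$

where $L$ is the pre-training loss, $C$ is training compute, and $\alpha$ and $C_0$ are fitted constants; the observation is that this relationship holds across many orders of magnitude of $C$.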

1:45

And now we have RL scaling, and there's no publicly known scaling law for it. It's not even clear what exactly the story is: is this supposed to be teaching the model skills, or is it supposed to be teaching meta-learning? What is the scaling hypothesis at this point? Yeah. So, I actually have the same hypothesis that I had even all the way back in 2017.

2:05

So in 2017, I think I talked about it last time, but I wrote a doc called the Big Blob of Compute Hypothesis. And it wasn't about the scaling of language models in particular. When I wrote it, GPT-1 had just come out, right?

2:19

So that was one among many things, right? There was, back in those days, there was robotics. People tried to work on reasoning as a separate thing from language models. There was scaling of the kind of RL that happened, that kind of happened in AlphaGo

2:33

and that happened at Dota at OpenAI. And people remember StarCraft at DeepMind, the AlphaStar. So it was written as a more general document. And the specific thing I said was the following. And you know, it's very, you know, Rich Sutton put out the bitter lesson a couple years later.

2:54

But you know, the hypothesis is basically the same. So what it says is all the cleverness, all the techniques, all the kind of we need a new method to do something like that, doesn't matter very much. There are only a few things that matter. And I think I listed seven of them.

3:10

One is like how much raw compute you have. The other is the quantity of data that you have. Then the third is kind of the quality and distribution of data, right? It needs to be a broad, broad distribution of data. The fourth is I think how long you train for.

3:26

The fifth is you need an objective function that can scale to the moon. So the pre-training objective function is one such objective function, right? Another objective function is, you know, the kind of RL objective function

3:41

that says, like, you have a goal, you're going to go out and reach the goal. Within that, of course, there's objective rewards like you see in math and coding, and there's more subjective rewards like you see in RL from human feedback or higher order versions of that.

3:56

Then the sixth and seventh were things around normalization or conditioning, like just getting the numerical stability so that kind of the big blob of compute flows in this laminar way instead of running into problems. So that was the hypothesis and it's a hypothesis I still hold.

4:15

I don't think I've seen very much that is not in line with that hypothesis. And so the pre-trained scaling laws were one example of kind of what we see there. And indeed, those have continued going. Like, you know, I think now it's been widely reported, like, you know, we feel good about pre-training.

4:35

Like pre-training is continuing to give us gains. What has changed is that now we're also seeing the same thing for RL, right? So we're seeing a pre-training phase and then we're seeing like an RL phase on top of that. And with RL, it's actually just the same. Like, you know, even other companies have published,

4:58

like, you know, in some of their releases have published things that say, look, you know, we train the model on math contests, you know, AIME or the kind of other things, and, you know, how well the model does is log linear in how long we've trained it. And we see that as well. And it's not just math contests. It's a wide variety of RL tasks. And so we're seeing the same scaling in RL that we saw for pre-training.
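
One illustrative way to write the log-linear relationship described here, with generic placeholder constants rather than anything published in this conversation:

$$ \text{score}(C_{\mathrm{RL}}) \approx a + b \,\log C_{\mathrm{RL}} $$

that is, the benchmark score (for example on a math contest) improves roughly linearly in the logarithm of RL training compute, mirroring the power-law behavior of pre-training loss.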

5:29

Yeah.

5:29

I interviewed him last year, and he is actually very non-LLM-pilled. And if I'm, I don't know if this is his perspective, but one way to paraphrase this objection is something like, look, something which possesses the true core of human learning would not require all these billions of dollars of data and compute and these bespoke environments to learn how to use Excel or how to use PowerPoint, how to navigate a web browser. And the fact that we have to build in these skills using these RL environments hints that we're actually lacking this core human learning

6:08

algorithm. And so we're scaling the wrong thing. And so, yeah, that does raise the question, why are we doing all this RL scaling if we do think there's something that's going to be human-like in its ability to learn on the fly? Yeah, yeah.

6:20

So I think this kind of puts together several things that should be kind of thought of differently. I think there is a genuine puzzle here, but it may not matter. In fact, I would guess it probably doesn't matter. So let's take the RL out of it for a second, because I actually think RL,

6:38

it's a red herring to say that RL is any different from pre-training in this matter. So if we look at pre-training scaling, it was very interesting back in 2017 when Alec Radford was doing GPT-1. If you look at the models before GPT-1,

6:56

they were trained on these datasets that didn't represent a wide distribution of text, right? You had these very standard kind of language modeling benchmarks. And GPT-1 itself was trained on a bunch of, I think it was fan fiction actually. But it was literary text, which is a very small fraction of the text that you get.

7:19

And what we found with that, and in those days it was like a billion words or something, so small datasets, and it represented a pretty narrow distribution, right? A narrow distribution of what you can see in the world. And it didn't generalize well. If you did better on, I forget what it was, some kind of fan fiction corpus, it wouldn't generalize that well to the other text. We had all these measures of how well a model does at predicting all of these other kinds of texts, and you really

7:54

didn't see the generalization. It was only when you trained over all the tasks on the internet, when you did a general internet scrape, right, from something like Common Crawl or scraping links on Reddit, which is what we did for GPT-2, that you started to get generalization.

8:13

And I think we're seeing the same thing on RL, that we're starting with first very simple RL tasks, like training on math competitions. Then we're moving to broader training that involves things like code as a task.

8:28

And now we're moving to do kind of many, many other tasks. And then I think we're going to increasingly get generalization. So that, that kind of takes out the RL versus the pre-training side of it. But I think there is a puzzle here either way,

8:42

which is that on pre-training, when we train the model on pre-training, you know, we use like trillions of tokens, right? And humans don't see trillions of words. So there is an actual sample efficiency difference here. There is actually something different that's happening here,

8:59

which is that the models start from scratch and, you know, they have to get much more training. But we also see that once they're trained, if we give them a long context length, the only thing blocking a long context length is inference. But if we give them a context length of a million,

9:16

they're very good at learning and adapting within that context length. So I don't know the full answer to this, but I think there's something going on that pre-training, it's not like the process of humans learning. It's somewhere between the process of humans learning and the process of human evolution.

9:34

It's somewhere between. We get many of our priors from evolution. Our brain isn't just a blank slate, right? Whole books have been written about that. I think the language models are much more blank slates. They literally start as random weights.

9:47

Whereas the human brain starts with all these regions, it's connected to all these inputs and outputs. And so maybe we should think of pre-training and for that matter, RL as well, as being something that exists in the middle space between human evolution and, you know, kind of human on the spot learning.

10:08

And the in-context learning that the models do is something between long-term human learning and short-term human learning. So there's this hierarchy: there's evolution, there's long-term learning, there's short-term learning, and there's just human reaction.

10:24

And the LLM phases exist along this spectrum, but not necessarily exactly at the same points. There's no analog to some of the human modes of learning. The LLMs are kind of falling between the points. Does that make sense? Yes, although some things are still a bit confusing.

10:41

For example, if the analogy is that this is like evolution, so it's fine that it's not that sample efficient, then, well, if we're going to get the kind of super sample efficient agent from in-context learning, why are we bothering to build in,

10:55

you know, there's RL environment companies which are, it seems like what they're doing is they're teaching it how to use this API, how to use Slack, how to use whatever. It's confusing to me why there's so much emphasis on that if the kind of agent that can just learn on the fly is emerging or is gonna soon emerge

11:09

or has already emerged.

11:10

Yeah, yeah, so I mean, I can't speak for the emphasis of anyone else. I can only talk about how we think about it. I think the way we think about it is the goal is not to teach the model every possible skill within RL, just as we don't do that within pre-training, right?

11:27

Within pre-training, we're not trying to expose the model to, you know, every possible, you know, way that words could be put together, right? You know, it's rather that the model trains on a lot of things and then it reaches generalization across pre-training, right? That was the transition from GPT-1 to GPT-2 that I saw up close, which is like,

11:49

the model reaches a point. I had these moments where I was like, oh yeah, you just give the model a list of numbers: this is the cost of the house, this is the square feet of the house.

12:04

And the model completes the pattern and does linear regression. Like not great, but it does it, but it's never seen that exact thing before. And so, you know, to the extent that we are building these RL environments,

12:17

the goal is very similar to what was done five or 10 years ago with pre-training: we're trying to get a whole bunch of data, not because we want to cover a specific document or a specific skill, but because we want to generalize. I mean, I think the framework you're laying down obviously makes sense. Like we're making progress towards AGI.

12:41

I think the crux is something like, nobody at this point disagrees that we're gonna achieve AGI in this century. And the crux is, you say we're hitting the end of the exponential, and somebody else looks at this and says, oh yeah, we're making progress,

12:55

we've been making progress since 2012, and then in 2035 we'll have a human-like agent. And so I wanna understand what it is that you're seeing which makes you think, yeah, obviously we're seeing in these models the kinds of things that evolution did, or that learning within the human lifetime does.

13:10

And why think that it's one year away and not 10 years away? I actually think there are kind of two cases to be made here, or two claims you could make, one of which is stronger and the other of which is weaker. So starting with the weaker claim: when I

13:29

first saw the scaling back in 2019, I wasn't sure. This was kind of a 50-50 thing, right? I thought I saw something, and my claim was: this is much more likely than anyone thinks it is, this is wild, no one else would even consider this, but maybe there's a 50% chance this happens.

13:51

On the basic hypothesis that, as you put it, within 10 years we'll get to what I call the country of geniuses in a data center, I'm at like 90% on that. And it's hard to go much higher than 90% because the world is so unpredictable. Maybe the irreducible uncertainty would be if we were at 95%, where you get to things like,

14:15

I don't know, maybe multiple companies have internal turmoil and nothing happens, and then Taiwan gets invaded and all the fabs get blown up by missiles. You could construct a scenario like that, a 5% world where things get delayed

14:38

for 10 years. That's maybe 5%. Then there's another 5%, which is that I'm very confident on tasks that can be verified. So I think with coding, except for that irreducible uncertainty, there's just, I mean,

14:54

I think we'll be there in one or two years. There's no way we will not be there in 10 years, in terms of being able to do end-to-end coding. The one little bit of fundamental uncertainty, even on long timescales, is this thing about tasks that aren't verifiable, like planning a mission to Mars, or doing some fundamental scientific

15:15

discovery like CRISPR, or writing a novel. Those tasks are hard to verify. I am almost certain that we have a reliable path to get there, but if there's a little bit of uncertainty, it's there. So on the 10 years, I'm at like 90%, which is about as certain as you can be. I think it's crazy to say that this won't happen by 2035.

15:45

In some sane world, it would be outside the mainstream. But the emphasis on verification hints to me at a lack of belief that these models will generalize. If you think about humans, we're good both at things for which we get a verifiable reward and at things for which we don't.

16:05

No, no, this is why I'm almost sure. We already see substantial generalization from things that are verifiable to things that aren't.

16:13

We're already seeing that.

16:13

But it seems like you were emphasizing this as a spectrum that will split apart which domains we see more progress in. And I'm like, that doesn't seem like how humans get better. The world in which we don't make it, or the world in which we don't get there, is the world in which we do all the things that are verifiable.

16:31

And then many of them generalize, but we kind of don't get fully there. It's not a binary thing. But it also seems to me, even in the world where generalization is weak and you only stay in verifiable domains, it's not clear to me that in such a world you could automate software engineering, because in some sense you are, quote unquote, a software engineer. But part of being a software engineer for you involves writing these long

16:59

memos about your grand vision about different things. And so I don't think that's part of the job of SWE. That's part of the job of the company. But I do think SWE involves design documents and other things like that. Which, by the way, the models are not bad at; they're already pretty good at writing comments. And again, I'm making much weaker claims here than I believe, to distinguish between two things. We're already almost there for software engineering.

17:26

We are already almost there. By what metric? There's one metric, which is how many lines of code are written by AI. But if you consider other productivity improvements in the course of the history

17:35

of software engineering, compilers write all the lines of software, but there's a difference between how many lines are written and how big the productivity improvement is. So "we're almost there" meaning: how big is the productivity improvement,

17:48

not just how many lines are written.

17:49

Yeah, yeah. So I actually agree with you on this. So I've made this series of predictions on code and software engineering. And I think people have repeatedly kind of misunderstood them.

18:01

So let me lay out the spectrum, right? Like I think it was like, you know, like, you know, eight or nine months ago or something, I said, you know, the AI model will be writing 90% of the lines of code in like, you know, three to six months, which happened at least at some places, right?

18:19

Happened at Anthropic, happened with many people downstream using our models, but that's actually a very weak criterion, right? People thought I was saying like, we won't need 90% of the software engineers. Those things are worlds apart, right? Like I would put the spectrum as 90% of code is written by the model.

18:38

100% of code is written by the model, and that's a big difference in productivity. Then 90% of end-to-end SWE tasks, right, including things like compiling, setting up clusters and environments, testing features, writing memos; 90% of the SWE tasks are done by the models. Then 100% of today's SWE tasks are done by the models.

19:02

And even when that happens, it doesn't mean software engineers are out of a job.

19:08

There are new, higher-level things they can do; they can manage.

19:09

And then further down the spectrum, there's 90% less demand for SWEs, which I think will happen, but this is on the spectrum with farming. And so I actually totally agree with you on that.

19:29

It's just, these are very different benchmarks from each other, but we're proceeding through them super fast. It seems like in part of your vision, it's like going from 90 to 100. First, it's gonna happen fast.

19:39

And two, that somehow that leads to huge productivity improvements. Whereas what I notice, even in greenfield projects that people start with Claude Code or something, is that people report starting a lot of projects. And I'm like, do we see in the world out there

19:54

a renaissance of software, all these new features that wouldn't exist otherwise? And at least so far, it doesn't seem like we see that. And so that does make me wonder, even if I never had to intervene on Claude Code,

20:10

the world is complicated, jobs are complicated, and

20:17

closing the loop on self-contained systems, whether it's just writing software or something, how much broader

20:24

gains we would see just from that. And so maybe that should dilute our estimation of the country of geniuses. Well, I actually simultaneously agree with you, agree that it's a reason why these things don't happen instantly. But at the same time, I think the effect is gonna be very fast.

20:40

So like, I don't know, you could have these two poles, right, one is like, you know, AI is like, you know, it's not going to make progress.

20:47

It's slow.

20:48

Like it's going to take, you know, kind of forever to diffuse within the economy.

20:52

Right.

20:52

Economic diffusion has become one of these buzzwords. That's like a reason why we're not going to make AI progress or why AI progress doesn't matter. And, and, you know, the other axis is like, we'll get recursive self-improvement, you know, the whole thing, you know, can't you just draw an exponential line on the, on the

21:07

curve?

21:07

You know, it's good, we're going to have Dyson spheres around the sun so many nanoseconds after. There are these two extremes, but what we've seen from the beginning, at least if you look within Anthropic, is this bizarre 10x per year growth in revenue. Right? So in 2023, it was like zero to a hundred million.

21:38

In 2024, it was a hundred million to a billion. In 2025, it was a billion to like nine or 10 billion. You guys should have just bought a billion dollars of your own products, so you could have a clean 10x. And in the first month of this year, you would think that exponential would slow down, but we

22:00

added another few billion to revenue in January. And, and so, you know, obviously that curve can't go on forever.

22:07

Right.

22:08

You know, GDP is only so large. I would even guess that it bends somewhat this year, but that is a fast curve, right? That's a really fast curve, and I would bet it stays pretty fast, even as the scale goes to the entire economy.

22:25

So like, I think we should be thinking about this middle world where things are like extremely fast but not instant where they take time because of economic diffusion, because of the need to close the loop, because you know, it's like this fiddly, oh man, I have to do change management within my enterprise. I have to like, I set this up, but I have to change the security permissions on this in order to make it actually work.

22:55

Or I had this old piece of software that checks the model before it's compiled and released, and I have to rewrite it. And yes, the model can do that, but I have to tell the model to do that. And it has to, it has to take time to do that. And, and, and so I think everything we've seen so far is, is compatible with the idea that there's one fast exponential, that's the, the capability of the model.

23:20

And then there's another fast exponential that's downstream of that, which is the diffusion of the model into the economy. Not instant, not slow, much faster than any previous technology, but it has its limits. And this is what we, you know, when I look inside Anthropic, when I look at our customers, fast adoption, but not infinitely fast. Can I try a hot take on you? Yeah.

23:46

I feel like diffusion is cope that people use when the model isn't able to do something: they're like, oh, but it's a diffusion issue. But then you should use the comparison to humans. You would think that the inherent advantages

"Your service and product truly is the best and best value I have found after hours of searching."

Adrian, Johannesburg, South Africa

Want to transcribe your own content?

Get started free
23:58

that AIs have would make diffusion a much easier problem for new AIs getting onboarded than for new humans getting onboarded. An AI can read your entire Slack and your drive in minutes. They can share all the knowledge that the other copies of the same instance have.

24:12

You don't have this adverse selection problem when you're hiring AIs; you're going to just hire copies of a vetted AI model. Hiring a human is so much more hassle. And people hire humans all the time, right? We pay humans upwards of $50 trillion in wages because they're useful, even though, in principle, it would be much easier to integrate AIs into the economy

24:30

than it is to hire humans. So I feel like diffusion doesn't really explain it. I think diffusion is very real, and it doesn't exclusively have to do with limitations on the AI models.

24:46

Like, again, there are people who use diffusion to, you know, as kind of a buzzword to say this isn't a big deal. I'm not talking about that. I'm not talking about, you know, AI will diffuse at the speed that previous,

24:58

I think AI will diffuse much faster than previous technologies have, but not infinitely fast. So I'll just give an example of this, right? There's Claude Code. Claude Code is extremely easy to set up. If you're a developer, you can kind

25:12

of just start using Claude Code. There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or a developer at a startup, and we do everything we can to promote it, right? We sell Claude Code to enterprises, and big enterprises, like, you know,

25:32

big financial companies, big pharmaceutical companies, all of them, they're adopting Claude Code much faster than enterprises typically adopt new technology, right? But again, it takes time. Any given feature or any given product like Claude Code or like Cowork will get adopted by the individual developers who are on

25:58

Twitter all the time, by the Series A startups, many months faster than it will get adopted by a large enterprise that does food sales. There are a number of factors: you have to go through legal, you have to provision it for everyone.

26:16

It has to pass security and compliance. The leaders of the company, who are further away from the AI revolution, are forward-looking, but they have to say: oh, it makes sense for us to spend 50 million. This is what this Claude Code thing is. This is why it helps our company.

26:35

This is why it makes us more productive. And then they have to explain it to the people two levels below, and they have to say, okay, we have 3,000 developers, here's how we're going to roll it out to our developers. And we have conversations like this every day. We are doing everything we can to make Anthropic's revenue grow 20 or 30x a year instead of 10x a year. And again, many enterprises are just saying, this is so productive, we're going to take shortcuts in our usual procurement process, right?

27:05

They're moving much faster than when we tried to sell them just the ordinary API, which many of them use, but Claude Code is a more compelling product. But it's not an infinitely compelling product. And I don't think even AGI or powerful AI or the country of geniuses in the data center will be an infinitely compelling product. It will be a compelling enough product, maybe, to get three or five or 10x a year growth, even when you're in the hundreds of billions of dollars, which

27:30

is extremely hard to do, and it has never been done in history before, but not infinitely fast. I buy that it would be a slight slowdown, and maybe this is not your claim, but sometimes people talk about this like, oh, the capabilities are there, but because of diffusion we don't see the impact; otherwise we're basically at AGI. And I don't believe we're basically at AGI. I think if you had the country of geniuses in a data center, even if your company didn't adopt the country of geniuses in a data center...

27:53

If you had the country of geniuses in a data center, we would know it. Everyone in this room would know it, everyone in Washington would know it. People in rural parts might not know it, but we would know it. We don't have that now. That's very clear.

28:12

As Dario was hinting at, to get generalization, you need to train across a wide variety of realistic tasks and environments. For example, with a sales agent, the hardest part isn't teaching it to mash buttons in a specific database in Salesforce.

28:25

It's training the agent's judgment across ambiguous situations. How do you sort through a database with thousands of leads to figure out which ones are hot? How do you actually reach out? What do you do when you get ghosted?

28:36

When an AI lab wanted to train a sales agent, LabelBox brought in dozens of Fortune 500 salespeople to build a bunch of different RL environments. They created thousands of scenarios where the sales agent had to engage with a potential customer, which was role-played by a second AI. LabelBox made sure that this customer AI had a few different personas, because when you cold call, you have no idea who's going to be on the other end.

28:57

You need to be able to deal with a whole range of possibilities. LabelBox's sales experts monitored these conversations turn by turn, tweaking the role-playing agent to ensure it did the kinds of things an actual customer would do. LabelBox could iterate faster than anybody else in the industry. This is super important because RL is an empirical science; it's not a solved problem. LabelBox has a bunch of tools for monitoring agent performance in real time.

29:19

This lets their experts keep coming up with tasks so that the model stays in the right distribution of difficulty and gets the optimal reward signal during training. LabelBox can do this sort of thing in almost every domain. They've got hedge fund managers, radiologists, even airline pilots. So whatever you're working on, LabelBox can help.

29:36

Learn more at labelbox.com slash dwarkesh. Coming back to concrete predictions: because there are so many different things to disambiguate, it can be easy to talk past each other when we're talking about capabilities. So, for example, when I interviewed you three years ago, I asked you for a prediction about

29:54

what we should expect three years from now. I think you were right. You said we should expect systems which, if you talk to them for the course of an hour, are hard to tell apart from a generally well-educated human. I think you were right about that. And yet spiritually I feel unsatisfied,

30:09

because my internal expectation was that such a system could automate large parts of white-collar work. And so it might be more productive to talk about the actual end capabilities you want from such a system. I will basically tell you where I think we are. So let me ask it as a very specific question so that we can figure

30:29

out exactly what kinds of capabilities we should expect soon. So maybe I'll ask about it in the context of a job I understand well, not because it's the most relevant job, but just because I can evaluate the claims about it. Take video editors, right? I have video editors. And part of their job involves learning about our audience's preferences, learning about my preferences and tastes and the different trade-offs we have,

30:50

and just over the course of many months building up this understanding of context. And so the skill and ability they have six months into the job, a model that can pick up that skill on the job, on the fly, when should we expect such an AI system? Yeah, so I guess what you're talking about is like,

31:05

you know, we're doing this interview for three hours and then like, you know, someone's gonna come in, someone's gonna edit it, they're gonna be like, oh, you know, you know, I don't know, Dario like, you know, scratched his head and you know, we could edit that out and you know.

31:19

Magnify that. There was this long discussion that is less interesting to people, and then there's this other thing that's more interesting to people, so let's make this edit. So, I think the country of geniuses in a data center will be able to do that.

31:35

The way it will be able to do that is, it will have general control of a computer screen, right? And you'll be able to feed this in, and it'll be able to also use the computer screen to go on the web, look at all your previous interviews, look at what people are saying on Twitter in response to your interviews, talk to you, ask you questions, talk to your staff, look at the history of edits that you did.

31:59

And from that, do the job. Yeah. And from that, do the job. So I think that's dependent on several things. One, and I think this is one of the things that's actually blocking deployment, is getting to the point on computer use where the models are really masters at using the computer, right?

32:15

And we've seen this climb in benchmarks, and benchmarks are always imperfect measures, but OSWorld went from like 5%, or, I think when we first released computer use a year and a quarter ago it was maybe 15%, I don't remember exactly, but we've climbed from that to like 65 or 70%. And, you know,

32:40

there may be harder measures as well. But I think computer use has to pass a point of reliability. Can I just add a follow-up on that before we move on to the next point? For years, I've been trying to build different internal LLM tools for myself. And often I have these text-in, text-out tasks, which should be dead center in the repertoire of these models.

33:01

And yet I still hire humans to do them, because if it's something like identifying what the best clips would be in this transcript, maybe the models will do a seven-out-of-10 job at it. But there's not this ongoing way I can engage with them to help them get better at the job the way I could with a human employee. And so that missing ability, even if you solved computer use, would still block my ability to offload an actual job to them. Again, this gets back to what we were talking about before with learning on the job, where it's very interesting.

33:31

I think with the coding agents, I don't think people would say that learning on the job is what is preventing the coding agents from doing everything end to end. They keep getting better. We have engineers at Anthropic who don't write any code. And when I look at the productivity, to your previous question,

33:52

we have folks who say, this GPU kernel, this chip, I used to write it myself, I just have Claude do it. And so there's this enormous improvement in productivity. And I don't know, like when I see Claude Code, like familiarity with the code base, or like, you know, or a feeling that the model

34:13

hasn't worked at the company for a year, that's not high up on the list of complaints I see. And so I think what I'm saying is we're like, we're kind of taking a different path. Don't you think with coding, that's because there is an external scaffold of memory which

34:25

exists instantiated in the code base, which I don't know how many other jobs have. Coding made fast progress precisely because it has this unique advantage that other economic activity doesn't. But when you say that, what you're implying is that by reading the code base into the context, I have everything that the human needed to learn on the job.

34:47

So that would be an example of whether it's written or not, whether it's available or not, a case where everything you needed to know, you got from the context window, right? And that what we think of as learning, like, oh man, I started this job,

35:03

it's gonna take me six months to understand the code base. The model just did it in the context. Yeah, I honestly don't know how to think about this, because there are people who qualitatively report what you're saying. There was a METR study I'm sure you saw last year. Yes.

"The accuracy (including various accents, including strong accents) and unlimited transcripts is what makes my heart sing."

Donni, Queensland, Australia

Want to transcribe your own content?

Get started free
35:18

Where they had experienced developers try to close a pull request in repositories that they were familiar with. And those developers reported an uplift. They reported that they felt more productive with the use of these models. But in fact, if you look at their output and how much was actually merged back in, there's a 20% downlift. They were less productive as a result of using these models.

35:38

And so I'm trying to square the qualitative feeling that people have with these models with, one, at a macro level, where is this renaissance of software? And two, when people do these independent evaluations, why are we not seeing the productivity benefits that we would expect? Within Anthropic, this is just really unambiguous, right? We're under an incredible amount of commercial pressure, and we make it even harder for ourselves because we have all this safety stuff we do, which I think we do more of than other companies. So the pressure to survive economically

36:11

while also keeping our values is just incredible, right? We're trying to keep this 10x revenue curve going. There is zero time for bullshit, zero time for feeling like we're productive when we're not. These tools make us a lot more productive. Why do you think we're concerned about competitors using the tools? Because we think we're ahead of the competitors and we don't want to sell to them. We wouldn't be going through all this trouble if this was secretly reducing our productivity.

36:46

We see the end productivity every few months in the form of model launches. There's no kidding yourself about this. The models make you more productive. One, people feeling like they're more productive is exactly what's predicted by studies like this. But two, if I just look at the end output, obviously you guys are making fast progress.

37:06

But the idea with recursive self-improvement was supposed to be that you make a better AI, the AI helps you build a better next AI, et cetera. And what I see instead, if I look at you, OpenAI, DeepMind, is that people are just shifting around the podium every few months. And maybe you think that stops because you've won or whatever, but why are we not seeing the company with the best coding model have this lasting advantage,

37:32

if in fact there are these enormous productivity gains from the latest coding model? So, no. My model of the situation is that there's an advantage that's gradually growing.

37:44

Like I would say right now, the coding models give maybe, I don't know, a like 15, maybe 20% total factor speed up. Like that's my view. And six months ago, it was maybe 5%. And so it didn't matter, like 5% doesn't register. It's now just getting to the point where it's like one of several factors that, that kind of matters.

38:09

And that's gonna keep speeding up. And so I think six months ago, there were several companies that were at roughly the same point, because this wasn't a notable factor, but I think it's starting to speed up more and more. I would also say there are multiple companies that make models that are used for code, and we're not perfectly good at

38:34

preventing some of these other companies from using our models internally. So I think everything we're seeing is consistent with this kind of snowball model where there's no hard takeoff. Again, my theme in all of this is that all of this is soft takeoff: soft, smooth

38:59

exponentials, although the exponentials are relatively steep. And so we're seeing this snowball gather momentum, where it's like 10%, 20%, 25%, then 40%. And as you go, yeah, Amdahl's law, you have to get all the things that are preventing you from closing the loop out of the way, but this is one of the biggest priorities within Anthropic.
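
Since Amdahl's law is invoked here, the standard form is worth spelling out; the numbers in the example are illustrative, not figures from the conversation:

$$ \text{overall speedup} = \frac{1}{(1-p) + p/s} $$

where $p$ is the fraction of the workflow the models can accelerate and $s$ is the speedup on that fraction. Even with $s = 10$ on half the work ($p = 0.5$), the overall speedup is only about $1.8\times$, which is why clearing away the parts of the loop the models can't yet accelerate matters so much.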

39:21

Stepping back, I think earlier in the stack we were talking about, well, when do we get this on-the-job learning? And it seems like the point you were making with coding is that we actually don't need on-the-job learning.

39:34

That you can have tremendous productivity improvements, you can have potentially trillions of dollars of revenue for AI companies without this basic human ability. Maybe that's not your claim, you should clarify, but without this basic human ability to learn on the job. But I just look at like, in most domains of economic activity,

39:51

people say, I hired somebody, they weren't that useful for the first few months, and then over time they built up the context and understanding. It's actually hard to pin down what we're talking about here, but they got something. And now they're a powerhouse and they're so valuable to us.

40:05

And if AI doesn't develop this ability to learn on the fly, I'm a bit skeptical that we're gonna see

40:12

huge changes to the world without that ability.

40:14

Yeah, so I think two things here, right? There's the state of the technology right now, which is, again, we have these two stages. We have the pre-training and RL stage where you throw a bunch of data and tasks into the models and then they generalize. So it's like learning, but it's like learning from more data and not learning over kind of one human or one model's lifetime. So again, this is situated between

40:40

evolution and human learning. But once you learn all those skills, you have them. And just as with pre-training the models simply know more: if I look at a pre-trained model, it knows more about the history of samurai in Japan than I do. It knows more about baseball than I do. It knows more about

41:02

low-pass filters and electronics than I do. All of these things, its knowledge is way broader than mine. So I think even just that may get us to the point where the models are better at everything. And then we also have, again, just with scaling the existing setup, the in-context

41:23

learning, which I would describe as kind of like human on-the-job learning, but like a little weaker and a little short-term. Like you look at in-context learning, you give the model a bunch of examples, it does get it. There's real learning that happens in context.

41:37

And a million tokens is a lot. That can be days of human learning, right? If you think about the model reading a million words, how long would it take me to read a million words? Days or weeks at least.
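
A quick back-of-the-envelope check of the "days or weeks" estimate, assuming a typical adult reading speed of roughly 250 words per minute (the numbers here are illustrative assumptions, not from the transcript):

```python
# Rough estimate: how long would a human take to read ~1 million words?
words = 1_000_000
words_per_minute = 250          # assumed typical adult reading speed
hours = words / words_per_minute / 60
print(f"~{hours:.0f} hours of reading, roughly {hours / 8:.0f} eight-hour days")
# -> ~67 hours, roughly 8 eight-hour days of nonstop reading
```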

41:54

So you have these two things, and I think these two things within the existing paradigm may just be enough to get you the country of geniuses in the data center. I don't know for sure, but I think they're gonna get you a large fraction of it. There may be gaps,

42:08

but I certainly think just as things are, this I believe is enough to generate trillions of dollars of revenue. That's one, that's all one. Two is this idea of continual learning, this idea of a single model learning on the job.

42:24

I think we're working on that, too. And I think there's a good chance that in the next year or two, we also solve that. Again, I think you get most of the way there without it. I think the trillions-of-dollars-a-year market,

42:45

maybe all of the national security implications and the safety implications that I wrote about in The Adolescence of Technology, can happen without it. But I also think we, and I imagine others, are working on it. And I think there's a good chance that we get there within the next year or two. There are a bunch of ideas. I won't go into all of them in detail, but one is just: make the

43:08

context longer. There's nothing preventing longer context from working. You just have to train at longer context and then learn to serve it at inference. And both of those are engineering problems that we are working on and that I would assume others are working on as well. Yeah, so on this context length increase, it seemed like there was a period from 2020 to 2023

43:26

where from GPT-3 to GPT-4 Turbo, there was an increase from like a 2,000-token context to 128K. And then for the two-ish years since then, we've been in the same-ish ballpark.

43:36

Yeah. And when model context lengths get much longer than that, people report qualitative degradation in the ability of the model to consider that full context. So I'm curious what you're seeing internally that makes you think, oh, 10 million context, 100 million context,

43:51

to get human-like six months of learning, a billion, a billion of context. This isn't a research problem. This is an engineering and inference problem, right? If you wanna serve long contexts, you have to store your entire KV cache.

44:03

It's difficult to store all the memory in the GPUs, to juggle the memory around. I don't even know the details at this point; this is at a level of detail that I'm no longer able to follow, although it's not the weights, it's the activations you have to store. But these days the whole thing has flipped because we have MoE models and all of that.
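
To make the inference-side constraint concrete, here is a rough, hypothetical KV-cache sizing sketch for a generic dense transformer with grouped-query attention; every dimension below is a made-up example, not any real model's configuration:

```python
# Back-of-the-envelope KV-cache memory per sequence for a hypothetical transformer.
# Keys and values are cached for every layer at every token position.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int = 2) -> int:
    # factor of 2 covers keys + values; bytes_per_value=2 assumes fp16/bf16 storage
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

for seq_len in (128_000, 1_000_000, 10_000_000):
    gb = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=seq_len) / 1e9
    print(f"{seq_len:>12,} tokens -> ~{gb:,.0f} GB of KV cache per sequence")
```

Even with a small KV-head count, a million-token context lands in the hundreds of gigabytes per sequence under these assumptions, which is why this is framed as a memory-juggling and serving problem rather than a research problem.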

44:30

And this degradation you're talking about, again, without getting too specific, a question I would ask is: there are two things. There's the context length you train at, and there's the context length that you serve at. If you train at a small context length and then try to serve at a long context length, maybe you get these degradations. It's better than nothing, you might still offer it, but you get these degradations, and maybe it's harder to train at a long context length.

44:55

Yeah, so there's a lot there. I want to ask about some rabbit holes, like, wouldn't you expect that if you had to train at longer context lengths, you'd be able to get fewer samples in for the same amount of compute? But maybe it's not worth diving deep on that. I want to get an answer to the bigger-picture question, which is, okay, so I don't feel a preference for a human editor that's been working for me for six months versus an AI that's been working with me for six months.

45:27

What year do you predict that will be the case? I mean, my guess for that is, there are a lot of problems that are basically, we can do this when we have the country of geniuses in a data center. And so my picture for that is, again, if you made me guess, it's like one to two years, maybe one to three years. It's really hard to tell.

45:51

I have a strong view, 99, 95%, that all this will happen in 10 years. I think that's just a super safe bet. Yeah. And then I have a hunch, and this is more like a 50-50 thing, that it's going to be more like one to two, maybe more like one to three. So one to three years: country of geniuses, and the slightly less economically valuable task of editing videos.

46:14

It seems pretty economically valuable, let me tell you. It's just that there are a lot of use cases like that, right? There are a lot of similar ones. So you're predicting that within one to three years. And in general, Anthropic has predicted that by late '26, early '27, we will have AI systems that, quote, have the ability to navigate interfaces available to humans doing digital work

46:33

today, intellectual capabilities matching or exceeding that of Nobel Prize winners, and the ability to interface with the physical world. And then you gave an interview two months ago with DealBook where you were emphasizing your company's more responsible compute scaling as compared to your competitors.

46:50

And I'm trying to square these two views: if you really believe that we're going to have a country of geniuses, you want as big a data center as you can get. There's no reason to slow down. The TAM of something that can actually do everything a Nobel Prize winner can do is like trillions of dollars. And so I'm trying to square this conservatism, which seems rational if you have more moderate timelines, with your stated views about AI progress.

"I'd definitely pay more for this as your audio transcription is miles ahead of the rest."

Dave, Leeds, United Kingdom

Want to transcribe your own content?

Get started free
47:16

Yeah. So it actually all fits together, and we go back to this fast, but not infinitely fast, diffusion. So let's say that we're making progress at this rate, the technology is making progress this fast. Again, I have very high conviction that

47:36

we're going to get there within a few years. I have a hunch that we're going to get there within a year or two. So a little uncertainty on the technical side, but pretty strong confidence that it won't be off by much. What I'm less certain about is, again, the economic diffusion side. I really do believe that we could have models that are a country of geniuses, a country of

48:00

geniuses in the data center, in one to two years. One question is how many years after that do the trillions in revenue start rolling in? I don't think it's guaranteed that it's going to be immediate. I think it could be one year,

48:22

it could be two years,

48:24


48:25

I could even stretch it to five years, although I'm skeptical of that. And so we have this uncertainty, which is even if the technology goes as fast as I suspect that it will, we don't know exactly how fast it's gonna drive revenue.

48:42

We know it's coming, but with the way you buy these data centers, if you're off by a couple of years, that can be ruinous. It is just like how I wrote in Machines of Loving Grace: look, I think we might get this powerful AI, this country of geniuses in the data center. That description you gave comes from Machines of Loving Grace.

48:59

I said we'll get that in 2026, maybe 2027. Again, that is my hunch. I wouldn't be surprised if I'm off by a year or two, but that is my hunch. Let's say that happens. That's the starting gun. How long does it take to cure all the diseases? Right? That's one of the things that drives a huge amount of

49:16

economic value, right? You cure every disease. There's a question of how much of that goes to the pharmaceutical company, to the AI company, but there's an enormous consumer surplus because, assuming we can get access for everyone, which I care about greatly, we cure all of these diseases.

49:32

How long does it take?

49:33

You have to do the biological discovery. You have to manufacture the new drug, you have to go through the regulatory process. I mean, we saw this with vaccines and COVID, right? We got the vaccine out to everyone, but it took a year and a half, right? And so my question is, how long does it take to get the cure for everything, which the AI is the genius that can, in theory, invent, out to everyone? How long from when that AI first exists in the lab to when diseases have actually been cured for everyone? We've had a polio vaccine for 50 years.

50:12

We're still trying to eradicate it in the most remote corners of Africa. And the Gates Foundation is trying as hard as they can. Others are trying as hard as they can. But that's difficult. Again, I don't expect most of the economic diffusion to be as difficult as that, right? That's like the most difficult case.

50:28

But there's a real dilemma here. And where I've settled on it is: it will be faster than anything we've seen in the world, but it still has its limits. And so then when we go to buying data centers, again, the curve I'm looking at is, OK, we've had a 10x increase every year. So at the beginning of this year, we're looking at $10 billion

50:57

in rate of annualized revenue at the beginning of the year. We have to decide how much compute to buy. And, you know, it takes a year or two to actually build out the data centers, to reserve the data centers. So basically I'm saying like in 2027, how much compute do I get?

51:18

Well, I could assume that the revenue will continue growing 10X a year. So it'll be 100 billion at the end of 2026 and 1 trillion at the end of 2027. And so I could buy a trillion dollars, actually it would be like $5 trillion of compute because it would be a trillion dollar a year

51:42

for five years, right? I could buy a trillion dollars of compute that starts at the end of 2027. And if my revenue is not a trillion dollars, if it's even 800 billion, there's no force on earth. There's no hedge on earth

51:58

that could stop me from going bankrupt if I buy that much compute. And so even though a part of my brain wonders if it's gonna keep growing 10X, I can't buy a trillion dollars a year of compute in 2027. If I'm just off by a year in that rate of growth, or if the growth rate is 5X a year instead of 10X a year,

52:21

then you go bankrupt. And so you end up in a world where you're supporting hundreds of billions, not trillions, and you accept some risk that there's so much demand that you can't support the revenue. And you accept still some risk that you got it wrong and it's still slow. And so when I talked about behaving responsibly, what I meant actually was not the absolute

52:48

amount. I think it is true we're spending somewhat less than some of the other players, but it's actually the other things: have we been thoughtful about it, or are we just YOLOing, saying, oh, we're going to do a hundred billion dollars here, a hundred billion dollars

53:04

there.

53:05

I kind of get the impression that, you know, some of the other companies have not written down the spreadsheet, that they don't really understand the risks they're taking. They're just kind of doing stuff because it sounds cool. And we've thought carefully about it, right? We're an enterprise business. Therefore, you know, we can rely more on revenue.

53:24

It's less fickle than consumer. We have better margins, which is the buffer between buying too much and buying too little. And so I think we bought an amount that allows us to capture pretty strong upside worlds. It won't capture the full 10x a year.
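To make the arithmetic in this answer concrete, here is a minimal back-of-the-envelope sketch in Python. The $10 billion starting revenue, the 10x-versus-5x growth scenarios, and the five-year, roughly trillion-dollar-a-year commitment are the stylized figures from the conversation; the little model itself is an illustrative assumption, not anyone's actual planning spreadsheet.

```python
# Back-of-the-envelope: what happens if you size a multi-year compute commitment
# to the optimistic 10x/year revenue path and growth comes in slower.
# Figures are the stylized numbers from the conversation (all in $B).

START_REVENUE_B = 10.0          # ~$10B annualized revenue at the start of the year
COMMIT_YEARS = 5                # the commitment runs for five years

def revenue_after(years: int, growth_per_year: float) -> float:
    """Annualized revenue after compounding for `years` at `growth_per_year` (10 = 10x)."""
    return START_REVENUE_B * growth_per_year ** years

# Size the commitment to the 10x path: ~$1T/year starting at the end of 2027.
committed_per_year = revenue_after(2, growth_per_year=10.0)
print(f"Committed: ~${committed_per_year:,.0f}B/yr, "
      f"~${committed_per_year * COMMIT_YEARS / 1000:.0f}T over {COMMIT_YEARS} years")

for growth in (10.0, 5.0):
    actual = revenue_after(2, growth_per_year=growth)
    shortfall = committed_per_year - actual
    print(f"  growth {growth:>4.0f}x/yr -> 2027 revenue ~${actual:,.0f}B, "
          f"annual shortfall ~${shortfall:,.0f}B")
```

Under the 5x path the commitment outruns revenue by hundreds of billions of dollars a year, which is the bankruptcy scenario he is describing.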

53:39

And things would have to go pretty badly for us to be in financial trouble. So I think we've thought carefully and we've made that balance. And that's what I mean when I say that we're being responsible. Okay. So it seems like it's possible that we actually just have different definitions of the country

53:55

of geniuses in a data center. Because when I think of an actual country of human geniuses in a data center, I'm like, I would happily buy $5 trillion worth of compute to run an actual country of human geniuses in a data center. So let's say JP Morgan or Moderna or whatever doesn't want to use them.

54:13

I've got a country of geniuses; they'll start their own company. And if they can't start their own company and they're bottlenecked by clinical trials, it is worth stating that most clinical trials fail because the drug doesn't work. There's no efficacy, right? And I make exactly that point in Machines of Loving Grace.

54:27

I say the clinical trials are gonna go much faster than we're used to, but not instant, not infinitely fast. And then suppose it takes a year for the clinical trials to work out so that you're getting revenue from that and can make more drugs.

54:39

Okay, well, you've got a country of geniuses and you're an AI lab and you could use many more AI researchers. You also think that there's these self-reinforcing gains from smart people working on AI tech. So, okay, you can have the data center working on AI progress. Are there substantially more gains from buying a trillion dollars a year of compute versus $300 billion a year of compute? What if your competitor is buying a trillion dollars a year of compute

55:05

versus $300 billion a year of compute?

55:07

If your competitor is buying a trillion, yes, there is.

55:09

Well, no, there's some gain, but then again, there's this chance that they go bankrupt first, because if you're off by only a year, you destroy yourselves. That's the balance. We're buying a lot. We're buying a hell of a lot. We're

55:27

buying an amount that's comparable to what the biggest players in the game are buying. But if you're asking me, why haven't we signed $10 trillion of compute starting in mid-2027? First of all, it can't be produced. There isn't that much in the world. But second, what if the country of geniuses comes, but it comes in mid-2028 instead of mid-2027? You go bankrupt. So if your projection is one to three years, it seems like you should want $10 trillion

56:01

of compute by 2029, maybe 2030 at the latest. Like, are you unsure? It seems like even in the longest version of the timelines you state, the compute you are ramping up to build doesn't seem like enough. What makes you think that? Well, as you said, you want the $10 trillion; human wages, let's say, are on the order of $50 trillion a year. If you look at, well, I won't talk about Anthropic in particular, but if you

56:27

talk about the industry: the amount of compute the industry's building this year is probably in the very low tens of gigawatts, call it 10 or 15 gigawatts. It goes up by roughly 3x a year, so next year's 30 or 40 gigawatts, 2028 might be a hundred, 2029 might be like 300 gigawatts.

56:58

And each gigawatt costs, I'm doing the math in my head, but each gigawatt costs maybe $10 billion, on the order of $10 to $15 billion a year. So you put that all together and you're getting about what you described. You're getting multiple trillions a year by 2028 or 2029.

57:18

So you're getting exactly that. You're getting exactly what you predict. That's for the industry. That's for the industry, that's right. So suppose Anthropic's compute keeps 3x-ing a year, and then by like '27, or '27-'28,

57:32

you have 10 gigawatts, and multiply that by, as you say, $10 billion, so then it's like $100 billion a year. But then you're saying the TAM by 2028, '29... I don't want to give exact numbers for Anthropic, but these numbers are too small. These numbers are too small. Okay, interesting.
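As a rough check on the industry-wide numbers in this exchange, here is the same arithmetic written out. The 10 to 15 gigawatts this year, the 3x-per-year buildout, and the $10 to $15 billion per gigawatt per year are Dario's stylized figures; the compounding loop below is just an illustration of how they combine.

```python
# Industry buildout arithmetic: gigawatts compounding ~3x/year, costed at $10-15B/GW/year.
gw_range = (10, 15)                 # rough industry buildout this year, in gigawatts
cost_per_gw_b = (10, 15)            # $B per gigawatt per year

for label in ("this year", "next year", "2028", "2029"):
    low_spend = gw_range[0] * cost_per_gw_b[0] / 1000     # $T per year, low end
    high_spend = gw_range[1] * cost_per_gw_b[1] / 1000    # $T per year, high end
    print(f"{label}: ~{gw_range[0]}-{gw_range[1]} GW -> roughly "
          f"${low_spend:.1f}-{high_spend:.1f}T per year")
    gw_range = (gw_range[0] * 3, gw_range[1] * 3)
```

By 2028-2029 this lands in the multiple-trillions-per-year range he cites for the industry as a whole.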

"99% accuracy and it switches languages, even though you choose one before you transcribe. Upload → Transcribe → Download and repeat!"

Ruben, Netherlands

Want to transcribe your own content?

Get started free
57:49

I'm really proud that the puzzles I've worked on with Jane Street have resulted in them hiring a bunch of people from my audience. Well, they're still hiring, and they just sent me another puzzle. For this one, they spent about 20,000 GPU hours training backdoors into three different language models. Each one has a hidden prompt that elicits completely different behavior. You just have to find

58:07

the trigger. This is particularly cool because finding backdoors is actually an open question in frontier AI research. Anthropic actually released a couple of papers about sleeper agents and they show that you can build a simple classifier on the residual stream to detect when a backdoor is about to fire. But they already knew what the triggers were because they built them. Here, you don't.

58:27

And it's not feasible to check the activations for all possible trigger phrases. Unlike the other puzzles they made for this podcast, Jane Street isn't even sure this one is solvable. But they've set aside $50,000 for the best attempts and write-ups. The puzzle's live at janestreet.com slash dwarkesh. And they're accepting submissions until April 1st. All right, back to Dario.

58:48

You've told investors that you plan to be profitable starting in '28, and this is the year where we're potentially getting the country of geniuses in a data center. And this is gonna now unlock all this progress in medicine and health and et

59:05

cetera, et cetera, and new technologies. Wouldn't this be exactly the time where you'd want to reinvest in the business and build bigger countries of geniuses so they can make more discoveries? I mean, profitability is this kind of weird thing in this field. I don't think in this field profitability is actually a measure of spending down versus investing in the business.

59:33

Let's just take a model of this. I actually think profitability happens when you underestimated the amount of demand you were going to get, and loss happens when you overestimated the amount of demand you were going to get, because you're buying the data centers ahead of time. So think about it this way.

59:50

Ideally, and again, these are stylized facts, these numbers are not exact, I'm just trying to make a toy model here. Let's say half of your compute is for training, and half of your compute is for inference. And the inference has some gross margin that's more than 50%. And so what that

1:00:07

means is that if you were in steady state, you build a data center, and if you knew exactly the demand you were getting, you would get a certain amount of revenue. Say, I don't know, let's say you pay $100 billion a year for compute, and on $50 billion of that you support $150 billion of revenue.

1:00:31

And the other $50 billion are used for training. So basically you're profitable. You make $50 billion of profit. Those are the economics of the industry today, or, sorry, not today, but that's where we're projecting forward in a year or two. The only thing that makes that not the case is if you get less demand than $50 billion. Then you have more than 50% of your data center for

1:00:57

research and you're not profitable. So you train stronger models, but you're not profitable. If you get more demand than you thought, then your research gets squeezed, but you're able to support more inference and you're more profitable. So, maybe I'm not explaining it well,

1:01:17

but the thing I'm trying to say is, you decide the amount of compute first, and then you have some target split between inference and training, but that gets determined by demand. It doesn't get determined by you.
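Here is a toy version of that demand-prediction model, using the stylized numbers from the last few exchanges: a $100 billion compute bill, half of it meant for inference, and inference revenue of roughly 3x the compute that serves it. The function is an illustrative assumption about how the accounting works, not a description of any company's actual books.

```python
# Toy model: commit to a compute bill ahead of time; demand decides how much of it
# serves inference (at a >50% gross margin); whatever is left over goes to training.

REVENUE_PER_INFERENCE_DOLLAR = 3.0   # $50B of inference compute supporting $150B of revenue

def year_pnl(compute_bill_b: float, actual_demand_b: float):
    """Return (revenue, research_compute, profit) in $B for one year."""
    inference_used = min(actual_demand_b, compute_bill_b)   # can't serve more than you bought
    revenue = inference_used * REVENUE_PER_INFERENCE_DOLLAR
    research_compute = compute_bill_b - inference_used      # leftover compute goes to training
    profit = revenue - compute_bill_b
    return revenue, research_compute, profit

for label, demand in [("demand as planned", 50), ("demand undershoots", 30), ("demand overshoots", 70)]:
    rev, research, profit = year_pnl(compute_bill_b=100, actual_demand_b=demand)
    print(f"{label:>20}: revenue ${rev:.0f}B, research compute ${research:.0f}B, profit ${profit:+.0f}B")
```

When demand comes in under the plan you eat a loss but have plenty of compute left for research; when it comes in over, you are more profitable but research gets squeezed, which is the trade-off he describes.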

1:01:29

Well, what I'm hearing is the reason you're predicting profit is that you are systematically under-investing in compute, right? Because if you actually like- No, no, no. I'm saying it's hard to predict. So, these things about 2028 and when it will happen, that's our attempt to do the best we can with investors.

1:01:46

All of this stuff is really uncertain because of the cone of uncertainty. Like, we could be profitable in 2026 if the revenue grows fast enough. And then, you know, if we overestimate or underestimate the next year, that could swing wildly. Like, what I'm trying to get at is you have a model in your head of like, the business invests, invests, invests, invests, gets scale, and kind of then becomes profitable.

1:02:13

There's a single point at which things turn around. I don't think the economics of this industry work that way. I see. So if I'm understanding correctly, you're saying, because of the discrepancy between the amount of compute we should have gotten and the amount of compute we got,

1:02:27

we were sort of forced to make profit. But that doesn't mean we're gonna continue making profit. We're gonna reinvest the money because, well, now AI has made so much progress and we want the bigger country of geniuses. And so then we're back to: revenue is high,

1:02:41

but losses are also high. If every year we predict exactly what the demand is going to be, we'll be profitable every year, because spending roughly 50% of your compute on research, plus a gross margin that's higher than 50%, plus correct demand prediction, leads to profit.

1:03:05

That's the profitable business model that I think is there, but obscured by this building ahead and these prediction errors.

1:03:13

I guess you're treating the 50% as sort of a given constant. Yes. Yes. Whereas in fact, if AI progress is fast and you can increase the progress by scaling up

1:03:23

more, you just spend more than 50% and not make profit. Here's what I'll say. You might wanna scale it up more, but remember the log returns to scale, right? If 70% would only get you a very little bit smarter model,

1:03:39

because it's only a factor of 1.4x, right? That extra $20 billion, each dollar there is worth much less to you because of the log-linear setup. And so you might find that it's better to invest that $20 billion

1:03:58

in serving inference or in hiring engineers who are better at what they're doing.
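A tiny numerical illustration of the log-linear point, under the assumption that capability grows roughly with the logarithm of training compute. The 50%-versus-70% split and the $20 billion figure are from the conversation, and the $100 billion total budget is implied by that $20 billion; the log-returns model is the standard scaling-law intuition, used here purely as an illustration.

```python
import math

BUDGET_B = 100.0   # total annual compute budget, in $B (implied by the $20B figure)

def gain(from_share: float, to_share: float) -> float:
    """Capability gain from moving the training share, under log(training compute) returns."""
    return math.log(to_share * BUDGET_B) - math.log(from_share * BUDGET_B)

full_doubling = gain(0.25, 0.50)     # reference: what a full 2x of training compute buys
extra_20b = gain(0.50, 0.70)         # the 50% -> 70% move, i.e. a 1.4x factor

print(f"50% -> 70% of the budget is only a {0.70 / 0.50:.1f}x factor in training compute,")
print(f"worth about {extra_20b / full_doubling:.0%} of a full doubling, "
      f"for an extra ${BUDGET_B * 0.20:.0f}B.")
```

Each marginal dollar past the halfway split buys noticeably less model improvement, which is why he argues the marginal $20 billion may be better spent on inference or on engineers.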

1:04:10

It's not exactly going to be 50%.

1:04:12

It'll probably vary over time.

1:04:16

What I'm saying is the log-linear return, what it leads to is you spend an order-one fraction of the business, right? Not 5%, not 95%. And then you get diminishing returns because of the log scale. I realize it's strange that I'm convincing Dario to believe in AI progress or something.

1:04:34

But like, okay, you don't invest in research because it has diminishing returns,

1:04:38

but you invest in the other things you mentioned.

1:04:39

Again, we're talking about diminishing returns after you're spending $50 billion a year, right? This is a point I'm sure you would make, but the returns on a genius could be quite high. And more generally, what is profit in the market economy? Profit is basically saying the other companies in the market can do more

1:04:59

things with this money that I put aside. And, Dwarkesh, I'm just trying to, because I don't want to give information about Anthropic, which is why I'm giving these stylized numbers, but let's just derive the equilibrium of the industry.

1:05:11

Right.

1:05:12

I think the... so why doesn't everyone spend 100% of their compute on training and not serve any customers? Right. It's because if they didn't get any revenue, they couldn't raise money. They couldn't do compute deals. They couldn't buy more compute the next year. So there's going to be an equilibrium where every company spends

1:05:33

less than 100% on training and certainly less than 100% on inference. It should be clear why you don't just serve the current models and never train another model, because then you won't have any demand, because you'll fall behind. So there's some equilibrium. It's not going to be 10%.

1:05:51

It's not going to be 90%. Let's just say, as a stylized fact, it's 50%. That's what I'm getting at. And I think you're able to get good margins on compute, and so the underlying economics are profitable. The problem is you have this hellish demand prediction

1:06:15

problem when you're buying the next year of compute, and you might guess under and be very profitable but have no compute for research, or you might guess over and you are not profitable and you have all the compute for research in the world. Does that make sense?

1:06:36

Just as a dynamic model of the industry. Maybe stepping back, I'm not saying I think the country of geniuses is gonna come in two years and therefore you should buy this compute. To me, the end conclusion you're arriving at makes a lot of sense, but

1:06:52

that's because it seems like the country of geniuses is hard and there's a long way to go. And so, stepping back, the thing I'm trying to get at is more like: it seems like your worldview is compatible with somebody who says we're like 10 years away from a world in which we're generating

1:07:09

And that's just not my view. Yeah. That is not my view. So I'll make another prediction. It is hard for me to see that there won't be trillions of dollars in revenue before 2030.

1:07:23

Like, I can construct a plausible world where it takes maybe three years. That would be the end of the range of what I think is plausible.

1:07:31

Like, in 2028, we get the real country of geniuses in the data center, the revenue has been going up and maybe is in the low hundreds of billions by 2028. And then the country of geniuses accelerates it to trillions, and we're basically on the slow end of diffusion. It takes two years to get to the trillions. That

1:07:55

would be the world where it takes until 2030. I suspect even composing the technical exponential and the diffusion exponential, we'll get there before 2030. So, you laid out a model where Anthropic makes profit because it seems like fundamentally we're in a compute-constrained world. And so, eventually we keep growing compute.

1:08:17

No, I think the way the profit comes is, again, let's just abstract the whole industry here. Let's just imagine we're in an economics textbook. We have a small number of firms. Each can

1:08:33

invest some fraction in R&D. They have some marginal cost to serve; the gross profit margins on that marginal cost are very high because inference is efficient. There's some competition, but the models are also differentiated. Companies will compete to push their research budgets up, but because there's a small number

1:08:57

of players, we have the, what is it called? The Cournot equilibrium, I think, is what the small-number-of-firms equilibrium is called. The point is it doesn't equilibrate to perfect competition with zero margins. If there's three firms in the economy, all kind of independently

1:09:19

behaving rationally, it doesn't equilibrate to zero. Help me understand that, because right now we do have three leading firms and they're not making profit. And so, yeah, what is changing? Yeah. So, again, the gross margins right now are very positive.

1:09:39

What's happening is a combination of two things. One is we're still in the exponential scale-up phase of compute. So basically what that means is: a model gets trained. Let's say a model got trained that cost a billion dollars last year. And then this year it produced $4 billion of revenue and cost $1 billion to run inference on. So, again, I'm using stylized numbers here,

1:10:11

but there'll be 75% gross margins and this 25% tax for the training cost. So that model as a whole makes $2 billion. But at the same time, we're spending $10 billion to train the next model because there's an exponential scale-up.

1:10:29

And so the company loses money. Each model makes money, but the company loses money. The equilibrium I'm talking about is one where we have the country of geniuses in a data center, but that model training scale-up has equilibrated more.
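Here is the stylized per-model accounting in that answer, written out. The $1 billion training run, the $4 billion of revenue, the $1 billion of inference cost, and the $10 billion next-generation run are his illustrative numbers; the point is simply that each model can be profitable on its own while the company posts a loss during the exponential scale-up.

```python
# Per-model economics vs. company-level cash flow during the scale-up phase ($B).

def model_economics(training_cost, revenue, inference_cost):
    gross_margin = (revenue - inference_cost) / revenue
    lifetime_profit = revenue - inference_cost - training_cost
    return gross_margin, lifetime_profit

margin, model_profit = model_economics(training_cost=1, revenue=4, inference_cost=1)
next_training_run = 10   # the next model costs ~10x, because of the exponential scale-up

# Company view this year: gross profit from the current model minus the next training run.
company_cash = (4 - 1) - next_training_run

print(f"current model: {margin:.0%} gross margin, ${model_profit}B profit over its life")
print(f"company cash this year: {company_cash:+d}B (a loss, driven by the ${next_training_run}B next run)")
```

Each model makes money; the company loses money, exactly because the next training run is an order of magnitude larger than the last.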

1:10:46

Maybe it's still going up, we're still trying to predict the demand, but it's more leveled out. I'm confused about a couple of things there. So let's start with the current world. In the current world, you're right that,

1:11:00

as you said before, if you treat each individual model as a company, it's profitable. But of course, a big part of the production function of being a frontier lab is training the next model, right? So if you didn't do that, then you'd make profit for two months. And then you wouldn't have margins because you wouldn't have the best model. So, yeah, you can only make profits for two months on the current system.

1:11:20

But at some point, that reaches the biggest scale that it can reach. And then in equilibrium, we have algorithmic improvements, but we're spending roughly the same amount to train the next model as we spent to train the current model. So this equilibrium relies- I mean, at some point you run out of money in the economy.

1:11:40

Fixed lump of labor fallacy. The economy is gonna grow, right? That's one of your predictions.

1:11:43

Well, yes. Data centers in space. But this is another example of the theme I was talking about, which is that the economy will grow much faster with AI than I think it ever has before. But it's not like... right now, the compute is growing 3x a year.

1:11:59

I don't believe the economy is going to grow 300% a year. Like I said in Machines of Loving Grace, I think we may get 10% or 20% per year growth in the economy, but we're not going to get 300% growth in the economy. So I think in the end, if compute becomes the majority of what the economy produces, it's going to be capped by that.

1:12:20

So OK, now let's assume a model where compute stays capped. The world where Frontier Labs are making money is one where they continue to make fast progress because fundamentally your margin is limited by how good the alternative is. And so you are able to make money

1:12:37

because you have a frontier model. If you didn't have a frontier model, you wouldn't be making money. Well, I mean... And so this model requires there never to be a steady state. Like, forever and ever you keep making more algorithmic progress. I don't think that's true. I mean, you

1:12:53

know, I feel like this is an economics

1:12:58

class now or something.

1:12:59

Do you know the Tyler Cowen quote? We never stop talking about economics. We never stop talking about economics. So, no, I don't think this field is going to be a monopoly. My lawyers never want me to say the word monopoly. But I don't think this field is going to be one player; it will be a small number of players. And ordinarily, the way you get monopolies like Facebook, or Meta, I always call them Facebook, is these kind of

1:13:36

network effects. The way you get industries in which there are a small number of players is very high costs of entry, right? So cloud is like this. I think cloud is a good example of this. You have three, maybe four players within cloud. I think that's the same for AI, three, maybe four. And the reason is that it's so expensive,

1:13:59

it requires so much expertise and so much capital to like run a cloud company.

1:14:05

Right.

1:14:05

And so you have to put up all this capital. And then in addition to putting up all this capital, you have to get all of this other stuff that requires a lot of skill to make happen. And so if you go to someone and you're like, I want to disrupt this industry, here's a hundred billion dollars, you're like, okay, I'm putting up $100 billion and also betting that you can do all these other things that these people have been doing for-

1:14:25

Well, only to decrease the profit in the industry. And then the effect of your entering is the profit margins go down. So, you know, we have equilibria like this all the time in the economy where we have a few players, profits are not astronomical,

1:14:39

margins are not astronomical, but they're not zero. Right? And, you know, I think that's what we see on cloud. Cloud is very undifferentiated. Models are more differentiated than cloud, right? Like everyone knows Claude is good at different things than GPT is good at, than Gemini is good at. And it's not just Claude's good at coding,

1:15:00

GPT is good at, you know, math and reasoning, you know. It's more subtle than that. Like models are good at different types of coding. Models have different styles. Like I think these things are actually, you know, quite different from each other.

1:15:14

And so I would expect more differentiation than you see in cloud. Now, there is one counterargument. And that counterargument is that if all of that, the process of producing models,

1:15:33

if AI models can do that themselves, then that could spread throughout the economy. But that is not an argument for commoditizing AI models in general. That's kind of an argument for commoditizing the whole economy at once.

1:15:49

I don't quite know what happens in that world where basically anyone can do anything, anyone can build anything, and there's no moat around anything at all. I mean, I don't know, maybe we want that world. Maybe that's the end state here. Maybe when AI models can do everything, if we've solved all the safety and security problems, that's one of the mechanisms for the economy

1:16:16

flattening itself again. But that's kind of far post country of geniuses in the data center. Maybe a finer way to put that potential point is: one, it seems like AI research is especially loaded on raw intellectual power, which will be especially abundant in a world with AGI.

1:16:37

And two, if you just look at the world today, there's very few technologies that seem to be diffusing as fast as AI algorithmic progress. And so that does hint that this industry is sort of structurally diffusive. So I think coding is going fast, but I think AI research is a superset of coding and there are aspects of it that are not going fast.

1:17:00

But I do think, again, once we get coding, once we get AI models going fast, that will speed up the ability of AI models to do everything else. So while coding is going fast now, I think once the AI models are building the next AI models and building everything else, the whole economy will start to go at the same pace.

1:17:21

I am worried geographically, though. I'm a little worried that just proximity to AI, having heard about AI, may be one differentiator. And so when I said the 10 or 20% growth rate, a worry I have is that the growth rate could be like 50% in Silicon Valley and parts of the world that are socially connected to Silicon Valley, and not that much faster than its current pace elsewhere.

"Your service and product truly is the best and best value I have found after hours of searching."

Adrian, Johannesburg, South Africa

Want to transcribe your own content?

Get started free
1:17:52

And I think that'd be a pretty messed up world. So one of the things I think about a lot is how to prevent that.

1:17:56

Yeah.

1:17:57

Do you think that once we have this country of geniuses in a data center, robotics is sort of quickly solved afterwards? Because it seems like a big problem with robotics is that a human can learn how to teleoperate current hardware, but current AI models can't, at least not in a way that's

1:18:14

super productive. And so if we have this ability to learn like a human should it solve robotics immediately as well? I don't think it's dependent on learning like a human. It could happen in different ways. Again, we could have trained the model

1:18:25

on many different video games, which are like robotic controls or many different simulated robotics environments, or just train them to control computer screens and they learn to generalize. So it will happen.

1:18:37

It's not necessarily dependent on human-like learning. Human-like learning is one way it could happen. If the model's like, oh, I pick up a robot, I don't know how to use it, I learn. That could happen because we discover continual learning. That could also happen because we train the model

1:18:52

on a bunch of environments and then it generalized, or it could happen because the model learns that in the context length. It doesn't actually matter which way, if we go back to the discussion we had like an hour ago, that type of thing can happen in several different ways. But I do think when, for whatever reason, the models have those skills, then robotics

1:19:16

will be revolutionized, both the design of robots, because the models will be much better than humans at that, and also the ability to kind of control robots. So we'll get better at the physical building, the physical hardware building, the physical robots, and we'll also get better at controlling it. Now, you know, does that mean the robotics industry will

1:19:35

also be generating trillions of dollars of revenue? My answer there is yes, but there will be the same extremely fast, but not infinitely fast diffusion. So will robotics be revolutionized? Yeah, maybe tack on another year or two. That's the way I think about these things.

1:19:51

Makes sense. There's a general skepticism about extremely fast progress. Like here's my view, which is like, it sounds like you are gonna solve continual learning one way or another

1:20:01

within a matter of years. But just as people weren't talking about continual learning a couple of years ago, and then we realized, oh, why aren't these models as useful as they could be right now, even though they are clearly passing the Turing test and are experts in so many different domains, maybe it's this thing. And then we solve this thing and we realized, actually, there's another,

1:20:18

another thing that human intelligence can do, and that's a basis of human labor, that these models can't do. So why not think there will be more things like this? Why think that we've found all the pieces of human intelligence? Well, to be clear, I mean, I think continual learning, as I've said before, might not be a barrier at all.

1:20:35

Yeah. Right? I think there basically might not be such a thing at all. In fact, I would point to the history in ML of people coming up with things that are barriers that end up dissolving within the big blob of compute, right?

1:20:58

People talked about, how do your models keep track of nouns and verbs? They can understand syntactically, but they can't understand semantically, it's only statistical correlations. They can understand a word, but they can't understand a paragraph. They can't do reasoning. But then suddenly it turns out they can do code and math very

1:21:24

well after all. So I think there's actually a stronger history of some of these things seeming like a big deal and then kind of dissolving. Some of them are real. I mean, the need for data is real; maybe continual learning is a real thing.

1:21:42

But again, I would ground us in something like code. I think we may get to the point in a year or two where the models can just do SWE end to end. That's a whole task. That's a whole sphere of human activity where we're just saying models can do it now. When you say end to end, do you mean setting technical direction, understanding the context

1:22:04

of the problem, etc.? Yes. Yes. Yes. I mean, all of that. Interesting. I mean, that feels like AGI-complete, which maybe is internally consistent, but it's not like saying 90% of code or 100% of code. It's like, no, no, no, the other parts of the job as well. No, no, no, I gave this spectrum: 90% of code, 100% of code, 90% of end-to-end SWE, 100% of end-to-end SWE, new tasks are created for SWEs, eventually those get done as well.

1:22:31

But it's a long spectrum there. But we're traversing the spectrum very quickly. I do think it's funny that I've seen a couple of podcasts you've done where the host will bring up the essay I wrote about the continual learning thing, and it always makes me crack up because you're like, you know, you've been an AI researcher for like 10 years.

1:22:47

And I'm sure there's like some feeling of like, okay, so a podcaster wrote an essay.

1:22:53

And like every interview I get asked about it.

1:22:55

You know, the truth of the matter is that we're all trying to figure this out together.

1:23:00

Yeah.

1:23:00

Right. There are some ways in which I'm able to see things that others aren't these days, but that probably has more to do with the fact that I can see a bunch of stuff within Anthropic and have to make a bunch of decisions than with any great research insight that others don't have.

1:23:16

Right.

1:23:16

I'm running a 2,500-person company; it's actually pretty hard for me to have concrete research insight, much harder than it would have been 10 years ago, or even two or three years ago. As we go towards a world of a full drop-in remote worker replacement, does an API pricing

1:23:40

model still make the most sense? And if not, what is the correct way to price AGI or serve AGI? Yeah, I mean, I think there's going to be a bunch of different business models here, sort of all at once, that are going to be experimented with. I actually do think that the API model is more durable than many people think. One way I think about it is if the technology

1:24:06

is kind of advancing quickly, if it's advancing exponentially, what that means is there's always kind of like a surface area of kind of new use cases that have been developed in the last three months. And any kind of product surface you put in place

1:24:22

is always at risk of becoming irrelevant, right? Any given product surface probably makes sense for a range of capabilities of the model, right? The chatbot is already running into limitations: making it smarter doesn't really help the average consumer that much. But I don't think that's a limitation of AI models. I don't think that's evidence that the models are good enough and that them getting better doesn't matter to the economy. It just doesn't matter to that particular product. And so

1:24:55

I think the value of the API is that it always offers an opportunity, very close to the bare metal, to build on whatever the latest thing is. And so there's always going to be this front of new startups and new ideas that weren't possible a few months ago and are possible because the model is advancing. And so I actually predict that,

1:25:25

while it's going to exist alongside other business models, we're always going to have the API business model, because there's always going to be a need for a thousand different people to try experimenting with the model in different ways. And a hundred of them become startups

1:25:40

and 10 of them become big successful startups. And, you know, two or three really end up being the way that people use the model of a given generation. So I basically think it's always going to exist. At the same time, I'm sure there's going to be other models as well.

1:25:55

Like, not every token the model outputs is worth the same amount. Think about: what is the value of the tokens the model outputs when someone calls it up and says, my Mac isn't working or something, and the model says, restart it, right? Yeah. And that someone hasn't heard that before, but the model has said that like 10 million times, right?

1:26:21

Right. Maybe that's worth like a dollar or a few cents or something. Whereas if the model goes to one of the pharmaceutical companies and it says, oh, this molecule you're developing, you should take the aromatic ring from that end of the molecule and put it on the other end of the molecule, and if you do that, wonderful things will happen.

1:26:50

Those tokens could be worth tens of millions of dollars. Right.

1:26:50

So I think we're definitely going to see business models that recognize that. At some point we're going to see pay-for-results in some form, or we may see forms of compensation that are like labor, that kind of work by the hour. I don't know. I think because it's a new industry, a lot of things are going to be tried, and I don't

1:27:19

know what will turn out to be the right thing. I take your point that people will have to try things to figure out what is the best way to use this blob of intelligence. But what I find striking is Claude Code. I don't think in the history of startups there has been a single application that has been as hotly competed in as coding agents. And Claude Code is a category leader here. And that seems surprising to me.

"The accuracy (including various accents, including strong accents) and unlimited transcripts is what makes my heart sing."

Donni, Queensland, Australia

Want to transcribe your own content?

Get started free
1:27:49

Like it doesn't seem intrinsically like Anthropic had to build this. I wonder if you have an accounting of why it had to be Anthropic or how Anthropic ended up building an application in addition to the model underlying it. Yeah, so it actually happened in a pretty simple way,

1:28:02

which is we had our own coding models, which were good at coding. And around the beginning of 2025, I said, I think the time has come where you can have non-trivial acceleration of your own research, if you're an AI company, by using these models. And of course, you need an interface, you need a harness to use them.

1:28:25

And so I encouraged people internally, and I didn't say this is the one thing that you have to use. I just said people should experiment with this. And then this thing, I think it might've been originally called Claude CLI,

1:28:38

and then the name eventually got changed to Claude Code internally, was the thing that everyone was using, and it was seeing fast internal adoption. And I looked at it, and I said, probably we should launch this externally, right? It's seen such fast adoption within Anthropic, and

1:28:57

coding is a lot of what we do. And so we have an audience of many hundreds of people that's in some ways at least representative of the external audience. So it looks like we already have product-market fit. Let's launch this thing. And then we launched it.

1:29:11

And I think just the fact that we ourselves are developing the model, and we ourselves know what we most need to use the model for, is kind of creating this feedback loop. I see. In the sense that, let's say, a developer at Anthropic is like, ah, it would be better if it was better at this X thing, and then you bake that into the next model that you build.

1:29:34

That's one version of it, but then there's just the ordinary product iteration: we have a bunch of coders within Anthropic, and they use Claude Code every day. And so we get fast feedback; that was more important in the early days. Now, of course, there are millions of people using it, and so we get a bunch of external feedback as well. But it's just great to be able to get that. You know, I think this is the reason why we launched a coding model and didn't launch a pharmaceutical company, right?

1:30:08

You know, my background's in biology, but we don't have any of the resources that are needed to launch a pharmaceutical company. So there's been a ton of hype around OpenClaw, and I wanted to check it out for myself. I've got a date coming up this weekend,

1:30:21

and I don't have anything planned yet. So I gave OpenClaw a Mercury debit card. I set a couple hundred dollar limit and I said, surprise me. Okay, so here's the Mac mini it's on. And besides having access to my Mercury, it's totally quarantined. And I actually felt quite comfortable giving it access to a debit card because Mercury makes it super easy to set up guardrails.

1:30:39

I was able to customize permissions, cap the spend, and restrict the category of purchases. I wanted to make sure the debit card worked, so I asked OpenClaw to just make a test transaction and decided to donate a couple bucks to Wikipedia. Besides that, I have no idea what's going to happen. I will report back on the next episode about how it goes. In the meantime, if you want a personal banking solution that can accommodate all the different ways that people use their money, even experimental ones like this one, visit mercury.com slash

1:31:04

personal. Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column NA, members FDIC. You know, she thinks we're getting coffee and walking around the neighborhood. Let me ask you now about making AI go well. It seems like whatever vision we have about how AI goes well has

1:31:28

to be compatible with two things. One is that the ability to build and run AIs is diffusing extremely rapidly. And two is that the population of AIs, the number we have and their intelligence, will also increase very rapidly. And that means that lots of people will be able to build huge populations of misaligned AIs, or AIs which are just like companies trying to increase their footprint, or

1:31:54

have weird psyches like Sydney Bing, but now they're superhuman. What is a vision for a world in which we have an equilibrium that is compatible with lots of different AIs, some of which are misaligned, running around? Yeah, yeah. So I think, you know, in Adolescence of Technology, I was kind of skeptical of the balance of power. The thing I was specifically skeptical of is: you have three or four of these companies, all building models that are sort of derived from the same thing, and the idea that these would check each other, or even that any number of them would check each other. Like, we might live in an offense-dominant world where one person or

1:32:45

one AI model is smart enough to do something that causes damage for everything else. In the short run, we have a limited number of players now, so we can start within the limited number of players. We need to put in place the safeguards. We need to make sure everyone does the right alignment work. We need to make sure everyone has bio classifiers. Those are kind of the immediate things we need to do.

1:33:11

I agree that that doesn't solve the problem in the long run; particularly if the ability of AI models to make other AI models proliferates, the whole thing can become harder to solve. I think in the long run, we need some architecture of governance, right? Some architecture of governance that preserves human freedom, but also allows us to govern the very large

1:33:39

number of human systems, AI systems, hybrid human-AI companies, or economic units. So we're going to need to think about, how do we protect the world against bioterrorism? How do we protect the world against, you

1:34:07

know, mirror life? Probably we're going to need some kind of AI monitoring system that monitors for all of these things, but then we need to build this in a way that preserves civil liberties and our constitutional rights. So, just as with anything else,

1:34:27

it's like a new security landscape with a new set of tools and a new set of vulnerabilities. And I think my worry is, if we had a hundred years for this to happen all very slowly, we'd get used to it.

1:34:43

You know, like we've gotten used to the presence of explosives in society, or the presence of various new weapons, or the presence of video cameras. We would get used

1:34:58

to it over a hundred years, and we'd develop governance mechanisms, we'd make our mistakes. My worry is just that this is happening all so fast. And so I think maybe we need to do our thinking faster about how to make these governance mechanisms work.

1:35:12

Yeah.

1:35:13

It seems like in an offense-dominant world, over the course of the next century, so the idea is the AI is making the progress that would happen over the next century happen in some period of five to 10 years. But we would still need the same mechanisms, or balance of power would be similarly intractable, even

1:35:29

if humans were the only game in town. And so I guess we'd have the advice of AIs. It fundamentally doesn't seem like a totally different ballgame here. If checks and balances were going to work, they would work with humans as well. If they aren't going to work, they wouldn't work with AIs either. And so maybe this just dooms human checks and balances as well. But yeah, again, I think there's some way to make this happen. It's just that we may have to talk to AIs

1:36:05

about building societal structures in such a way that these defenses are possible. I don't know. I mean, this is, I don't want to say so far ahead in time, but so far ahead in technological ability, which may happen over a short period of time, that it's hard for us to anticipate it in advance. Speaking of governments getting involved, on December 26th, the Tennessee legislature introduced a bill

1:36:29

which said, quote, it would be an offense for a person to knowingly train artificial intelligence to provide emotional support, including through open-ended conversations with a user. And of course, one of the things that Claude attempts to do

1:36:42

is to be a thoughtful, knowledgeable friend. And in general, it seems like we're going to have this patchwork of state laws. A lot of the benefits that normal people could experience as a result of AI are going to be curtailed, especially when we get into the kinds of things you discuss in Machines of Loving Grace, biological freedom, mental health improvements, et cetera, et cetera. It seems easier to imagine worlds in which these get whack-a-moled away by different

1:37:07

laws. Whereas bills like this don't seem to address the actual existential threats that you're concerned about. So I'm curious to understand, in the context of things like this, Anthropic's position against the federal moratorium on state AI laws. Yes.

1:37:24

So I don't know. There's many different things going on at once, right? I think that particular law is dumb. I think it was clearly made by legislators who probably had little idea what AI models can and can't do. They're like, AI models serving as emotional support, that just sounds scary.

1:37:40

Like, I don't want that to happen. So, you know, we're not in favor of that.

1:37:46

Right.

1:37:46

But you know, that wasn't the thing that was being voted on. The thing that was being voted on is: we're going to ban all state regulation of AI for 10 years, with no apparent plan to do any federal regulation of AI, which would take Congress to pass, which is a very high bar. So the idea that we'd ban states from doing anything for 10 years, and people said they

1:38:10

had a plan for the federal government, but there was no proposal on the table, there was no actual attempt. Given the serious dangers that I lay out in Adolescence of Technology around things like biological weapons and bioterrorism, autonomy risk, and the timelines we've been talking about, 10 years is an eternity.

1:38:32

I think that's a crazy thing to do. So if that's the choice, if that's what you force us to choose, then we're going to choose not to have that moratorium. And I think the benefits of that position exceed the costs, but it's not a perfect position if that's the choice. Now, I think the thing that we should do,

1:38:53

the thing that I would support is the federal government should step in, not saying states you can't regulate, but here's what we're going to do, and states you can't differ from this, right? Like, I think preemption is fine in the sense of saying that federal government says, here's our standards, this applies to everyone, states can't

1:39:13

do something different. That would be something I would support if it were done in the right way. But this idea of "states, you can't do anything, and we're not doing anything either," that struck us as very much not making sense. And I think it will not age well; it's already starting to not age well

1:39:33

with all the backlash that you've seen. Now, in terms of what we would want, I mean, the things we've talked about are starting with transparency standards in order to monitor some of these autonomy risks and bioterrorism risks. As the risks become more serious, as we

"I'd definitely pay more for this as your audio transcription is miles ahead of the rest."

Dave, Leeds, United Kingdom

Want to transcribe your own content?

Get started free
1:39:52

get more evidence for them, then I think we could be more aggressive in some targeted ways and say, hey, AI bioterrorism is really a threat. Let's pass a law that forces people to have classifiers. And I could even imagine more than that; it depends how serious a threat it ends up being.

1:40:09

We don't know for sure. We need to pursue this in an intellectually honest way, where we say ahead of time, the risk has not emerged yet. But with the pace that things are going, I could imagine a world where later this year we say, hey, this AI bioterrorism

1:40:25

stuff is really serious. We should do something about it. We should put it in a federal standard. And if the federal government won't act, we should put it in a state standard. I could totally see that. I'm concerned about a world where, if you just consider the pace of progress you're expecting versus the life cycle of legislation... You know, as you say, because of diffusion lag, the benefits are slow enough that I really do think, on the current trajectory, this patchwork

1:40:55

of state laws would prohibit a lot of them. I mean, if having an emotional chatbot friend is something that freaks people out, then just imagine the kinds of actual benefits from AI we want normal people to be able to experience, from improvements in health and healthspan to improvements in mental health and so forth. Whereas at the same time, it seems like you think the dangers are already on the horizon, and I just don't see that much.

1:41:19

It seems like it would be especially injurious to the benefits of AI as compared to the dangers of AI. And so that's maybe where the cost-benefit makes less sense to me. So there's a few things here, right? I mean, people talk about there being thousands of these state laws. First of all, the vast, vast majority of them do not pass.

1:41:37

And, you know, the world works a certain way in theory, but just because a law has been passed doesn't mean it's really enforced, right? The people implementing it may be like, oh my God, this is stupid. It would mean shutting off

1:41:52

everything that's ever been built in Tennessee. So very often laws are interpreted leniently in practice. Of course, you have to worry about the same thing if you're passing a law to stop a bad thing. You had this problem as well. Yeah.

1:42:10

Look, my basic view is, if we could decide what laws were passed and how things were done, and we're only one small input into that, I would deregulate a lot of the stuff around the health benefits of AI.

1:42:28

I think, you know, I don't worry as much about the kind of chatbot laws. I actually worry more about the drug approval process, where I think AI models are going to greatly accelerate the rate at which we discover drugs and the pipeline will just get jammed up.

1:42:47

The pipeline will not be prepared to process all of the stuff that's going through it. So I think reform of the regulatory process to bias it more towards... we have a lot of things coming where the safety and the efficacy is actually going to be really crisp and clear.

1:43:06

I mean, a beautiful thing, really crisp and clear and really effective. And maybe we don't need all this superstructure around it that was designed

1:43:19

around an era of drugs that barely work and often have serious side effects. But at the same time, I think we should be ramping up quite significantly this kind of safety and security legislation. And like I've said, starting with transparency is my view of trying not to hamper the industry, trying to find the right balance. I'm worried about it.

1:43:45

Some people criticized my essay, saying, that's too slow, the dangers of AI will come too soon if we do that. Well, basically I think the last six months and maybe the next few months are gonna be about transparency.

1:43:58

And then if these risks emerge when we're more certain of them, which I think we might be as soon as later this year, then I think we need to act very fast in the areas that we've actually seen the risk. Like, I think the only way to do this is to be nimble.

1:44:12

Now, the legislative process is normally not nimble, but we need to emphasize to everyone involved the urgency of this. That's why I'm sending this message of urgency, right? That's why I wrote Adolescence of Technology. I wanted policymakers to read it. I wanted economists to read it.

1:44:29

I want national security professionals to read it. You know, I want decision makers to read it so that they have some hope of acting faster than they would have otherwise. Is there anything you can do or advocate that would make it more certain

1:44:46

that the benefits of AI are better instantiated, where I feel like you have worked with legislatures to be like, okay, we're gonna prevent bioterrorism here, we're gonna increase transparency, we're gonna increase whistleblower protection. And I just think by default,

1:45:00

the actual things we're looking forward to here just seem very fragile to, uh, different kinds of moral panics or political economy problems. So, I don't actually agree that much in the developed world. I feel like, you know, in the developed world,

1:45:16

like markets function pretty well. And when there's a lot of money to be made on something, and it's clearly the best available alternative, it's actually hard for the regulatory system to stop it. You know, we're seeing that in AI itself, right? Like a thing I've been trying to fight for is export controls on chips to China, right? And that's in the national security interests

1:45:40

of the U.S. Like, you know, that's square within the policy beliefs of almost everyone in Congress of both parties. And, you know, I think the case is very clear; the counterarguments against it are, I'll politely call them, fishy.

1:45:57

Um, and yet it doesn't happen and we sell the chips, because there's so much money riding on it. And, you know, that money wants to be made. And in that case, in my opinion, that's a bad thing. But it also applies when it's a good thing.

1:46:15

And so, if we're talking about drugs and the benefits of the technology, I am not as worried about those benefits being hampered in the developed world. I am a little worried about them going too slow. And as I said, I do think we should work to speed up the approval process at the FDA. I do think we should fight against these chatbot bills that you're describing. Taken individually, I'm against them.

1:46:45

I think they're stupid. But I actually think the bigger worry is the developing world, where we don't have functioning markets, where we often can't build on the technology that we've had. I worry more that those folks will get left behind. And I worry that even if the cures are developed,

1:47:03

maybe there's someone in rural Mississippi who doesn't get it as well, right? That's a kind of smaller version of the concern we have in the developing world. And so the things we've been doing are, you know, we work with philanthropists, we work with folks who deliver medicine and health interventions to the developing world, to sub-Saharan Africa, India, Latin America, other developing parts of the world. That's the thing I think won't happen on its own.

1:47:41

You mentioned export controls. Yeah. Why can't the US and China both have a country of geniuses? Why won't it happen? Or no, like, why shouldn't it happen? Why shouldn't it happen? Um, you know, I think if this does happen, then we could have a few situations. We could have, like,

1:48:02

an offense dominant situation. We could have a situation like nuclear weapons, but like more dangerous, right? Where it's like, you know, kind of either side could easily destroy everything. We could also have a world where it's kind of unstable. Like the nuclear equilibrium is stable, right? Because it's like deterrence.

1:48:21

But let's say there were uncertainty about like if the two AIs fought, which AI would win. That could create instability, right? You often have conflict when the two sides have a different assessment of their likelihood of winning, right? If one side is like, oh, yeah, there's a 90% chance I'll win,

1:48:37

and the other side's like, there's a 90% chance I'll win, then a fight is much more likely. They can't both be right, but they can both think that. But this is like a fully general argument against the diffusion of AI technology; that is the implication of this world.
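To make the mutual-optimism point concrete, here is a minimal numerical sketch; the payoffs, probabilities, and the expected_value_of_fighting helper are invented for illustration and are not taken from the conversation.

def expected_value_of_fighting(p_win, prize=100.0, cost_of_war=15.0):
    """Expected payoff from fighting: win the prize with probability p_win, pay war costs either way."""
    return p_win * prize - cost_of_war

peaceful_split = 50.0  # each side's payoff from a negotiated 50/50 division of the prize

for side, believed_p_win in [("Side A", 0.9), ("Side B", 0.9)]:
    ev_fight = expected_value_of_fighting(believed_p_win)
    prefers_fighting = ev_fight > peaceful_split
    print(f"{side}: EV(fight) = {ev_fight:.0f}, split = {peaceful_split:.0f}, prefers fighting: {prefers_fighting}")

# Both sides compute 0.9 * 100 - 15 = 75 > 50, so each prefers fighting to the split,
# even though at most one of them can actually be right about winning.

Under these made-up numbers, both sides' private estimates make conflict look better than a peaceful division, which is the instability Dario is describing.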

1:48:51

Let me just go on,

1:48:53

because I think we will get diffusion eventually. The other concern I have is that governments will oppress their own people with AI. And so, you know, I'm worried about some world where you have a country where the government already is kind of building a high-tech authoritarian state.

1:49:19

And to be clear, this is about the government. This is not about the people. We need to find a way for people everywhere to benefit. My worry here is about governments. So yeah, my worry is if the world gets carved up into two pieces, one of those two pieces

1:49:33

could be authoritarian or totalitarian in a way that's very difficult to displace. Now, will governments eventually get powerful AI, and there's risk of authoritarianism?

1:49:45

Yes.

1:49:45

Will governments eventually get powerful AI, and there's risk of bad equilibria? Yes, I think both things. But the initial conditions matter, right? At some point, we're going to need to set up the rules of the road.

1:50:02

I'm not saying that one country, either the United States or a coalition of democracies, which I think would be a better setup, although it requires more international cooperation than we currently seem to want to make. But I don't think a coalition of democracies

1:50:16

or certainly one country should just say, these are the rules of the road. There's going to be some negotiation. The world is going to have to grapple with this. And what I would like is that the democratic nations of the world, those

"99% accuracy and it switches languages, even though you choose one before you transcribe. Upload → Transcribe → Download and repeat!"

Ruben, Netherlands

Want to transcribe your own content?

Get started free
1:50:31

whose governments represent something closer to pro-human values, are holding the stronger hand, have more leverage when the rules of the road are set. And so I'm very concerned about that initial condition. I was re-listening to an interview from three years ago,

1:50:49

and one of the ways it aged poorly is that I kept asking questions, assuming there was gonna be some key fulcrum moment two to three years from now, when in fact, being that far out, it just seems like progress continues, AI improves,

1:51:03

AI is more diffused, people will use it for more things. It seems like you're imagining a world in the future where the countries get together and here's the rules of the road and here's the leverage we have, here's the leverage you have when it seems like on current trajectory,

1:51:15

everybody will have more AI. Some of that AI will be used by authoritarian countries, and some of that, within the authoritarian countries, will be by private actors versus state actors. It's not clear who will benefit more. It's always hard to tell in advance. You know, it seems like the internet

1:51:29

privileged authoritarian countries more than you would have expected. And maybe the AI will be the opposite way around. So I want to better understand what you're imagining here. Yeah, yeah. So just to be precise about it,

1:51:42

I think the exponential of the underlying technology will continue as it has before, right? The models get smarter and smarter even when they get to country of geniuses in a data center. I think you can continue to make the model smarter. There's a question of like getting diminishing returns on their value in the world, right? How much does it matter after you've already solved human biology or, you know, at some point, you can do harder math.

1:52:08

You can do more abstruse math problems, but nothing after that matters. But putting that aside, I do think the exponential will continue, but there will be certain distinguished points on the exponential, and companies, individuals, countries will reach those points at different times.

1:52:27

And so, you know, could there be some... I talk about this in Adolescence of Technology: is the nuclear deterrent still stable in the world of AI? I don't know, but that's an example of one thing we've taken for granted where the technology could reach such a level that we can no longer be certain of it, at least.

1:52:49

You can think of others. There are kind of points where, if you reach a certain point, maybe you have offensive cyber dominance and every computer system is transparent to you after that.

1:53:02

Unless the other side has a kind of equivalent defense. So I don't know what the critical moment is or if there's a single critical moment, but I think there will be either a critical moment, a small number of critical moments, or some critical window where it's like AI confers some large advantage from the perspective of national security and one country or coalition has reached it before others.

1:53:34

I'm not advocating that they're just like, okay, we're in charge now. That's not how I think about it. The other side is always catching up. There are extreme actions you're not willing to take, and it's not right to take complete control anyway. But at the point that that happens, I think people are going to understand that the world has changed. And there's going to be some negotiation, implicit or explicit, about what the post-AI world order looks like. And I think my interest is in making that negotiation be one in which classical liberal democracy has a strong hand. Well, I want to better understand what that means, because you say in the essay, quote, autocracy is simply not a form of government that people can accept in the post-powerful

1:54:32

AI age. And that sounds like you're saying the CCP as an institution cannot exist after we get AGI. And that seems like a very strong demand and it seems to imply a world where the leading lab or the leading country will be able to, and by that language should get to determine how the world is governed or what kinds of governments are allowed and

1:54:58

not allowed. Yeah. So, um, I believe in that paragraph I said something like, you could take it even further and say X. So I wasn't necessarily endorsing that view. I was saying, here's first a weaker thing that I believe. But, you know, I think I said we have to worry a lot about authoritarians and we should try and check them

1:55:26

and limit their power. Like, you could take this further, much more interventionist view that says authoritarian countries with AI are these kind of self-fulfilling cycles that are very hard to displace,

1:55:40

and so you just need to get rid of them from the beginning. That has exactly all the problems you say, which is, you know, if you were to make a commitment to overthrowing every authoritarian country, I mean, then they would take a bunch of actions now that like, you know, that could

1:55:55

lead to instability. So that just may not be possible. But the point I was making that I do endorse is that it is quite possible that, you know, today the view, or at least my view, or the view in most of the Western world, is that democracy is a better form of government than authoritarianism.

1:56:18

But if a country's authoritarian, we don't react the way we would react if they committed a genocide or something, right? And I guess what I'm saying is I'm a little worried that in the age of AGI, authoritarianism will have a different meaning. It will be a graver thing. And we have to decide one way or another how to deal with that. And the interventionist view is one possible view. I was exploring such views. You know, it may end up being the right view, it may end up being too extreme to

1:56:49

be the right view. But I do have hope. And one piece of hope I have is that we have seen that as new technologies are invented, forms of government become obsolete. I mentioned this in Adolescence of Technology, where I said, you know, feudalism was basically a form of government, right? And then when we invented industrialization, feudalism was no longer sustainable, no longer

1:57:20

made sense. Why is that hope? Couldn't that imply that democracy is no longer going to be a competitive system? Well, it could, right. It could go either way, right? But these problems with authoritarianism, the fact that the problems with authoritarianism get deeper, I just wonder if that's an indicator of other problems that authoritarianism will have.

1:57:46

Right? In other words, because authoritarianism becomes worse, people are more afraid of authoritarianism. They work harder to stop it. You have to think in terms of the total equilibrium, right? I just wonder if it will motivate new ways of thinking about, with the new technology, how to preserve and protect freedom. And even more optimistically, will it lead to a collective reckoning and, you know, a kind of more emphatic realization

1:58:23

of how important some of the things we take as individual rights are, right? A more emphatic realization that we really can't give these away. We've seen there's no other way to live that actually works. I am actually hopeful that, I guess one way to say it, it sounds too idealistic, but I actually believe it could be the case, that dictatorships become morally obsolete. They become morally unworkable forms of government. And the crisis that that creates is sufficient to

1:59:00

force us to find another way. I think there is genuinely a tough question here, which I'm not sure how you resolve. And we've had to come out one way or another on it through history, right? So with China in the 70s and 80s, we decided,

1:59:14

even though it's an authoritarian system, we will engage with it. And I think in retrospect, that was the right call because it has stayed an authoritarian system, but a billion plus people are much wealthier and better off than they would have otherwise been. And it's not clear that it would have stopped being an authoritarian country, otherwise

1:59:28

you can just look at North Korea as an example of that, right? And I don't know that it takes that much intelligence to remain an authoritarian country that continues to consolidate its own power. You can just imagine a North Korea with an AI that's much worse than everybody else's, but still enough to keep power. And so in general, it seems like, should we just have this attitude that the benefits of AI, in the form of all these empowerments of humanity

1:59:55

and health and so forth, will be big. And historically, we have decided it's good to spread the benefits of technology widely, even to people whose governments are authoritarian. And I guess it is a tough question how to think about it with AI, but historically we have said, yes, this is a positive sum world

2:00:12

and it's still worth diffusing the technology. Yeah, so there are a number of choices we have. I think framing this as a kind of government-to-government decision and in national security terms, that's one lens, but there are a lot of other lenses. Like, you could imagine a world where, you know, we produce all these cures

2:00:30

to diseases and, like, you know, the cures to diseases are fine to sell to authoritarian countries. The data centers just aren't, right? The chips and the data centers just aren't, and the AI industry itself. You know, another possibility is, and I think folks should think about this: could there be developments, either ones that naturally happen as a result of AI

2:00:55

or that we could make happen by building technology on AI? Could we create an equilibrium where it becomes infeasible for authoritarian countries to deny their people kind of private use of the benefits of the technology? You know, are there equilibria where we can give everyone in an authoritarian country their own AI model that, you know,

2:01:19

defends them from surveillance, and there isn't a way for the authoritarian country to crack down on this while retaining power? I don't know, that sounds to me like, if that went far enough, it would be a reason why authoritarian countries would disintegrate from the inside. But maybe there's a middle world where there's an equilibrium where, if they want to hold on to power, the authoritarians can't deny kind of individualized access to the technology. But I actually do have a hope for the more radical version, which is, you know, is it

2:01:51

possible that the technology might inherently have properties, or that by building on it in certain ways we could create properties, that have this kind of dissolving effect on authoritarian structures? Now, we hoped originally, right? If we think back to the beginning of the Obama administration, we thought originally that social media and the internet

2:02:12

would have that property. It turns out not to. But I don't know. What if we could try again with the knowledge of how many things could go wrong and that this is a different technology? I don't know that it would work, but it's worth a try.

2:02:25

Yeah.

2:02:26

I think it's just, it's very unpredictable. Like there's first principles reasons why

2:02:30

authoritarianism might be prevalent.

2:02:31

It's all very unpredictable. I mean, we just got to recognize the problem, and then we got to try those things and assess whether they're working, or which ones are working, if any, and then try new ones if the old ones aren't working. But I guess what that nets out to today is you say, we will not sell data centers, or sorry, chips, and the ability to make chips, to China.

2:02:55

And so in some sense, you are denying some benefits to the Chinese economy, the Chinese people, et cetera, because we're doing that. And there'd also be benefits to the American economy because it's a positive-sum world. We could trade. They could have their country's data centers doing one thing. We could have ours doing another. And already, you're saying it's not worth that positive-sum stipend to empower this

"Cockatoo has made my life as a documentary video producer much easier because I no longer have to transcribe interviews by hand."

Peter, Los Angeles, United States

Want to transcribe your own content?

Get started free
2:03:18

country's AI.

2:03:19

What I would say is that we are about to be in a world where, if we're able to build these powerful AI models, growth and economic value will come very easily. What will not come easily is distribution of benefits, distribution of wealth, political freedom. These are the things that are going to be hard to achieve. And so, when I think about policy, I think that the technology in the market will deliver

2:03:49

all the fundamental benefits, you know, almost faster than we can take them. And that these questions about distribution and political freedom and rights are the ones that will actually matter and that policy should focus on. Okay, so speaking of distribution, as you were mentioning, we have developing countries and in many cases, catch-up growth has been weaker than we would have hoped for. But when catch-up growth does happen, it's fundamentally because they have underutilized labor and we can bring the capital and know-how from developed countries to these countries

2:04:21

and then they can grow quite rapidly. Obviously in a world where labor is no longer the constraining factor, this mechanism no longer works. And so is the hope basically to rely on philanthropy from the people who immediately get wealthy from AI or from the countries that get wealthy from AI?

2:04:37

What is the hope for- I mean, philanthropy should obviously play some role as it has in the past, but I think growth is always better and stronger if we can make it endogenous. So what are the relevant industries in an AI-driven world?

2:04:54

Look, there's lots of stuff. I said we shouldn't build data centers in China, but there's no reason we shouldn't build data centers in Africa. In fact, I think it'd be great to build data centers in Africa. You know, as long as they're not owned by China, we should build data centers in Africa. I think that's a

2:05:12

great thing to do. You know, there's no reason we can't also build other things: if AI is accelerating drug discovery, then there'll be a bunch of biotech startups, so let's make sure some of those happen in the developing world. And certainly during the transition, I mean, we can talk about the point where humans have no role, but humans will still have some role in starting up these companies

2:05:38

and supervising the AI models, so let's make sure some of those humans are in the developing world so that fast growth can happen there as well. You guys recently announced Claude is gonna have a constitution that's aligned to a set of values and not necessarily just to the end user. And there's a world you can imagine

2:05:54

where if it is aligned to the end user, it preserves the balance of power we have in the world today because everybody gets to have their own AI that's advocating for them. And so the ratio of bad actors to good actors stays constant. It seems to work out for our world today. Why is it better not to do that, but to have a specific set of values

2:06:12

that the AI should carry forward? Yeah, so I'm not sure I'd quite draw the distinction in that way. There may be two relevant distinctions here, which are, I think you're talking about a mix of the two. Like one is, should we give the model a set of instructions about do this versus don't

2:06:30

do this? And the other is, should we give the model a set of principles for how to act? And there, it's kind of purely a practical and empirical thing that we've observed: by teaching the model principles, getting it to learn from principles, its behavior is more consistent, it's easier to cover edge cases, and the model

2:06:57

is more likely to do what people want it to do. In other words, instead of a list like don't tell people how to hotwire a car, don't speak in Korean, don't make biological weapons, overall you're trying to get it to understand what it should be aiming to do, how it should be aiming to operate. So just from a practical perspective,

2:07:32

that turns out to be just a more effective way to train the model. That's one piece of it. So that's the kind of rules-versus-principles trade-off. Then there's another thing you're talking about, which is kind of the corrigibility versus, I would say, intrinsic motivation trade-off, which is,

2:07:51

how much should the model be a kind of, I don't know, skin suit or something, where it just directly follows the instructions that are given to it by whoever is giving it those instructions, versus how much should the model have an inherent set of values and go off and do things on its own? And there, I would actually say everything about the model is actually closer to the direction of, you know, it should mostly do what people want. It should mostly follow instructions. We're not trying to build something that

2:08:27

goes off and runs the world on its own. We're actually pretty far on the corrigible side. Now, what we do say is there are certain things that the model won't do, right? I think we say it in various ways in the constitution

2:08:41

that under normal circumstances, if someone asks the model to do a task, it should do that task. That should be the default. But if you've asked it to do something dangerous or if you've asked it to kind of harm someone else, then the model is unwilling to do that. So I actually think of it as like a mostly

2:09:04

corrigible model that has some limits, but those limits are based on principles. Yeah, I mean, then the fundamental question is, how are those principles determined? And this is not a special question for Anthropic. This would be a question for any company. But because you have been the ones to actually write down the principles, I get to ask you this question. Normally a constitution is like you write it down, it's set in stone, and there's a process of updating

2:09:29

it and changing it and so forth. In this case, it seems like a document that people at Anthropic write, that can be changed at any time, that guides the behavior of systems that are going to be the basis of a lot of economic activity. How do you think about how those principles should be set? Yes. Um, so I think there are maybe three kinds of sizes of loop

2:09:56

here, right? Three ways to iterate. One is we iterate within Anthropic: we train the model, we're not happy with it, and we change the constitution. And I think that's good to do, along with putting it out publicly, making updates to the constitution every once in a while, saying, here's a new constitution.

2:10:11

I think that's good to do because people can comment on it. The second level of loop is that different companies will have different constitutions. And I think it's useful: Anthropic puts out a constitution, and, you know, the Gemini model puts out a constitution, and other companies put out constitutions, and then people can

2:10:29

kind of look at them, compare, and outside observers can critique and say, I like this thing from this constitution and this thing from that constitution. And that creates some kind of soft incentive and feedback for all the companies to take the best elements of each and improve. Then I

2:10:48

think there's a third loop, which is society beyond the AI companies and beyond just those who comment on the constitutions without hard power. And there, you know, we've done some experiments. A couple of years ago, we did an experiment with, I think it was called the Collective Intelligence

2:11:05

Project, to, um, basically poll people and ask them what should be in our AI constitution. And, you know, I think at the time we incorporated some of those changes. And so you could imagine, with the new approach

2:11:19

we've taken to the constitution, doing something like that. It's a little harder, because that was actually an easier approach to take when the constitution was a list of do's and don'ts. Um, at the level of principles, it has to have a certain amount of coherence.

2:11:32

Um, but you could still imagine getting views from a wide variety of people. And I think you could also imagine, and this is like a crazy idea, but hey, you know, this whole interview is about crazy ideas, right? So you could even imagine systems of kind of representative

2:11:48

government having input, right? Like, you know, I wouldn't do this today because the legislative process is so slow. Like, this is exactly why I think we should be careful about the legislative process and AI regulation, but there's no reason you couldn't

2:12:01

in principle say, all AI models have to have a constitution that starts with these things, and then you can append other things after it, but there has to be this special section that takes precedence. I wouldn't do that. That's too rigid. That sounds kind of overly prescriptive, in a way that I think overly aggressive legislation is,

2:12:26

but that is a thing you could try to do. Is there some much less heavy-handed version of that? Maybe. I really like loop two, where obviously this is not how constitutions

2:12:39

of actual governments do or should work; there isn't this vague sense in which the Supreme Court will feel out how people are feeling and what their vibes are and then update the constitution accordingly. With actual governments, there's a more procedural process.

2:12:55

But you actually have a vision of competition between constitutions, which is actually very reminiscent of how some libertarian charter cities people used to talk about what an archipelago of different kinds of governments could look like, and then there'd be selection among them of who could operate the most effectively, in which place people would be the happiest.

2:13:14

And in a sense, you're actually, yeah, there's this vision. I'm kind of recreating that.

2:13:19

Yeah, yeah, like the utopia of archipelagos.

2:13:22

Again, I think that vision has things to recommend it and things that will kind of go wrong with it. I think it's an interesting and in some ways compelling vision, but also things will go wrong with it that you hadn't imagined. So I like Loop 2 as well, but I feel like the whole thing has got to be some mix of loops one, two, and

2:13:46

three, and it's a matter of the proportions, right? I think that's got to be the answer. When somebody eventually writes the equivalent of The Making of the Atomic Bomb for this era, what is the thing that will be hardest to glean from the historical record, that they're most likely to miss? I think a few things. One is, at every moment of this exponential, the extent to which the world outside it didn't understand it. This is a bias that's often present in history, where anything that actually happened looks inevitable in retrospect. And so, you know,

2:14:20

I think when people look back, it will be hard for them to put themselves in the place of people who were actually making a bet on this thing to happen. It wasn't inevitable. We had these arguments, like the arguments that I make for scaling or that continual learning will be solved. Some of us internally, in our heads, put a high probability on this happening, but there's a world outside us that's not acting on that at all. And I think the weirdness of it, and unfortunately the insularity of it: if we're one year or two years away from it happening, the average person on the street has no idea. And that's one of the things I'm trying to change, like with the

2:15:14

memos, with talking to policymakers, but, I don't know, that's just like a crazy thing. Yeah. Um, finally I would say, and this probably applies to almost all historical moments of crisis, how absolutely fast it was happening, how everything was happening all at once. And so decisions that you might think were kind of carefully calculated, well, actually you have to make that decision and then you have to make 30

"Your service and product truly is the best and best value I have found after hours of searching."

Adrian, Johannesburg, South Africa

Want to transcribe your own content?

Get started free
2:15:43

other decisions on the same day, because it's all happening so fast, and you don't even know which decisions are going to turn out to be consequential. So, you know, one of my, I guess, worries, although it's also an insight into kind of what's

2:15:58

happening, is that some very critical decision will be some decision where someone just comes into my office and is like, Dario, you have two minutes, should we do thing A or thing B on this? Someone gives me this random half-page memo, and it's like, should we do A or B? And I'm like, I don't know, I have to eat lunch, let's do B. And that ends up being the most consequential thing ever.

2:16:26

So final question. There are not many tech CEOs writing 50-page memos every few months. And it seems like you have managed to build a role for yourself and a company around you,

2:16:40

which is compatible with this more intellectual type of role as CEO. And I want to understand how you construct that, and how does it work that you just go away for a couple of weeks and then you tell your company, here's the memo, here's what we're doing. It's also reported you write a bunch of these internally.

2:16:58

Yeah. So, I mean, for this particular one, you know, I wrote it over winter break. Um, and I was having a hard time finding the time to actually write it. But I actually think about this in a broader way. I think it relates to the culture of the company.

2:17:13

So I probably spend a third, maybe 40%, of my time making sure the culture of Anthropic is good. As Anthropic has gotten larger, it's gotten harder to just get directly involved in the training of the models, the launch of the models, the building of the products. It's 2,500 people.

2:17:31

You know, I have certain instincts, but it's very difficult to get involved in every single detail. I try as much as possible. But one thing that's very leveraged is making sure Anthropic is a good place to work. People like working there. Everyone thinks of themselves as team members.

2:17:51

Everyone works together instead of against each other. And, you know, we've seen, as some of the other AI companies have grown, without naming any names, we're starting to see decoherence and people fighting each other. And I would argue there was even a lot of that from the beginning, but it's gotten worse. But I think we've done

2:18:06

an extraordinarily good job, even if not perfect, of holding the company together, making everyone feel the mission, that we're sincere about the mission, that everyone has faith that everyone else there is working for the right reason, that we're a team, that people aren't trying to get ahead at each other's expense or backstab each other, which again, I think happens a lot at some of the other places.

2:18:33

And how do you make that the case? I mean, it's a lot of things, you know, it's me, it's the other people we hire, it's the environment we try to create. But I think an important thing in the culture is that I, and the other leaders as well, but especially me, have to articulate what the company is about, why it's doing what it's doing, what its strategy is, what its values are, what its mission is, and what it stands for. And, you know, when you get to 2,500 people, you can't do that person by person.

2:19:09

You have to write, or you have to speak to the whole company. This is why I get up in front of the whole company every two weeks and speak for an hour. It's actually, I mean, I wouldn't say I write essays internally. I do two things. One, I write this thing called the DVQ, Dario Vision Quest. Um, I wasn't the one who named it that; that's the name it received. And it's one of these names that I tried to fight because it

2:19:30

made it sound like I was going off and smoking peyote or something. Um, but the name just stuck. So I get up in front of the company every two weeks, I have like a three- or four-page document, and I just kind of talk through three or four different topics about what's going on internally: the models we're producing, the products, the outside industry, the world as a whole as it relates to AI and

2:19:55

geopolitically in general, just some mix of that. And I just go through very honestly and say, this is what I'm thinking, this is what Anthropic leadership is thinking. And then I answer questions. And that direct connection, I think, has a lot of value that is hard to achieve when you're

2:20:14

passing things down the chain, you know, six levels deep. And, you know, a large fraction of the company comes to attend, either in person or virtually. And it really means that you can communicate a lot. And then the other thing I do is I have a channel in Slack where I just write a bunch of things and comment a lot.

2:20:35

Um, and often that's in response to things I'm seeing at the company or questions people ask, or, you know, we do internal surveys and there are things people are concerned about. And so I'll write them up, and I'm very honest about these things. I just say them very directly.

2:20:55

And the point is to get a reputation of telling the company the truth about what's happening, to call things what they are, to acknowledge problems, to avoid the sort of corpo speak, the kind of defensive communication that often is necessary in public because the world is very large and full of people who are interpreting things in bad faith. But if you have a company of people who you trust and we try to hire people that we trust, then you can really just be entirely unfiltered.

2:21:30

And I think that's an enormous strength of the company. It makes it a better place to work. It makes people more than the sum of their parts and increases the likelihood that we accomplish the mission, because everyone is on the same page about the mission and everyone is debating and discussing how best to accomplish it. Hmm. Well, in lieu of an external Dario Vision Quest,

2:21:48

we have this interview. This interview is a little like that. This has been fun, Dario. Thanks for doing it. Yeah. Thank you, Dwarkesh. Hey, everybody. I hope you enjoyed that episode. If you did, it's also helpful if you leave a rating or a comment on whatever platform you're listening

2:22:08

on.

2:22:09

If you're interested in sponsoring the podcast, you can reach out at dwarkesh.com slash advertise.

2:22:16

Otherwise, I'll see you on the next one.
