Sam Altman x Nikhil Kamath: How to Win When AI Changes Everything | People by WTF | Episode 13

Nikhil Kamath · 45:11

0:00

Oh, but he said where Nikhil is.

0:24

Nikhil? You've told him 5 minutes? Like he has two minutes.

0:26

Yeah, two minutes.

0:28

Everything looks good, just the monitor, the main monitor went off. Everyone who's done can leave.

0:43

Hi Sam.

0:44

Hey Nikhil. How are you? I'm good. Sorry I'm late.

0:45

I got caught up in getting ready for the launch tomorrow and lost track of time in the excitement over the final results.

0:53

Hey, no worries.

0:54

I'm guessing it must be really hectic, right?

0:55

It is a very hectic day. I have the model and I've been playing with it a little bit. How is it different, Sam? I'm not an expert at this. There are all these ways we can talk about it: it's better at this metric, or it can do this amazing coding demo that GPT-4 couldn't. But the thing that has been most striking for me is, in ways that are both big and small, going back from GPT-5 to our previous generation model is just so painful. It's just worse at everything.

1:45

And I've taken for granted that there's a fluency and a depth of intelligence with GPT-5 that we haven't had in any previous model. It's an integrated model, so you don't have to pick from our model switcher and know whether you should use GPT-4 or o3 or o4-mini, or any of the complicated things. It's just one thing that works. And it is like having PhD-level experts in every field available to you 24/7 for whatever you need, not only to ask anything, but also to do anything for you.

2:16

So if you need a piece of software created, it can kind of do it from scratch all at once. If you need a research report on some complicated topic, it can do that for you.

2:30

If you needed it to plan an event, it could do that too. Is it more agentic in nature, in the sense that with sequential tasking,

2:40

you're one step closer to it? It's much better at things like that. The robustness and reliability have greatly increased, and that's very helpful for agentic workflows.

2:55

I'm very impressed by how long and complex a task it can carry out. We've already spoken about what to invest in for the next decade, so I don't want to talk about that too much. I thought we'd keep today about first principles and how the world is changing by virtue of all that is changing in the field you dominate. So the very first thing I want to start with

3:19

is: if I were a 25-year-old boy or girl living in Mumbai or Bangalore in India. I know you've said a bunch of times that colleges are not holding on to the place of relevance they might have had when I was growing up. But what do I do now? What do I study? If I'm starting a company, what kind of company do I start? If I were to even find a job, what industry do you think

3:51

has some kind of tailwind? I'm not talking 10 years down the line, but even as close as three to five years down the line.

3:59

First of all, I think this is probably the most exciting time to be starting out one's career, maybe ever. I think that 25-year-old in Mumbai can probably do more than any previous 25-year-old in history could. It's really amazing what you can do with a tool like this. I felt the same way when I was 25, and the tools then were not as amazing as the tools we have now,

4:25

but we had the computer revolution, and a 25-year-old then could do things that no 25-year-old in history before would have been able to. And now that's happening in a huge way. So whether you want to start a company or be a programmer or go work in any other industry, create new media, whatever it is, the ability for one person to use these tools

4:55

and have great ideas and implement them with what would have taken decades of experience or teams of people is really quite remarkable. In terms of particular industries, I am very excited about what AI is going to mean for science, and the amount of science that one person will be able to discover, and the rate at which they can do that. Clearly it's transforming what it means to program computers in a huge way, definitely for startups.

5:25

If you have an idea for a new business, the ability for a very tiny team to do a huge amount of work is great. But it feels like this is just now a very open canvas. People are limited, to a degree that they've never been before, only by the quality and creativity of their ideas. The level of capability, the level of robustness, the reliability, and the ability to use this just for

6:10

a lot of tasks in life, to create software, to answer questions, to learn, to work more efficiently. This is, you know, a pretty significant step forward. Each time we've had one of these, we have been amazed by the human potential it unlocks. And in particular, India is now our second largest market in the world. It may become our largest. We've taken a lot of feedback from users

6:38

in India about what they'd like from us: better support for languages, more affordable access, much more. And we've been able to put that into this model and upgrades to ChatGPT. So we're committed to continuing to work on that. But every time we've had a major leap forward in the capability we can bring to users, we've been amazed by what those 25-year-olds go off and do in terms of creating new companies, or learning better,

7:10

getting better medical advice, or whatever else. Is there anything that I could build on top of GPT-5 today as a 25-year-old in India

7:20

that you think are low-hanging fruit, per se, that I should definitely look at? You could use GPT-5 to help you write the software for a product much more efficiently, help you handle customer support, help you write marketing and communications plans, help you review all of this. That's pretty amazing.

8:05

If I could push you to be a bit more specific and say: I get science, but what do I study? Say I've been studying engineering or commerce or arts or something like that. Is there any specific thing I should study in order to use AI to develop something in science?

8:28

I think the most important specific thing to study is just getting really good at using the new AI tools. I think learning is valuable for its own sake, and learning to learn is this meta skill that will serve you throughout life. And whether you're learning engineering, like computer engineering, or biology,

8:50

or any other field, if you get good at learning things, you can learn new things quickly. But fluency with the tools is really important. When I was in college or in high school, it seemed to me that the obvious thing to do was learn to program. And I didn't know exactly what I was going to use that for,

9:12

but there was this new frontier that seemed very high leverage and important to get good at. And right now, learning how to use AI tools is probably the most important specific hard skill to learn. And the difference between people who are really good at it, who are really AI native and think of everything in terms of those tools, and people who aren't is huge.

9:30

There are other general skills. Learning to be adaptable and resilient, which I think is something really learnable, is quite valuable in a world that's changing so fast. Learning how to figure out what people want is really quite important. Before this I used to be a startup investor, and people would ask me what the most important thing for startup founders to figure out was. And my predecessor, Paul Graham, had this answer to give to founders that has always stuck with me, and it became the motto for Y Combinator: make something people want.

10:05

And that sounds like such an easy instruction. I have watched so many people try so hard to learn how to figure that out and fail. And then I've watched many people work really hard to learn how to do that and get great at it over a career. So that's something as well.

10:25

In terms of the specifics, are you supposed to take the biology class or the physics class? I don't think it matters right now. And if I were to build on top of that: when you say learn to adapt and change and learn AI tools faster,

10:40

is there a path we could begin walking towards? In the last few weeks, I have surprised myself with how much I've used it to create a piece of software to solve some little problem that I have in my life. And it's been an interesting creative process, because I'll ask it for a first draft and I'll start using that. And then I'll say, hey, with this feature,

11:18

it would be better. Or with this other thing, I'd be able to do something differently. Or I have started using it and realized that with my workflow I really needed this. And by putting more and more of the things that I have to do into this kind of a workflow,

11:35

that's been a very interesting way for me to learn how to use this. You mentioned Paul Graham and I was reading this letter or report he wrote in 2009 where he spoke about five founders to watch out for. And I think you were 19 at the time and he mentioned you along with people like Steve Jobs and Larry and Sergey.

12:00

Why was that? You hadn't accomplished anything of note like they had. What did Paul see in you? And what do you think is the innate skill set

12:10

that sets you apart from your peers? That was very nice of him. I remember that. I remember at the time feeling that it was deeply undeserved, but I was grateful that he said that. I mean, there are a lot of people who are starting great companies. We got lucky here in many ways. We've also worked super hard. Maybe the one thing that I think we have done well here is to take a very long time horizon and think very independently.

12:48

When we started, it was like four and a half years before we had a first product, and it was very uncertain. We were just doing research, trying to figure out how to get AI to work, and we had very different ideas than the rest of the world. But I think that ability to sort of trust our own conviction over a long time horizon without a lot of external feedback was very helpful.

13:14

Is that a "we" or an "I" thing? Because I'm speaking about when you were 19.

13:18

Oh, sorry, I thought you were asking about OpenAI. Me when I was 19? I barely remember that. I don't know. I was a naive 19-year-old who didn't really know what he was doing. This is not false modesty. I think I've done impressive things now, but my own self-conception at 19 was

13:40

deeply unsure of myself and very unimpressive. If the world were to be,

13:49

the world of tomorrow were to be an AI kingdom of sorts, you're definitely some kind of prince. And I don't know if you follow Machiavelli, but he used to say a very interesting thing. He said that the prince should always appear, but not necessarily be, religious, compassionate, trustworthy, humane, and honest. I've watched

14:20

a lot of your interviews of late, and I've heard you say repeatedly that you're not formidable, and use words like that. This projection of humility, is it apt for the

14:34

world we live in, or the world you're walking into? I'm not sure it's apt for either one. But when I was 19, to go back to what you said, I assumed that the people running these big tech companies really had it all figured out: somebody had a plan, and there were these people who were way different than me who understood how to do things, and the companies were running very well, and there was not much drama, and everything was being handled by the adults. And now that I'm supposed to be the adult in the room, I will tell you: I think no one has a plan. No one really has it all working smoothly. Everyone, or at least I, is figuring it out as they go.

15:27

And you have things you want to build and then things go wrong and you bet on the wrong people or the right people and you get a technological breakthrough here or you don't there. And you just keep putting one foot in front of another and you put your head down and work

15:43

and you have these tactics, and some of them work. You try something and the market reacts in some way, or your competitors react in some way, and you do something else. My conception is that everybody is kind of figuring it out as they go. Everybody is learning on the job. And I don't think that's false humility. I think that's just the way the world works. And it's a little strange to be on this side of it,

16:19

but that is what it seems like.

16:21

I'm not even speaking so much about the authenticity of the humility or not, but more again from the lens of somebody starting to build for tomorrow, because that's who we speak to.

16:34

Is that a good image to project into the world? Does the image of humility really work today? You mean, should someone project that image? I mean, I certainly have a very negative reaction to

16:54

people who are

16:59

projecting certainty and confidence when they don't really know what's going to happen. And the reason is not just because it's annoying, which it is, but also because I think if you have that kind of a mindset, it is harder to have the culture of intellectual openness and to sort of listen to other viewpoints and make good decisions. A thing I say all the time to people is,

17:30

no one knows what happens next. And the more you forget that, and the more you're like, I am smarter than everybody else, I think you just make worse decisions. So having that kind of open mindset and the sort of like curiosity and willingness to adapt to new data and change your mind, I think that's super important.

17:59

The number of times we have thought we knew something, only to get smacked in the face by reality, has been a lot. And one of our strengths as a company is that when that happens, we change what we're going to do. I think that's been really great and a big part of what's helped us succeed.

18:19

So maybe there are other ways to succeed and maybe projecting a ton of bravado into the world works, but the best founders that I have watched up close throughout my career have all been more like the sort of quick learning and adapting style.

18:42

And you probably know more about this than most because of your role at Y Combinator. I have a lot of data points on it, at least. When we met in Washington a couple of years ago at the White House, I remember when we were speaking

19:00

and you went somewhere, I was speaking to your partner. You guys had a kid. We did. And how is that? It is my favorite thing ever. I mean, I know that I have nothing that is not a cliche to say here. But it is the coolest, most amazing, most emotionally overwhelming thing, in the best ways and the hard ways. Everything everyone said about how great it is, how intense it is,

19:26

how it's like a kind of love you didn't know you could feel. It's all true. I have nothing to add other than I strongly recommend it. And I think it's been really wonderful. It's amazing.

19:37

So I ponder on this a lot, Sam: kids, why people have kids, and also questions like what happens to religion and marriage tomorrow?

19:49

Can I ask you why you had a kid? Family has always been an incredibly important thing to me. I didn't even know how much I underestimated what it was actually going to be like, but it felt like the most important and meaningful and fulfilling thing I could imagine doing, and it has, so far, so early, exceeded all expectations.

20:18

Do you think it's the biological need to procreate? I don't know. This seems like a thing that is so deep it's difficult to put into words. But I feel confident. Everyone I know, looking back on their life, who has had a great career and had a family, has either said, you know,

20:45

I'm so glad I took the time to have kids. That was one of the most important things I've ever done. Or they've said, that was by far the best thing I've ever done. That was way more important

20:54

than any of the work I ever did. And I was willing to take the leap of faith that that would be true for me too. And it certainly seems like it will be. And if it is just a biological hack, I don't care. I'm so happy. There's a sense of responsibility, and family is the word

21:16

that keeps coming to mind that is just really great.

21:19

The world seems to be having fewer kids. And do you have an insight into the future of marriage,

21:28

religion and kids? Yeah, I hope that creating family, creating community, whatever you want to call it, will become far more important in a sort of post-AGI world. I think it's a real problem for society that those things have been in retreat. That just feels strictly bad to me. I'm obviously not sure why that's been happening, but I hope we'll really reverse course on that.

21:59

And in a world where people have more abundance, more time, more resources and potential and ability, I think it's pretty clear that family and community are two of the things that make us the happiest. And I hope we will turn back to that.

22:20

As societies get more affluent, if one were to buy into the mimetic desires of people, we all tend to want what other people want, not necessarily what other people have.

22:35

If we all had more, do you think we would still want more if we all had enough? I do sort of think that human demand, desire, the ability to play status games, whatever, seems pretty limitless. I don't think that's necessarily bad. Or, not all bad.

23:00

But yeah, I think we will figure out new things to want and new things to compete over.

23:05

Do you think the world largely retains the current model of capitalism and democracy? Let me give you a scenario. What happens if a company X, let's say OpenAI, gets to the point where it is 50% of world GDP? Does society allow for that?

23:32

I would bet not. I don't think that will happen. I think this will be a much more distributed thing. But if for some reason that did happen, I think society would say, we don't think so. Like, let's figure out something to do here. The analogy I like most for AI is the transistor, which was this really important scientific discovery that for a while looked like it was going to capture a ton of the value, and then turned out to be something that just gets built into tons of products and services.

24:05

You're using transistors all day long. It's just in everything, and all these companies make incredible products and profits from it in this very distributed way. I would guess that's what happens, and it's not like one company is ever half of global GDP.

24:20

At one point I did worry that that might happen, but I think that was a naive take. But do you think the odds of the world moving towards socialism go up? Or if something gets that large, will it get nationalized and we become more socialist?

24:36

I don't know if something will get nationalized, and I don't know if the world will officially turn towards socialism, but I expect that social support, or redistribution, or whatever you want to call it, will increase over time as society gets richer and as the technological landscape shifts. I don't know what

24:56

way it's going to happen, and I expect in different countries it'll happen in different ways. I think you'll see experimentation with new kinds of sovereign wealth funds, new kinds of universal basic income ideas, redistribution of AI compute, I don't know exactly what. But I suspect we'll see a lot of experimentation in society here.

25:15

On universal basic income, I think Worldcoin was a very interesting experiment.

25:20

Can you tell us a bit about what's happening there? We have all this AI coming. We really want to care about humans as special. Can we find a privacy-preserving way to identify unique humans, and then create a new network and a new currency around that? It's a very interesting experiment, still early but growing quite fast. If AGI eliminates scarcity, or reduces scarcity by virtue of increasing productivity, could one also assume that it would be deflationary in nature? Capital

25:58

or money loses its ability to return a rate of return, and capital no longer remains a moat in the world of tomorrow?

26:11

I feel confused about this. I think if you look at the basic economic principles, it's supposed to be hugely deflationary. And yet, if the world decides that building out AI compute today is super important to things tomorrow, maybe something very strange happens with the economy and maybe capital is really, really important because every piece of compute is so valuable.

26:52

I was asking someone at dinner the other night if they thought that interest rates should be minus 2% or 25% and he kind of laughed. He's like, well that's a ridiculous question. And then he stopped and said, actually I'm not sure. It should be deflationary eventually, but I could see it being weird in the short term.

27:08

That's actually a very interesting thing to say. Do you suspect it would be minus 2%?

27:14

Eventually. But I'm not sure. And maybe it's just that we're in this massive expansionary time where you're trying to build the Dyson sphere in the solar system, and you're borrowing money at crazy rates to do that. And then there's more expansion beyond, and more and more. And I don't know, I find it very hard to see more than a few years into the future at this point.

27:38

Since the conversation we were having a couple of weeks ago, I've been doing more research on the sectors you had suggested. I think we agreed on an older and thicker world. You also made a case that as discretionary spend goes up, gateway luxury brands might do well. What happens to them in a deflationary world? Because the value of these purchases goes down?

28:06

Maybe not. I mean, in a deflationary world, some things can face huge deflationary pressure and others can be the sink for all of the extra capital. So I'm actually not sure they do go down in a deflationary world. I would bet they go up, actually.

28:20

Yeah?

28:21

I think so. Because the excess capital has to flow somewhere.

28:25

When you look at classical economic theory, like Adam Smith and stuff like that, the Austrian school always spoke about the marginal utility of things. If you have one kettle at home to make tea, it has x in value. When you have two kettles, it still has some value.

28:50

But when you have 20 kettles, it has no value. Do you think the world goes in that direction? Yeah, so 20 kettles doesn't help you. But say you spend two hours a day playing video games or whatever, and that amount of time is fixed, and so you don't need 20 hours' worth of video games. If that two hours of games gets better and more entertaining forever and ever, and just keeps getting more and more compelling, that still has value to you. And I think there are a lot of categories where we will find that people can just get much better stuff,

29:25

even if they don't necessarily get more of them. I think we'll see this in a really big way.

29:32

Do you think there's a use case for the wrappers that are getting built on these large models right now? Like, I was in the US recently and I met Harvey, for example.

29:42

What happens to a wrapper like that? Does it get innovated out by the model itself at some point in time? Some of them yes, and some of them no. Sometimes you can obviously predict what's going to happen to a company down the line. The main thing I would say is, using AI itself does not create a defensible business. You see this with every technology boom, where people are like, well, I'm a startup doing X, and because I'm using the latest technology trend, the normal rules of business don't apply to me. And that's never true.

30:25

You've always got to parlay the advantage that comes from using the new technology into a durable business with real value that gets created. So, you know, you can definitely build an amazing thing with AI, but then you have to go build a real defensible layer around it.

30:50

If I were to build a business on top of your model, let me give you the example of Amazon, for example. If I sold a certain kind of t-shirt, and I sold a lot, and Amazon had all the data, eventually Amazon probably started a white-labeled brand which was very similar to mine and cannibalized my business almost. Should one worry that will happen here as well because you're no longer just a model but you're foraying into so many different businesses?

31:22

I'd come back to that example of the transistor. We are building this general-purpose technology that you can integrate into something in a lot of ways, but we keep following our equivalent of Moore's Law,

31:45

and the general capability keeps going up. If you build a business that gets better when the model gets better, then you should keep doing well as we continue. If, when the model gets better, your business gets worse because the wrapper was too thin, or whatever, then that's probably bad in the same way

32:15

that it's been bad in other technology revolutions. So there are clearly companies building on top of AI models that are building durable relationships with their customers for themselves. Cursor is a recent example of a company that is just exploding in popularity and has really durable relationships with their customers. And then there are many others that don't. It does seem to me like there are more companies getting created now than in previous technological

32:47

revolutions that feel like they have a chance at real durability. Maybe an example we could use to ground this is when the iPhone first came out and the App Store first came out: the first set of apps were pretty light, and a lot of them ended up being features that kind of made it into future versions of iOS. You know, you could sell a flashlight app for a dollar that turned on the flash on your phone, and you made a lot of dollars doing that, but it wasn't sticky, because eventually Apple just added that into the operating system where it

33:19

belonged. But if you started something that was complicated and the iPhone was just an enabler for it, like Uber, that was a very valuable long-term thing to do. And in the early days, like the GPT-3 days, I think you had a lot of toy applications, as you should, many of which didn't need to be standalone companies or products. But now, as the market has matured, you're really seeing some of these

33:45

more durable businesses form. If it's a product company, the exchange happens once, but if it's a service company, it happens repeatedly, and there is room for me to build in taste in that transaction, which is repetitive in nature.

34:12

Yeah, generally I agree with that.

34:15

A small part of my world, or a part of my world, is creating content, which I do once a month. Say a model, to a large extent, is able to factor in my vintage, my tenure, and my evolution, and throw out an output which is predictive in nature with a fair degree of efficiency. Now, if I behave in the same predictable manner, will I tomorrow be less valuable than if I were contrarian? Contrarian not to the world, but contrarian to my own behavior, almost.

34:54

So do you think the world inordinately favors contrarian behavior tomorrow? Yeah, that's a good point. I think so. The thing I'm wondering is how much the models will learn to do that. You know, you want to be contrarian and right. Most of the time you're contrarian,

35:16

you're contrarian and wrong, and that's not that helpful. But yeah, I bet the ability to come up with the kind of contrarian and right idea is something the models today just can't do at all, and maybe they'll get better at it at some point.

35:31

The value of that should go up over time. Getting good at doing things models can't do seems like an obvious increase in value.

35:42

Outside of being contrarian, is there anything else that I could do that a model will take longer to learn?

35:45

Look, the models are going to be much smarter than we are, but there are a lot of things that people care about that have nothing to do with intelligence. Maybe there can be an AI podcast host that is much better than you at asking interesting questions and, you know, kind of engaging, whatever. And I personally don't think that AI podcast host is likely to be more popular than you. People really

36:13

care about other humans. This is very deep. People want to know a little bit about your life story. What got you here? They want to be able to talk to other people about this shared sense of who you are. And there's some cultural and social value in that. We are obsessed with other people.

36:29

And-

36:30

Why is that Sam? Why do you think that is?

36:34

I think that's also deep in our biology. And, you know, to go back to an earlier comment, you don't fight things that are deep in biology. I mean, I think it makes a ton of sense why we would evolve that way, but, you know, here we are. So we're going to keep caring about real people.

36:53

And even if the AI podcast host is much smarter than you, I think it's very unlikely he'll be more popular than you.

36:58

So in a perverse way, being stupider will be more novel than being smart.

37:05

I don't know if it's stupider or smarter that has the novelty, but I think being a real person in a world of unlimited AI content will increase in value. Is a real person somebody who screws up, unlike a model? I mean, certainly real people do screw up, so maybe that's part of what we associate with a real person. I'm not sure. But I do think that just knowing whether or not it's a real person is something we extremely care about. What is the difference between AGI and human intelligence, today and tomorrow? So with GPT-5, you have something that is incredibly smart in a lot of domains at tasks that take, you know, seconds to a few minutes. It's very superhuman at knowledge, at pattern recognition, at recall on these shorter-term tasks. But in terms of figuring out what questions to ask, or working on something over a very long period of time, we are definitely not close to human performance. And an interesting example that one of our researchers gave me recently is, if you look at our performance in math: a couple of years ago, we could solve math problems that would take an expert human a few minutes to solve.

38:29

Recently, we got gold-level performance on the International Math Olympiad. Each of those problems takes about an hour and a half. So we've gone from a thinking horizon of a few minutes to an hour and a half. To prove an important new mathematical theorem

38:43

maybe takes like a thousand hours. You can try to predict when we'll be able to do a thousand-hour problem, but certainly in the world today we cannot do it at all. So that's another dimension where AI can't yet do what humans can.

38:55

So I was in the US between San Francisco and New York the last couple of months, and I met a whole bunch of AI founders. The one thing everybody seemed to agree on is, like for AI, the US is a few years ahead of most others. They also thought that for robotics, China seems to be ahead. Do you have a view on robotics and what happens there, like humanoid or other form of robotics?

39:31

I think it will be incredibly important in a couple of years. I think one of the things that is going to feel most AGI-like is seeing robots just walk by you on the street doing kind of normal day-to-day tasks.

39:46

Is there a reason why they need to have human-like form?

39:49

Well, you can certainly have non-humanoid forms, but the world is really built for humans. You know, door handles, steering wheels in cars, factories: a lot of this we built for our own kind of morphology. So there will of course be other specialized robots too, but the world is built, and I hope stays built, for us.

40:25

Say I want to start a robotics company, but somebody else has manufacturing scale, and I really want, as an Indian guy, to be able to build and compete there.

40:35

How do I make up for manufacturing scale as someone starting up? Eventually, we are interested in robots too, so we're thinking about this. And it's definitely a new skill for us to learn. I know you will likely not speak about what you're doing with Jony Ive and what happens there, or maybe you will.

41:07

But what happens to form factor overall?

41:10

One of the things that I think will be defining about the difference between AI and the way we've previously used computers and technology is that you really want AI to have as much context as possible, do stuff for you, and be proactive. A computer or a phone, you know, is kind of either on or off. It's in your pocket, or it's in your hand and you're using it. But you might want AI to just be, you know, like a companion with you throughout your day, alerting you in different ways

41:49

when it can do something to help you or when there's something really important you need to know or reminding you of something that you said you needed to do earlier in the day. And the current form factors of computers are I think not quite right for that.

42:04

They do have this either-on-or-off binary that I think isn't quite what we want for the sort of sci-fi dream of the AI companion. As for the form factor that enables that, you could imagine a lot of things. People are talking about glasses and wearables and little, you know, things that sit on your table. I think the world will experiment with a lot of those, but this idea of ambiently aware physical hardware feels like it's going to be important.

42:35

Is that the form factor with Jony Ive, like a sensor?

42:39

So we'll try multiple products, but I think this idea of trying to build hardware that an AI companion can sort of embody itself in will be an important thread.

42:53

The last two things I want to ask you, Sam, is one about fusion, because I know you've invested in Helion and you're a big proponent of it. Does that solve the climate change problem? Would you put money on that?

43:10

It certainly helps a lot. I suspect we've already done enough damage to the climate that we're going to have to undo some of it, even if we were to switch to fusion right away. But it would certainly be a great step forward.

43:23

And the last question I have for you, Sam, is the question I care most about. What's in this AI realm for India as a country? What's the opportunity for us?

43:36

As I mentioned earlier, I think India may be our largest market in the world at some point in the not very distant future. The excitement, the embrace of AI in India and the ability for Indian people to use AI to just sort of leapfrog into the future

43:54

and invent a totally new and better way of doing things, and the economic benefits that come from that, the societal benefits. If there is one large society in the world that seems most enthusiastic to transform with AI right now, it's India. The energy is incredible. I'm looking forward to coming soon, and it's really quite amazing to watch.

44:17

And I think the momentum is unmatched anywhere in the world.

44:25

I feel like the question really is how do we transition from being a consumer to being a producer where we can build something that other people use

44:35

outside of India? I think that's really happening already in a big way. The entrepreneurial energy around building with AI in India is quite amazing and we hope to see much more of it.

44:53

Right.

44:54

Yeah.

44:55

Super. Thank you, Sam, for doing this. Great, thank you for having me. Thank you.

45:00

Appreciate it.

45:01

I'm gonna message you.

45:02

Okay, good to see you. Okay, good to see you. Thanks for doing this.
