Altman speaks at BlackRock's US Infrastructure Summit

OpenAI CEO Sam Altman speaks at BlackRock's U.S. Infrastructure Summit in Washington, D.C. Read the transcript here.


Copyright Disclaimer

Under Title 17 U.S.C. Section 107, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing.

Interviewee (00:00):

... The other one is: when can a CEO of a major company, a president of a major country, a Nobel Prize-winning scientist, when can they not do their job without making heavy use of AI? This doesn't mean that there will be an AI CEO or an AI president, but it does mean that the role of, let's say, a human CEO, when I think about my job, it's really quite different. You still do need a person to stand behind decisions and exercise human judgment and all of the understanding that we expect out of someone running an important organization.

(00:40)
But there are actual parts of my role that I will increasingly have to rely on an AI to do, because no human can. No human CEO can talk to every employee at a company, every customer, be in every meeting, be an expert in every field. And so more and more, I think, these jobs will be about supervising a bunch of AIs, providing oversight, deciding how to trust the outputs, how to provide guidance. And that threshold, of when you really wouldn't want to be doing your job running a large organization without heavy reliance on AI, I think that's another interesting threshold. It may take a little bit longer, but probably not a lot longer.

Interviewer (01:30):

And as you do your job, how much are you finding yourself relying on some of the agents and some of the artificial intelligence that we are developing at OpenAI?

Interviewee (01:42):

It's ramping incredibly quickly. If I have a new idea for a business model, a strategy shift, a product offering we should do, the very first thing I do before I even bounce it off somebody else is to ask our tools. And as they get more context, and I think this really is the next big thing to happen, is they can get close to full context of our company, access to all of our internal docs, communication, code, customer data, everything, the quality of the answers, thought, whatever you want to call it, gets better and better.

Interviewer (02:20):

Right. Okay. So let's shift a little bit. Two weeks ago, you announced a $110 billion funding round. I asked ChatGPT, how does that compare to any other fundraising that's been done in the public markets?

Interviewee (02:37):

I actually don't know.

Interviewer (02:39):

Four times as large. Okay. The largest public offering ever done was roughly 25 billion that Aramco did several years ago. And the public markets are supposed to be the broadest and deepest sources of capital. Okay? Three strategic partners, Amazon, NVIDIA, and SoftBank. Tell us a little bit about this. How is this an inflection point of the company? And one of the questions somebody asked me is, what are we spending all that money on?

Interviewee (03:12):

There's many hard parts of this business, but one of the hardest ones is the infrastructure is so expensive. You need so much of it and you have to commit so far in advance. I have never seen any other industry quite like this. I mean, there have been clearly many capital intensive industries throughout history, but as I look at what's to come in front of us in future years, if the ramp stays as steep as it looks like it is right now, the demand is growing as fast as it's growing, you have to do some pretty unusual things. OpenAI does a lot of things that look weird. We spend a ton of money on infrastructure in advance of revenue. We do new business models like ads that seem like maybe not the most profitable thing we could do. A long list of other things, but we have this fundamental belief in abundance of intelligence and that one of the most important things in the future is that we make intelligence, to borrow an old phrase from the energy industry that didn't quite work, too cheap to meter.

(04:22)
We want to flood the world with intelligence. We want people to just use it for everything. We want this to be something that the future generation doesn't think about; they expect that everywhere, everybody has access to geniuses, as many as they need, in any area that they need. And this principle, which is one of our top guiding principles, does lead to a lot of behavior that would look less natural for other companies. And one of those is that we really want to get out of this world we have been in, and that we think we are on a trajectory to stay in unless we change what we do, of always being capacity constrained.

Interviewer (05:00):

Right. And capacity constrained, you mean compute?

Interviewee (05:03):

Yeah.

Interviewer (05:04):

And I've often heard you say a lot that compute is revenues. You want to talk a little bit about how you think about that?

Interviewee (05:14):

Fundamentally, our business, and I think the business of every other model provider is going to look like selling tokens. They may come from bigger or smaller models, which makes them more or less expensive. They may use more or less reasoning, which also makes them more or less expensive. They may be running all the time in the background trying to help you out. They may run only when you need them if you want to pay less. They may work super hard, spend tens of millions, hundreds of millions, someday billions of dollars on a single problem that's really valuable.

(05:46)
But we see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for. The demand that we see for that seems like it's going to continue to just go like this. And if we don't have enough, we either can't sell it, or the price gets really high and it kind of goes to rich people, or society makes a bunch of central planning decisions, which I think almost always go badly, about "we're going to use our limited compute supply for this and not that." So the best thing to me, throughout all the history of capitalism, innovation, whatever you want, is to just flood the market.

Interviewer (06:35):

Yeah. And obviously a critical part of addressing that compute demand is Stargate. And this is an infrastructure conference, and you announced that several months ago, or almost a year ago, I guess it was. How's it going in the US? And then how is it going elsewhere, because there's also a Stargate in Abu Dhabi?

Interviewee (06:59):

Yeah. Yeah. Honestly, there are many cool parts of the job. One of the coolest is getting to go visit these mega data centers under construction and in operation. Just the scale of these gigawatt campuses, it's really hard to explain. You see these photos and it's like, "Okay, it does look big." And then you go there and you're walking through it from building to building, and there are just 10,000 people there, all these different skilled trades doing all these different things. And it looks like a spaceship inside. It's really quite incredible. We are training right now, on the first site in Abilene, what I think will be the best model in the world, hopefully by a lot. And it's so amazing to have gone from many visits while it was under construction, just really internalizing the scale and incredible complexity, to the day that one researcher at OpenAI types in one command, presses enter, and an unbelievable number of GPUs spin up and start doing this one huge computation all together. It's very cool.

Interviewer (08:04):

And what have been the most pleasant surprises and what have been the hardest things about getting it up and running and scaling it going forward?

Interviewee (08:16):

I mean, there have been all the expected challenges and then the unknown unknowns. There was a crazy weather event in Abilene that was outside what we had planned for, and that brought things down for a little while. There are all the supply chain challenges. Anything at this scale, just so much stuff goes wrong. Lots goes right, but trying to build something around the clock with such complexity, there's all the stuff that goes wrong. One of the biggest surprises on the upside is how many different organizations had to come together to do this in an incredibly short period of time, and how much we all ended up working together as one team under a lot of pressure.

Interviewer (09:00):

Mm, Mm. And of course, the power demand part of it is the one that a lot of people are focused on. Are you optimistic that we will solve that challenge first in the U.S. and then we can talk about it in other places?

Interviewee (09:20):

In the long term, I am.

Interviewer (09:22):

Okay.

Interviewee (09:24):

I have no doubt we will figure out how to build huge amounts of power generation. AI will help, of course. But I think the portfolio the world has in front of it, gas, solar, nuclear fission, nuclear fusion, more... I feel good about what we will be capable of and what we will eventually do.

(09:47)
Given the demand growth we are seeing, I am hoping for a miracle in terms of figuring out how we can get way more efficient per watt with models, to give us time to build out all of this infrastructure. Now, the track record there has been incredible. People cite whatever amazing statistic they like about how much more efficient our models... our industry's models have gotten over time, but here's one that I think is incredible.

(10:17)
Our first reasoning model, called o1, came out 16 months ago. And our latest model, where we've now integrated reasoning, is 5.4. To get the same answer to a hard problem from that first model to 5.4 has been a reduction in cost of about 1,000x. Maybe I was a little bit wrong on the timeline, maybe it was a little bit longer, but in any case, from o1 until now: 1,000x.

(10:51)
That is unbelievable in a relatively short period of time, and there are two things I think that points to. One, we are still so early in this paradigm, and we have so much more to gain in our understanding of how to develop these models and train them and run them efficiently; we were then, and still are, doing things in dumb ways, and we will get better and better. And two, human ingenuity and the ability to operate under constraints and find ways to solve problems almost always surprises you on the upside.

(11:33)
So, it's not just that the models have gotten better, it's that we have figured out... Kernel engineers came to help figure out how to write more efficient kernels and power engineers and the people that design data centers found more efficient ways to do that. So, people are answering the call well beyond just the model side to make this more efficient.

Interviewer (11:49):

Right. And then of course, obviously OpenAI is a big customer of the NVIDIAs and AMDs of the world, but I think we signed a big contract with Amazon, but we're building our own chip.

Interviewee (12:01):

Yeah.

Interviewer (12:03):

What's the thinking behind that?

Interviewee (12:05):

So, the chip we're doing is inference only.

Interviewer (12:07):

Okay.

Interviewee (12:07):

And the thinking behind it is that, as this industry rises to meet the problems in front of us, we think a specialized chip, designed to be not necessarily the fastest inference chip but the cheapest inference chip, the most efficient per watt given the constraints we see in front of us, is going to be important for all of the agent demand we see in the future. So, it's an opinionated bet, it's a limited chip, but the thing that it does, in a world where we're energy constrained, I think will be very important.

Interviewer (12:37):

Maybe you can explain, because I'm not sure everybody in the audience is AI proficient with the difference between an inference chip and a training chip.

Interviewee (12:46):

Sorry, I should've done that first. So, there are two main phases to AI workloads. Now, they're going to blend together eventually, and it'll all be one continuous thing. But right now, first we train a model: a gigantic number of GPUs crunch for weeks or months on a bunch of data. And you can think of that like the 22 years it maybe takes you in life to get your education: you learn a bunch of stuff starting when you're a baby, you drop things and see that they fall, and then eventually, in college physics, you understand at a very detailed level what's going on there.

(13:25)
And then after that, if you ask a model to solve a physics question, that's called inference, and that's quite efficient. But this is true for people as well. And when people talk about, "Oh, these AI models are so inefficient," they're usually comparing a model's training to the one second that an adult takes to solve a physics problem, forgetting the 22 years that the human took to train. If you compare the model solving the physics problem to the human solving the physics problem, the models are actually probably already more energy efficient. But training is this massive amount of processing power that then produces, really, just a file of numbers that you can then pose a question to and get a response.

Interviewer (14:07):

And are you optimistic about the progress we're making on the chip?

Interviewee (14:11):

Yeah. We should have the first chips deployed at scale by the end of this year. We should have the first chips back just in a few months now and it looks like it'll be really good.

Interviewer (14:21):

Fantastic. So, you announced a new partnership this morning with the North American Building Trades Unions to expand training pathways for skilled infrastructure workers, which has been a subject of many of the discussions today. Can you tell us a little bit about what you and Sean McGarvey agreed-

Interviewee (14:40):

Yes.

Interviewer (14:40):

... and what's... And he's going to be on stage later on, so let's hear your version of it and then see if his version's the same.

Interviewee (14:47):

So, we talked earlier and we've talked, the world has talked many times in the past about the need for AI infrastructure, the need for the world to have the physical infrastructure. Power plants, transmission lines, data center halls, chillers, obviously the racks and the GPUs and everything that goes inside of those. And I wish everyone would go visit at some point one of these mega scale data centers because it's hard... When you ask ChatGPT a question, you get your answer back and it's really hard to visualize the scale of what it took to make that or what it takes to make that happen.

(15:36)
People talk about all of these different places where we're limited. We are limited in the number of turbines, or now it's the voltage transformers, or it's the memory fabs, or it's building the data centers, whatever. All of these things have one thing in common, which is that they are massively complex physical infrastructure that requires skilled tradespeople, and a lot of them, to build. And no matter where the choke point in the supply chain is at any one time, when I talk to people about what it would take to accelerate it, the answer is more skilled trades workers to build out all of this infrastructure that we all depend on. I think these will be incredible jobs. More than that, I think they will lay the foundation for the next generation of American infrastructure and economic prosperity, and we are thrilled to get to work together to drive that faster.

Interviewer (16:31):

Just so you know, we at BlackRock share that view and we think it's really exciting that you announced this partnership today. We think that's really, really terrific. I think another question that's on people's minds is competition with China. Where do you think that stands? And what do we need to do to make sure... I assume we're ahead, but I don't know. So, you might tell us whether we are or not. What do we need to do to make sure we stay ahead?

Interviewee (17:01):

So, a general framework... A thought first, and then I'll answer the question more specifically. I think the discovery of deep learning is closer to discovering an element or a fundamental property of physics than it is to a secret technology. And that means that eventually, and eventually probably not being very long, the fundamental ideas that make a model so capable will be simplified, they'll be very well known, and just like we understand how big parts of physics work, we will understand, as a scientific principle, how big parts of artificial intelligence work. We started to appreciate this with the scaling laws that OpenAI published maybe seven years ago now.

(18:01)
There was such a measurable, beautiful correlation between the resources that go into a model and the intelligence of that model that it felt kind of hair-raising, but clear, at the time that there was just something fundamental going on here as a scientific principle.

(18:16)
Now, there have been a lot of details we've discovered since, there will be more to come, but like other scientific frontiers, it is simplifying and becoming more clear over time. And eventually, this recipe will be well understood as a scientific principle. It will not be a trade secret in the sense that other things have been.

(18:36)
Now, the analogy I like best from technological history is the transistor. The transistor was also a sort of fundamental scientific breakthrough, very hard to discover, kind of chancy to discover. It took us a little while to refine the discovery, but once we understood it, the scientific principle was clear to everyone.

(18:57)
There was still a massive amount of operational knowledge that grew up around that. TSMC can still do things that no one else in the world can do. I expect the industrial process around this to have a lot of competitive advantages. I also expect there will be a lot of differentiation in the integration into workflows, the training data, and the usability of models. Maybe most of all, I expect there will be differentiation on who has the infrastructure and how much of it. But the fundamental scientific principles are going to be well known, and they will fit on a T-shirt, I think.

(19:37)
In terms of where we are: on the most capable models in the world, the frontier, the US is leading. On the cheapest inference for a model two generations behind the frontier, China is leading. On infrastructure, the US is currently leading, but China is moving much faster. On that kind of industrialization, productization, whatever you want to call it, the US is leading in closed source and China is leading in open source. I think the US is probably leading overall.

Interviewer (20:10):

And you were recently in India and sounded like you were very excited about how India is thinking about the challenges and the opportunities.

Interviewee (20:19):

I was blown away talking to Indian startups and how they are using this technology. I got there and someone handed me a briefing sheet for India and it was like Codex usage in India has like 10x'd in some small number of months and I was like, "That's got to be a bug."

Interviewer (20:40):

It can't be right.

Interviewee (20:41):

It can't be right. But it was true. And then I started talking to these startups, and it's an even stronger version of what's happening in the US, of people saying, "Hey, the world is different. You talk about a one-person startup; I'm trying to build a zero-person startup. I'm trying to just write a prompt that's going to make my whole startup and write my software, do my customer support, do my legal stuff, whatever, and then I want to go on vacation."

(21:14)
And the big companies in India were just saying, "How much capacity can we buy from you? How long can we reserve it for? Can we negotiate this right now? We're not going to let you leave the room until you agree on this with us." Just the level of aggression and speed, and the belief that AI was going to reshape the business landscape in India, was really quite impressive. Yeah.

Interviewer (21:43):

Is that different from when we talk to customers in the US?

Interviewee (21:48):

It's the same vector, but they seem a little further along or moving faster.

Interviewer (21:53):

Yeah. The other comment I think I saw you made was that there's a difference between autocratic and democratic AI. What did you mean by that and what do you think is at stake?

Interviewee (22:09):

Once in a while, I think, a technological shift comes along that reshapes society to such a degree that decisions about it do not belong to the handful of companies that happen to be developing it. I am a huge believer in capitalism. I'm a huge believer in the rights of companies. I am a huge believer that governments shouldn't interfere too much. But I think this is one of those exceptional times where society has a legitimate interest in what the impact of this technology is going to be.

(22:52)
I think the internet was one of these times too, and I don't think we got all of that right, but I'd like us to learn what we can do better. And if what the AI companies say, and I also believe, comes true, that this is going to reshape the economy, this is going to reshape geopolitical power, this is going to change how we all live our lives, then I don't think it should be up to companies or a government to impose a particular will on how this is going to get used.

(23:33)
I think that this belongs to the will of the people, working through the democratic process. And companies like ours are, I think, moving, more quickly than companies of previous generations have had to, into a sort of critical infrastructure role, where we have to say, "We create this technology, we are experts in it, we should have a real voice, and we have opinions and an understanding of where its limitations are and where it's not ready to be used and where great harm could come." But the rules, the limitations, have to be agreed upon by society through this process. And because the technology is moving so fast, it'd be great for that democratic process to run a little bit faster, but governments also need to be able to depend on companies like ours to integrate and be able to use the technology.

Interviewer (24:31):

Yeah. So come back to the global AI race. Where do you think the US is the most vulnerable?

Interviewee (24:48):

Three things come to mind. One, there's been a ton of noise made about the global supply chain dependence of US infrastructure. I don't have anything new or deep to say there, but I can't overstate how scary this is to me. If we fall behind on infrastructure and can't catch back up, if globalization falls apart in any of the many ways it could and we are not able to fairly independently keep building AI infrastructure, that seems like a big vulnerability. And I don't hate, but I don't love, our global position right now.

(25:36)
The second is, it's a competitive world, and if we don't move as quickly as other countries on economic adoption of this, then I think we will lose the advantage that we have from being the economic powerhouse that we are. And this is about how quickly companies adopt it. This is about how quickly our scientists adopt it. This is about how quickly our government adopts it. On the positive side, I think this is a once-in-many-generations opportunity to really improve the economy, really rewrite some of the rules of society that aren't working, in light of this new incredible wealth fountain we have. So I can see the world where this is not a disadvantage at all, but becomes our biggest competitive advantage.

(26:29)
I don't think it's super obvious that we're on the trajectory we want to be on there now. Again, I don't hate it. I think it could move faster, and you can see a bunch of potential headwinds: AI is not very popular in the US right now. Data centers are getting blamed for electricity price hikes. Almost every company that does layoffs is blaming AI, whether or not it really is about AI. There is this real debate about the relative power between governments and companies going on.

(27:05)
There's a lot of stuff happening there. And then the third category is diffusion into the rest of the world. Is the world mostly going to build on the American AI tech stack: chips, models, applications, whatever? Or are we going to enact a set of policies that make that harder?

Interviewer (27:22):

It sounds to me like you think that AI could be the foundation of an immense productivity boom, if properly used and adopted.

Interviewee (27:34):

Yes, for sure. Although I think the way we measure that is going to have to change.

Interviewer (27:40):

Explain.

Interviewee (27:46):

I can see a world where we have an incredible productivity boom. Quality of life goes up and up, most of the things that we say we would care about get better and better, and yet GDP, in the way we currently measure it, goes down and down, like deflation for a very long period of time.

Interviewer (28:07):

Right. Right.

Interviewee (28:10):

I don't know what it means to live in a forever deflationary world. I don't know what it means to think about GDP and GDP's correlation to quality of life in a world where more of the intellectual capability is inside of data centers than outside of them, but maybe we're going to find out. I think there's going to be a lot of debate in the coming years about what the right thing to measure is.

Interviewer (28:42):

Do you think we're thinking about these challenges and issues the right way? Are we starting to talk about that?

Interviewee (28:52):

I think, yes, we are starting to. If there were an easy consensus answer, we would have done it by now, so I don't think anyone knows what to do. All of these things that we've depended on for so long as a society are coming into question at the same time.

(29:11)
There was this quote that I saw on the internet a few weeks ago, and it's really stuck in my head. It was something like, "For centuries, maybe millennia, we have learned a lot about how to structure society to manage scarcity. Almost none of that helps us as we have to quickly learn how to manage abundance." That's a real change to how capitalism has worked. Capitalism has also depended on somewhat of a power balance between labor and capital, but if, in many of our current jobs, it's hard to outwork a GPU, then that changes. I see we're out of time, but I had a list of 10 things like that, changing all at once.

(29:56)
I'm not a long-term jobs doomer. I think we will figure out new things to do. I'm also certainly not a long-term capitalism doomer. I believe in it very, very deeply. I think the next few years are going to be a painful adjustment as we get to this future, which we all get to redefine, of what the new system and this incredible prosperity look like. There are going to be some very intense and uncomfortable debates on the way there.

Interviewer (30:24):

Okay. Thank you, Sam, for that very thoughtful discussion. Here's what I will say: five years from now, we will come back here and we'll see where we are, and how we have navigated our way through this. Deal?

Interviewee (30:40):

Deal. I look forward to that.

Interviewer (30:41):

Thank you.

Interviewee (30:42):

Thank you. Thank you.

Speaker 1 (30:50):

We'll now take a short break. Please join us in the East Green Room for refreshments and return to your seats in 15 minutes. Thank you.
