I think the market wanted more on
Blackwell. They wanted more specifics.
And I'm trying to go through all of the call and the transcript.
It seems very clearly that this was a production issue and not a fundamental
design issue with Blackwell. But the deployment in the real world, what does
that look like tangibly? And is there some delay in the
timeline of that deployment, and thus revenue from that product?
Let's say that just the fact that I
thought I was so clear, and it wasn't clear enough, kind of tripped me up there right away.
And so, let's see: we made a mask change to improve the yield
of Blackwell. Wonderful.
We're sampling Blackwell all over the world today.
We are giving tours to people of all of the
Blackwell systems that we have up and running.
You could find pictures of Blackwell systems all over the Web.
We have started volume production. Volume production will ship in Q4. In Q4,
we will have billions of dollars of Blackwell revenues and
we will ramp from there. We will ramp from there.
The demand for Blackwell far exceeds its supply, of course, in the beginning
because the demand is so great. But we're going to have lots and lots of
supply and we will be able to ramp starting in Q4.
We have billions of dollars of revenues and we'll ramp from there into Q1 and Q2
and through next year. We're going to have a great next year as
well. Jensen, what is the demand for
accelerated computing beyond the hyperscalers and Meta?
Hyperscalers represent about 45% of our total data center business; we're
relatively diversified. Today we have hyperscalers, we have
Internet service providers, we have sovereign AIs, we have industries,
enterprises. So it's fairly diversified
outside of hyperscalers, the other 55%. You know, the application
use across all of that, all of that data center, starts with accelerated
computing. Accelerated computing does everything, of
course, from, well, the models, the things that we know about, which is
generative AI, and that gets most of the attention.
But at the core we also do database processing, pre- and post-processing of
data before you use it for generative AI, transcoding, scientific simulations,
computer graphics, of course, image processing, of course.
And so there's tons of applications that people use our accelerated computing for
and one of them is generative AI. And so, let's see, what else can I say?
I think that covers it. Jensen, please,
on sovereign AI: you and I have talked about that before, and it was so
interesting to hear some numbers behind it, that in this fiscal year there will be
low double digit, I think you said, billions of dollars in
sovereign AI sales. But to the layperson, what does that
mean? Does it mean deals with specific
governments? If so, where?
It's not necessarily. Sometimes it's deals with a particular regional
service provider that's been funded by the government.
And oftentimes that's the case. In the case of Japan, for
example, the Japanese government came out and offered
subsidies of a couple billion dollars, I think, for several different Internet
companies and telcos to be able to fund their AI infrastructure.
India has a sovereign initiative going and they're building their AI
infrastructure. Canada,
the U.K., France, Italy, am I missing somebody?
Singapore, Malaysia. You know, a large number of countries
are subsidizing their regional data centers so that they can
build out their AI infrastructure. They recognize that their country's
knowledge, their country's data, their digital data, is also their natural
resource. Not just the land they're sitting
on, not just the air above them, but they realize now that their
digital knowledge is part of their natural and national resource, and they
are going to harvest that and process that and transform it into their national digital
intelligence. And so this is what we call
sovereign AI. You could imagine almost every single
country in the world will eventually recognize this and build out their AI
infrastructure. You use the word resource, and that
makes me think about the energy requirements here.
I think on the call you talked about
how the next-generation models will have
many orders of magnitude greater compute needs, but how would the energy needs
increase, and what is the advantage you feel Nvidia has in that sense?
Well, the most important thing that we do is increase the performance and
the performance efficiency of our next generation.
So Blackwell is many times more performant than Hopper at the same level
of power used. And so that's energy efficiency: more
performance at the same amount of power, or the same performance at lower
power. And that's number one.
And the second is using liquid cooling. We support air cooling
and we support liquid cooling.
But liquid cooling is a lot more energy efficient.
And so the combination of all of that, you're going to get a pretty
large step up. But the important thing to also realize
is that AI doesn't really care where it goes to school.
And so increasingly we're going to see AI be trained somewhere else, have that
model come back and be used near the population, or even running on your PC or
your phone. And so we're going to train large
models, but the goal is not to run the large models necessarily all the time.
You can surely do that for some of the premium services and the very
high-value AIs. But
it's very likely that these large models
would then help to train and teach
smaller models. And what we'll end up doing is have, you know,
a few large models that are able to train a whole bunch of small
models, and they run everywhere. Jensen, you explained clearly that demand
to build generative AI products, on models or even at the GPU level, is
greater than current supply, in Blackwell's case in particular.
Explain the supply dynamics to me for your products, and whether you see an
improvement sequentially, quarter on quarter, or at some point by the end of the
fiscal year into next year. Well, the fact that we're growing would
suggest that our supply is improving. And our supply chain is quite large, one
of the largest supply chains in the world.
And we have incredible partners and they're doing a great job supporting us
in our growth. As you know, we're one of the fastest
growing technology companies in history, and none of that would have been
possible without very strong demand, but also very strong supply.
We're expecting Q3 to have more supply than Q2.
We're expecting Q4 to have more supply than Q3 and we're expecting Q1 to have
more supply than Q4. And so I think our supply
condition going into next year will be
a large improvement over this last year.
With respect to demand, Blackwell is
just such a leap, and there are several things that are happening.
You know, just the foundation model
makers themselves: the size of the
foundation models is growing from
hundreds of billions of parameters to
trillions of parameters. They're also learning more languages.
Instead of just learning human language, they're learning the language of images
and sounds and videos, and they're even learning the
language of 3D graphics. And whenever they are able to learn
these languages, they can understand what they see, but they can also
generate what they're asked to generate. And so they're learning the language of
proteins and chemicals and physics. You know, it could be fluids, it could
be particle physics. And so they're learning all kinds of
different languages, what we call modalities, but basically
learned languages. And so these models are growing in
size. They're learning from more data, and
there are more model makers than there were a year ago.
And so the number of model makers has grown substantially because of all these
different modalities. And so that's just one part: the frontier
model makers, the
foundation model makers themselves, have really grown tremendously.
And then the generative A.I. market has really diversified, you know,
beyond the Internet service makers to startups.
And now enterprises are jumping in and different countries are jumping in.
So the demand is really growing. Jensen, I'm sorry to cut you off.
We're going to run out of time soon. And you've also diversified.
And when I said to our audience you were coming on, I got so many questions.
Probably the most common one is: what is Nvidia?
We talked about you as a systems vendor, but so many points on Nvidia GPU Cloud.
And I want to ask, finally, do you have plans to become literally a cloud
compute provider? No.
Our GPU cloud was designed to be the best version of Nvidia's cloud, one that's
built within each cloud. Nvidia's cloud is built inside GCP, inside Azure,
inside AWS, inside OCI. And so we build our clouds within theirs
so that we can implement the best version of our cloud, and work with them to
make that cloud, that infrastructure, the
Nvidia infrastructure, as performant, with as great a TCO, as possible.
And so that strategy has worked incredibly well.
And of course, we are large consumers of it, because we create a lot of AI ourselves:
our chips aren't possible to design without AI, our software is not
possible to write without AI. And so we use a tremendous
amount of it ourselves. Self-driving cars, the general robotics
work that we're doing, the Omniverse work that we're doing.
So we're using DGX Cloud for ourselves.
We also use it as an AI foundry. We make AI
models for companies that would like to have that expertise.
And so we're a foundry for AI, like TSMC is a foundry for
our chips. And so there are three fundamental
reasons why we do it. One is to have the best version of
Nvidia inside all the clouds. Two, because we're a large consumer of it
ourselves. And third, because we use it as a foundry
to help every other company.