$NVDA NVIDIA Q2 2024 Earnings Conference Call

Published: Aug 28, 2024 Duration: 01:06:23

Good afternoon. My name is Abby, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's second quarter earnings call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. If you would like to ask a question during that time, simply press the star key followed by the number one on your telephone keypad. If you would like to withdraw your question, press star one a second time. Thank you. And Mr. Stewart Stecker, you may begin your conference.

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property; it cannot be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community: we will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20, 2024. With that, let me turn the call over to Colette.

Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year on year, and well above our outlook of $28 billion.

Starting with data center: data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper GPU computing and our networking platforms. Compute revenue grew more than 2.5x and networking revenue grew more than 2x from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing as well. Next-generation models will require 10 to 20 times more compute to train, with significantly more data. The trend is expected to continue.
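For context on that compute-scaling claim, here is a minimal back-of-the-envelope sketch. It uses the commonly cited approximation that training compute is roughly 6 × parameters × tokens; the approximation and the model sizes below are editorial assumptions for illustration, not figures given on the call.

```python
# Rough illustration (not from the call) of why compute grows so fast when
# both model size and training data scale up. Uses the common approximation
# that training FLOPs ~= 6 * parameters * tokens.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute in FLOPs."""
    return 6.0 * params * tokens

# Hypothetical current-generation model: 400B parameters, 10T tokens.
current = training_flops(400e9, 10e12)

# Hypothetical next generation: ~4x the parameters and ~4x the data.
next_gen = training_flops(4 * 400e9, 4 * 10e12)

print(f"current  ~{current:.2e} FLOPs")
print(f"next gen ~{next_gen:.2e} FLOPs")
print(f"ratio    ~{next_gen / current:.0f}x")  # ~16x, within the 10-20x range cited
```

Scaling parameters and training data together is what makes the compute requirement grow roughly quadratically, the same dynamic Jensen describes later in the Q&A.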
Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer internet companies and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise, healthcare and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand.

The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer internet and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100.

Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue. As a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward.

The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems, designed quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPU, DPU, NVLink, NVLink Switch and the networking chips, systems and NVIDIA CUDA software to power the next generation of AI across use cases, industries and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU and deliver up to 30x faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yields. The Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year.

Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multi-billion-dollar product line within a year.
Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year.

The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots and agents to build new, monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build a generative AI agent and lower generative AI development costs. Snowflake, which serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%.

Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption, and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion-dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing and drug discovery.

During the quarter, we announced a new NVIDIA AI Foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI: companies for the first time can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models, both for its own use and to assist clients seeking to deploy generative AI applications.

NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation and telecommunications are adopting NIMs, including Aramco, Lowe's and Uber. AT&T realized 70% cost savings and an 8x latency reduction after moving to NIMs for generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery and enterprise retrieval-augmented generation. Our system integrators, technology solution providers and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth.
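As a concrete illustration of what deploying a model as a NIM looks like in practice, here is a minimal sketch of querying a locally hosted NIM microservice through its OpenAI-compatible endpoint. The host, port and model identifier below are assumptions for illustration, not details given on the call.

```python
# Minimal sketch (illustrative, not from the call): querying a NIM microservice
# that exposes an OpenAI-compatible API. Assumes a Llama 3.1 NIM container is
# already running locally on port 8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # a locally hosted NIM typically needs no key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # illustrative model identifier
    messages=[
        {"role": "system", "content": "You classify customer-support call transcripts."},
        {"role": "user", "content": "Transcript: 'My bill doubled this month...' Category?"},
    ],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

The same request shape works against a hosted endpoint by swapping the base URL and API key, which is the portability the prepared remarks attribute to the NIM packaging.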
Moving to gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year on year. We saw sequential growth in console, notebook and desktop revenue, demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model Minitron 4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow. Recently added RTX and DLSS titles include Indiana Jones and the Great Circle, Dune: Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand, with a total catalog size of over 2,000 titles, the most content of any cloud gaming service.

Moving to pro visualization. Revenue of $454 million was up 6% sequentially and 20% year on year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems, and several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as The Coca-Cola Company.

Moving to automotive and robotics. Revenue was $346 million, up 5% sequentially and up 37% year on year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids and mobile robots.

Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion in Q2.
We utilized cash of $7.4 billion toward shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our board of directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2.

Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect the Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points, as our data center mix continues to shift to new products. We expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.

We are now going to open the call for questions. Operator, would you please help us and poll for questions?

Thank you. At this time, I would like to remind everyone, in order to ask a question, press star and then the number one on your telephone keypad. We will pause for just a moment to compile the Q&A roster. And as a reminder, we ask that you please limit yourself to one question. And your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Thanks for taking my question. Jensen, you mentioned in the prepared comments that there's a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And, I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite a change in the design. Is it because all these issues will be solved by then? Just help us size the overall impact of any changes in Blackwell timing, what that means to your kind of revenue profile, and how customers are reacting to it.

Yeah, thanks, Vivek. The change to the mask is complete. There were no functional changes necessary, and so we're sampling functional samples of Blackwell, Grace Blackwell, and a variety of system configurations as we speak. There are something like a hundred different types of Blackwell-based systems that were built and shown at Computex, and we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4.

And your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.

Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers' and your customers' customers' return on investment, and what that means for the sustainability of capex going forward.
Currently at NVIDIA, what are you guys watching? What's on your dashboard as you try to gauge customer return, and how does that impact capex? And then a quick follow-up, maybe for Colette: I think your sovereign AI number for the full year went up maybe a couple billion. What's driving the improved outlook, and how should we think about fiscal '26? Thank you.

Thanks, Toshiya. First of all, when I said ship production in Q4, I mean shipping out; I don't mean starting production, I mean shipping out. On the longer-term question, let's take a step back. You've heard me say that we're going through two simultaneous platform transitions at the same time. The first one is transitioning from general-purpose computing to accelerated computing, and the reason for that is because CPU scaling has been known to be slowing for some time, and it has slowed to a crawl. And yet the amount of computing demand continues to grow quite significantly; you could maybe even estimate it to be doubling every single year. And so if we don't have a new approach, computing inflation would be driving up the cost for every company, and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale, for example scientific simulations or database processing, but what that translates directly to is lower cost and lower energy consumed. In fact, this week a blog came out that talked about a whole bunch of new libraries that we offer. That's really the core of the first platform transition: going from general-purpose computing to accelerated computing. It's not unusual to see someone save 90% of their computing cost, and the reason for that is, of course, that you just sped up an application 50x; you would expect the computing cost to decline quite significantly.

The second was enabled by accelerated computing, because we drove down the cost of training large language models, of training deep learning, so incredibly that it is now possible to have gigantic-scale models, multi-trillion-parameter models, and pre-train them on just about the world's knowledge corpus and let the model go figure out how to understand human language representation, how to codify knowledge into its neural networks, and how to learn reasoning. That caused the generative AI revolution.

Now, generative AI — taking a step back about why we went so deeply into it — is not just a feature, it's not just a capability, it's a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are, what our previous observations are, and then it figures out what the algorithm is, what the function is. AI is a bit of a universal function approximator, and it learns the function. And so you could learn the function of almost anything: anything that's predictable, anything that has structure, anything that you have previous examples of. And so here we are with generative AI. It's a fundamental new form of computer science.
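A toy sketch of the "learn the function from examples" idea Jensen is describing (an editorial illustration, not from the call): instead of hand-coding a rule, fit the parameters of a simple model to input-output observations by gradient descent.

```python
# Toy illustration of "learning the function from data" instead of hand-coding it.
# Fit y ~= w*x + b from example observations using plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.05, size=200)  # the "previous observations"

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(mean squared error)/dw
    grad_b = 2 * np.mean(pred - y)        # d(mean squared error)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches the underlying 3.0 and 0.5
```

Large neural networks do the same thing at vastly greater scale, which is what makes them general-purpose function approximators for language, images and more.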
It's affecting how every layer of computing is done, from CPU to GPU, from human-engineered algorithms to machine-learned algorithms, and the type of applications you could now develop and produce is fundamentally remarkable. There are several things happening in generative AI. The first is that the frontier models are growing in quite substantial scale, and we're still all seeing the benefits of scaling. Whenever you double the size of a model, you also have to more than double the size of the data set to go train it, and so the amount of flops necessary in order to create that model goes up quadratically. So it's not unexpected to see that the next-generation models could take 10, 20, 40 times more compute than the last generation. So we have to continue to drive the generational performance up quite significantly so we can drive down the energy consumed and drive down the cost necessary to do it. So the first one is that there are larger frontier models trained on more modalities, and surprisingly, there are more frontier model makers than last year. So you have more on more on more. That's one of the dynamics going on in generative AI.

The second is that it's below the tip of the iceberg. What we see are ChatGPT, image generators, coding — we use generative AI for coding quite extensively here at NVIDIA now, and we of course have a lot of digital designers and things like that — but those are kind of the tip of the iceberg. What's below the iceberg are the largest computing systems in the world today, which are, as you've heard me talk about in the past, recommender systems. They're now moving from CPUs to generative AI. So recommender systems, ad generation, custom ad generation targeting ads at very large scale and quite hyper-targeted, search, and user-generated content — these are all very large-scale applications that have now evolved to generative AI.

Of course, the number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners. And sovereign AI: countries are now realizing that their data is a natural and national resource, and they have to use AI, build their own AI infrastructure, so that they can have their own digital intelligence. Enterprise AI, as Colette mentioned earlier, is starting, and you might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA AI Enterprise platform to the world's enterprises. So many of the companies that we're talking to are just so incredibly excited to drive more productivity out of their company. And then general robotics. The big transformation last year was that we are now able to learn physical AI from watching video and human demonstration, and from synthetic data generation with reinforcement learning, from systems like Omniverse. We are now able to work with just about every robotics company to start thinking about and start building general robotics. So you can see that there are just so many different directions that generative AI is going, and we're actually seeing the momentum of generative AI accelerating.
And Toshiya, to answer your question regarding sovereign AI and our goals in terms of revenue growth: it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desire of countries around the world to have their own generative AI that would be able to incorporate their own language, their own culture and their own data in that country. So there is more and more excitement around these models and what they can be, specific to those countries. So yes, we are seeing some growth opportunity in front of us.

And your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.

Great, thank you. In the press release you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October. So how long do you see coexisting strong demand for both, and can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like.

Yeah, thanks, Joe. The demand for Hopper is really strong, and it's true, the demand for Blackwell is incredible. There are a couple of reasons for that. The first reason is, if you just look at the world's cloud service providers, the amount of GPU capacity they have available is basically none, and the reason for that is because they're either being deployed internally for accelerating their own workloads — data processing, for example. Data processing: we hardly ever talk about it because it's mundane. It's not very cool, because it doesn't generate a picture or generate words, but almost every single company in the world processes data in the background, and NVIDIA GPUs are the only accelerators on the planet that process and accelerate data: SQL data, pandas data, data science toolkits like pandas and the new one, Polars. These are the most popular data processing platforms in the world, and aside from CPUs, which, as I've mentioned before, are really running out of steam, NVIDIA's accelerated computing is really the only way to get a performance boost out of that. So that's number one: the primary, number-one use case, long before generative AI came along, is the migration of applications, one after another, to accelerated computing.

The second is, of course, the rentals. They're renting capacity to model makers, they're renting it to startup companies, and a generative AI company spends the vast majority of its invested capital on infrastructure so that it can use an AI to help it create products. And so these companies need it now. They simply can't afford to wait: you just raised money, they want you to put it to use now. You have processing that you have to do; you can't do it next year, you've got to do it today. So that's one reason.

The second reason for Hopper demand right now is the race to the next plateau. The first person to the next plateau gets to introduce a revolutionary level of AI. The second person who gets there is incrementally better, or about the same. And so the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership.
NVIDIA is constantly doing that, and we show that to the world in the GPUs we make, the AI factories we make, the networking systems we make, the SoCs we create. We want to set the pace. We want to be consistently the world's best, and that's the reason why we drive ourselves so hard. Of course, we also want to see our dreams come true, and all of the capabilities that we imagine in the future, and the benefits that we can bring to society — we want to see all that come true. And so these model makers are the same. They, of course, want to be the world's best, they want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks and a month or so away. And so between now and then there is a lot of generative AI market dynamic, and everybody is just really in a hurry. It's either operational reasons that they need it — they need accelerated computing, they don't want to build any more general-purpose computing infrastructure — and even Hopper: of course, H200, state-of-the-art Hopper. If you have a choice between building CPU infrastructure right now for business, or Hopper infrastructure for business right now, that decision is relatively clear. And so I think people are just clamoring to transition the trillion dollars of established, installed infrastructure to a modern infrastructure, and Hopper is state of the art.

And your next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.

Thank you very much. Good afternoon, everybody. I wanted to circle back to an earlier question about the debate that investors are having about the ROI on all of this capex, and hopefully this question and the distinction will make some sense. What I'm having discussions about is the percentage of folks that you see that are spending all this money and looking to push the frontier towards AGI convergence and, as you just said, a new plateau in capability — and they're going to spend regardless to get to that level of capability, because it opens up so many doors for the industry and for their company — versus customers that are really, really focused today on capex versus ROI. I don't know if that distinction makes sense. I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars in the ground on this new technology, what their priorities are, and what their time frames are for that investment. Thanks.

Thanks, Matt. The people who are investing in NVIDIA infrastructure are getting returns on it right away. It's the best-ROI computing-infrastructure investment you can make today. So one way to think through it — probably the easiest way to think through it — is to go back to first principles. You have a trillion dollars' worth of general-purpose computing infrastructure, and the question is, do you want to build more of that or not? For every billion dollars' worth of general-purpose, CPU-based infrastructure that you stand up, you probably rent it for less than a billion, because it's commoditized. There's already a trillion dollars on the ground; what's the point of getting more?
And so for the people who are clamoring to get this infrastructure: one, when they build out Hopper-based infrastructure, and soon Blackwell-based infrastructure, they start saving money. That's tremendous return on investment. And the reason they start saving money is because data processing saves money — data processing is probably just a giant part of it already — and so recommender systems save money, and so on and so forth. So you start saving money. The second thing is, everything you stand up is going to get rented, because so many companies are being founded to create generative AI. And so your capacity gets rented right away, and the return on investment of that is really good. And the third reason is your own business. You want to either create the next frontier yourself, or your own internet services benefit from a next-generation ad system, or a next-generation recommender system, or a next-generation search system. So for your own services, for your own stores, for your own user-generated content social media platforms — for your own services, generative AI is also a fast ROI. And so there are a lot of ways you could think through it, but at the core, it's because it is the best computing infrastructure you could put in the ground today. The world of general-purpose computing is shifting to accelerated computing. The world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with accelerated computing, NVIDIA. That's the best way to do it.

And your next question comes from the line of Timothy Arcuri with UBS. Your line is open.

Thanks a lot. I had a question on the shape of the revenue growth, both near and longer term. I know, Colette, you did increase opex for the year, and if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling, and I do recognize that some of these racks can be air-cooled. But, Jensen, is that something to consider on the shape of how Blackwell is going to ramp? And then, I guess, when you look beyond next year, which is obviously going to be a great year, and you look into '26, do you worry about any other gating factors, like, say, the power supply chain, or at some point models start to get smaller? I'm just wondering if you can speak to that. Thanks.

I'm going to work backwards. I really appreciate the question, Tim. So remember, the world is moving from general-purpose computing to accelerated computing, and the world builds about a trillion dollars' worth of data centers. A trillion dollars' worth of data centers, in a few years, will be all accelerated computing. In the past, no GPUs were in data centers, just CPUs. In the future, every single data center will have GPUs, and the reason for that is very clear: we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing, we don't experience computing inflation. Second, we need GPUs for a new computing model called generative AI, which we can all acknowledge is going to be quite transformative to the future of computing. And so I think, working backwards, the way to think about that is that the next trillion dollars of the world's infrastructure will clearly be different than the last trillion, and it'll be vastly accelerated.
With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta — I think it was Volta — and we've been shipping the HGX form factor for some time; it is air-cooled. The Grace Blackwell is liquid-cooled. However, the number of data centers that want to go liquid-cooled is quite significant, and the reason is that in a liquid-cooled data center — in any power-limited data center, whatever size data center you choose — you can install and deploy anywhere from three to five times the AI throughput compared to the past. And so liquid cooling is cheaper, liquid-cooling TCO is better, and liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand it to 72 Grace Blackwell packages, which is essentially 144 GPUs. So imagine 144 GPUs connected in NVLink. We're increasingly showing you the benefits of that, and the next click is obviously very low-latency, very high-throughput large language model inference, and the large NVLink domain is going to be a game changer for that. So I think people are very comfortable deploying both, and almost every CSP we're working with is deploying some of both. So I'm pretty confident that we'll ramp it up just fine.

Your second question, out of the three, is about looking forward. Yeah, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game changer for the industry, and Blackwell is going to carry into the following year. And as I mentioned earlier, working backwards from first principles, remember that computing is going through two platform transitions at the same time, and that's just really important to keep your mind focused on: general-purpose computing is shifting to accelerated computing, and human-engineered software is going to transition to generative AI, or artificial-intelligence-learned, software. Okay.

And your next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.

Hi, guys. Thanks for taking my questions. I have two short questions for Colette. The first: the several billion dollars of Blackwell revenue in Q4, is that additive? You said you expected Hopper demand to strengthen in the second half. Does that mean Hopper strengthens Q3 to Q4 as well, on top of Blackwell adding several billion dollars? And the second question, on gross margins: if I have mid-70s for the year, wherever I want to draw that — if I have 75 for the year, I'd be at something like 71 to 72 for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of margin evolution into next year as Blackwell ramps, and, I mean, hopefully, I guess, the yields and the inventory reserves and everything come up?

Yes, let's first take your question about Hopper and Blackwell. We believe our Hopper will continue to grow into the second half. We have many new products for Hopper, and our existing products for Hopper, that we believe will continue to ramp in the next quarters, including our Q3, and those new products moving to Q4.
So let's say Hopper, therefore, versus H1 is a growth opportunity. Additionally, we have Blackwell on top of that, and Blackwell starting to ramp in Q4. So I hope that helps you on those two pieces.

Your second piece is in terms of our gross margin. We provided gross margin for our Q3; we provided our gross margin on a non-GAAP basis at about 75%. We'll work with all the different transitions that we're going through, but we do believe we can do that 75% in Q3. We indicated that we're still on track for the full year, also in the mid-70s, or approximately 75%. So we're going to see some slight difference, possibly, in Q4, again with our transitions and the different cost structures that we have on our new product introductions. However, I'm not at the same number that you are. We don't have exact guidance there yet, but I do believe you're lower than where we are.

And your next question comes from the line of Ben Reitzes with Melius. Your line is open.

Yeah, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10-Q that came out, and the United States was down sequentially, while several Asian geographies were up a lot sequentially. Just wondering what the dynamics are there. And, obviously, China did very well; you mentioned it in your remarks. What are the puts and takes? And then I just wanted to clarify, from Stacy's question, whether that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter, given all those favorable revenue dynamics. Thanks.

Let me talk a bit about our disclosure in terms of the 10-Q, a required disclosure and a choice of geographies. It is very challenging sometimes to create the right disclosure, as we have to come up with one key piece in terms of who we sell to and, specifically, who we invoice to. And so what you're seeing there is who we invoice. That's not necessarily where the product will eventually be, or where it may even travel to the end customer. These are, for the most part, just moving to our OEMs, our ODMs and our system integrators across our product portfolio. So what you're seeing there is sometimes just a shift in terms of whom they are using to complete their full configuration before those things go into the data center, go into notebooks, and those pieces of it, and that shift happens from time to time. But yes, our China number there is invoicing into China. Keep in mind that it incorporates gaming, data center and automotive in those numbers that we have.

Going back to your statement regarding gross margin, and also what we're seeing in terms of what we're looking at for Hopper and Blackwell in terms of revenue: Hopper will continue to grow in the second half. We'll continue to grow from what we are currently seeing. Determining that exact mix in each of Q3 and Q4, we don't have here; we are not here to guide yet in terms of Q4. But we do see right now the demand expectations, we do see the visibility, that there will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture.

And your next question comes from the line of C.J. Muse with Cantor Fitzgerald. Your line is open.

Yeah, good afternoon. Thank you for taking the question.
You've embarked on a remarkable annual product cadence, with challenges only likely becoming more and more, given rising complexity in a reticle-limited, advanced-packaging world. So, curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration and supply chain partnerships, and then thinking through the consequential impact to your margin profile? Thank you.

Yeah, thanks. Let's see. The answer to your first question is that the reason why our velocity is so high is simultaneously because the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to continue to increase its scale, and we believe that by continuing to scale the AI models, we'll reach a level of extraordinary usefulness and open up — realize — the next industrial revolution. We believe it, and so we're going to drive ourselves really hard to continue to go up that scale.

We have the ability, fairly uniquely, to integrate, to design an AI factory, because we have all the parts. It's not possible to come up with a new AI factory every year unless you have all the parts. And so next year we're going to ship a lot more CPUs than we've ever had in the history of our company, more GPUs, of course, but also NVLink switches, ConnectX DPUs for east-west, BlueField DPUs for north-south and data and storage processing, InfiniBand for supercomputing centers, and Ethernet, which is a brand new product for us that is well on its way to becoming a multi-billion-dollar business, to bring AI to Ethernet. The fact that we could build all of this — we have access to all of it, we have one architectural stack, as you know — allows us to introduce new capabilities to the market as we complete them. Otherwise, what happens? You ship these parts, you go find customers to sell them to, and then somebody's got to build up an AI factory, and the AI factory has a mountain of software. And so it's not about who integrates it. We love the fact that our supply chain is disintegrated, in the sense that we can serve Quanta, Foxconn, HP, Dell, Lenovo, Supermicro — we used to be able to serve ZT; they were recently purchased — and so on and so forth. And so the number of ecosystem partners that we have — Gigabyte, ASUS — the number of ecosystem partners that we have allows them to take our architecture, which all works, and integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers. The scale and reach necessary from our ODM and integrator supply chain is vast and gigantic, because the world is huge. And so that part we don't want to do, and we're not good at doing, but we know how to design the AI infrastructure, provide it the way that customers would like it, and let the ecosystem integrate it. Well, yeah, so anyway, that's the reason why.

And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Yes, thanks for taking the question. I wanted to go back into the Blackwell product cycle.
One of the questions we tend to get asked is how you see the rack-scale system mix dynamic, as you think about leveraging NVLink, as you think about GB200 NVL72, and how that go-to-market dynamic looks as far as the Blackwell products go. I guess, put distinctly, how do you see that mix of rack-scale systems as we start to think about the Blackwell cycle playing out?

Yeah, Aaron, thanks. The Blackwell rack system is designed and architected as a rack, but it's sold in disaggregated system components. We don't sell the whole rack, and the reason for that is because everybody's rack is a little different. Surprisingly, some of them are OCP standards, some of them are not, some of them are enterprise, and the power limits for everybody could be a little different — the choice of CDUs, the choice of power busbars, the configuration and integration into people's data centers, all different. And so the way we designed it: we architected the whole rack, and the software is going to work perfectly across the whole rack, and then we provide the system components. For example, the CPU and GPU compute board is integrated into an MGX — it's a modular system architecture. MGX is completely ingenious, and we have MGX ODMs and integrators and OEMs all over the planet. And so just about any configuration you would like, wherever you would like that 3,000-pound rack to be delivered — it has to be integrated and assembled close to the data center, because it's fairly heavy. And so everything from the supply chain, from the moment we ship the GPUs, CPUs, the switches, the NICs — from that point forward, the integration is done quite close to the locations of the CSPs and the locations of the data centers. And so you can imagine how many data centers in the world there are and how many logistics hubs we've scaled out to with our ODM partners. I think that because we show it as one rack, and because it's always rendered that way and shown that way, we might have left the impression that we're doing the integration. Our customers hate that we do integration. The supply chain hates us doing integration. They want to do the integration; that's their value added. There's a final design-in, if you will. It's not quite as simple as shimmying into a data center, but that design fit-in is really complicated. And so the design fit-in, the installation, the bring-up, the repair-and-replace — that entire cycle is done all over the world, and we have a sprawling network of ODM and OEM partners that does this incredibly well. So integration is not the reason we're doing racks; it's the anti-reason for doing it. We don't want to be an integrator; we want to be a technology provider.

And I will now turn the call back over to Jensen Huang for closing remarks.

Thank you. Let me make a couple of comments that I made earlier, again. Data centers worldwide are in full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible.

Let me highlight the top five things of our company. Accelerated computing has reached the tipping point. CPU scaling slows; developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries. New libraries open new markets for NVIDIA. We released many new libraries, including CUDA-accelerated Polars, pandas and Spark, the leading data science and data processing libraries; cuVS for vector databases — this is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; Parabricks for gene sequencing; and AlphaFold2 for protein structure prediction is now CUDA-accelerated. We are at the beginning of our journey to modernize a trillion dollars' worth of data centers from general-purpose computing to accelerated computing. That's number one.
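As context for the CUDA-accelerated pandas and Polars libraries mentioned here, a minimal sketch of the zero-code-change pandas acceleration path. This is an editorial illustration that assumes RAPIDS cuDF is installed on a CUDA-capable machine; the CSV filename is hypothetical.

```python
# Illustrative sketch (not from the call): RAPIDS cuDF's pandas accelerator mode.
# With cudf installed, enabling it lets existing pandas code run on the GPU
# where supported, falling back to CPU pandas otherwise.
import cudf.pandas
cudf.pandas.install()   # must run before pandas is imported

import pandas as pd     # same pandas API, now GPU-accelerated under the hood

df = pd.read_csv("transactions.csv")        # hypothetical input file
summary = (
    df.groupby("customer_id")["amount"]
      .agg(["count", "sum", "mean"])
      .sort_values("sum", ascending=False)
)
print(summary.head())
```

Polars offers an analogous GPU path via its cuDF-backed engine (for example, collecting a lazy query with the GPU engine enabled), which is the other data processing library called out above.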
Number two: Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. It also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's lead becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS-L package; the ConnectX DPU for east-west traffic; the BlueField DPU for north-south and storage traffic; the NVLink switch for all-to-all GPU communications; and Quantum and Spectrum-X for both InfiniBand and Ethernet, which can support the massive burst traffic of AI. Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full-stack, end to end — from chips, systems, networking, even structured cables, power and cooling, and mountains of software — to make it fast for customers to build AI factories. These are very capital-intensive infrastructures. Customers want to deploy it as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides three to five times more AI throughput in a power-limited data center than Hopper.

The third is NVLink. This is a very big deal; its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. Just to put that in perspective, that's about 10 times higher than Hopper. 259 terabytes per second kind of makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of tokens, and so that natural amount of data needs to be moved around from GPU to GPU. For inference, NVLink is vital for low-latency, high-throughput large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before.

Generative AI momentum is accelerating. Generative AI frontier model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities, from text, images and video to 3D, physics, chemistry and biology. Chatbots, coding AIs and image generators are growing fast, but it's just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting and search systems. AI startups are consuming tens of billions of dollars yearly of CSPs' cloud capacity, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. And NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI: general robotics. And now the enterprise AI wave has started, and we're poised to help companies transform their businesses.
The NVIDIA AI Enterprise platform consists of NeMo, NIM, NIM Agent Blueprints and AI Foundry, which our ecosystem partners, the world's leading IT companies, use to help customers customize AI models and build bespoke AI applications. Enterprises can then deploy on the NVIDIA AI Enterprise runtime, and at $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere. And for NVIDIA software, the TAM can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. And as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.

And ladies and gentlemen, this concludes today's call, and we thank you for your participation. You may now disconnect.
