NVIDIA Q2 FY2025 Earnings Conference Call

Operator: Good afternoon. My name is Abby, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's second-quarter earnings call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. If you would like to ask a question during that time, simply press the star key followed by the number one on your telephone keypad. If you would like to withdraw your question, press star one a second time. Thank you. Mr. Stewart Stecker, Director of IR, you may begin your conference.

Stewart Stecker: Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property; it cannot be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial
measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community: we will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20, 2024. With that, let me turn the call over to Colette.

Colette Kress: Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year on year, well above our outlook of $28 billion. Starting with data center: data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper GPU computing and our networking platforms. Compute revenue grew more than 2.5x, and networking revenue grew more than 2x from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image, and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing. Next-generation models will require 10 to 20 times more compute to train, with significantly more data; the trend is expected to continue. Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer internet companies, and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer internet services, and tens of thousands of companies and startups
building generative AI applications for consumers, advertising, education, enterprise, healthcare, and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand. The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue; as a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems, designed quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPU, CPU, DPU, NVLink, and NVLink Switch and networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU and delivers up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yield. The Blackwell production ramp
is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multibillion-dollar product line within a year. Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year. The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications is fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming
the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, which serves over three billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%. Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multibillion dollars in revenue across on-prem and cloud consumption, and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multibillion-dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery. During the quarter, we announced a new NVIDIA AI Foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI: companies for the first time can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models, both for its own use and to assist clients seeking to deploy generative AI applications. NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIMs, including Aramco, Lowe's, and Uber. AT&T realized 70% cost savings and an eight-times latency reduction after moving to NIMs for
generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval-augmented generation. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth. Moving to gaming and AI PCs: gaming revenue of $2.88 billion increased 9% sequentially and 16% year on year. We saw sequential growth in console, notebook, and desktop revenue; demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model Minitron, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow: recently added RTX and DLSS titles include Indiana Jones and the Great Circle, Dune: Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand, with a total
catalog size of over 2,000 titles, the most content of any cloud gaming service. Moving to pro visualization: revenue of $454 million was up 6% sequentially and 20% year on year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multiyear contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as The Coca-Cola Company. Moving to automotive and robotics: revenue was $346 million, up 5% sequentially and up 37% year on year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the end-to-end driving at scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids, and mobile robots. Now moving to the rest of the P&L: GAAP gross margin was 75.1%, and non-GAAP gross margin was 75.7%, down sequentially due to a higher mix of new products within data center and
inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion. In Q2, we utilized cash of $7.4 billion toward shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our board of directors recently approved a $50 billion share repurchase authorization, adding to our remaining $7.5 billion of authorization at the end of Q2. Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect the Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our data center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expense are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions. Operator, would you please help us poll for questions?

Operator: Thank you. At this time, I would like to remind everyone, in order to ask a question, press star and then the number one on your telephone keypad. We will
pause for just a moment to compile the Q&A roster. As a reminder, we ask that you please limit yourself to one question. Your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Vivek Arya: Thanks for taking my question. Jensen, you mentioned in the prepared comments that there's a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And, I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite a change in the design. Is it because all these issues will be solved by then? Just help us size what the overall impact of any changes in Blackwell timing is, what that means to your revenue profile, and how customers are reacting to it.

Jensen Huang: Yeah, thanks, Vivek. The change to the mask is complete. There were no functional changes necessary, and so we're sampling functional samples of Blackwell, Grace Blackwell, and a variety of system configurations as we speak. There are something like a hundred different types of Blackwell-based systems that were built and shown at Computex, and we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4.

Operator: Your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.

Toshiya Hari: Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers' and customers' customers' return on investment, and what that means for the sustainability of capex going forward. Internally at NVIDIA, what are you guys watching? What's on your dashboard as you try to gauge customer returns, and how does that impact capex? And then a quick follow-up, maybe for Colette: I think your sovereign AI
number for the full year went up, maybe a couple billion. What's driving the improved outlook, and how should we think about fiscal '26? Thank you.

Jensen Huang: Thanks, Toshiya. First of all, when I said ship production in Q4, I mean shipping out; I don't mean starting production, but shipping out. On the longer-term question, let's take a step back. You've heard me say that we're going through two simultaneous platform transitions at the same time. The first one is transitioning from general-purpose computing to accelerated computing, and the reason for that is because CPU scaling has been known to be slowing for some time, and it has slowed to a crawl. And yet the amount of computing demand continues to grow quite significantly; you could maybe even estimate it to be doubling every single year. And so if we don't have a new approach, computing inflation would be driving up the cost for every company, and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale, for example scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed. In fact, this week there's a blog that came out that talked about a whole bunch of new libraries that we offer, and that's really the core of the first platform transition, going from general-purpose computing to accelerated computing. It's not unusual to see someone save 90% of their computing cost, and the reason for that is, of course, you just sped up an application 50x; you would expect the computing cost to decline quite significantly. The second transition was enabled by accelerated computing, because we drove down the cost of
training large language models, or training deep learning, so incredibly that it is now possible to have gigantic-scale models, multitrillion-parameter models, and pretrain them on just about the world's knowledge corpus, and let the model go figure out how to understand human language representation, how to codify knowledge into its neural networks, and how to learn reasoning, which caused the generative AI revolution. Now, taking a step back on why we went so deeply into generative AI: it's not just a feature, it's not just a capability, it's a fundamentally new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are, what our previous observations are, and then have it figure out what the algorithm is, what the function is. AI is a bit of a universal function approximator, and it learns the function. And so you could learn the function of almost anything: anything that's predictable, anything that has structure, anything that you have previous examples of. So now here we are with generative AI. It's a fundamentally new form of computer science. It's affecting how every layer of computing is done, from CPU to GPU, from human-engineered algorithms to machine-learned algorithms, and the type of applications you could now develop and produce is fundamentally remarkable. There are several things happening in generative AI. The first thing that's happening is that the frontier models are growing in quite substantial scale, and we're still all seeing the benefits of scaling. Whenever you double the size of a model, you also have to more than double the size of the data set to go train it. And so the amount of flops necessary in order to create
that model goes up quadratically. And so it's not unexpected to see that the next-generation models could take 10, 20, 40 times more compute than the last generation. So we have to continue to drive the generational performance up quite significantly, so we can drive down the energy consumed and drive down the cost necessary to do it. So the first dynamic is that there are larger frontier models trained on more modalities, and, surprisingly, there are more frontier model makers than last year. So you have more and more; that's one of the dynamics going on in generative AI. The second is that it's below the tip of the iceberg. What we see are ChatGPT, image generators, and coding; we use generative AI for coding quite extensively here at NVIDIA, and of course we have a lot of digital designers and things like that. But those are kind of the tip of the iceberg. What's below the iceberg are the largest computing systems in the world today, which, as you've heard me talk about in the past, are recommender systems, now moving from CPUs to generative AI. So recommender systems, ad generation, custom ad generation targeting ads at very large scale and quite hyper-targeted, search, and user-generated content: these are all very large-scale applications that have now evolved to generative AI. Of course, the number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners. And sovereign AI: countries are now realizing that their data is their natural and national resource, and they have to use AI and build their own AI infrastructure so that they can have their own digital intelligence. Enterprise AI, as Colette mentioned earlier, is starting as well.
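The scaling arithmetic described above, double the parameters and more than double the training tokens, so that training compute grows roughly quadratically, can be sketched with the widely used approximation that training cost is about 6 x parameters x tokens. The constant 6 and the example model and data sizes below are illustrative assumptions, not figures from the call:

```python
# Rough generational training-compute comparison using the common
# approximation FLOPs ~= 6 * N * D for dense model training
# (N = parameter count, D = training tokens). The constant and the
# example sizes are illustrative assumptions, not figures from the call.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense model."""
    return 6.0 * params * tokens

# Hypothetical generation N and generation N+1: doubling both the
# model size and the data set, as described above.
gen1 = training_flops(params=1e12, tokens=10e12)   # 1T params, 10T tokens
gen2 = training_flops(params=2e12, tokens=20e12)   # double both

print(f"compute ratio, generation over generation: {gen2 / gen1:.0f}x")  # prints "4x"
```

Doubling both factors quadruples the product, which is why a 10x to 40x generational jump implies scaling data and modalities well beyond 2x per generation.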
The world's leading IT companies, as you might have seen in our announcement, are joining us to take the NVIDIA AI Enterprise platform to the world's enterprises. So many of the companies that we're talking to are just so incredibly excited to drive more productivity out of their company. And then general robotics: the big transformation over the last year is that we are now able to learn physical AI from watching video and human demonstration, and from synthetic data generation and reinforcement learning from systems like Omniverse. We are now able to work with just about every robotics company to start thinking about and building general robotics. And so you can see that there are just so many different directions that generative AI is going, and so we're actually seeing the momentum of generative AI accelerating.

Colette Kress: And, Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth and revenue: it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desire of countries around the world to have their own generative AI that would be able to incorporate their own language, their own culture, and their own data in that country. So there is more and more excitement around these models and what they can be, specific to those countries. So yes, we are seeing some growth opportunity in front of us.

Operator: Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.

Joe Moore: Great, thank you. Jensen, in the press release you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong October quarter without Blackwell. So how long do you see sort of coexisting strong demand for both? And can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new
clusters? Just some sense of what that transition looks like.

Jensen Huang: Yeah, thanks, Joe. The demand for Hopper is really strong, and it's true, the demand for Blackwell is incredible. There are a couple of reasons for that. The first reason is, if you just look at the world's cloud service providers and the amount of GPU capacity they have available, it's basically none. And the reason for that is because it's either being deployed internally for accelerating their own workloads, data processing, for example. Data processing: we hardly ever talk about it because it's mundane. It's not very cool because it doesn't generate a picture or generate words, but almost every single company in the world processes data in the background, and NVIDIA GPUs are the only accelerators on the planet that process and accelerate data: SQL data, pandas data, data science toolkits like pandas and the new one, Polars. These are the most popular data processing platforms in the world, and aside from CPUs, which, as I've mentioned before, are really running out of steam, NVIDIA's accelerated computing is really the only way to get boosted performance out of that. So that's number one, the primary, number one use case long before generative AI came along: the migration of applications one after another to accelerated computing. The second is, of course, the rentals. They're renting capacity to model makers; they're renting it to startup companies. And a generative AI company spends the vast majority of its invested capital on infrastructure so that it can use an AI to help create products. And so these companies need it now. They simply can't afford to wait; you just raised money, and they want you to put it to use now. You have processing that you have to do; you can't do it next year, you have to do it today. So there's a fair amount of urgency.
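The pandas and Polars GPU acceleration referred to here is exposed, for example, through RAPIDS cuDF's zero-code-change pandas accelerator mode. A minimal sketch, assuming a CUDA-capable machine with the cudf package installed; the DataFrame contents and column names are invented for illustration:

```python
# RAPIDS cuDF can accelerate existing pandas code with no code changes:
#   python -m cudf.pandas my_script.py     (CLI)
#   %load_ext cudf.pandas                  (Jupyter, before importing pandas)
# The script below is ordinary pandas; under cudf.pandas the same
# groupby-aggregate runs on the GPU, falling back to CPU pandas where needed.
import pandas as pd

df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "revenue": [10.0, 20.0, 30.0, 40.0],
})

# The kind of aggregation that benefits from GPU execution at scale.
totals = df.groupby("region")["revenue"].sum().sort_index()
print(totals.to_dict())  # {'east': 40.0, 'west': 60.0}
```

Because the accelerator intercepts the pandas API itself, the same script runs unmodified on CPU-only machines, which is the migration path the answer describes.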
That's one reason. The second reason for Hopper demand right now is the race to the next plateau. The first to the next plateau gets to introduce a revolutionary level of AI; the second to get there is incrementally better, or about the same. And so the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership. NVIDIA is constantly doing that, and we show that to the world in the GPUs we make, the AI factories that we make, the networking systems that we make, and the SoCs we create. We want to set the pace. We want to be consistently the world's best, and that's the reason why we drive ourselves so hard. Of course, we also want to see our dreams come true, all of the capabilities that we imagine in the future and the benefits that we can bring to society; we want to see all that come true. And these model makers are the same. They, of course, want to be the world's best; they want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks to a month or so away. And so between now and then there is a lot of generative AI market dynamics, and everybody is just really in a hurry. It's either operational reasons that they need it, they need accelerated computing, they don't want to build any more general-purpose computing infrastructure; and even Hopper, of course with H200 the state of the art: if you have a choice between building CPU infrastructure right now for business, or Hopper infrastructure right now for business, that decision is relatively clear. And so I think people are just clamoring to transition the trillion dollars of established installed infrastructure to a
modern infrastructure, with Hopper being state of the art.

And your next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.

Thank you very much; good afternoon, everybody. I wanted to circle back to an earlier question about the debate investors are having about the ROI on all of this capex, and hopefully this question and the distinction will make some sense. What I'm having discussions about is the percentage of folks that you see that are spending all of this money and looking to push the frontier towards AGI convergence and, as you just said, a new plateau in capability, and who are going to spend regardless to get to that level of capability because it opens up so many doors for the industry and for their company, versus customers that are really focused today on capex versus ROI. I don't know if that distinction makes sense; I'm just trying to get a sense of how you're seeing the priorities of the people that are putting the dollars in the ground on this new technology, what their priorities are, and their time frames for that investment. Thanks.

Thanks, Matt. The people who are investing in NVIDIA infrastructure are getting returns on it right away. It's the best-ROI computing-infrastructure investment you can make today. One way to think through it, probably the easiest way, is just to go back to first principles. You have a trillion dollars' worth of general-purpose computing infrastructure, and the question is, do you want to build more of that or not? For every billion dollars' worth of general-purpose, CPU-based infrastructure that you stand up, you probably rent it for less than a billion, because it's commoditized. There's already a trillion dollars on the ground; what's the point of getting more? So for the people who are clamoring to get this infrastructure: one, when they build out Hopper-based infrastructure, and soon Blackwell-based infrastructure, they start saving money. That's tremendous return on investment. The reason they start saving money is that data processing saves money; data processing is already just a giant part of it, and recommender systems save money, and so on and so forth. So you start saving money. The second thing is that everything you stand up is going to get rented, because so many companies are being founded to create generative AI, so your capacity gets rented right away, and the return on investment of that is really good. The third reason is your own business: you want to either create the next frontier yourself, or your own internet services benefit from a next-generation ad system, a next-generation recommender system, or a next-generation search system. For your own services, your own stores, your own user-generated-content social media platforms, generative AI is also a fast ROI. So there are a lot of ways you could think through it, but at the core, it's because it is the best computing infrastructure you can put in the ground today. The world of general-purpose computing is shifting to accelerated computing; the world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with accelerated computing and NVIDIA. That's the best way to do it.

And your next question comes from the line of Timothy Arcuri with UBS. Your line is open.

Thanks a lot. I had a question on the shape of the revenue growth, both near and longer term. I know, Colette, you did increase opex for the year, and if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the
other hand, there's some school of thought that not that many customers really seem ready for liquid cooling, and I do recognize that some of these racks can be air-cooled. But, Jensen, is that something to consider on the shape of how Blackwell is going to ramp? And then, when you look beyond next year, which is obviously going to be a great year, and you look into '26, do you worry about any other gating factors, like, say, the power supply chain, or models starting to get smaller at some point? I'm just wondering if you can speak to that. Thanks.

I'm going to work backwards. I really appreciate the question, Tim. So remember, the world is moving from general-purpose computing to accelerated computing, and the world builds about a trillion dollars' worth of data centers. In a few years, a trillion dollars' worth of data centers will be all accelerated computing. In the past, no GPUs were in data centers, just CPUs; in the future, every single data center will have GPUs, and the reason for that is very clear: we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing we don't experience computing inflation. Second, we need GPUs for a new computing model called generative AI, which we can all acknowledge is going to be quite transformative to the future of computing. So, working backwards, the way to think about that is that the next trillion dollars of the world's infrastructure will clearly be different from the last trillion, and it will be vastly accelerated. With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta, I think it was Volta. We've been shipping the HGX form factor for some time; it is
air-cooled. The Grace Blackwell is liquid-cooled. However, the number of data centers that want to go liquid-cooled is quite significant, and the reason is that in a liquid-cooled data center, in any power-limited data center, whatever size data center you choose, you can install and deploy anywhere from three to five times the AI throughput compared to the past. So liquid cooling is cheaper, liquid-cooling TCO is better, and liquid cooling allows you the benefit of this capability we call NVLink, which lets us expand to 72 Grace Blackwell packages, which is essentially 144 GPUs. So imagine 144 GPUs connected over NVLink. We're increasingly showing you the benefits of that, and the next click is obviously very-low-latency, very-high-throughput large-language-model inference; the large NVLink domain is going to be a game changer for that. So I think people are very comfortable deploying both, and almost every CSP we're working with is deploying some of both, so I'm pretty confident that we'll ramp it up just fine. Your second question out of the three is that, looking forward, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game changer for the industry, and Blackwell is going to carry into the following year. And as I mentioned earlier, working backwards from first principles, remember that computing is going through two platform transitions at the same time, and that's just really important to keep your mind focused on: general-purpose computing is shifting to accelerated computing, and human-engineered software is going to transition to generative AI, or artificial-intelligence-learned, software. Okay.

And your next question comes from the line of Stacy Rasgon
with Bernstein Research. Your line is open.

Hi guys, thanks for taking my questions. I have two short questions for Colette. The first: the several billion dollars of Blackwell revenue in Q4, is that additive? You said you expected Hopper demand to strengthen in the second half. Does that mean Hopper strengthens Q3 to Q4 as well, on top of Blackwell adding several billion dollars? And the second question, on gross margins: if I have mid-70s for the year, depending on where I want to draw that, if I have 75 for the year, I'd be at something like 71 to 72 for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of gross-margin evolution into next year as Blackwell ramps? And, I mean, hopefully, I guess, the yields and the inventory reserves and everything come up.

Yes, let's first take your question about Hopper and Blackwell. We believe our Hopper will continue to grow into the second half. We have many new products for Hopper, and our existing products for Hopper, that we believe will continue to ramp in the coming quarters, including our Q3, with those new products moving to Q4. So let's say Hopper, therefore, versus H1 is a growth opportunity. Additionally, we have Blackwell on top of that, with Blackwell starting to ramp in Q4. So I hope that helps you on those two pieces. Your second piece is in terms of our gross margin. We provided gross margin for our Q3; we provided our gross margin on a non-GAAP basis at about 75. We'll work through all the different transitions that we're going through, but we do believe we can do that 75 in Q3, and we indicated that we're still on track for the full year, also in the mid-70s, or approximately 75. So we're going to see some slight difference, possibly, in Q4, again with our transitions and the different cost structures that we have on our new product introductions. However, I'm
not at the same number that you are. We don't have exact guidance there, but I do believe you're lower than where we are.

And your next question comes from the line of Ben Reitzes with Melius. Your line is open.

Yeah, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10-Q that came out, and the United States was down sequentially while several Asian geographies were up a lot sequentially. I'm just wondering what the dynamics are there. Obviously, China did very well; you mentioned it in your remarks. What are the puts and takes? And then I just wanted to clarify, from Stacy's question, whether that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter, given all those favorable revenue dynamics. Thanks.

Let me talk a bit about our disclosure in the 10-Q, a required disclosure with a choice of geographies. It is very challenging sometimes to create the right disclosure. One key piece is who we sell to and/or, specifically, who we invoice to, and what you're seeing there is who we invoice. That's not necessarily where the product will eventually end up, or where it may even travel on its way to the end customer. These are, for the most part, just moving to our OEMs, our ODMs, and our system integrators across our product portfolio. So what you're seeing there is sometimes just a swift shift in terms of whom they are using to complete their full configuration before those things go into the data center, go into notebooks, and those pieces of it, and that shift happens from time to time. But yes, our China number there is invoicing to China; keep in mind that it incorporates gaming, data center, and also automotive in those numbers. Going back to your question regarding gross margin, and also what we're
looking at for Hopper and Blackwell in terms of revenue: Hopper will continue to grow in the second half, and it will continue to grow from what we are currently seeing. We are not here to guide yet on the exact mix in each of Q3 and Q4, but we do see right now, in the demand expectations, the visibility that it will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture.

And your next question comes from the line of CJ Muse with Cantor Fitzgerald. Your line is open.

Yeah, good afternoon; thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges only likely becoming greater given rising complexity in a reticle-limited, advanced-packaging world. So I'm curious: if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration and supply-chain partnerships, and then, thinking through it, the consequential impact to your margin profile? Thank you.

Yeah, thanks. The answer to your first question is that the reason our velocity is so high is that, simultaneously, the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to continue to increase its scale, and we believe that by continuing to scale AI models we will reach a level of extraordinary usefulness and open up what I would call the next industrial revolution. We believe it, and so we're going to drive ourselves really hard to continue to go up that scale. We have the ability, fairly uniquely, to design an AI factory, because we have all the parts. It's not possible to come up with a new AI factory every year unless you have all the parts. So next year we're going to ship a lot more
CPUs than we've ever shipped in the history of our company, more GPUs, of course, but also NVLink switches, ConnectX DPUs for east-west traffic, BlueField DPUs for north-south and data and storage processing, InfiniBand for supercomputing centers, and Ethernet, which is a brand-new product for us that is well on its way to becoming a multi-billion-dollar business, bringing AI to Ethernet. The fact that we have access to all of this, that we have one architectural stack, as you know, allows us to introduce new capabilities to the market as we complete them. Otherwise, what happens? You ship these parts, you go find customers to sell them to, and then you've got to build up an AI factory, and the AI factory has a mountain of software. So it's not about who integrates it. We love the fact that our supply chain is disintegrated, in the sense that we can service Quanta, Foxconn, HP, Dell, Lenovo, Supermicro (we used to be able to service ZT; they were recently purchased), and so on and so forth, Gigabyte, ASUS. The number of ecosystem partners that we have allows them to take our architecture, which all works, and integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers. The scale and reach necessary from our ODM and integrator supply chain is vast and gigantic, because the world is huge. So that part we don't want to do, and we're not good at doing, but we know how to design the AI infrastructure, provide it the way that customers would like it, and let the ecosystem integrate it well. So, anyway, that's the reason why.

And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Yes, thanks for taking the question. I wanted to go back into the
Blackwell product cycle. One of the questions we tend to get asked is how you see the rack-scale system mix dynamic, as you think about leveraging NVLink and about GB200 NVL72, and how that go-to-market dynamic looks for the Blackwell product cycle. Put distinctly: how do you see the mix of rack-scale systems as we start to think about the Blackwell cycle playing out?

Yeah, Aaron, thanks. The Blackwell rack system is designed and architected as a rack, but it's sold in disaggregated system components. We don't sell the whole rack, and the reason for that is that everybody's rack is a little different, surprisingly. Some of them are OCP standards, some of them are not, some of them are enterprise, and the power limits for everybody could be a little different: the choice of CDUs, the choice of power bus bars, the configuration and integration into people's data centers, all different. So the way we designed it, we architected the whole rack, the software is going to work perfectly across the whole rack, and then we provide the system components. For example, the CPU-and-GPU compute board is integrated into MGX, a modular system architecture. MGX is completely ingenious, and we have MGX ODMs and integrators and OEMs all over the planet. So just about any configuration you would like, wherever you would like that 3,000-pound rack to be delivered, it has to be integrated and assembled close to the data center, because it's fairly heavy. So everything in the supply chain from the moment we ship the GPUs, the CPUs, the switches, the NICs, from that point forward, the integration is done quite close to the locations of the CSPs and the locations of the data centers. And you can imagine how many data centers there are in the world and how many logistics
hubs we've scaled out to with our ODM partners. So I think that because we show it as one rack, and because it's always rendered that way and shown that way, we might have left the impression that we're doing the integration. Our customers hate that we do integration; the supply chain hates us doing integration. They want to do the integration; that's their value-add. There's a final design-in, if you will. It's not quite as simple as shimmying into a data center; that design fit-in is really complicated. So the design fit-in, the installation, the bring-up, the repair-and-replace, that entire cycle is done all over the world, and we have a sprawling network of ODM and OEM partners that does this incredibly well. So integration is not the reason we're doing racks; it's the anti-reason for doing it. We don't want to be an integrator; we want to be a technology provider.

And I will now turn the call back over to Jensen Huang for closing remarks.

Thank you. Let me make a couple of comments, repeating some that I made earlier. Data centers worldwide are going full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible. Let me highlight the top five things of our company. First, accelerated computing has reached the tipping point. CPU scaling slows, so developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries, and new libraries open new markets for NVIDIA. We released many new libraries, including CUDA-accelerated Polars, Pandas, and Spark, the leading data science and data processing libraries; cuVS for vector databases, which is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; Parabricks for gene sequencing; and
AlphaFold2 for protein-structure prediction, which is now CUDA-accelerated. We are at the beginning of our journey to modernize a trillion dollars' worth of data centers from general-purpose computing to accelerated computing. That's number one. Number two: Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU; it also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's leap becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS package; the ConnectX DPU for east-west traffic; the BlueField DPU for north-south and storage traffic; the NVLink switch for all-to-all GPU communications; and Quantum and Spectrum-X, so that both InfiniBand and Ethernet can support the massive burst traffic of AI. Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full-stack, end-to-end, from chips, systems, and networking, even structured cables, power, and cooling, and mountains of software, to make it fast for customers to build AI factories. These are very capital-intensive infrastructures; customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides three to five times more AI throughput in a power-limited data center than Hopper. The third is NVLink. This is a very big deal, with its all-to-all GPU switch, which is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. To put that in perspective, that's about 10 times higher than Hopper. 259 terabytes per second kind of makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of tokens, and so that amount of
data needs to be moved around from GPU to GPU. For inference, NVLink is vital for low-latency, high-throughput large-language-model token generation. We now have three networking platforms: NVLink for GPU scale-up; Quantum InfiniBand for supercomputing and dedicated AI factories; and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before. Generative AI momentum is accelerating. Generative AI frontier-model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities, from text, images, and video to 3D, physics, chemistry, and biology. Chatbots, coding AIs, and image generators are growing fast, but that's just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems. AI startups are consuming tens of billions of dollars of CSPs' cloud capacity yearly, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI: robotics. And now the enterprise AI wave has started, and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIMs, NIM Agent Blueprints, and AI Foundry, which our ecosystem partners, the world's leading IT companies, use to help companies customize AI models and build bespoke AI applications. Enterprises can then deploy on the NVIDIA AI Enterprise runtime, and at $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere. NVIDIA's software TAM can be significant as the CUDA-compatible GPU install base grows from millions to tens of millions, and as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.

And, ladies and gentlemen, this concludes today's conference call.
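The gross-margin exchange with Stacy Rasgon above is a blended-margin back-of-envelope: if the full year lands at roughly 75% non-GAAP and Q3 is guided at about 75%, the implied Q4 margin falls into the low 70s. A minimal sketch of that arithmetic follows. Only the Q3 margin and the full-year target come from the call; the quarterly revenue figures and the Q1/Q2 margins are illustrative assumptions, and the Q4 revenue in particular is a hypothetical placeholder.

```python
# Back-of-envelope check of the implied Q4 gross margin.
# From the call: Q3 non-GAAP gross margin guided ~75%, full year ~mid-70s (~75%).
# Revenue figures and Q1/Q2 margins below are ILLUSTRATIVE assumptions, not guidance.
q_revenue = {"Q1": 26.0, "Q2": 30.0, "Q3": 32.5, "Q4": 35.0}  # $B; Q4 is assumed
q_margin = {"Q1": 0.789, "Q2": 0.757, "Q3": 0.750}            # Q3 from guidance

full_year_target = 0.75  # "approximately the 75" for the full year

# Gross profit implied for Q1-Q3 under the margins above
booked_profit = sum(q_revenue[q] * q_margin[q] for q in ("Q1", "Q2", "Q3"))
total_revenue = sum(q_revenue.values())

# Solve (booked_profit + q4_margin * q4_revenue) / total_revenue = full_year_target
q4_margin = (full_year_target * total_revenue - booked_profit) / q_revenue["Q4"]
print(f"Implied Q4 gross margin: {q4_margin:.1%}")  # prints "Implied Q4 gross margin: 71.5%"
```

With these assumed revenues the implied Q4 margin comes out near the 71-72% range Stacy cites; different revenue assumptions shift the result, which is exactly why Colette declines to confirm a specific Q4 number.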
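The NVLink figures in the closing remarks are internally consistent and can be checked with a quick sketch. The per-GPU NVLink bandwidth of 1.8 TB/s is an assumption here (it is NVIDIA's published fifth-generation NVLink figure, but it is not stated on the call); the package and GPU counts come from the remarks themselves.

```python
# Sanity check of the NVL72 figures quoted in the closing remarks.
# Assumption (not stated on the call): each Blackwell GPU exposes 1.8 TB/s
# of NVLink bandwidth, NVIDIA's published fifth-generation NVLink figure.
packages = 72                 # GB200 packages per NVLink domain, per the remarks
gpus_per_package = 2          # each GB200 pairs two Blackwell GPUs with one Grace CPU
nvlink_bw_per_gpu_tbps = 1.8  # TB/s per GPU, assumed

total_gpus = packages * gpus_per_package
aggregate_bw = total_gpus * nvlink_bw_per_gpu_tbps

print(total_gpus)                 # prints 144, matching "essentially 144 GPUs"
print(f"{aggregate_bw:.1f}")      # prints 259.2, matching "259 terabytes per second"
```

The same arithmetic makes the "about 10 times higher than Hopper" comparison plausible: an 8-GPU HGX Hopper NVLink domain is a far smaller all-to-all fabric than a 144-GPU rack-scale domain.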
