Published: Aug 27, 2024
Duration: 01:25:00
Category: News & Politics
My boy Benji in the building — what's going on, Benji? We're just waiting for this earnings call; we should be connecting to the stream in the next two minutes, so just bear with me. Guys, drop your thoughts — what are we expecting to hear? One minute left. Okay, let's see if they'll let us connect to the call just yet. Bottom right corner, guys, it's in the bottom right corner — that's the earnings report. We're just going to watch how the market reacts to it. We've got somebody from Germany — shout out to you out there in Germany. Don't forget to share the link out, guys: your favorite Reddit group, your favorite Facebook group, Twitter, Twitch, you name it. I bought the bottom in NVIDIA — let me know the current share price. We've got 26 watchers, guys, and five likes; can we please get those likes to add up? We're getting a lot of orders here so far — let's move this so we can see those orders flowing through. Oh yeah, I might have to take a trip out there. The wife's looking for a place to vacation for about two months. Earnings come out in 8 minutes, guys; we should get a speaker here in 8 minutes. Shout out to those coming into the chat. Don't forget to smash that like button, guys — we're trying to get this thing to 100 likes, and with the help of the audience we can easily achieve that. It's only five more minutes before the earnings release; you should get the audio on that, and we should see a huge reaction here across the board and the market. Any other stocks you guys holding besides NVIDIA? Let me know in the comment section below and the live chat as well. Anybody trading still holding AMC or GME? That's something to talk about.
Right — now just four minutes to go. Benji says he's holding FFIE and MULN, two EV stocks, huh? Anybody else holding any EVs? Good afternoon. My name is Abby and I will be your conference operator today. At this time I would like to welcome everyone to NVIDIA's second quarter earnings call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks there will be a question and answer session. If you would like to ask a question during that time, simply press the star key followed by the number one on your telephone keypad. If you would like to withdraw your question, press star one a second time. (Let's get this thing up to 20 likes, guys.) Mr. Stewart Stecker, you may begin your conference. Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property; it cannot be reproduced or transcribed without prior written consent. During this call we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28, 2024, based on information currently available to us. Except as required by law, we
assume no obligation to update any such statements. During this call we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community: we will be attending the Goldman Sachs Communacopia and Technology Conference on September 11th in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20th, 2024. With that, let me turn the call over to Colette. Thanks, Stuart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year on year, and well above our outlook of $28 billion. Starting with data center: data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper GPU computing and our networking platforms. Compute revenue grew more than 2.5x, and networking revenue grew more than 2x, from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing as well. Next-generation models will require 10 to 20 times more compute to train with significantly more data, and the trend is expected to continue. Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer internet companies and enterprises benefit from the incredible throughput and efficiency of
NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise, healthcare and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity, given the high demand from large CSPs, consumer internet and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our data center revenue in China grew sequentially in Q2 and was a significant contributor to our data center revenue; as a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based system designs quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink, NVLink Switch and the networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries and countries. The NVIDIA GB200 NVL72 system, with fifth-generation NVLink, enables all 72 GPUs to act as a single GPU and delivers up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is widely sampling. We
executed a change to the Blackwell GPU mask to improve production yields. The Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year 2026. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multi-billion-dollar product line within a year. Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year. The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new monetizable business applications and
enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, which serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI and Omniverse to reduce end-to-end cycle times for their factories by 50%. Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption, and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion-dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing and drug discovery. During the quarter, we announced a new NVIDIA AI Foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI: companies, for the first time, can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service, building custom Llama 3.1 models both for its own use and to assist clients seeking to deploy generative AI applications. NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation and telecommunications are adopting NIMs, including Aramco, Lowe's and Uber.
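As a quick sanity check on the headline figures quoted on the call, the implied prior-period bases can be back-computed from the stated growth rates (a side calculation, not part of the call itself; results are approximate because the growth rates are rounded):

```python
# Implied prior-period revenue from the growth rates stated by the CFO.
# All figures in billions of USD.

q2_total = 30.0   # Q2 FY2025 total revenue, up 15% sequentially, 122% y/y
q2_dc = 26.3      # Q2 FY2025 data center revenue, up 16% seq., 154% y/y
q3_guide = 32.5   # Q3 FY2025 revenue outlook, plus or minus 2%

implied_q1_total = q2_total / 1.15       # implied Q1 FY2025 total revenue
implied_yr_ago_total = q2_total / 2.22   # implied Q2 FY2024 total revenue
implied_q1_dc = q2_dc / 1.16             # implied Q1 FY2025 data center revenue

guide_low, guide_high = q3_guide * 0.98, q3_guide * 1.02

print(f"Implied Q1 total revenue:      {implied_q1_total:.1f}B")
print(f"Implied year-ago total revenue: {implied_yr_ago_total:.1f}B")
print(f"Implied Q1 data center revenue: {implied_q1_dc:.1f}B")
print(f"Q3 guidance range: {guide_low:.2f}B to {guide_high:.2f}B")
```

The implied bases (roughly $26.1B total and $22.7B data center for the prior quarter, $13.5B a year ago) are consistent with the sequential and year-on-year percentages being quoted against the same reported history.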
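Later in the Q&A, Jensen notes that doubling a model while more than doubling its training data makes the required training FLOPs grow roughly quadratically. A widely used rule of thumb for dense transformer training (an assumption here — the call never states a formula) is compute C ≈ 6·N·D for N parameters and D training tokens, which makes that quadratic growth easy to see:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer:
    roughly 6 FLOPs per parameter per token (forward + backward pass)."""
    return 6.0 * n_params * n_tokens

# Hypothetical model sizes, for illustration only.
base = training_flops(1e12, 1e13)     # 1T params on 10T tokens
bigger = training_flops(2e12, 2e13)   # double both model and data

# Doubling both parameters and tokens quadruples the compute.
print(f"{bigger / base:.0f}x more compute")
```

Under this rule, scaling to "10 to 20 times more compute" per generation, as mentioned in the prepared remarks, corresponds to roughly 3–4.5x growth in both model and data size.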
AT&T realized 70% cost savings and an 8x latency reduction after moving to NVIDIA NIMs for generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval-augmented generation. Our system integrators, technology solution providers and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth. Moving to gaming and AI PCs: gaming revenue of $2.88 billion increased 9% sequentially and 16% year on year. We saw sequential growth in console, notebook and desktop revenue, and demand is strong and growing; channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Minitron-4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow. Recently added RTX and DLSS titles include Indiana Jones and the Great Circle, Dune: Awakening, and Dragon
Age: The Veilguard. The GeForce NOW library continues to expand, with a total catalog size of over 2,000 titles — the most content of any cloud gaming service. Moving to pro visualization: revenue of $454 million was up 6% sequentially and 20% year on year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems, and several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workloads, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipelines for customers such as The Coca-Cola Company. Moving to automotive and robotics: revenue was $346 million, up 5% sequentially and up 37% year on year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids and mobile robots. Now moving to the rest of the P&L: GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down
sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion. In Q2 we utilized cash of $7.4 billion toward shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our board of directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2. Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect the Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points, as our data center mix continues to shift to new products. We expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions. Operator, would you please help us poll for questions? Thank you. At this time, I would like to remind everyone, in order to ask a
question, press star and then the number one on your telephone keypad. We will pause for just a moment to compile the Q&A roster, and as a reminder, we ask that you please limit yourself to one question. (Yeah guys, don't forget to smash that like button.) Your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open. Thanks for taking my question. Jensen, you mentioned in the prepared comments that there's a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And, I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite a change in the design. Is it because all these issues will be solved by then? Just help us size what the overall impact of any changes in Blackwell timing is, what that means to your revenue profile, and how customers are reacting to it. Yeah, thanks, Vivek. The change to the mask is complete. There were no functional changes necessary, and so we're sampling functional samples of Blackwell — of Grace Blackwell — in a variety of system configurations as we speak. There are something like a hundred different types of Blackwell-based systems that were built and shown at Computex, and we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4. Your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open. Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers' and your customers' customers' return on investment, and what that means for the sustainability of capex going forward. Internally at NVIDIA, what are you guys watching — what's on your dashboard — as you try to gauge customer return and how that impacts capex? And then a
quick follow-up, maybe for Colette: I think your sovereign AI number for the full year went up, maybe a couple billion. What's driving the improved outlook, and how should we think about fiscal '26? Thank you. Thanks, Toshiya. First of all, when I said production shipping in Q4, I mean shipping out — I don't mean starting production, but shipping out. On the longer-term question, let's take a step back. You've heard me say that we're going through two simultaneous platform transitions at the same time. The first one is transitioning from general-purpose computing to accelerated computing, and the reason for that is because CPU scaling has been known to be slowing for some time, and it has slowed to a crawl. And yet the amount of computing demand continues to grow quite significantly — you could maybe even estimate it to be doubling every single year. And so if we don't have a new approach, computing inflation would be driving up the cost for every company, and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale — for example, scientific simulations or database processing — but what that translates directly to is lower cost and lower energy consumed. In fact, this week a blog came out that talked about a whole bunch of new libraries that we offer. And that's really the core of the first platform transition: going from general-purpose computing to accelerated computing. It's not unusual to see someone save 90% of their computing cost, and the reason for that is, of course, you just sped up an application 50x — you would expect the computing cost to decline quite significantly. The second was enabled by
accelerated computing, because we drove down the cost of training large language models — of training deep learning — so incredibly that it is now possible to have gigantic-scale models, multi-trillion-parameter models, and pre-train them on just about the world's knowledge corpus, and let the model go figure out how to understand human language representation, how to codify knowledge into its neural network, and how to learn reasoning — which caused the generative AI revolution. Now, taking a step back about why we went so deeply into generative AI: it's because it's not just a feature, it's not just a capability — it's a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are, what our previous observations are, and then let it figure out what the algorithm is, what the function is. AI is a bit of a universal function approximator, and it learns the function. And so you could learn the function of almost anything — anything that's predictable, anything that has structure, anything that you have previous examples of. So now here we are with generative AI. It's a fundamental new form of computer science. It's affecting how every layer of computing is done, from CPU to GPU, from human-engineered algorithms to machine-learned algorithms, and the type of applications you could now develop and produce is fundamentally remarkable. And there are several things happening in generative AI. The first thing that's happening is that the frontier models are growing in quite substantial scale, and we're still all seeing the benefits of scaling. Whenever you double the size of a model, you also have to more than double the size of the data set to
go train it, and so the amount of FLOPs necessary in order to create that model goes up quadratically. So it's not unexpected that the next-generation models could take 10, 20, 40 times more compute than the last generation. So we have to continue to drive the generational performance up quite significantly, so we can drive down the energy consumed and drive down the cost necessary to do it. So the first one is: there are larger frontier models trained on more modalities, and surprisingly, there are more frontier model makers than last year. That's one of the dynamics going on in generative AI. The second is that what we see is just the tip of the iceberg: ChatGPT, image generators, coding — we use generative AI for coding quite extensively here at NVIDIA, and we of course have a lot of digital designers and things like that — but those are kind of the tip of the iceberg. What's below the iceberg are the largest computing systems in the world today, which — and you've heard me talk about this in the past — are recommender systems, now moving from CPUs to generative AI. So recommender systems, ad generation — custom ad generation, targeting ads at very large scale and quite hyper-targeted — search, and user-generated content: these are all very large-scale applications that have now evolved to generative AI. Of course, the number of generative AI startups is generating tens of billions of dollars of cloud-renting opportunities for our cloud partners. And sovereign AI: countries are now realizing that their data is their natural and national resource, and they have to use AI to build their own AI infrastructure so that they can have their own digital intelligence. Enterprise AI, as Colette mentioned earlier, is starting, and you
might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA AI Enterprise platform to the world's enterprises. The companies that we're talking to — so many of them are just so incredibly excited to drive more productivity out of their companies. And then general robotics: the big transformation over the last year is that we are now able to learn physical AI from watching video and human demonstration, and from synthetic data generation and reinforcement learning from systems like Omniverse. We are now able to work with just about every robotics company to start building general robotics. So you can see that there are just so many different directions that generative AI is going, and so we're actually seeing the momentum of generative AI accelerating. And Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth and revenue: it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desires of countries around the world to have their own generative AI that would be able to incorporate their own language, their own culture, and their own data in that country. So there is more and more excitement around these models and what they can be, specifically for those countries. So yes, we are seeing some growth opportunity in front of us. Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open. Great, thank you. Jensen, in the press release you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October. So how long do you see strong demand for both coexisting, and can you talk about the transition to Blackwell? Do you see people
intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like. Yeah, thanks, Joe. The demand for Hopper is really strong, and it's true, the demand for Blackwell is incredible. There are a couple of reasons for that. The first reason is, if you just look at the world's cloud service providers and the amount of GPU capacity they have available, it's basically none. And the reason for that is because they're either being deployed internally for accelerating their own workloads — data processing, for example. Data processing: we hardly ever talk about it because it's mundane — it's not very cool because it doesn't generate a picture or generate words — but almost every single company in the world processes data in the background, and NVIDIA GPUs are the only accelerators on the planet that process and accelerate data: SQL data, pandas data, data science toolkits like pandas and the new one, Polars. These are the most popular data processing platforms in the world, and aside from CPUs, which, as I've mentioned before, are really running out of steam, NVIDIA's accelerated computing is really the only way to get boosted performance out of that. So that's number one — the primary, number-one use case, long before generative AI came along: the migration of applications, one after another, to accelerated computing. The second is, of course, the rentals. They're renting capacity to model makers, renting it to startup companies. And a generative AI company spends the vast majority of their invested capital on infrastructure so that they can use AI to help them create products. And so these companies need it now. They simply can't afford to wait — you just raised money, they want you to put it to use now. You have processing that you have to do; you can't do it next
year, you've got to do it today. So that's one reason. The second reason for Hopper demand right now is because of the race to the next plateau. The first person to the next plateau gets to introduce a revolutionary level of AI; the second person who gets there is incrementally better, or about the same. And so the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership. NVIDIA is constantly doing that, and we show that to the world in the GPUs we make, the AI factories that we make, the networking systems that we make, the SoCs we create. We want to set the pace. We want to be consistently the world's best, and that's the reason why we drive ourselves so hard. Of course, we also want to see our dreams come true — all of the capabilities that we imagine in the future and the benefits that we can bring to society, we want to see all that come true. And these model makers are the same: of course, they want to be the world's best, they want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks to a month or so away. And so between now and then there is a lot of generative AI market dynamics. So everybody is just really in a hurry. It's either operational reasons that they need it — they need accelerated computing, they don't want to build any more general-purpose computing infrastructure — and even Hopper: of course, the H200 is state-of-the-art Hopper. If you have a choice between building CPU infrastructure right now for business, or Hopper infrastructure for business right now, that decision is relatively clear. And so I think people are just clamoring to transition the trillion dollars of
uh uh established installed infrastructure to a modern infrastructure in Hopper state of the art and your next question comes from the line of Matt Ramsey with TD Cowan your line is open um thank you very much good afternoon everybody um I wanted to kind of circle back to an earlier question about uh the debate that investors are having about I don't know the ROI on all of this capex and hopefully this question and the distinction will make some some sense but what I'm what I'm having discussions about is is with like the percentage of folks that you see that are spending all of this money um and looking to sort of push the frontier towards um AGI convergence and as you just said a new plateau and capability um and they're going to spend regardless to get to that level of capability because it opens up so many doors for for um the industry and for their company versus customers that are really really focused today on capex versus Roi I don't know if that distinction makes sense I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars in the ground on on this new technology and and what their priorities are and and their time frames are for that investment thanks thanks man the people who are investing in uh Nvidia infrastructure are getting Returns on it right away it's the best Roi uh infrastructure Computing infrastructure investment you can make today and so so one way to think through it you know probably the most the easiest way to think through it is just go back to First principles you have a trillion dollars worth of general purpose Computing infrastructure and the question is do you want to build more of that or not and for every billion dollars worth of General CPU based infrastructure uh that you stand up you probably rent it for less than a billion and so um because it's it's commoditized there's already a trillion dollars on the ground what's the point of getting more and so so the the people who are who 
are clamoring to get this infrastructure, one, when they build out Hopper-based infrastructure, and soon Blackwell-based infrastructure, they start saving money. That's tremendous return on investment. The reason they start saving money is that data processing saves money, and data processing is already just a giant part of it; recommender systems save money, and so on and so forth. The second thing is that everything you stand up is going to get rented, because so many companies are being founded to create generative AI. Your capacity gets rented right away, and the return on investment of that is really good. The third reason is your own business: you want to either create the next frontier yourself, or your own internet services benefit from a next-generation ad system, a next-generation recommender system, or a next-generation search system. For your own services, your own stores, your own user-generated-content social media platforms, generative AI is also a fast ROI. There are a lot of ways you can think through it, but at the core it's because it is the best computing infrastructure you can put in the ground today. The world of general-purpose computing is shifting to accelerated computing, and the world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with accelerated computing and NVIDIA; that's the best way to do it.

And your next question comes from the line of Timothy Arcuri with UBS. Your line is open.

Thanks a lot. I had a question on the shape of the revenue growth, both near and longer term. I know, Colette, you did increase opex for the year, and if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling, and I do recognize that some of these racks can be air-cooled. But, Jensen, is that something to consider on the shape of how Blackwell is going to ramp? And then, when you look beyond next year, which is obviously going to be a great year, and you look into '26, do you worry about any other gating factors, like, say, the power supply chain, or models at some point starting to get smaller? I'm just wondering if you can speak to that. Thanks.

I'm going to work backwards; I really appreciate the question, Tim. So remember, the world is moving from general-purpose computing to accelerated computing, and the world builds about a trillion dollars' worth of data centers. That trillion dollars' worth of data centers, in a few years, will be all accelerated computing. In the past, no GPUs were in data centers, just CPUs; in the future, every single data center will have GPUs. The reason for that is very clear: we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing we don't experience computing inflation. Second, we need GPUs for a new computing model called generative AI, which we can all acknowledge is going to be quite transformative to the future of computing. So, working backwards, the way to think about it is that the next trillion dollars of the world's infrastructure will clearly be different from the last trillion, and it will be vastly accelerated. With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta (I think it was Volta), and we've been shipping the HGX
form factor for some time. It is air-cooled. The Grace Blackwell is liquid-cooled; however, the number of data centers that want to go liquid-cooled is quite significant, and the reason is that in a liquid-cooled data center, in any power-limited data center, whatever size data center you choose, you can install and deploy anywhere from three to five times the AI throughput compared to the past. So liquid cooling is cheaper, liquid cooling's TCO is better, and liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand to 72 Grace Blackwell packages, which is essentially 144 GPUs. Imagine 144 GPUs connected in NVLink. We're increasingly showing you the benefits of that, and the next click is obviously very-low-latency, very-high-throughput large-language-model inference, and the large NVLink domain is going to be a game-changer for that. So I think people are very comfortable deploying both, and almost every CSP we're working with is deploying some of both, so I'm pretty confident that we'll ramp it up just fine.

Your second question out of the three is that, looking forward, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game-changer for the industry, and Blackwell is going to carry into the following year. As I mentioned earlier, working backwards from first principles, remember that computing is going through two platform transitions at the same time, and that's really important to keep your mind focused on: general-purpose computing is shifting to accelerated computing, and human-engineered software is going to transition to generative AI, or artificial-intelligence-learned, software.

And your next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.

Hi, guys. Thanks for taking my questions. I have two short questions for Colette. The first: the several billion dollars of Blackwell revenue in Q4, is that additive? You said you expected Hopper demand to strengthen in the second half; does that mean Hopper strengthens Q3 to Q4 as well, on top of Blackwell adding several billion dollars? And the second question, on gross margins: if I have mid-70s for the year, say 75 for the year, I'd be at something like 71 to 72 for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of gross margin evolution into next year as Blackwell ramps? And, I mean, hopefully, I guess, the yields and the inventory reserves and everything come up.

Yes. So, Stacy, let's first take your question about Hopper and Blackwell. We believe our Hopper will continue to grow into the second half. We have many new products for Hopper, and our existing products for Hopper, that we believe will continue to ramp in the next quarters, including Q3, with those new products moving to Q4. So let's say Hopper, therefore, versus H1 is a growth opportunity. Additionally, we have Blackwell on top of that, with Blackwell starting to ramp in Q4. I hope that helps you on those two pieces.

Your second piece, in terms of our gross margin: for Q3, we provided our gross margin on a non-GAAP basis at about 75. We'll work through all the different transitions that we're going through, but we do believe we can do that 75 in Q3, and we indicated that we're still on track for the full year, also in the mid-70s, or approximately 75. So we're going to see some slight difference, possibly, in Q4, again with our transitions and the different cost structures that we have on our
new product introductions. However, I'm not at the same number that you are. We don't have exact guidance there, but I do believe you're lower than where we are.

And your next question comes from the line of Ben Reitzes with Melius. Your line is open.

Yeah, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. The 10-Q came out, and the United States was down sequentially while several Asian geographies were up a lot sequentially; I'm just wondering what the dynamics are there. Obviously, China did very well; you mentioned it in your remarks. What are the puts and takes? And then I just wanted to clarify, from Stacy's question, whether that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter, given all those favorable revenue dynamics. Thanks.

Let me talk a bit about our disclosure in terms of the 10-Q, a required disclosure, and the choice of geographies. It's very challenging sometimes to create the right disclosure. One key piece is who we sell to and, more specifically, who we invoice. What you're seeing there is who we invoice; that's not necessarily where the product will eventually end up, or where it may even travel to the end customer. These products are mostly moving to our OEMs, ODMs, and system integrators across our product portfolio. So what you're seeing there is sometimes just a swift shift in terms of whom they are using to complete their full configuration, before those things go into the data center, into notebooks, and those pieces of it, and that shift happens from time to time. But yes, our China number there is invoicing into China; keep in mind that it incorporates gaming, data center, and automotive in those numbers.

Going back to your statement regarding gross margin, and also what we're seeing for Hopper and Blackwell in terms of revenue: Hopper will continue to grow in the second half, and we'll continue to grow from what we are currently seeing. We don't have the exact mix in Q3 and Q4 here, and we are not here to guide yet on Q4, but we do see right now, in the demand expectations, the visibility that it will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture.

And your next question comes from the line of CJ Muse with Cantor Fitzgerald. Your line is open.

Yeah, good afternoon. Thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges only likely to become greater given the rising complexity in a reticle-limit, advanced-packaging world. I'm curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration and supply-chain partnerships, and then, thinking it through, the consequential impact on your margin profile? Thank you.

Yeah, thanks. Let's see. The answer to your first question is that the reason our velocity is so high is simultaneously because the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to continue to increase its scale, and we believe that by continuing to scale the AI models, we will reach a level of extraordinary usefulness and open up, realize, the next industrial revolution. We believe it, and so we're going to drive ourselves really hard to continue to go up that scale. We have the ability, fairly uniquely, to integrate, to design an AI factory, because we have all the parts. It's not possible to come up with a new AI factory every year unless you have all the parts. And so,
next year we're going to ship a lot more CPUs than we've ever had in the history of our company, more GPUs, of course, but also NVLink switches, ConnectX DPUs for east-west, BlueField DPUs for north-south and data and storage processing, InfiniBand for supercomputing centers, and Ethernet, which is a brand-new product for us that is well on its way to becoming a multi-billion-dollar business, to bring AI to Ethernet. The fact that we have access to all of this, and that we have one architectural stack, as you know, allows us to introduce new capabilities to the market as we complete them. Otherwise, what happens? You ship these parts, you go find customers to sell them to, and then somebody's got to build up an AI factory, and the AI factory has a mountain of software. So it's not about who integrates it. We love the fact that our supply chain is disintegrated, in the sense that we can service Quanta, Foxconn, HP, Dell, Lenovo, Supermicro. We used to be able to serve ZT; they were recently purchased. And so on and so forth: Gigabyte, ASUS. The number of ecosystem partners that we have allows them to take our architecture, which all works, but integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers. The scale and reach necessary from our ODM and integrator supply chain is vast and gigantic, because the world is huge. That part we don't want to do, and we're not good at doing it; but we know how to design the AI infrastructure, provide it the way customers would like it, and let the ecosystem integrate it. Well, yeah, so anyway, that's the reason why.

And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Yes, thanks for taking the question. I wanted to go back into the Blackwell product cycle. One of the questions that we tend to get asked is how you see the rack-scale system mix dynamic, as you think about leveraging NVLink, as you think about GB200 NVL72, and how that go-to-market dynamic looks as far as the Blackwell product cycle. I guess, put distinctly, how do you see that mix of rack-scale systems as we start to think about the Blackwell cycle playing out?

Yeah, Aaron, thanks. The Blackwell rack system is designed and architected as a rack, but it's sold in disaggregated system components. We don't sell the whole rack, and the reason for that is that everybody's rack is a little different. Surprisingly, some of them are OCP standard, some of them are not, some of them are enterprise, and the power limits for everybody could be a little different: the choice of CDUs, the choice of power bus bars, the configuration and integration into people's data centers, all different. So the way we designed it, we architected the whole rack, and the software is going to work perfectly across the whole rack, and then we provide the system components. For example, the CPU-and-GPU compute board is integrated into MGX, a modular system architecture. MGX is completely ingenious, and we have MGX ODMs and integrators and OEMs all over the planet. So just about any configuration you would like, wherever you would like that 3,000-pound rack to be delivered, it has to be integrated and assembled close to the data center, because it's fairly heavy. From the moment that we ship the GPUs, CPUs, switches, and NICs, from that point forward, the integration is done quite close to the locations of the CSPs and the locations of the data centers. You can imagine how many data centers in
the world there are, and how many logistics hubs we've scaled out to with our ODM partners. I think that because we show it as one rack, and because it's always rendered that way and shown that way, we might have left the impression that we're doing the integration. Our customers hate that we do integration; the supply chain hates us doing integration. They want to do the integration; that's their value added. There's a final design-in, if you will. It's not quite as simple as shimmying it into a data center; that design fit-in is really complicated. So the design fit-in, the installation, the bring-up, the repair-and-replace, that entire cycle is done all over the world, and we have a sprawling network of ODM and OEM partners that does this incredibly well. So integration is not the reason we're doing racks; it's the anti-reason of doing it. We don't want to be an integrator; we want to be a technology provider.

And I will now turn the call back over to Jensen Huang for closing remarks.

Thank you. Let me make a couple of the comments that I made earlier again. Data centers worldwide are at full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible. Let me highlight the top five things of our company.

Accelerated computing has reached the tipping point. CPU scaling slows; developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries, and new libraries open new markets for NVIDIA. We released many new libraries, including CUDA-accelerated Polars, pandas, and Spark, the leading data science and data processing libraries; cuVS for vector databases, which is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; Parabricks for gene sequencing; and AlphaFold2 for protein structure prediction, which is now CUDA-accelerated. We are at the beginning of our journey to modernize a trillion dollars' worth of data centers from general-purpose computing to accelerated computing. That's number one.

Number two: Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. Blackwell also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's leap becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS package; the ConnectX DPU for east-west traffic; the BlueField DPU for north-south and storage traffic; the NVLink switch for all-to-all GPU communications; and Quantum and Spectrum-X for InfiniBand and Ethernet, which can support the massive burst traffic of AI. Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full-stack, end to end, from chips, systems, networking, even structured cables, power, and cooling, and mountains of software, to make it fast for customers to build AI factories. These are very capital-intensive infrastructures; customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides three to five times more AI throughput in a power-limited data center than Hopper.

The third is NVLink. This is a very big deal; its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. Just to put that in perspective, that's about 10 times higher than Hopper. 259 terabytes per second kind of makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of
tokens, and so that natural amount of data needs to be moved around from GPU to GPU. For inference, NVLink is vital for low-latency, high-throughput large-language-model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before.

Generative AI momentum is accelerating. Generative AI frontier model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities, from text, images, and video to 3D, physics, chemistry, and biology. Chatbots, coding AIs, and image generators are growing fast, but it's just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems. AI startups are consuming tens of billions of dollars of CSPs' cloud capacity yearly, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. And NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI: general robotics.

And now the enterprise AI wave has started, and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIMs, NIM Agent Blueprints, and AI Foundry, which our ecosystem partners, the world's leading IT companies, use to help customer companies customize AI models and build bespoke AI applications. Enterprises can then deploy on the NVIDIA AI Enterprise runtime, and at $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere. For NVIDIA, the software TAM can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. And, as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.

And ladies and gentlemen, this concludes today's call. We thank you for your participation. You may now disconnect.
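The 259 TB/s NVLink figure from the closing remarks can be sanity-checked with some quick arithmetic. A minimal sketch, assuming a per-GPU NVLink bandwidth of 1.8 TB/s (NVIDIA's published NVLink figure for Blackwell; this number was not stated on the call itself):

```python
# Back-of-the-envelope check of the NVLink numbers quoted on the call.
# ASSUMPTION (not from the call): each Blackwell GPU exposes roughly
# 1.8 TB/s of total NVLink bandwidth.

GPUS_PER_DOMAIN = 144        # 72 GB200 packages x 2 GPUs each, per the call
NVLINK_TBPS_PER_GPU = 1.8    # assumed per-GPU NVLink bandwidth, TB/s

aggregate_tbps = GPUS_PER_DOMAIN * NVLINK_TBPS_PER_GPU
print(f"aggregate: {aggregate_tbps:.1f} TB/s")  # matches the ~259 TB/s quoted
```

Under that assumption the product comes out to 259.2 TB/s, consistent with the number Jensen quotes for one rack.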
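The software commentary also implies a simple TAM calculation: the $4,500 per GPU per year price is from the call, but the installed-base sizes below are illustrative placeholders only (the call says just "millions to tens of millions"):

```python
# Rough sizing of the software opportunity implied by the closing remarks.
# The per-GPU price is from the call; the installed-base figures are
# HYPOTHETICAL examples, not numbers given on the call.

PRICE_PER_GPU_PER_YEAR = 4_500  # USD, NVIDIA AI Enterprise, per the call

for installed_gpus in (1_000_000, 10_000_000, 30_000_000):  # hypothetical
    tam = installed_gpus * PRICE_PER_GPU_PER_YEAR
    print(f"{installed_gpus:>12,} GPUs -> ${tam / 1e9:.1f}B per year")
```

At ten million GPUs, for example, the math works out to $45B per year of potential software revenue, which is why the call frames the $2 billion exit run rate as early days.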
So with that being said, guys, what do you think about that earnings call? As you can see here, the market isn't reacting to it positively, isn't giving it positive feedback right now, but if we check these post-market orders, there are a lot of orders flying. It looks like it's starting to climb back, guys; it looks like it wants to climb back to $120. I want to know your thoughts in the live chat. If you're new to the channel, don't forget to smash that like button; let's get this thing to 30 likes really quickly. Who's going to be that 30th like? But anyway, guys, I'm going to let you go. Everybody, may the gains be with you all.