NVIDIA (NVDA) Q2 2025 Earnings Call + Q&A

Published: Aug 28, 2024 Duration: 01:20:42 Category: Gaming

[Music] There we go. Welcome to the NVIDIA earnings call. You say NVIDIA? Nvidia? I think so. "NV"... that sounds almost pornographic. Yeah, so we're here talking about NVIDIA. Oh my God, you have an Indie World shirt, that's pretty cool. Anyway, sorry. NVIDIA is out with earnings results. They beat expectations on revenue, reporting $30 billion; expectations were $28.7 billion, so they beat revenue by $1.3 billion in the quarter. Expectations for earnings were 64 cents a share and they came in at 68 cents a share, which beats expectations, but the earnings whisper number, which is kind of the rumored, higher expectation, they missed: it was 71 cents and they came in at 68. They did boost their full-year... sorry, not their full-year guidance, but their upcoming quarter. You know what, we could break out a story about that: they issued guidance that was higher than previously expected.

You expected their stock to go down, right? It went down, but what, 2% or 3%? No, I did not expect it to go down; I don't really know what's going to happen when this stuff goes out. I just said that the stock had rallied for the past few weeks into this. When the Japanese market crashed earlier this month, NVIDIA stock went down too and was below $100, and then it went up to almost $130, so that's a huge run ahead of these results, and a little profit-taking is not that surprising. They did sell the stock down... sorry, go ahead.

You sent this sheet that said Jensen was selling stock. Was that him dumping shares, or is that something I'm misreading? Yeah, I did point out that Jensen Huang has been selling 240,000 shares at a time. He did it on the 15th of July, the 17th, the 19th, the 23rd, the 25th, the 29th and the 31st, and then he's done it three times this month. So what does that tell you? He's an expert; is that bad? It tells me that he's selling, but he could be selling because he's buying some tract of land in Hawaii or something. God, he'd be neighbors with... yeah, who knows why. But he has raised, counting it out, one, two, three... about ten times now, roughly $300 million by doing this. He still owns 862 million shares of NVIDIA, so yeah, he sold a few million shares, but he owns 860-some million shares still. You know what I'm saying? It's not that he's devoid of shares, but he did take some off the top. The thing I noticed is that it didn't seem like price was why he was selling: he sold at 124, he sold at 116, he sold at 91. Like you said, sometimes people just need cash. That's what I'm saying: maybe he has tax stuff coming due, maybe he wants to buy a sports team, who knows. Wild. I can't even imagine having that much wealth to manage. Yeah, having billions is a little different, for sure.

So, going back into the results, because the conference call is going to start here in about nine minutes: they came in with a gross margin of 75.1%. That's down from the previous quarter, where they had 78.4% gross margins, but it's still insanely high. That means that for every dollar of revenue they make, 75 cents of that is profit (gross profit, anyway, before operating costs). That is nuts. I don't know a single company with margins that good right now.
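For reference, the rough arithmetic behind the figures quoted above, as a back-of-envelope sketch; gross profit here means revenue minus cost of revenue, not net income:

```python
# Back-of-envelope on the quarter, using the figures quoted above (all in $B)
revenue = 30.0          # reported revenue
expected = 28.7         # consensus revenue estimate
gross_margin = 0.751    # reported GAAP gross margin

beat = revenue - expected               # ~1.3: the revenue beat
gross_profit = revenue * gross_margin   # ~22.5: kept after cost of revenue, before opex
print(f"revenue beat: ${beat:.1f}B, gross profit: ~${gross_profit:.1f}B")
```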
And it does seem like they would be making more money if they could ship these new H20s to more people. But it's not just their AI story: data center revenue as a whole was up 16% from the previous quarter and up 154% from a year ago. So you're talking about data center revenue of $26.3 billion, up 154% in a year. That's crazy; you don't see numbers like that. And they mentioned that H200 GPU-powered systems are now available on CoreWeave, the first cloud provider to announce general availability of that architecture. That's the chip... I don't even want to call them chips, they're giant systems, basically. That's the one a lot of these AI trainers are moving towards, and they cost something like $40,000 apiece. Is that the same one they said is hard to manufacture and hard to move, so it's very difficult to mass-produce quickly? He mentioned that in an interview or one of the calls we watched: it's not that easy, it takes all these machines to build it, and you have to move it very carefully without damaging it. People just want them to assembly-line it out as fast as humanly possible.

And I think it's coming from... everything on my end has this very light echo, and I have no idea where it's coming from. Do you have the stream open? No, I've got nothing open. I opened the stream, I don't hear it. I hear it, and I have no idea what's causing it. Our first-time chatter might have come from the Shacknews website where the stream is embedded, so there's a chance they're listening to it on two... no, I hear it, but I was looking around and there's nothing open on my entire PC that would be causing feedback for you. It's a very, very subtle echo. I can hear it too, but there's nothing... I do hear it from you. Anyway, it's got to be coming from your computer somewhere would be my guess, but I don't know where; it might be another tab open or something. I don't think I'm echoing, maybe faintly, but there's no browser open to echo and nothing left to close. There's nothing here.

Anyway, they mentioned the success of their cloud; they're still doing pretty well there. On the gaming side, they said gaming revenue was $2.9 billion, up 9% from the previous quarter and up 16% from a year ago, so that breaks a multi-year trend where gaming revenue had kind of been declining at the company. It's just wild that most people in our industry still view NVIDIA as a GPU company, a graphics-processor company, and while some of that tech is being used in their data center, that's not what these are anymore. They have their hands in multiple pots now. The Switch uses them, I think; doesn't the Tesla, or some kind of EV, use them too, chips they outsource? So their hands are in multiple projects; they've gotten absolutely huge, and maybe that's why you see those huge margins. But you're talking about $30 billion of quarterly revenue and only $2.9 billion of that was from gaming. Gaming is not what drives the stock anymore, and I think that's an interesting point to make at this juncture, because it's very much an AI story, an AI machine. When did we start calling AI "machine learning"?
Now it's AI again, which is weird. You'll hear about four terms thrown around: you'll hear computer vision, you'll hear neural network, and then you'll hear AI or large language model. All of these words have been used interchangeably; machine learning is another one. Apple famously referred to AI as machine learning for about a decade before they embraced AI, and then of course they had to reinvent AI and call it Apple Intelligence. But yeah, they're definitely the AI company right now here at NVIDIA. Their robotics revenue was $346 million, and their computer vision division was only $454 million, so these are still massive amounts of money, but compared to what's going on in AI right now it doesn't really matter. I mean, it does, because if you back out gaming from this they don't beat their revenue expectations; it's the cherry on top of whatever they're doing. Keep an eye on the earnings, folks. I will switch it over if they start; they haven't started yet. We have about four minutes until the conference call starts.

I'm actually very interested to hear their commentary about demand, because I think that's what has people spooked right now. If a company can confidently say "we expect to earn this amount of revenue in the next quarter," I don't know why you would question them, but here we are. Look at their guidance... let me see if I can find their forecast real quick. I said "who's ready to lose a lot of money," but as I already said, we don't know where it's going to go; it depends on what they say. Who's ready to lose a lot of money? Not NVIDIA, man, they're making a ton of money. I guess if you're a shareholder you might be down a little bit today; it's down about 5%. That's not that bad. If you missed it at the beginning of the show, we were talking about how the stock has run something like $40 ahead of earnings, so there's a little profit-taking happening here, and I think it's also that a lot of people tried to speculate on the call-option and put-option side of this, with contracts that expire on Friday, and they may be left holding worthless pieces of paper at this point.

So what don't you want to hear in this call? What's the one thing you don't want Jensen to talk about or mention, like supply chain issues or anything like that? I would say the worst thing he could say would be something about supply chain issues. The second worst thing would be something about how the Chinese tariffs and those rules about AI tech are hurting the company. It's gotten really shady on that front, Greg: there are people buying GPUs in America, buying these H100s in America, and then renting them to Chinese companies, or otherwise going around the problem, around the bureaucracy. So I think those are probably the two things: one, that he says we're having supply chain issues, and two, that regulators are hurting our growth. From a shareholder perspective, those are probably the two worst things he could talk about. But what I'm interested in is what the next year looks like, what the next two years look like.
If Jensen can spin that yarn and do it in a compelling way, I don't think the stock will suffer. He tends to be an optimist, he tends to speak positively about things unless he sees something turn around; if he sees a risk, he'll say it. But that's my takeaway. It looks like he's doing an interview for Bloomberg after the call. I'm sure it's already done and embargoed until after; it's probably in the can already. CNBC used to do that with Steve Jobs after Apple earnings, and then that stopped. But I like hearing what Jensen has to say, and we're about to hear it in a minute, so I'll let you get started with this, Greg. The numbers look good. They didn't beat their whisper number, so I think that's why the stock's suffering a little bit, but they also added $50 billion to a share buyback program, gaming revenue was up 16% on the year, and gross margins are above 75%. Things are going pretty well there, I guess, is my point. Looks like somebody sold a lot of OTM NVIDIA puts today. It's starting.

At this time, I would like to welcome everyone to the second quarter earnings call. After the speakers' remarks there will be a question and answer session. If you would like to ask a question during that time, simply press the star key followed by the number one on your telephone keypad; if you would like to withdraw your question, press star one a second time. Thank you, and Mr. Stewart Stecker, you may begin your conference.

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property; it cannot be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28th, 2024, based on information currently available to us; except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community: we will be attending the Goldman Sachs Communacopia and Technology Conference on September 11th in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20th, 2024. With that, let me turn the call over to Colette.

Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year on year, and well above our outlook of $28 billion.
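A quick sanity check on those growth rates, backing the comparison periods out of the figures Kress quotes; the rates are rounded, so the implied numbers are approximate:

```python
# Implied prior-period revenue from the growth rates quoted above (all in $B)
q2_revenue = 30.0                      # fiscal Q2 2025
prior_quarter = q2_revenue / 1.15      # up 15% sequentially  -> ~26.1
year_ago = q2_revenue / 2.22           # up 122% year on year -> ~13.5
print(f"implied prior quarter: ~${prior_quarter:.1f}B, implied year-ago quarter: ~${year_ago:.1f}B")
```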
Starting with data center. Data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper GPU computing and our networking platforms. Compute revenue grew more than 2.5x and networking revenue grew more than 2x from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing as well. Next-generation models will require 10 to 20 times more compute to train with significantly more data; the trend is expected to continue.

Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer internet companies and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise, healthcare and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand.

The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer internet and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue; as a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward.

The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell-architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems, designed quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPU, DPU, NVLink, NVLink Switch and the networking chips, systems and NVIDIA CUDA software to power the next generation of AI across use cases, industries and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU and delivers up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time.

Hopper demand is strong, and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yields. The Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year 2026. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025.
Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multi-billion-dollar product line within a year.

Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year.

The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new, monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, which serves over three billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%.

Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption, and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion-dollar business, as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing and drug discovery.

During the quarter, we announced a new NVIDIA AI Foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI: companies, for the first time, can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models, both for its own use and to assist clients seeking to deploy generative AI applications. NVIDIA NIM microservices accelerate and simplify model deployment.
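For readers unfamiliar with NIMs: they are containerized inference microservices that expose an OpenAI-compatible API. A minimal sketch of calling a self-hosted LLM NIM follows; the endpoint URL, port, API key and model name are illustrative assumptions, not details from the call:

```python
# Hypothetical client call against a locally hosted NIM LLM microservice.
# NIM LLM containers expose an OpenAI-compatible API; URL/port/model are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")
resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",          # placeholder model name
    messages=[{"role": "user", "content": "Summarize this support ticket ..."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```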
Companies across healthcare, energy, financial services, retail, transportation and telecommunications are adopting NIMs, including Aramco, Lowe's and Uber. AT&T realized 70% cost savings and an eight-times latency reduction after moving to NIMs for generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval-augmented generation. Our system integrators, technology solution providers and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth.

Moving to gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year on year. We saw sequential growth in console, notebook and desktop revenue; demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model Minitron 4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow; recently added RTX and DLSS titles include Indiana Jones and the Great Circle, Dune: Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand, with a total catalog size of over 2,000 titles, the most content of any cloud gaming service.

Moving to pro visualization. Revenue of $454 million was up 6% sequentially and 20% year on year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems, and several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workloads, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD microservices in its generative-AI-enabled content creation pipeline for customers such as The Coca-Cola Company.

Moving to automotive and robotics. Revenue was $346 million, up 5% sequentially and up 37% year on year.
Year-on-year growth was driven by new customer ramps of self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robotic arms, humanoids and mobile robots.

Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion in Q2. We utilized cash of $7.4 billion toward shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our Board of Directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2.

Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products; we expect the Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75.0%, respectively, plus or minus 50 basis points, as our data center mix continues to shift to new products. We expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively; full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions. Operator, would you please help us and poll for questions?

Thank you. At this time, I would like to remind everyone: in order to ask a question, press star and then the number one on your telephone keypad. We will pause for just a moment to compile the Q&A roster. As a reminder, we ask that you please limit yourself to one question. Your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Thanks for taking my question. Jensen, you mentioned in the prepared comments that there's a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And, I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite a change in the design. Is it because all these issues will be solved by then? Just help us size what the overall impact of any changes in Blackwell is, what that means for your revenue profile, and how customers are reacting to it.

Yeah, thanks, Vivek. The change to the mask is complete.
There were no functional changes necessary, and so we're sampling functional samples of Blackwell, Grace Blackwell, in a variety of system configurations as we speak. There are something like a hundred different types of Blackwell-based systems that were built and shown at Computex, and we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4.

And your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.

Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers' and your customers' customers' return on investment, and what that means for the sustainability of capex going forward. Internally at NVIDIA, what are you guys watching? What's on your dashboard as you try to gauge customer return and how that impacts capex? And a quick follow-up, maybe for Colette: I think your sovereign AI number for the full year went up maybe a couple billion. What's driving the improved outlook, and how should we think about fiscal '26? Thank you.

Thanks, Toshiya. First of all, when I said ship production in Q4, I mean shipping out; I don't mean starting production, I mean shipping out. On the longer-term question, let's take a step back. You've heard me say that we're going through two simultaneous platform transitions at the same time. The first one is transitioning from general-purpose computing to accelerated computing, and the reason for that is that CPU scaling has been known to be slowing for some time, and it has slowed to a crawl. Yet the amount of computing demand continues to grow quite significantly; you could maybe even estimate it to be doubling every single year. And so if we don't have a new approach, computing inflation would be driving up the cost for every company and driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications; it also enables you to do computing at a much larger scale, for example scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed. In fact, this week a blog came out that talked about a whole bunch of new libraries that we offer, and that's really the core of the first platform transition, going from general-purpose computing to accelerated computing. It's not unusual to see someone save 90% of their computing cost, and the reason for that is, of course, that you just sped up an application 50x; you would expect the computing cost to decline quite significantly.

The second transition was enabled by accelerated computing, because we drove down the cost of training large language models, training deep learning, so incredibly that it is now possible to have gigantic-scale models, multi-trillion-parameter models, and pretrain them on just about the world's knowledge corpus, and let the model go figure out how to understand human language representation, how to codify knowledge into its neural networks, and how to learn reasoning, which caused the generative AI revolution.
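A hedged illustration of the "50x speedup, roughly 90% savings" arithmetic above; the relative cost of an accelerated node-hour is a hypothetical assumption, not a figure from the call:

```python
# Why a 50x speedup can translate into ~90% cost savings even if accelerated
# nodes cost more per hour. The 5x node-cost multiple is assumed for illustration.
cpu_node_hours = 1000          # baseline job size on general-purpose nodes
speedup = 50                   # same job runs 50x faster when accelerated
gpu_cost_multiple = 5          # assumed: accelerated node-hour costs 5x a CPU node-hour

gpu_node_hours = cpu_node_hours / speedup                       # 20 node-hours
relative_cost = gpu_node_hours * gpu_cost_multiple / cpu_node_hours
print(f"accelerated cost is about {relative_cost:.0%} of baseline, "
      f"about {1 - relative_cost:.0%} saved")                   # ~10%, ~90% saved
```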
Now, generative AI: taking a step back about why we went so deeply into it, it's because it's not just a feature, it's not just a capability, it's a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are, what our previous observations are, and then it figures out what the algorithm is, what the function is. It learns; AI is a bit of a universal function approximator, and it learns the function. And so you could learn the function of almost anything: anything that's predictable, anything that has structure, anything that you have previous examples of. So now here we are with generative AI. It's a fundamentally new form of computer science. It's affecting how every layer of computing is done, from CPU to GPU, from human-engineered algorithms to machine-learned algorithms, and the types of applications you can now develop and produce are fundamentally remarkable.

There are several things happening in generative AI. The first is that the frontier models are growing at quite substantial scale, and we're all still seeing the benefits of scaling. Whenever you double the size of a model, you also have to more than double the size of the data set to train it, and so the amount of flops necessary to create that model goes up quadratically. So it's not unexpected to see that next-generation models could take 10, 20, 40 times more compute than the last generation. We have to continue to drive generational performance up quite significantly so we can drive down the energy consumed and drive down the cost necessary to do it. So the first point is that there are larger frontier models trained on more modalities, and, surprisingly, there are more frontier model makers than last year. So you have more and more and more; that's one of the dynamics going on in generative AI.

The second is that it's below the tip of the iceberg. What we see are ChatGPT, image generators, coding (we use generative AI for coding quite extensively here at NVIDIA; we of course have a lot of digital designers and things like that), but those are kind of the tip of the iceberg. What's below the iceberg are the largest systems, the largest computing systems in the world today, which, as you've heard me talk about in the past, are recommender systems, and they're now moving from CPUs to generative AI: recommender systems, ad generation, custom ad generation, targeting ads at very large scale and quite hyper-targeted, search, and user-generated content. These are all very large-scale applications that have now evolved to generative AI. Of course, the number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners. And sovereign AI: countries are now realizing that their data is their natural and national resource, and that they have to use AI, build their own AI infrastructure, so that they can have their own digital intelligence. Enterprise AI, as Colette mentioned earlier, is starting, and you might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA AI Enterprise platform to the world's enterprises.
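A rough way to see the "quadratic" scaling point above: using the common C ≈ 6·N·D rule of thumb for dense-transformer training compute (an outside approximation, not a figure from the call), doubling both parameters and tokens quadruples the FLOPs:

```python
# Sketch of the scaling arithmetic under the common 6*N*D approximation.
def train_flops(params, tokens):
    return 6 * params * tokens   # standard dense-transformer rule of thumb

base = train_flops(params=1e12, tokens=10e12)      # hypothetical current frontier model
bigger = train_flops(params=2e12, tokens=20e12)    # 2x parameters and 2x tokens
print(f"compute multiple: {bigger / base:.0f}x")   # -> 4x; scale further for 10-40x
```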
The companies that we're talking to, so many of them are just so incredibly excited to drive more productivity out of their company. And then general robotics. The big transformation last year is that we are now able to learn physical AI from watching video and human demonstration, and from synthetic data generation and reinforcement learning, from systems like Omniverse. We are now able to work with just about every robotics company to start thinking about, start building, general robotics. So you can see there are just so many different directions that generative AI is going, and so we're actually seeing the momentum of generative AI accelerating.

And Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth, in terms of revenue: it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desire of countries around the world to have their own generative AI that would be able to incorporate their own language, their own culture, their own data. So there's more and more excitement around these models and what they can be, specific to those countries. So yes, we are seeing some growth opportunity in front of us.

And your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.

Great, thank you. Jensen, in the press release you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October. So how long do you see strong demand for both coexisting, and can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like.

Yeah, thanks, Joe. The demand for Hopper is really strong, and it's true, the demand for Blackwell is incredible. There are a couple of reasons for that. The first is that, if you just look at the world's cloud service providers and the amount of GPU capacity they have available, it's basically none. And the reason for that is that they're either being deployed internally for accelerating their own workloads, data processing for example. Data processing, we hardly ever talk about it because it's mundane; it's not very cool because it doesn't generate a picture or generate words, but almost every single company in the world processes data in the background, and NVIDIA GPUs are the only accelerators on the planet that process and accelerate data: SQL data, pandas data, data science toolkits like pandas and the new one, Polars. These are some of the most popular data processing platforms in the world, and aside from CPUs, which, as I've mentioned before, are really running out of steam, NVIDIA's accelerated computing is really the only way to get boosted performance out of that. So that's number one, the primary, number-one use case long before generative AI came along: the migration of applications, one after another, to accelerated computing. The second is, of course, the rentals. They're renting capacity to model makers, renting it to startup companies, and a generative AI company spends the vast majority of its invested capital on infrastructure so that it can use an AI to help it create products.
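On the CUDA-accelerated pandas/Polars point Huang makes above, a minimal sketch of what the GPU-accelerated pandas path looks like with RAPIDS cuDF; it assumes a CUDA-capable machine with the cudf package installed, and the file and column names are placeholders:

```python
# Hedged sketch: accelerating existing pandas code with RAPIDS cuDF.
import cudf.pandas
cudf.pandas.install()      # route subsequent pandas calls through cuDF where supported

import pandas as pd

df = pd.read_csv("events.csv")                       # placeholder input file
summary = df.groupby("user_id")["amount"].sum()      # groupby/aggregation runs on the GPU
print(summary.head())
```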
And so these companies need it now. They simply can't afford to wait: you just raised money, they want you to put it to use now; you have processing that you have to do, you can't do it next year, you've got to do it today. So that's one reason. The second reason for Hopper demand right now is the race to the next plateau. The first person to the next plateau gets to introduce a revolutionary level of AI; the second person who gets there is incrementally better, or about the same. So the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership. NVIDIA is constantly doing that, and we show that to the world in the GPUs we make, the AI factories that we make, the networking systems that we make, the SoCs we create. We want to set the pace, we want to be consistently the world's best, and that's the reason we drive ourselves so hard. Of course, we also want to see our dreams come true, and all of the capabilities that we imagine in the future and the benefits we can bring to society; we want to see all of that come true. And these model makers are the same: of course they want to be the world's best, they want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks and a month or so away, and between now and then there's a lot of generative AI market dynamics. So everybody is just really in a hurry. It's either operational reasons that they need it, they need accelerated computing, they don't want to build any more general-purpose computing infrastructure; and even Hopper (H200 is, of course, state-of-the-art Hopper): if you have a choice between building CPU infrastructure right now for business or Hopper infrastructure right now, that decision is relatively clear. So I think people are just clamoring to transition the trillion dollars of established, installed infrastructure to a modern infrastructure, and Hopper is state of the art.

And your next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.

Thank you very much. Good afternoon, everybody. I wanted to circle back to an earlier question about the debate that investors are having about the ROI on all of this capex, and hopefully this question and the distinction will make some sense. What I'm having discussions about is the percentage of folks you see that are spending all of this money and looking to push the frontier towards AGI convergence, and, as you just said, a new plateau in capability, and who are going to spend regardless to get to that level of capability because it opens up so many doors for the industry and for their company, versus customers that are really focused today on capex versus ROI. I don't know if that distinction makes sense; I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars in the ground on this new technology, and what their priorities and their time frames are for that investment. Thanks.
Thanks, Matt. The people who are investing in NVIDIA infrastructure are getting returns on it right away. It's the best-ROI computing-infrastructure investment you can make today. One way to think through it, probably the easiest way, is just to go back to first principles. You have a trillion dollars' worth of general-purpose computing infrastructure, and the question is, do you want to build more of that or not? For every billion dollars' worth of general-purpose, CPU-based infrastructure that you stand up, you probably rent it for less than a billion, because it's commoditized; there's already a trillion dollars on the ground, so what's the point of getting more? And so for the people who are clamoring to get this infrastructure: one, when they build out Hopper-based infrastructure, and soon Blackwell-based infrastructure, they start saving money. That's tremendous return on investment, and the reason they start saving money is that data processing saves money; data processing is probably just a giant part of it already. Recommender systems save money, and so on and so forth. So you start saving money. The second thing is that everything you stand up is going to get rented, because so many companies are being founded to create generative AI, and so your capacity gets rented right away, and the return on investment of that is really good. And the third reason is your own business. You want to either create the next frontier yourself, or your own internet services benefit from a next-generation ad system, a next-generation recommender system, or a next-generation search system. For your own services, your own stores, your own user-generated content and social media platforms, generative AI is also a fast ROI. So there are a lot of ways you could think through it, but at the core, it's because it is the best computing infrastructure you could put in the ground today. The world of general-purpose computing is shifting to accelerated computing; the world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with accelerated computing, NVIDIA; that's the best way to do it.

And your next question comes from the line of Timothy Arcuri with UBS. Your line is open.

Thanks a lot. I had a question on the shape of the revenue growth, both near and longer term. I know, Colette, you did increase opex for the year, and if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling, and I do recognize that some of these racks can be air cooled. But, Jensen, is that something to consider on the shape of how Blackwell is going to ramp? And then, when you look beyond next year, which is obviously going to be a great year, and you look into '26, do you worry about any other gating factors, like say the power supply chain, or at some point models starting to get smaller? I'm just wondering if you can speak to that. Thanks.

I'm going to work backwards. I really appreciate the question, Tim. So, remember, the world is moving from general-purpose computing to accelerated computing.
The world builds about a trillion dollars' worth of data centers, and in a few years, a trillion dollars' worth of data centers will be all accelerated computing. In the past, no GPUs were in data centers, just CPUs; in the future, every single data center will have GPUs, and the reason for that is very clear: we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing we don't experience computing inflation. Second, we need GPUs for a new computing model called generative AI, which we can all acknowledge is going to be quite transformative to the future of computing. So I think, working backwards, the way to think about it is that the next trillion dollars of the world's infrastructure will clearly be different than the last trillion, and it will be vastly accelerated.

With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes as either a Blackwell Classic, if you will, that uses the HGX form factor that we pioneered with, I think it was, Volta. We've been shipping the HGX form factor for some time; it is air cooled. The Grace Blackwell is liquid cooled. However, the number of data centers that want to go liquid cooled is quite significant, and the reason is that, in a liquid-cooled data center, in any power-limited data center, whatever size data center you choose, you can install and deploy anywhere from three to five times the AI throughput compared to the past. So liquid cooling is cheaper, liquid cooling TCO is better, and liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand to 72 Grace Blackwell packages, which is essentially 144 GPUs. So imagine 144 GPUs connected in NVLink; we're increasingly showing you the benefits of that, and the next click is obviously very-low-latency, very-high-throughput large language model inference, and the large NVLink domain is going to be a game changer for that. So I think people are very comfortable deploying both, and almost every CSP we're working with is deploying some of both, so I'm pretty confident that we'll ramp it up just fine.

Your second question out of the three is that, looking forward, yes, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game changer for the industry, and Blackwell is going to carry into the following year. And as I mentioned earlier, working backwards from first principles, remember that computing is going through two platform transitions at the same time, and that's just really important to keep your mind focused on: general-purpose computing is shifting to accelerated computing, and human-engineered software is going to transition to generative AI, or artificial-intelligence-learned, software.

Okay, and your next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.

Hi guys, thanks for taking my questions. I have two short questions for Colette. The first: several billion dollars of Blackwell revenue in Q4, is that additive? You said you expected Hopper demand to strengthen in the second half.
Does that mean Hopper strengthens from Q3 to Q4 as well, on top of Blackwell adding several billion dollars? And the second question, on gross margin: if I have mid-70s for the year, depending on where I want to draw that, if I have 75 for the year, I'd be at something like 71 to 72 for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of gross margin evolution into next year as Blackwell ramps, and, hopefully, I guess, the yields and the inventory reserves and everything come up?

Yes, Stacy. Let's first take your question about Hopper and Blackwell. We believe our Hopper will continue to grow into the second half. We have many new products for Hopper, and our existing products for Hopper, that we believe will continue to ramp in the next quarters, including our Q3, and those new products moving to Q4. So let's say Hopper, therefore, versus the first half is a growth opportunity. Additionally, we have Blackwell on top of that, with Blackwell starting to ramp in Q4. So I hope that helps you on those two pieces. Your second piece is on our gross margin. We provided gross margin for our Q3; we provided our gross margin on a non-GAAP basis at about 75%. We'll work with all the different transitions that we're going through, but we do believe we can do that 75% in Q3, and we provided that we're still on track for the full year, also in the mid-70s, or approximately 75%. So we're going to see some slight difference, possibly, in Q4, again with our transitions and the different cost structures that we have on our new product introductions. However, I'm not at the same number that you are; we don't have exact guidance there, but I do believe you're lower than where we are.

And your next question comes from the line of Ben Reitzes with Melius. Your line is open.

Yeah, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. The 10-Q came out, and the United States was down sequentially while several Asian geographies were up a lot sequentially; just wondering what the dynamics are there. Obviously China did very well, you mentioned it in your remarks; what are the puts and takes? And then I just wanted to clarify, from Stacy's question, whether that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter, given all those favorable revenue dynamics. Thanks.

Let me talk a bit about our disclosure in terms of the 10-Q, a required disclosure, and a choice of geography. It's very challenging sometimes to create the right disclosure, as we have to come up with one key piece in terms of who we sell to and, specifically, who we invoice to. So what you're seeing there is who we invoice; that's not necessarily where the product will eventually be, or where it may even travel to the end customer. These are mostly moving to our OEMs or ODMs and our system integrators, for the most part, across our product portfolio. So what you're seeing there is sometimes just a swift shift in terms of who they are using to complete their full configuration before those things go into the data center, into notebooks, and those pieces of it, and that shift happens from time to time. But yes, our China number there is invoicing to China; keep in mind that incorporates gaming, data center and automotive in those numbers.
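For context, a rough reconstruction of the arithmetic behind Rasgon's "something like 71 to 72 for Q4" estimate; the quarterly gross margins are roughly the ones quoted on the call and earlier in the stream, while the Q4 revenue figure is a hypothetical assumption used only for weighting:

```python
# Implied Q4 gross margin if the full year averages ~75% (revenue-weighted sketch).
rev    = [26.0, 30.0, 32.5, 35.0]   # $B: Q1, Q2, Q3 guide midpoint, assumed Q4
margin = [0.785, 0.757, 0.75]       # approx. Q1-Q3 gross margins
full_year_target = 0.75             # "approximately the mid-70s" for the year

needed_gp = full_year_target * sum(rev) - sum(r * m for r, m in zip(rev[:3], margin))
print(f"implied Q4 gross margin: {needed_gp / rev[3]:.1%}")   # ~71.8%
```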
Going back to your statement regarding gross margin, and also what we're seeing for Hopper and Blackwell in terms of revenue: Hopper will continue to grow in the second half, and will continue to grow from what we are currently seeing. Determining that exact mix in each of Q3 and Q4, we are not here to guide on Q4 yet, but we do see right now, in the demand expectations and the visibility, that it will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture.

And your next question comes from the line of C.J. Muse with Cantor Fitzgerald. Your line is open.

Yeah, good afternoon, thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges only becoming greater given rising complexity in a reticle-limited, advanced-packaging world. So, curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration and supply chain partnerships, and then the consequential impact on your margin profile? Thank you.

Yeah, thanks. Let's see. The answer to your first question is that the reason our velocity is so high is simultaneously because the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to continue to increase its scale, and we believe that by continuing to scale the AI models we will reach a level of extraordinary usefulness that would open up, realize, the next industrial revolution. We believe it, and so we're going to drive ourselves really hard to continue to go up that scale. We have the ability, fairly uniquely, to integrate, to design, an AI factory, because we have all the parts; it's not possible to come up with a new AI factory every year unless you have all the parts. And so next year we're going to ship a lot more CPUs than we ever have in the history of our company, more GPUs of course, but also NVLink switches, ConnectX DPUs for east-west traffic, BlueField DPUs for north-south and data and storage processing, InfiniBand for supercomputing centers, and Ethernet, which is a brand-new product for us and well on its way to becoming a multi-billion-dollar business, to bring AI to Ethernet. The fact that we can build all of this, that we have access to all of it, and that we have one architectural stack, as you know, allows us to introduce new capabilities to the market as we complete them. Otherwise, what happens? You ship these parts, you go find customers to sell them to, and then somebody's got to build up an AI factory, and the AI factory has got a mountain of software. So it's not about who integrates it. We love the fact that our supply chain is disintegrated, in the sense that we can service Quanta, Foxconn, HP, Dell, Lenovo, Supermicro; we used to be able to service ZT, they were recently purchased; and so on and so forth. The number of ecosystem partners that we have, Gigabyte, ASUS, the number of ecosystem partners that we have allows them to take our architecture, which all works, but integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers.
world's cloud service providers, enterprise data centers. The scale and reach necessary from our ODMs and our integrator supply chain is vast and gigantic, because the world is huge. That part we don't want to do and we're not good at doing, but we know how to design the AI infrastructure, provide it the way that customers would like it, and let the ecosystem integrate it. Well, yeah, so anyways, that's the reason why.

And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Yes, thanks for taking the question. I wanted to go back to the Blackwell product cycle. One of the questions that we tend to get asked is how you see the rack-scale system mix dynamic as you think about leveraging NVLink, you think about GB200 NVL72, and how that go-to-market dynamic looks as far as the Blackwell product cycle. I guess, put distinctly, how do you see that mix of rack-scale systems as we start to think about the Blackwell cycle playing out?

Yeah, Aaron, thanks. The Blackwell rack system, it's designed and architected as a rack, but it's sold in disaggregated system components. We don't sell the whole rack, and the reason for that is because everybody's rack is a little different, surprisingly. Some of them are OCP standards, some of them are not, some of them are enterprise, and the power limits for everybody could be a little different, the choice of CDUs, the choice of power bus bars, the configuration and integration into people's data centers, all different. And so the way we designed it, we architected the whole rack, the software is going to work perfectly across the whole rack, and then we provide the system components. For example, the CPU and GPU compute board is then integrated into MGX, a modular system architecture. MGX is completely ingenious, and we have MGX ODMs and integrators and OEMs all over the planet. And so just about any configuration you would like, wherever you would like that 3,000-pound rack to be delivered, it has to be integrated and assembled close to the data center because it's fairly heavy. And so everything in the supply chain, from the moment that we ship the GPUs, CPUs, the switches, the NICs, from that point forward the integration is done quite close to the location of the CSPs and the locations of the data centers. You could imagine how many data centers there are in the world and how many logistics hubs we've scaled out to with our ODM partners. I think that because we show it as one rack, and because it's always rendered that way and shown that way, we might have left the impression that we're doing the integration. Our customers hate that we do integration; the supply chain hates us doing integration; they want to do the integration, that's their value added. There's a final design-in, if you will; it's not quite as simple as shimmying into a data center, but that design fit-in is really complicated. And so the install, the design fit-in, the installation, the bring-up, the repair-and-replace, that entire cycle is done all over the world, and we have a sprawling network of ODM and OEM partners that does this incredibly well. So integration is not the reason why we're doing racks, it's the anti-reason of doing it. We don't want to be an
integrator, we want to be a technology provider.

And I will now turn the call back over to Jensen Huang for closing remarks.

Thank you. Let me make a couple of comments that I made earlier again: data centers worldwide are in full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible. Let me highlight the top five things, the top five things of our company. Accelerated computing has reached the tipping point; CPU scaling slows, developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries; new libraries open new markets for NVIDIA. We released many new libraries, including CUDA-accelerated Polars, pandas and Spark, the leading data science and data processing libraries [see the sketch after the transcript]; cuVS for vector databases, this is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; Parabricks for gene sequencing; and AlphaFold2 for protein structure prediction is now CUDA accelerated. We are at the beginning of our journey to modernize a trillion dollars' worth of data centers from general-purpose computing to accelerated computing. That's number one. Number two, Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU; it also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's leap becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU, the Blackwell dual GPU in a CoWoS package, the ConnectX DPU for east-west traffic, the BlueField DPU for north-south and storage traffic, the NVLink switch for all-to-all GPU communications, and Quantum and Spectrum-X for both InfiniBand and Ethernet, which can support the massive burst traffic of AI. Blackwell AI factories are building-size computers. NVIDIA designed and optimized the Blackwell platform full stack, end to end, from chips, systems, networking, even structured cables, power and cooling, and mountains of software, to make it fast for customers to build AI factories. These are very capital-intensive infrastructures; customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides three to five times more AI throughput in a power-limited data center than Hopper. The third is NVLink. This is a very big deal; its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. Just to put that in perspective, that's about ten times higher than Hopper. 259 terabytes per second kind of makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of tokens, and so that natural amount of data needs to be moved around from GPU to GPU [see the arithmetic after the transcript]. For inference, NVLink is vital for low-latency, high-throughput large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before. Generative AI momentum is accelerating; generative AI frontier model makers are racing to scale to the next AI plateau to increase model
safety and IQ. We're also scaling to understand more modalities, from text, images and video to 3D, physics, chemistry and biology. Chatbots, coding AIs and image generators are growing fast, but it's just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting and search systems. AI startups are consuming tens of billions of dollars yearly of CSPs' cloud capacity, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI, general robotics. And now the enterprise AI wave has started, and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIMs, NIM Agent Blueprints and AI Foundry, which our ecosystem partners, the world's leading IT companies, use to help customer companies customize AI models and build bespoke AI applications. Enterprises can then deploy on the NVIDIA AI Enterprise runtime, and at $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere. And for NVIDIA, the software TAM can be significant as the CUDA-compatible GPU install base grows from millions to tens of millions [see the illustration after the transcript], and as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.

And ladies and gentlemen, this concludes today's call, and we thank you for your participation. You may now disconnect.
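The CUDA-accelerated pandas mentioned in the closing remarks is delivered through the RAPIDS cuDF accelerator mode. Below is a minimal sketch, not from the call, of how an unmodified pandas workload can be run GPU-accelerated, assuming RAPIDS cuDF is installed on a machine with a CUDA-capable GPU; the file name and toy data are hypothetical.

```python
# groupby_demo.py -- hypothetical toy workload, not from the call.
# Run unchanged on CPU:                        python groupby_demo.py
# Run GPU-accelerated via RAPIDS cuDF (assumed installed):
#                                              python -m cudf.pandas groupby_demo.py
import numpy as np
import pandas as pd

# Synthetic stand-in for a real data-processing job.
df = pd.DataFrame({
    "key": np.random.randint(0, 1_000, size=1_000_000),
    "value": np.random.rand(1_000_000),
})

# A typical groupby/aggregate pattern; under cudf.pandas these calls are
# dispatched to the GPU where supported and fall back to CPU pandas otherwise.
result = df.groupby("key")["value"].agg(["mean", "sum", "count"])
print(result.head())
```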
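The 259 terabytes per second per-rack NVLink figure quoted above is consistent with simple arithmetic, assuming the commonly cited fifth-generation NVLink bandwidth of roughly 1.8 TB/s per Blackwell GPU (a per-GPU figure not stated on the call):

```latex
% Rough check of the quoted per-rack NVLink bandwidth.
% Assumes ~1.8 TB/s of NVLink bandwidth per GPU (fifth-generation NVLink).
\[
  144 \,\text{GPUs} \times 1.8 \,\tfrac{\text{TB}}{\text{s}} \approx 259 \,\tfrac{\text{TB}}{\text{s}} \text{ per rack}
\]
```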
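As a rough illustration of the software opportunity framed above at $4,500 per GPU per year: the 10-million-GPU install base used below is an assumption chosen only to match the "tens of millions" language, not a figure given on the call.

```latex
% Illustrative only: annual software opportunity at the quoted list price,
% with a hypothetical 10 million subscribed GPUs.
\[
  10{,}000{,}000 \,\text{GPUs} \times \$4{,}500 \,\text{per GPU per year} = \$45\,\text{billion per year}
\]
```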
