AI News: OpenAI Finally Released Their New Model!

Intro

I just spent the last week at Disneyland, and of course the week that I'm gone turns out to be an insane week with tons of big announcements. I'm a day later than normal in getting this AI news video out, so I'm not going to waste your time — let's just jump right in. There were really two major things that happened this week, and then a whole bunch of little things. The two major things: the new release from OpenAI, and the Apple iPhone event.

OpenAI's new o1-preview model

Let's start by talking about this new OpenAI o1-preview model that's been made available. We've been getting lots of teases about it over the last several months from OpenAI: originally it was called Q*, then it was called Strawberry, and now they're calling it OpenAI o1. It sounds to me like all future models are going to follow this naming scheme — we're probably not going to be getting GPT-5, GPT-6, GPT-7, etc. This is o1, and in their blog post about it they said they are "resetting the counter back to 1" and naming this series OpenAI o1. So I'm guessing the next models they show off are going to be OpenAI o2, OpenAI o3, etc., maybe with some decimal points in there as well.

In order to use these new models you do have to be on a paid ChatGPT plan, but when you log into your account, every paid member should now have these new options. It still defaults to GPT-4o, but if you click the little dropdown you can see we now have o1-preview and o1-mini, with the older models falling under the "More models" dropdown. For the most part, OpenAI still recommends you use GPT-4o for most things; the o1 models are more for advanced reasoning, mathematics, logic, and more complicated tasks that need to be really thought out.

If you're wondering what makes this model different or better than previous models: it essentially thinks through its response before responding. When you ask it a question, it's going to be a lot slower to respond, but it's going to really think through its answer — this is called chain-of-thought prompting. If I select o1-preview, you can see some of the suggested prompts, and one of them is "How many R's are in strawberry?" — which, you know, the previous GPT-4o model kept telling you there were two R's in "strawberry." Let's go ahead and use one of the demo prompts and tell it to create a puzzle to solve for me. Here you can see it says "Generate a 6x6 nonogram puzzle for me to solve where the solved grid looks like the letter Q," and you can see it's actually thinking. There's a little dropdown here, and if I open it you can see its thought process: creating the puzzle, "I'm working on a 6x6 nonogram that forms the letter Q," sketching the grid layout, formulating an O shape, filling the grid, examining cell patterns, assessing cell patterns, creating the puzzle. You can see the thought process of ChatGPT thinking through all of the steps it needs to complete this, and then down here, when it's finally done, it says "Here's a 6x6 nonogram puzzle for you to solve." If I scroll up to the top of this prompt, we can see that it thought about it for 30 seconds — it spent 30 seconds going through that chain-of-thought logic, thinking through everything it needed to do.
One of the other example prompts is "solve an advanced math problem." You can see it wrote up this complicated math problem that I don't even know how to read — "Let $\mathcal{B}$ be the set of rectangular boxes with surface area..." — I don't even know what this means. Basically, what it's going to do is try to respond and then sort of double-check itself as it goes, finally outputting what it thinks is the best possible response. We can see here it's still thinking it through, still working out the math problem as I'm talking. It came to an answer of 721 — is that right? I have no idea, but it took 33 seconds to think it all through. The press release over on the OpenAI blog has all sorts of examples of how people have been using o1: in economics, at Cognition (the company that makes Devin), in quantum physics, genetics, etc.

Now, I did mention there is the o1-preview and the o1-mini. It looks like o1-mini will become available to ChatGPT free users eventually, but right now both models are only for paid members. They've done a lot of testing on it, and apparently o1 ranks in the 89th percentile on competitive programming questions, places among the top 500 students in the US in a qualifier for the USA Math Olympiad, and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems. So using that chain-of-thought reasoning, it's definitely gotten a lot better at solving technical problems.

In all of the various benchmarks they show off on their website, it looks like there's another o1 model that we don't have access to yet. In these charts we see an o1 model and also the o1-preview model — o1-preview is what we're getting in ChatGPT. In competition math, o1 gets 83.3% accuracy, while the version we seem to be using gets about 56.7%. In competition code, their main o1 model is in the 89th percentile, while the one we have access to is in the 62nd percentile. And on PhD-level science questions, o1 scores 78%, where o1-preview — the one we do have — actually scored a little bit better at 78.3%; compare that to an expert human at about 69.7% accuracy. We can see some other benchmarks here with GPT-4o in pink and the o1 improvement in blue, the blue being how far it exceeds the previous GPT-4o model. For example, in math GPT-4o got 60.3% while o1 got 94.8% — roughly a 34-point improvement. Their website has a whole bunch of examples of how it was used, and there have already been a ton of YouTube videos where people demo this and walk you through it, but pretty much the o1-preview model outperformed GPT-4o in cipher, coding, math, crossword, English, science, safety, and health science. For each of these you can see the initial prompt as well as the response, and in pretty much every single one, o1 outperformed GPT-4o.

They also released OpenAI o1-mini, which is actually 80% cheaper than o1-preview and is optimized for STEM reasoning. One of the big complaints people have had about this new o1 model is the pricing: if you want to use it through the API for one of your software products, it's quite a bit more expensive than what's already out there. It's also quite a bit slower, because you saw how it thinks through the process. The o1-mini model is supposed to offset some of that — bring the cost down — and it's quite a bit faster than the preview model. We can see here the o1-mini model took 9 seconds, while the o1-preview model, as of me talking right now, is already at 30 seconds in this comparison.
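If you want to poke at the API side yourself, here's a minimal sketch of calling o1-preview and o1-mini through OpenAI's Python SDK and timing the responses. The prompt is just a placeholder, and whether these model IDs show up for you depends on your account's API access:

```python
# Minimal sketch: compare o1-preview and o1-mini on the same prompt.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# model availability depends on your API account tier.
import time
from openai import OpenAI

client = OpenAI()
prompt = "How many times does the letter 'r' appear in the word 'strawberry'?"

for model in ("o1-preview", "o1-mini"):
    start = time.time()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],  # a plain user message; no system prompt needed
    )
    elapsed = time.time() - start
    print(f"{model} ({elapsed:.1f}s): {response.choices[0].message.content}")
```

Keep in mind the o1 models also bill for the hidden reasoning tokens they generate before the final answer, which is part of why API costs run higher than GPT-4o even when the visible output is short.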
Zooming back out, I really like this tweet from Jim Fan, where he breaks down a little more clearly what's happening, and I especially like the graphic he shared. Most large language models spend most of the time and money on pre-training: they gather a huge amount of data from all over the web — they pretty much scrape everything — and pre-train that into the model. Then they do what's called post-training, which is a bit more like fine-tuning: putting up guardrails and telling the model how to respond. And then inference is a teeny tiny sliver of what happens when you use these AI models — inference being when you give it a prompt and it gives you a response. GPT-4o is really fast at inference: you give it a question and it responds within about 3 seconds. With o1, it appears they're spending less time on pre-training (scraping every possible thing they can get), roughly the same amount of time on post-training (dialing it in, fine-tuning it), and a lot more time on inference, when you actually give it the prompt and receive your response back. So theoretically we should be able to get new, improved models released even quicker, because less time is being spent on pre-training and more of that time is shifting to the inference phase when you actually prompt it; companies like OpenAI are effectively slowing down the inference phase so they can spend less on the most expensive part, which is pre-training. This is a super oversimplification — I'll link to all the articles, posts, and tweets I've mentioned in this video below so you can dive a little deeper.

I do want to address one thing my buddy David Shapiro said. He mentioned that Claude 3.5 Sonnet can "do Strawberry" with the right prompting — there's no secret sauce, and we can synthesize the data with any model. The point he's making is that chain-of-thought prompting has been around for a while: you can use it with any large language model by telling it to think through a problem step by step and giving it additional prompts to get to the right conclusion. All OpenAI is doing is building that in — telling the model to think it through step by step, look at its own response, evaluate it, update it based on that evaluation, and keep doing that. In the past you would do that yourself through additional prompting; now OpenAI does it for you immediately after the prompt. But that's the biggest news of the week — that's the news that has the AI world buzzing the most.
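To make Shapiro's point concrete, here's a rough sketch of doing that loop by hand against Claude 3.5 Sonnet with Anthropic's Python SDK: ask for step-by-step reasoning, then feed the draft back for a self-check. The model ID and prompts are just illustrative — this is manual chain-of-thought plus self-critique, not anything Anthropic or OpenAI ships as a "Strawberry mode":

```python
# Rough sketch of manual chain-of-thought prompting plus a self-check pass.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model ID below is illustrative and may need updating.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20240620"
question = "How many times does the letter 'r' appear in the word 'strawberry'?"

def ask(prompt: str) -> str:
    msg = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# Step 1: force explicit step-by-step reasoning before the answer.
draft = ask(f"Think through this step by step, then give a final answer.\n\n{question}")

# Step 2: have the model critique and, if needed, correct its own draft.
final = ask(
    f"Question: {question}\n\nDraft answer:\n{draft}\n\n"
    "Check the reasoning above for mistakes and give a corrected final answer."
)
print(final)
```

It's two round trips instead of one, which is basically the trade-off o1 automates behind a single call.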
Apple's iPhone event and AI features

The other big thing that happened this week was Apple's "It's Glowtime" event. This was pretty much Apple's iPhone event, designed to update people on the latest iPhone, the latest Apple Watch, and some updates around the latest AirPods, things like that. Most of the AI features they talked about during the keynote we actually already got at WWDC — there weren't a ton of new AI features we hadn't already seen; we just got new info that they're rolling out on the iPhone 16. Apple themselves even put out a blog post solely focused on Apple Intelligence and what's going into their new devices, including features like being able to rewrite, proofread, and summarize emails and documents; the ability to clean up photos by removing things in the background; the ability to prioritize your notifications; the new glowing border around the screen for when you're using Siri; and their new Image Playground, which lets you generate AI art, even directly inside of Notes. Again, for the most part this is all stuff they previewed at WWDC — these weren't new, exciting announcements during this specific Apple event.

There were, however, a few interesting things from the Apple event that I want to highlight, so I'm just going to jump to those parts of the keynote. Apple Watches are now going to have AI translation built in — the Translate app comes to Apple Watch, using machine learning for speech recognition and rapid translation. The thing I found interesting about the new AirPods was the ability to nod or shake your head to respond to Siri: when interacting with Siri, you can simply nod your head yes or shake your head no in response to Siri announcements. They also talked about Private Cloud Compute, which is essentially cloud compute you can use but that's private — they're not storing, keeping, or training on any of your data — and it also lets you use larger models that won't run on a mobile phone by sending them to a cloud GPU for processing. They once again showed off features to rewrite things like emails for you, create your own images and emoji with text-to-image generation, summarize your notifications, and prioritize what it thinks are the most important notifications to the top. There's also the new Visual Intelligence, which it doesn't sound like we're going to get right away — it sounds like it's coming around March of next year, 2025 — but this is a feature where you can take a picture of something and it will give you information about it. So this person took a picture of a restaurant, and it gave a bunch of information: what time it closes, the cost, the reviews, etc. It's also getting some updates to photo editing, which we looked at earlier. Again, pretty much everything they announced was stuff they announced during WWDC; they just showed it off in the context of the new iPhone 16.

The Verge put out an article afterwards called "The iPhone 16 will ship as a work in progress," basically saying that when you buy an iPhone 16 — if you bought it this month in September or early October — you're not going to have any Apple Intelligence features. Supposedly the features will start to roll out in iOS 18.1 sometime in October, with more of these AI features rolling out over the coming months, and we also know the Visual Intelligence feature isn't coming until around March of next year. So even if you rushed out to buy it so you could be the first to use these AI features, you're not going to have them on day one, unfortunately.

Adobe's new text-to-video Firefly

Adobe shared some interesting information this week with their new text-to-video generation version of Firefly. From the previews I've seen, this looks like it could actually compete with Sora, and they also claim it's all ethically sourced video — it's trained only on openly licensed, public domain, and Adobe Stock content — so they're calling it "commercially safe."
Here's a thread from Pier showing some examples of videos that have come out of this new Adobe Firefly video model — it looks like it generates roughly 5-second clips. There's one of a galaxy that zooms out to reveal an eyeball; a detailed portrait of a reindeer; a slow-motion fiery volcanic landscape; miniature adorable monsters made out of wool and felt; footage from a camera on a drone flying over a desert, with wind blowing over the dunes and creating waves in the sand below; a detailed, extreme macro close-up of a white dandelion viewed through a large red magnifying glass; a drone shot going between the trees of a snowy forest at sunset, golden hour; a stop-motion 2D animation made of felt of an egg cooking in a frying pan — this one looks really cool to me, and I could see that style of animation working well for shorts and little explainer-type videos; hand-drawn simple line art of a young kid looking up into space with a wondrous expression on his face; an adorable kawaii cheese ball on the moon, smiling, 3D render, octane, etc.; and a macro detailed shot of water splashing and freezing to spell the word "ice" — so it looks like it can actually spell words inside the video as well. We don't have access to this yet, but it looks pretty promising. Of course, these videos are likely cherry-picked — they probably ran multiple prompts, picked the best results, and shared those — but this is what Adobe has shown so far of what the Firefly video model can do.

A lot of smaller interesting things happened this week too, so let's rapid-fire them. I still have a lot of tabs open that I want to share, but I'm going to try to get through them quickly and just give you a quick recap of what's going on.

Mistral's Pixtral 12B multimodal model

Starting with the fact that Mistral released Pixtral 12B. Mistral has both open-source and closed-source large language models available, but this new Pixtral model is their first that can accept images as an input — something we've been able to do with most of the big models, we can now do with Mistral's 12B model. And the best part is that it's an open-source model, so developers can build on it, iterate on it, fine-tune it, and do whatever they want to improve it and make it an even better model for people to use.
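Since Pixtral's headline feature is image input, here's a rough sketch of what sending an image to it might look like through Mistral's chat completions endpoint. I'm assuming an OpenAI-style payload with image_url content parts and the model ID "pixtral-12b-2409" — treat the exact field names and model string as assumptions and check Mistral's docs before relying on this:

```python
# Rough sketch: sending an image to Pixtral 12B via Mistral's chat API.
# The endpoint shape, field names, and model ID are assumptions based on
# Mistral's OpenAI-style API; verify against the official documentation.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "pixtral-12b-2409",  # assumed model ID
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                # Some API versions may expect {"url": "..."} here instead of a bare string.
                {"type": "image_url", "image_url": "https://example.com/photo.jpg"},
            ],
        }],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

This content-parts format is roughly what most multimodal chat APIs have converged on, so swapping in a different vision model generally just means changing the model string and endpoint.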
Google's Notebook LM and audio overview feature

Google has this tool called NotebookLM, and it's actually really super helpful: you can upload a whole bunch of documents into it and then have conversations about those documents. For example, here's a notebook about the invention of the light bulb — you can see a bunch of different articles have been uploaded about it, and we can actually chat with all of those sources. It will look across all the sources with every question we ask and respond based on the information within them. That in itself is really cool and really helpful, kind of similar to what we can do with Claude Projects, where we upload a bunch of sources and have conversations about them. However, they just rolled out a brand new feature that basically generates a podcast about your notes. If I come down here and click on Notebook Guide, you can see this new Audio Overview button; if I click Load, it says this may take a few minutes, and it will create a podcast of two people discussing the invention of the light bulb back and forth with each other. Here's what that sounds like: "Light bulbs, right? They're so normal now, like we just kind of expect them to be there so we can actually see what we're doing when it gets dark. It really is funny when you think about it — like, flick a switch, bam, instant sunshine, no matter what time it is."

Now, that's one of the examples Google loaded in for us. I was curious what would happen if I loaded in a complex research paper, so I actually took a paper from arXiv about LinFusion, a new architecture for text-to-image generation that uses a linear attention mechanism to address the computational limitations of traditional diffusion models. I was wondering: if I just take this complex PDF with all of its technical jargon, throw it into NotebookLM, and create an audio overview, would I actually be able to understand what the hell the paper was talking about when I was done? And the answer is yeah — it actually did a pretty dang good job: "Ever get that creative itch? You know, that feeling when you're absolutely bursting with ideas for some awesome, crazy detailed AI art, but then you remember what it's actually like — waiting forever for images to render, your poor computer sounding like it's about to explode. Yeah, can really kill the vibe, you know?" I'm not going to play the whole thing, but it has that podcast style where they're having a conversation back and forth about the contents of the document I uploaded. It's a super cool feature — I highly recommend people play around with it. It's available over at notebooklm.google.com.

Amazon's AI voice cloning for Audible narrators

While we're on the topic of audio, Amazon is allowing Audible narrators to clone themselves with AI. Amazon will begin inviting a small group of Audible narrators to train AI-generated voice clones of themselves this week, with the aim of speeding up audiobook production for the platform. It's a US-only beta test and will be extended to rights holders like authors, agents, and publishers later this year. Narrators can also use Amazon's production tools to edit the pronunciation and pacing of their AI voice replica. Amazon says narrators will be compensated via a royalty-share model on a title-by-title basis, but didn't go into more detail than that.

Suno's new "Covers" feature

Suno rolled out a cool new feature this week called Covers. They put out this X post: "Reimagine the music you love with Covers. Covers can transform anything from a simple voice recording to a fully produced track into an entirely new style, all while preserving the original melody that's uniquely yours." I initially learned about this from Nick St. Pierre, aka Nick Floats, over on X, where he showed off what it can do, but I wanted to jump in and play around with it myself, so here's what I did. Now, don't judge — I can't sing at all — but I recorded this little audio clip: "Subscribe to Matt Wolfe on YouTube, and don't forget to check out futuretools.io." That was my brilliant singing. To do that, I went into Suno, clicked on Create, clicked on Upload Audio, clicked the audio button, and then it allowed me to record my voice. Once I recorded it, that little audio clip showed up right in line with the rest of the songs I've generated on Suno. If I go to the three dots over on the right side and then come down to Create, we now have "Cover Song (beta)" as one of the options. When you click on that, it creates a couple of cover songs using the audio you just put in — it's not going to use your voice, but it will use the same words and try to match the melody. Here's what it made for me: [Music — an AI-sung version of "Subscribe to Matt Wolfe on YouTube, and don't forget to check out futuretools.io"].
There you go — pretty cool, and honestly it sounds a lot better than me trying to sing it. I should note this is only available to paid members of Suno right now, and there is a limit to the number of covers you can make — you can see I have 198 free cover songs remaining. There's a finite number per month, and it's only available on one of the paid plans.

Facebook and Instagram's AI content labeling changes

Moving on to Facebook news: Facebook and Instagram are making AI labels less prominent on AI-edited content. A lot of people were up in arms because when you posted an image to Facebook or Instagram, it would get a little note saying "generated with AI" or something like that, and a lot of people were saying, "My image wasn't generated with AI — why is it putting that on there?" Now they're making it a little less prominent; you actually have to click into a menu to find the AI info. Ideally, fewer people will be frustrated by the tagging of AI content when it's not really AI content.

Facebook's data scraping admission

Facebook also admitted this week that they are scraping pretty much everybody's photos and posts to train their AI, and there is no opt-out option. This came out of a hearing in Australia with Meta's global privacy director, Melinda Claybaugh. In the exchange, a senator put it to her: "The truth of the matter is that unless you consciously had set those posts to private since 2007, Meta has just decided that it will scrape all of the photos and all of the text from every public post on Instagram or Facebook that Australians have shared since 2007, unless there was a conscious decision to set them to private. That's actually the reality, isn't it?" — and she answered, "Correct." Now, I'm sure Facebook and Meta aren't going to get in too much trouble for this, because I'm sure it's buried in the terms and conditions somewhere that you're giving them the right to use and train on your data when you upload it to Facebook, unless you explicitly set it to private. If you signed up for one of these platforms and you're using it, you probably unknowingly already agreed to letting them do this — I'm just guessing, though; I haven't read the policies myself.

Roblox's 3D generative AI for game creation

There's also some really cool AI generative game stuff coming up. For example, Roblox announced late last week that you're going to be able to use AI to create 3D worlds inside of Roblox. Roblox is working on a 3D foundational model that will power generative creation on its platform; the model will be open source and multimodal, and will allow creators to generate 3D content using text, video, and prompts. A creator can say, "I want to create a world in the Scottish Highlands, with castles, on a stormy day, with a dragon in the background, and I want all of this in a steampunk style," and the output will be the full scene creation. They did say Roblox isn't trying to replace the creative process, but instead is focused on enabling more people to develop and create games. There's a little screenshot — not a whole heck of a lot to go off of — but you can see a little "before," where they're on a road through green grass, and an "after" with a bit more texture and a bit more scenery going on along the road. That's all we have to work with right now.

Cybever's 3D world creation platform

This one looks even cooler to me — it's from a company called Cybever, which just unveiled its 3D world creation platform. This isn't something we have access to yet, but here's what it looks like.
You can generate a map through text — it creates a really basic map — and then adjust it by drawing; you can see they're drawing a little river here into the map. They can adjust the terrain and the world style, and there are templates you can use, like a water village, an industry zone, or a grand bazaar. It'll create some generated town layouts and a 3D preview in less than a minute, with some additional assets to give you an idea of what that world is going to look like, and then this is the output it shows. Now, this is one where, for me, it's "I'll believe it when I see it" — it looks too good to be real — but you can see the 3D environment they created here: you can load marketplaces of your own assets, there's a deer, there's the ocean, mountains in the background, and it looks like you can pay to add assets into the game as well. To me it looks so good that I have a hard time seeing it work as well as it claims in the real world until I actually get my hands on it myself.

Daz 3D's character generation plugin

While we're on the topic of game assets, this company Daz 3D showed off a new plugin that allows you to generate character meshes from text prompts. If we take a look at their video: "young female African warrior," and it generates that character; "muscular dwarf with a big beard, belly, and a big nose, large head," etc., and it made that character; "a pregnant woman"; it made that character male; "tall, lean, pale vampire"; "tall, lanky alien male." So you can plug in whatever you want your character to look like, and it will create various models you can use, giving you a nice start for your game assets. I'm unclear whether it actually does the texturing for you, because the video goes on to show an image where the character has tattoos, pants, a shirt, and a gun, and everything's colored in — I don't know if it does that part for you; I'm guessing it does, because they show it in the preview, but I'm not super clear on it yet. That's from a company called Daz 3D in collaboration with Yellow 3D.

Meshy v4 for 3D object generation

Another tool that can help with game assets is Meshy. They just announced Meshy version 4, where you can enter any text prompt and it will generate 3D objects from it. This one you can actually use for free right now — you get a certain number of credits — if you head on over to meshy.ai.
They've got Text to 3D, Image to 3D, AI Texturing, and Text to Voxel. I tested this a little bit. I did an Image to 3D: I uploaded an image of my head and ran it with both quad and triangle topology to see which one looks better. Here's the quad topology — this is what it made me look like; I mean, it's got the beard and hair color right, I guess. Here's what it did with the triangle topology, and once again, I've got a beard, so it got that part right. So far, not super realistic when you upload real face images. If we head over to Text to 3D — I've played with this a little more — I generated this one today with their new Meshy 4, giving it the prompt "a wolf howling at the moon," and this is what it generated. It's actually pretty impressive; when you zoom in and look closely you can see the wolf doesn't have eyes and it's got a really long snout, which seems a little off, but if you ignore that, from these angles it looks pretty dang solid. I'm really impressed with the automated texturing it did; it just needs some work on getting the face right. It did give me some other options to choose between, and the one I selected was the best: this one looks more like some sort of weird beast than a wolf, and this one looks like a wolf with a giant tumor on its head — I don't know what's going on with some of these. This one came out pretty decent looking; I used the texture feature, and that's what I got. I think it looks pretty good.

PS5 Pro's AI upscaling for video quality

A slight bit of AI news: there's a new PS5 Pro coming out, and it's going to use AI to upscale the video quality and make games look better. I personally think this new PS5 Pro is a total joke — they're releasing it at what, $700 or something like that? — and it doesn't even come with a disc reader. So if you have a PS5 or a PS4, or already own some PlayStation discs you'd want to play on it, you're not going to be able to play them without buying an external disc drive, because it doesn't come with one. That sucks, but it is going to use AI upscaling to try to get even better video quality out of the games you're playing.

DeepMind's dexterous robots

And finally, some new information out of DeepMind's robotics lab: they have a robot now that can actually tie a shoe. In this video, the robot grabs both strings and is able to accurately tie the shoe — robots haven't been able to do that before. There's another clip where it picks up a shirt off a table, puts it on a coat hanger, and manages to hang it up, and another where it's repairing and attaching pieces to another robot. It's pretty cool that we're seeing this kind of dexterity from a two-handed robot, and it's going to get better and better at everyday tasks for us. That's ultimately where we want these robots to get to anyway, right? We want them to do our dishes and our laundry — I don't know if I need a robot to tie my shoes for me, but we want them to do day-to-day tasks around the house, and they need this additional dexterity, like being able to tie a shoe, to accurately do that kind of stuff.

And there you have it — a breakdown of all the AI news I caught this week. Again, I was at Disneyland pretty much the entire week, got home, and did a super cram session to catch myself up; that's why this video is coming out a day later than normal.
I normally put these videos out on Fridays; this one's going out on Saturday, but I needed that extra time to catch up on all the news, make my notes, figure out what I felt was worth showing you, and put this video together. So I apologize for publishing this video a day late. If you want to keep up with the news on a pretty much daily basis, check out futuretools.io — there's an AI News section I keep up to date regularly, and all the news that didn't make this video is on that page. All the cool AI tools I come across I share on the Future Tools homepage, and there's a free newsletter too. Check it out at futuretools.io — you're going to like it, it's really cool, I'm not biased at all, it's just the best website on the entire internet. If you like videos like this and want to stay looped in on the latest AI news, the latest AI tools, the latest AI research, and get tutorials on how to actually use some of this stuff in a meaningful, helpful way in your daily life, make sure you like this video and subscribe to this channel, and I'll make sure more videos like this show up in your YouTube feed. Thank you so much for tuning in and nerding out with me today — I really appreciate you. I'll see you in the next video. Bye-bye.
