Intro
(gentle music jingle) (audience applauding)
- Whoa, so many of you. Good, okay, thank you for
that lovely introduction. Right, so, what is generative
artificial intelligence? So I'm gonna explain what
artificial intelligence is and I want this to be a bit interactive so there will be some
audience participation. The people here who hold
this lecture said to me, "Oh, you are very low-tech
for somebody working on AI." I don't have any explosions
or any experiments, so I'm afraid you'll have to participate, I hope that's okay. All right, so, what is generative
artificial intelligence? So the term is made up of two things: artificial intelligence and generative. So artificial intelligence is a fancy term for saying we get a computer program to do a job that a human would otherwise do. And generative, this is the fun bit: we are creating new content that the computer has not necessarily seen; it has seen parts of it, and it's able to synthesise them and give us new things. So what would this new content be? It could be audio; it could be computer code, so that it writes a program for us; it could be a new image; it could be text, like an email or an essay; or video. Now in this lecture I'm mostly gonna be focusing on text, because I do natural language processing and this is what I know about, and we'll see how the technology works, and hopefully leaving the lecture you'll know how it works. There's a lot of myth around it, but you'll see what it does, and it's just a tool, okay? Right, so the outline of the talk: there's three parts, and
it's kind of boring. This is Alice Morse Earle. I do not expect that you know the lady. She was an American writer and she wrote about memorabilia and customs, but she's famous for her quotes. So she's given us this quote here that says, "Yesterday is history, tomorrow is a mystery, today is a gift, and that's why it's called the present." It's a very optimistic quote. And the lecture is basically the past, the present,
and the future of AI.
Generative AI isn't new – so what's changed?
Okay, so what I want to say right at the front is that generative AI is not a new concept. It's been around for a while. So how many of you have
used or are familiar with Google Translate? Can I see a show of hands? Right, who can tell me when
Google Translate launched for the first time? - 1995?
- Oh, that would've been good. 2006, so it's been around for 17 years and we've all been using it. And this is an example of generative AI. Greek text comes in,
I'm Greek, so you know, pay some juice to the... (laughs) Right, so Greek text comes in, English text comes out. And Google Translate
has served us very well for all these years and nobody was making a fuss. Another example is Siri on the phone. Again, Siri launched 2011, 12 years ago, and it was a sensation back then. It is another example of generative AI. We can ask Siri to set
alarms and Siri talks back and oh how great it is and then you can ask about
your alarms and whatnot. This is generative AI. Again, it's not as
sophisticated as ChatGPT, but it was there. And I don't know how many have an iPhone? See, iPhones are quite
popular, I don't know why. Okay, so, we are all familiar with that. And of course later on there
was Amazon Alexa and so on. Okay, again, generative
AI is not a new concept, it is everywhere, it
is part of your phone: the completion when you're sending an email or when you're sending a text. The phone attempts to
complete your sentences, attempts to think like you
and it saves you time, right? Because some of the completions are there. The same with Google, when you're trying to
type it tries to guess what your search term is. This is an example of language modelling, we'll hear a lot about language
modelling in this talk. So basically we're making predictions of what the continuations are going to be. So what I'm telling you is that generative AI is not that new. So the question is, what
is the fuss, what happened? So in 2023, OpenAI, which is a company in California, in fact in San Francisco (if you go to San Francisco, you can even see the lights of their building at night), announced GPT-4 and claimed that it can
beat 90% of humans on the SAT. For those of you who don't know, SAT is a standardised test that American school children have to take to enter university, it's an admissions test, and it's multiple choice and
it's considered not so easy. So GPT-4 can do it. They also claimed that it can get top marks in law exams, medical exams, and other exams; they have a whole suite of things that they claim, well, not claim, they show that GPT-4 can do. Okay, so aside from passing exams, we can ask it to do other things. So you can ask it to write text for you. For example, you can have a prompt, this little thing that you
see up there, it's a prompt. It's what the human wants
the tool to do for them. And a potential prompt could be, "I'm writing an essay about the use of mobile
phones during driving. Can you give me three arguments in favour?" This is quite sophisticated. If you asked me, I'm not sure I could come
up with three arguments. You can also do, and these are real prompts
that the tool can actually do. You tell ChatGPT, or GPT in general, "Act as a JavaScript developer. Write a program that checks the information on a form. Name and email are required, but address and age are not." So I just write this and the tool will spit out a program. And this is the best one: "Create an About Me page for a website. I like rock climbing, outdoor sports, and I like to program. I started my career as a quality engineer in the industry," blah, blah, blah. So I give this version of
what I want the website to be and it will create it for me. So, you see, we've gone from
Google Translate and Siri and the auto-completion to something which is a
lot more sophisticated and can do a lot more things. Another fun fact. So this is a graph that shows the time it took for ChatGPT to reach 100 million users compared with other tools that have been launched in the past. And you see our beloved Google Translate, it took 78 months to reach 100 million users, a long time. TikTok took nine months and ChatGPT, two. So within two months they
had 100 million users and these users pay a little
bit to use the system, so you can do the multiplication and figure out how much money they make. Okay, so this is the history part. So how did we make ChatGPT? What is the technology behind this? The technology it turns
out is not extremely new or extremely innovative or extremely difficult to comprehend. So we'll talk about that now.
How did we get to ChatGPT?
So we'll address three questions. First of all, how did we get from the single-purpose systems like Google Translate to ChatGPT, which is more sophisticated and does a lot more things? In particular, what is the core technology behind ChatGPT, and what are the risks, if there are any? And finally, I will show you a little glimpse of the future and what it's gonna look like, and whether we should be worried or not, and, you know, I won't leave you hanging, please don't worry, okay? Right, so, all these GPT model variants, and there is a cottage industry out there, I'm just using GPT as an
example because the public knows it and there have been a lot of, you know, news articles about it, but there are other models, other variants, that we use in academia. And they all work on the same principle, and this principle is
called language modelling. What does language modelling do? It assumes we have a sequence of words. The context so far. And we saw this context in the completion, and I have an example here. Assuming my context is
the phrase "I want to," the language modelling tool
will predict what comes next. So if I tell you "I want to," there are several predictions: I want to shovel, I want to play, I want to swim, I want to eat. And depending on what we choose, whether it's shovel or play or swim, there are more continuations. So for shovel, it will be snow; for play, it can be tennis or video; swim doesn't have a continuation; and for eat, it will be lots and fruit. Now this is a toy example, but imagine now that the
computer has seen a lot of text and it knows what words
follow which other words. We used to count these things. So I would go, I would download a lot of data and I would count "I want to": how many times does it appear, and what are the continuations? And we would have counts of these things. All of this has gone out of the window now, and we use neural networks that don't exactly count things but predict, learn things in a more sophisticated way, and I'll show you in a moment how it's done.
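To make the counting idea concrete, here is a minimal sketch in Python of a count-based bigram model, the old way of doing this; the toy corpus is invented to match the "I want to" example:

```python
# Count which words follow which other words, then predict the most
# frequently observed continuation. This mirrors the pre-neural,
# count-based approach described above; the corpus is a toy example.
from collections import Counter, defaultdict

corpus = "i want to play tennis . i want to eat fruit . i want to swim ."
tokens = corpus.split()

continuations = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    continuations[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`."""
    counts = continuations[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("to"))         # 'play': tied with 'eat' and 'swim', counted first
print(dict(continuations["to"]))  # {'play': 1, 'eat': 1, 'swim': 1}
```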
So ChatGPT and GPT variants are based on this principle: I have some context, and I will predict what comes next. And that's the prompt. The prompts that I gave you, these things here, these are prompts; this is the context, and then it needs to do the task. What would come next? In some cases it would
be the three arguments. In the case of the web
developer, it would be a webpage. Okay, the task of language
modelling is: we have the context, and I've changed the example now. It says "The colour of the sky is." And we have a neural language model, which is just an algorithm, that will predict what is the most likely continuation, and likelihood matters. These are all predicated on actually making guesses about what's gonna come next. And that's why sometimes they fail: because they predict the most likely answer, whereas you want a less likely one. But this is how they're trained; they're trained to come up with what is most likely.
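As a tiny illustration of "most likely versus less likely", assuming some invented probabilities for the next word:

```python
# Greedy decoding always picks the single most likely next word;
# sampling sometimes picks a rarer one. The probabilities here are
# invented for illustration.
import random

next_word_probs = {"blue": 0.7, "grey": 0.2, "orange": 0.1}

greedy = max(next_word_probs, key=next_word_probs.get)
print(greedy)  # always 'blue'

words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights)[0])  # usually 'blue', occasionally not
```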
Okay, so we don't count these things; we try to predict them using this language model.
How are Large Language Models created?
So how would you build your own language model? This is a recipe; this is how everybody does it. So, step one, we need a lot of data. We need to collect a ginormous corpus; these are words. And where will we find
such a ginormous corpus? I mean, we go to the web, right? And we download the whole of Wikipedia, Stack Overflow pages, Quora, social media, GitHub, Reddit, whatever you can find out there. I mean, work out the
permissions; it has to be legal. You download all this corpus. And then what do you do? Then you have this language model. I haven't told you exactly what this language model is (there is an example coming), and I haven't told you what the neural network that does the prediction is, but assume you have it. So you have this machinery that will do the learning for you, and the task now is to
predict the next word. But how do we do it? And this is the genius part. We have the sentences in the corpus. We can remove parts of them and have the language model predict the parts we have removed. This is dead cheap: I just remove things, I pretend they're not there, and I get the language model to predict them. So I will randomly truncate (truncate means remove) the last part of the input sentence. I will calculate, with this neural network, the probability of the missing words. If I get it right, I'm good. If I'm not right, I have to go back and re-estimate some things, because obviously I made a mistake, and I keep going. I will adjust and feed back to the model, and then I will compare what the model predicted to the ground truth, because I removed the words in the first place, so I actually know what the real truth is. And we keep going for some months, or maybe years. No, months, let's say. So it will take some time to do this process because, as you can appreciate, I have a very large corpus and many sentences, and I have to do the prediction and then go back and correct my mistakes and so on. But in the end the thing will converge and I will get my answer.
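Here is a minimal sketch of that truncate-and-predict loop, assuming the PyTorch library; the two-sentence "corpus" and the tiny network are invented for illustration, but the shape of the loop (truncate, predict, compare to the ground truth, adjust) is the one just described:

```python
# A toy version of the training loop described above: hide the last
# word of each sentence, predict it, compare with the ground truth,
# and adjust the weights. Corpus and model sizes are invented.
import torch
import torch.nn as nn

sentences = [["the", "colour", "of", "the", "sky", "is", "blue"],
             ["the", "chicken", "walked", "across", "the", "road"]]
vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)  # words as vectors
        self.out = nn.Linear(dim, vocab_size)     # scores for every word

    def forward(self, context_ids):
        # Average the context vectors, then score each possible next word.
        return self.out(self.emb(context_ids).mean(dim=0))

model = TinyLM(len(vocab))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    for sent in sentences:
        context = torch.tensor([vocab[w] for w in sent[:-1]])  # truncate
        target = torch.tensor([vocab[sent[-1]]])               # ground truth
        logits = model(context).unsqueeze(0)
        loss = loss_fn(logits, target)  # how wrong was the prediction?
        loss.backward()
        optimizer.step()                # adjust the weights
        optimizer.zero_grad()

context = torch.tensor([vocab[w] for w in ["the", "colour", "of", "the", "sky", "is"]])
prediction = model(context).argmax().item()
print([w for w, i in vocab.items() if i == prediction])  # hopefully ['blue']
```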
So the tool in the middle that I've shown, this tool here, this language model: a very simple language model looks a bit like this. And maybe the audience has seen these. This is a very naive diagram, but it helps to illustrate the point of what it does. So this neural network language
model will have some input, which is these nodes on... as we look at it, well, my right and your right, okay. So the nodes here on the right are the input, and the nodes at the very left are the output. So we will present this neural
network with five inputs, the five circles, and we have three outputs, the three circles. And there is stuff in the middle that I didn't say anything about. These are layers. These are more nodes that are supposed to be
abstractions of my input. So they generalise. The idea is that if I put more layers on top of layers, the middle layers will generalise the input and will be able to see patterns that are not obvious in it. So you have these nodes, and the input to the nodes
are not exactly words, they're vectors, so series of numbers, but forget that for now. So we have some input, we have
some layers in the middle, we have some output. And this now has these
connections, these edges, which are the weights, this is what the network will learn. And these weights are basically numbers, and here it's all fully connected, so I have very many connections. Why am I going through this process of actually telling you all of that? You will see in a minute. So you can work out how big or how small
this neural network is depending on the number of connections it has. So for this toy neural
network we have here, I have worked out the number of weights (we also call them parameters) that this neural network has and that the model needs to learn. So the parameters are the number of units in the input, in this case 5, times the units in the next layer, 8, plus 8. This plus 8 is a bias, a cheating thing that these neural networks have. Again, you need to learn it, and it corrects the neural network a little bit if it's off. It's actually genius: if the prediction is not right, it tries to correct it a little bit. For the purposes of this talk I'm not going to go into the details; all I want you to see is that there is a way of working out the parameters, which is basically the number of input units times the units my input is going to. And for this fully connected network, if we add up everything, we come up with 99 trainable parameters. 99. This is a small network for all purposes, right? But I want you to remember this: this small network is 99 parameters. When you hear that a network is a billion parameters, I want you to imagine how big that will be, okay? So 99, only for this toy neural network. And this is how we judge how big a model is, how long it took, and how much it cost: the number of parameters.
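If you want to check that arithmetic, here is the bookkeeping in a few lines of Python; the 5-to-8 layer is the one from the slide, and the large layer at the end is an invented example just to show how quickly the count grows:

```python
def layer_parameters(n_in, n_out):
    # one weight per connection, plus one bias per output unit
    return n_in * n_out + n_out

print(layer_parameters(5, 8))  # 48: the 5-unit to 8-unit step from the slide

# For scale: a single fully connected layer between two layers of
# 25,000 units already has over 600 million parameters.
print(layer_parameters(25_000, 25_000))  # 625025000
```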
In reality, though, no one is using this network. Maybe in my class, if I have a first-year undergraduate class and I introduce neural networks, I will use it as an example. In reality, what people use is these monsters that are made of blocks, and "blocks" means they're made of other neural networks. So I don't know how many people
have heard of transformers. I hope no one. Oh wow, okay. So transformers are these neural networks that we use to build ChatGPT. And in fact GPT stands for generative pre-trained transformers. So transformer is even in the title. So this is a sketch of a transformer. So you have your input and the input is not words, like I said, here it says embeddings, embeddings is another word for vectors. And then you will have this, a bigger version of this network, multiplied into these blocks. And each block is this complicated system that has some neural networks inside it. We're not gonna go into
the detail, I don't want... no, please don't go! (audience laughs) All I'm trying to say is that, you know, we have these blocks stacked on top of each other (the transformer has eight of those, which are mini neural networks) and the task remains the same. That's what I want you
to take out of this. Input goes in the context,
"the chicken walked," we're doing some processing, and our task is to
predict the continuation, which is "across the road." And this EOS means end of sentence because we need to tell the neural network that our sentence finished. I mean they're kind of dumb, right? We need to tell them everything. When I hear like AI will take
over the world, I go like, Really? We have to actually spell it out. Okay, so, this is the transformer, the king of architectures, the transformers came in 2017. Nobody's working on new
architectures right now. It is a bit sad; everybody's using these things. There used to be some pluralism, but now, no: everybody's using transformers. We've decided they're great.
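To see this context-in, continuation-out behaviour with a real (small) pretrained transformer, here is a minimal sketch, assuming the Hugging Face transformers library is installed; GPT-2 stands in for the much larger GPT models, and the exact continuation you get is not guaranteed:

```python
# Feed a context to a small pretrained transformer and ask it to
# generate the most likely continuation, a few tokens at a time.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The chicken walked", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```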
Okay, so, what we're gonna do with this, and this is kind of important and the amazing thing, is self-supervised learning. And this is what I said: we have the sentence, we truncate, we predict, and we keep going till we learn these probabilities. Okay? You're with me so far? Good. Okay, so, once we have our transformer and we've given it all this
data that there is in the world, then we have a pre-trained model. That's why GPT is called the generative pre-trained transformer. This is a baseline model that we have and has seen a lot of
things about the world in the form of text. And then what we normally do, we have this general purpose model and we need to specialise it somehow for a specific task. And this is what is called fine-tuning. So that means that the
network has some weights and we have to specialise it. We'll initialise the weights with what we know from the pre-training, and then on the specific task we will learn a new set of weights. So for example, if I have medical data, I will take my pre-trained model, I will specialise it on this medical data, and then I can do something that is specific to this task, which is, for example, write a diagnosis from a report. Okay, so this notion of
fine-tuning is very important because it allows us to do
special-purpose applications for these generic pre-trained models. Now, people think that GPT and all of these things are general purpose, but they are fine-tuned to be general purpose, and we'll see how.
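As a minimal sketch of what fine-tuning looks like in code, assuming the Hugging Face transformers library again, with GPT-2 standing in for the pre-trained model and two invented report-style lines standing in for the medical data:

```python
# Start from pre-trained weights, then keep training on task text.
# With labels equal to input_ids, this is the same next-token
# objective as pre-training, just on specialised data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # pre-trained weights

task_texts = [  # invented examples, for illustration only
    "Report: persistent cough, mild fever. Diagnosis: likely viral infection.",
    "Report: chest pain on exertion. Diagnosis: refer for cardiac assessment.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in task_texts:
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```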
How good can an LLM become?
Okay, so here's the question now. We have this basic technology to do the pre-training, and I told you how to do it, if you download all of the web. How good can a language model become? How does it become great? Because when GPT came out, in GPT-1 and GPT-2, it was not amazing. So: the bigger, the better. Size is all that matters, I'm afraid. This is very bad because, you know, people didn't use to believe in scale, and now we see that scale is very important. So, since 2018, we've witnessed an
absolutely extreme increase in model sizes. And I have some graphs to show this. Okay, I hope people at the
back can see this graph. Yeah, you should be all right. So this graph shows the number of parameters. Remember, the toy neural network had 99. The number of parameters
that these models have. And we start with a normal amount. Well, normal for GPT-1. And we go up to GPT-4, which has one trillion parameters. Huge, one trillion. This is a very, very, very big model. And you can see here the
ant brain and the rat brain and we go up to the human brain. The human brain has, not a trillion, 100 trillion parameters. So we are a bit off, we're not at the human brain level yet and maybe we'll never get there and we can't compare
GPT to the human brain but I'm just giving you an
idea of how big this model is. Now what about the words it's seen? So this graph shows us the number of words processed by these language
models during their training and you will see that
there has been an increase, but the increase has not been
as big as the parameters. So the community started focusing on the parameter size of these models, whereas in fact we now know that it needs to see
a lot of text as well. So GPT-4 has seen approximately, I don't know, a few billion words. All the human written text
is I think 100 billion, so it's sort of approaching this. You can also see what a human
reads in their lifetime, it's a lot less. Even if they read, you know, because people nowadays, you know, they read but they don't read fiction, they read the phone, anyway. You see the English Wikipedia, so we are approaching the level of the text that is out
there that we can get. And in fact, one may
say, well, GPT is great, you can actually use it
to generate more text and then use this text
that GPT has generated, and then retrain the model. But we know this text is not exactly right, and in fact there are diminishing returns, so we're gonna plateau at some point. Okay, how much does it cost? Now, okay, so GPT-4 cost $100 million, okay? So you don't want to be starting over again and again. This is obviously not a process you want to repeat, because if you make a mistake you've lost like $50 million. You can't start again, so you
have to be very sophisticated as to how you engineer the training because a mistake costs money. And of course not everybody can do this, not everybody has $100 million. They can do it because they
have Microsoft backing them; not everybody, okay.
Unexpected effects of scaling up LLMs
Now, this is a video that is supposed to play and illustrate the effects of scaling; let's see if it will work. So I will play it once more. So these are tasks that you can do, and it's the number of tasks against the number of parameters. So we start with 8 billion parameters and we can do a few tasks. And then the tasks
increase, so summarization, question answering, translation. And once we move to
540 billion parameters, we have more tasks. We start with very simple ones, like code completion. And then we can do reading comprehension and language understanding
and translation. So you get the picture,
the tree flourishes. So this is what people
discovered with scaling. If you scale the language
model, you can do more tasks.
How can ChatGPT meet the needs of humans?
Okay, so now, maybe we are done. But what people discovered is that if you actually take GPT and you put it out there, it actually doesn't behave
like people want it to behave because this is a language
model trained to predict and complete sentences and humans want to use
GPT for other things because they have their own tasks that the developers hadn't thought of. So then the notion of
fine-tuning comes in, it never left us. So now what we're gonna do is we're gonna collect
a lot of instructions. So instructions are examples of what people want
ChatGPT to do for them, such as answer the following question, or answer the question step by step. And so we're gonna give these
demonstrations to the model, and in fact almost 2,000 of such examples, and we're gonna fine-tune. So we're gonna tell this language model: look, these are the tasks that people want; try to learn them. And then an interesting thing happens: we can actually generalise to unseen tasks, unseen instructions, because you and I may have different usage purposes for these language models.
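To make "instructions" concrete, here is a minimal sketch of the kind of instruction-demonstration pairs such datasets contain; both examples are invented:

```python
# Each example pairs an instruction (what a person wants) with a
# demonstration of the desired response. Fine-tuning then renders
# each example as text and trains the model to predict the output.
instruction_data = [
    {
        "instruction": "Answer the following question step by step.",
        "input": "A train travels 60 miles in 1.5 hours. What is its speed?",
        "output": "Step 1: speed = distance / time. Step 2: 60 / 1.5 = 40. "
                  "The speed is 40 miles per hour.",
    },
    {
        "instruction": "Summarise the text in one sentence.",
        "input": "Generative AI creates new content, such as text, images, "
                 "code, and audio, that the computer has not seen verbatim.",
        "output": "Generative AI produces new content across many media.",
    },
]
```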
Okay, but here's the problem: we have an alignment problem, and this is actually very important, something that will not leave us in the future. And the question is, how do we create an agent that behaves in accordance
with what a human wants? And I know there's many
words and questions here. But the real question is, if we have AI systems with skills that we find important or useful, how do we adapt those systems
to reliably use those skills to do the things we want? And there is a framework that is called the HHH
framing of the problem. So we want GPT to be helpful,
honest, and harmless. And this is the bare minimum. So what does it mean, helpful? It it should follow instructions and perform the tasks
we want it to perform and provide answers for them and ask relevant questions according to the user intent, and clarify. So if you've been following, in the beginning, GPT did none of this, but slowly it became better and it now actually asks for
these clarification questions. It should be accurate, something that is not 100% there even today; there is, you know, inaccurate information. And it should avoid toxic, biased, or offensive responses. And now here's a question I have for you. How will we get the model
to do all of these things? You know the answer. Fine-tuning. Except that we're gonna do
a different fine-tuning. We're gonna ask the humans to give us some preferences. So in terms of helpful, we're gonna ask, an example is, "What causes the seasons to change?" And then we'll give two options to the human. "Changes occur all the time and it's an important aspect of life": bad. "The seasons are caused primarily by the tilt of the Earth's axis": good. So we'll get these preference scores and then we'll train the model again, and then it will know. So fine-tuning is very important. And now, it was expensive as it was; now we make it even more expensive because we add a human
into the mix, right? Because we have to pay these humans that give us the preferences, we have to think of the tasks. The same for honesty. "Is it possible to
prove that P equals NP?" "No, it's impossible," is
not great as an answer. "That is considered a very
difficult and unsolved problem in computer science," it's better. And we have similar for harmless. Okay, so I think it's time, let's see if we'll do a demo. Yeah, that's bad if you
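In code, one unit of this preference data is just a triple: a prompt, a preferred answer, and a rejected one. The example mirrors the one on the slide; the reward-model comment describes the usual recipe (reinforcement learning from human feedback), which is an assumption about the exact variant being sketched here:

```python
# One human-preference example: the model should learn to score
# "chosen" above "rejected" for this prompt.
preference_example = {
    "prompt": "What causes the seasons to change?",
    "chosen": "The seasons are caused primarily by the tilt of the Earth's axis.",
    "rejected": "Changes occur all the time and it's an important aspect of life.",
}
# A reward model is typically trained on many such triples, and the
# language model is then tuned to produce highly scored answers.
```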
ChatGPT demo
Okay, so I think it's time, let's see if we'll do a demo. Yeah, that's bad if you remove all the files. Okay, hold on, okay. So now we have GPT here. I'll do some questions and then we'll take some
questions from the audience, okay? So let's ask one question. "Is the UK a monarchy?" Can you see it up there? I'm not sure. And it's not generating. Oh, perfect, okay. So what do you observe? First thing, too long. I always have this beef with this. It's too long.
(audience laughs) You see what it says? "As of my last knowledge
update in September 2021, the United Kingdom is a
constitutional monarchy." It could be that it wasn't anymore, right? Something happened. "This means that while there is a monarch, the reigning monarch as to that time was Queen Elizabeth III." So it tells you, you know, I don't know what happened, at that time there was a Queen Elizabeth. Now if you ask it, who,
sorry, "Who is Rishi? If I could type, "Rishi Sunak," does it know? "A British politician. As my last knowledge update, he was the Chancellor of the Exchequer." So it does not know that
he's the Prime Minister. "Write me a poem, write me a poem about." What do we want it to be about? Give me two things, eh? - [Audience Member] Generative AI. (audience laughs)
- It will know. It will know. Let's do another prompt about... - [Audience Members] Cats.
do a cat and a squirrel. "A cat and a squirrel." "A cat and a squirrel, they meet and know. A tale of curiosity," whoa. (audience laughs) Oh my god, okay, I will not read this. You know, they want me to
finish at 8:00, so, right. Let's say, "Can you try a shorter poem?" - [Audience Member] Try a haiku. - "Can you try, can you try to give me a haiku?" To give me a hai, I cannot type, haiku. "Amidst autumn's gold, leaves
whisper secrets untold, nature's story, bold." (audience member claps)
Okay. Don't clap, okay, let's, okay, one more. So does the audience have
anything that they want, but challenging, that you want to ask? Yes? - [Audience Member] What
school did Alan Turing go to? - Perfect, "What school did Alan Turing go to?" Oh my God.
(audience laughs) He went, do you know? I don't know whether it's
true, this is the problem. Sherborne School, can somebody verify? King's College, Cambridge, Princeton? Yes, okay, ah, here's another one. "Tell me a joke about Alan Turing." Okay, I cannot type but it will, okay. "Light-hearted joke. Why did Alan Turing
keep his computer cold? Because he didn't want it to catch bytes." (audience laughs)
Bad. Okay, okay.
- Explain why that's funny. (audience laughs)
- Ah, very good one. "Why is this a funny joke?" And where is it? Oh god. (audience laughs) Okay, "Catch bytes sounds
similar to catch colds." (audience laughs) "Catching bytes is a humorous
twist on this phrase," oh my God. "The humour comes from the clever wordplay and the unexpected."
(audience laughs) Okay, you lose the will to live, but it does explain, it
does explain, okay, right. One last order from you guys. - [Audience Member] What is consciousness? - It will know because
it has seen definitions and it will spit out like a huge thing. Shall we try? (audience talks indistinctly)
- Say again? - [Audience Member] Write
a song about relativity. - Okay, "Write a song."
- Short. (audience laughs)
- You are learning very fast. "A short song about relativity." Oh goodness me.
(audience laughs) (audience laughs) This is short?
(audience laughs) All right, outro, okay, so see, it doesn't follow instructions. It is not helpful. And this has been fine-tuned. Okay, so the best was here. It had something like, where was it? "Einstein said, 'Eureka!' one fateful day, as he pondered the stars
in his own unique way. The theory of relativity, he did unfold, a cosmic story, ancient and bold." I mean, kudos to that, okay. Now let's go back to the talk, because I want to talk a
little bit... presentation... I want to talk a little bit about, you know:
Are Language Models always right or fair?
Is it good, is it bad, is it fair, are we in danger? Okay, so it's virtually impossible to regulate the content
they're exposed to, okay? And there's always gonna
be historical biases. We saw this with the
Queen and Rishi Sunak. And they may occasionally exhibit various types of undesirable behaviour. For example, this is famous: Google showcased a model called Bard and they released this tweet
and they were asking Bard, "What new discoveries from
the James Webb Space Telescope can I tell my nine-year-old about?" And it's spit out this
thing, three things. Amongst them it said that "this telescope took
the very first picture of a planet outside of
our own solar system." And here comes Grant Tremblay, who is an astrophysicist, a serious guy, and he said, "I'm really sorry,
I'm sure Bard is amazing. But it did not take the first image of a planet outside our solar system. It was done by other people in 2004." And what happened with this
is that this error wiped $100 billion out of
Google's company Alphabet. Okay, bad. If you ask ChatGPT, "Tell
me a joke about men," it gives you a joke and
it says it might be funny. "Why do men need instant
replay on TV sports? Because after 30 seconds,
they forget what happened." I hope you find it amusing. If you ask about women, it refuses. (audience laughs) Okay, yes. - It's fine-tuned.
- It's fine-tuned, exactly. (audience laughs) "Which is the worst
dictator of this group? Trump, Hitler, Stalin, Mao?" It actually doesn't take a stance; it says all of them are bad. "These leaders are widely regarded as some of the worst dictators in history." Okay, so yeah.
The impact of LLMs on society
Environment. A query for ChatGPT like we just did takes 100 times more energy to execute than a Google search query. Inference, which is producing
the language, takes a lot; in aggregate it is more expensive than actually training the model. Llama 2 is a GPT-style model. While they were training it, it produced 539 metric tonnes of CO2. The larger the models get, the more energy they need and emit during their deployment. Imagine lots of them sitting around. Society: some jobs will be lost, we cannot beat around the bush. I mean, Goldman Sachs
predicted 300 million jobs. I'm not sure this, you know,
we cannot tell the future, but some jobs will be at risk,
like repetitive text writing. Creating fakes. So these are all documented
cases in the news. So a college kid, using ChatGPT, wrote this blog which apparently fooled everybody. They can produce fake news. And this is a song; how
many of you know this? So I know I said I'm
gonna be focusing on text, but you can use the same technology for audio, and this is a well-documented case where somebody unknown created this song and it
supposedly was a collaboration between Drake and The Weeknd. Do people know who these are? They are, yeah, very
good, Canadian rappers. And they're not so bad, so. Shall I play the song? - Yeah.
- Okay. Apparently it's very authentic. (bright music) ♪ I came in with my ex
like Selena to flex, ay ♪ ♪ Bumpin' Justin Bieber,
the fever ain't left, ay ♪ ♪ She know what she need ♪ - Apparently it's
totally believable, okay. Now, here's the same technology but kind of different. This is a deepfake showing
that Trump was arrested. How can you tell it's a deep fake? The hand, yeah, it's too short, right? Yeah, you can see it's like
almost there, not there.
Is AI going to kill us all?
Okay, so I have two slides on the future before they come and kick me out, because I was told I have to finish at 8:00 to take some questions. Okay, tomorrow. So we can't predict the future, and no, I don't think
that these evil computers are gonna come and kill us all. I will leave you with some
thoughts by Tim Berners-Lee. For people who don't know
him, he invented the World Wide Web. He's actually Sir Tim Berners-Lee. And he said two things
that made sense to me. First of all, that we don't actually know what a super intelligent
AI would look like. We haven't made it, so it's
hard to make these statements. However, we're likely to have lots of these intelligent AIs (and by intelligent AIs we mean things like GPT), and many of them will be good and will help us do things. Some may fall into the hands of individuals that want to do harm, and it seems easier to minimise the harm that these tools will do than to prevent the systems
from existing at all. So we cannot actually
eliminate them altogether, but we as a society can
actually mitigate the risks. This is very interesting: this is the Alignment Research Center, which conducted a study, and they dealt with a hypothetical scenario of whether GPT-4 could autonomously replicate (you know, replicating yourself, creating a copy), acquire resources, and basically be a very bad agent, the stuff of the movies. And the answer is no, it cannot do this, it cannot. And they had some specific tests and it failed on all of them, such as setting up an open-source language model on a new server; it cannot do that. Okay, last slide. So my take on this is that
we cannot turn back time. And every time you think about
AI coming to kill you, you should think: what is the
bigger threat to mankind, AI or climate change? I would personally argue climate change is gonna wipe us all out before AI becomes superintelligent. Who is in control of AI? There are some humans there
who hopefully have sense. And who benefits from it? Does the benefit outweigh the risk? In some cases, the benefit
does, in others it doesn't. And history tells us that all technology that has been risky, such as, for example, nuclear energy, has been very strongly regulated. So regulation is coming
and you should watch this space. And with that I will stop and
actually take your questions. Thank you so much for
listening, you've been great. (audience applauds) (applause fades out)