5 things you really need to know about AI | BBC Ideas


Every day, it seems,
there’s a new, bewildering or frightening story
about AI in the news – how it’s going to steal our jobs,
spread internet fakery on a colossal scale
and generally take over the world.
But what exactly is AI – artificial intelligence –
and are the scare stories even true?
So, the first thing to know is that AI has been around
a lot longer than you might think.
Its roots lie in an idea known as an “artificial neural network”
from the 1940s.
A neural network is a bit like a team of interconnected workers
that learn to solve problems.
Each time they come up with a possible solution, it’s marked.
If there is room for improvement,
they adjust and change their connections.
Over time the network becomes more efficient.
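To make that concrete, here's a minimal sketch in Python.
It's my own illustration, not something from the video:
a single artificial “neuron” learns the logical AND rule.
Each guess is marked against the right answer,
and the connection weights are nudged to do better next time.

    # A toy, single-neuron "network" (illustrative only).
    training_data = [
        ((0, 0), 0),
        ((0, 1), 0),
        ((1, 0), 0),
        ((1, 1), 1),   # output 1 only when both inputs are 1 (logical AND)
    ]

    weights = [0.0, 0.0]   # strength of each input connection
    bias = 0.0
    learning_rate = 0.1

    for epoch in range(20):                # repeat the lesson many times
        for (x1, x2), target in training_data:
            guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - guess         # the "mark" for this attempt
            # Adjust the connections a little in the direction that reduces error.
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    # After training, the neuron reproduces AND on all four cases.
    for (x1, x2), target in training_data:
        guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        print((x1, x2), "->", guess, "(expected", target, ")")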
And technology powered by neural networks is all around us right now.
It suggests movies and music we might like.
It recognizes faces and objects when we take photos
on our smartphones, enabling features like facial recognition.
It’s heavily used by social media platforms to personalize our feeds.
More recently, a form of AI known as “generative AI”
is powering applications that can seemingly create new data.
It can also power chatbots like OpenAI’s ChatGPT and Google’s Bard,
which give humanlike responses to questions.
These are getting better at interacting with us,
and seemingly more humanlike.
This can seem scary, but it’s worth knowing point two.
If you ask ChatGPT a question like this one –
“Why should we be concerned about AI?” –
it does a pretty good job of providing a response
that appears logical.
And with that convincing humanlike response,
it’s easy for us to believe it understands what it’s saying,
that it has feelings and motivations.
It’s understandable that we do this,
but it’s worth remembering, right now AI can’t think or feel,
can’t love or hate.
ChatGPT and its counterparts
are sophisticated sentence completion apps
that analyze our patterns of communication
and provide responses similar to the way humans would typically reply.
A bit like a “talking” parrot.
Which leads us to point number three.
Chatbots can have an awkward relationship with the truth,
a problem technically known as “AI hallucinations”.
You could also describe it as “making stuff up”.
The core of the technology is a model that uses probability
to predict the next word, sentence or paragraph.
It can generate seemingly plausible replies, but lacks the ability
to assess the truthfulness or accuracy of its responses.
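As a rough illustration of that idea, here's a tiny Python sketch.
It's my own toy example, not how ChatGPT is actually built:
it predicts the next word purely from how often words
followed each other in a small sample of text.
The output can look plausible, but nothing in it ever checks
whether what it produces is true.

    # A toy next-word predictor (illustrative only): pure probability, no facts.
    import random
    from collections import defaultdict, Counter

    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count, for each word, how often every other word follows it.
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def next_word(word):
        """Pick a continuation in proportion to how often it was seen."""
        words, counts = zip(*follows[word].items())
        return random.choices(words, weights=counts, k=1)[0]

    # Generate a short "reply": statistically likely, not checked for truth.
    word, sentence = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        sentence.append(word)
    print(" ".join(sentence))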
So anyone thinking of using chatbots to write content
needs to be careful
they’re not incorporating credible-sounding BS
that can be easily spotted by someone
who’s actually done the research.
And that brings us to point number four.
The idea of a racist machine might seem far-fetched.
But if AI is trained on data that’s racist, biased or hateful,
then its output will be too.
As we all know, racism, bias and hateful content can be found
in abundance online.
In 2016, Microsoft launched an experimental chatbot called Tay,
but quickly pulled the plug
after it made racist and offensive remarks.
It learned to do this from interacting with users
on social media.
Microsoft apologized
and promised to implement improved safety features in the future.
This is why the ethical framework that governs any AI application
is incredibly important, and why many are calling for safeguards
against bias and hate speech to be built into AI systems.
And finally, point number five.
For all the notes of caution, it can be easy to forget
there are many potential benefits to AI.
It’s set to truly revolutionize healthcare.
AI has already discovered new drugs
and is being used to identify cancer cells
much more reliably than humans.
And AI chatbots can behave like patient teachers
when we struggle to understand a complex topic,
summarizing huge volumes of information for us.
The AI revolution has the potential to enhance and speed up work
in many fields, from software programming, to animation,
to law enforcement and journalism.
This has pluses and minuses, of course,
but could this extra capacity free us up to do other things?
Like tackling climate change
or looking after ourselves and each other better?
As AI advances, governments and regulators will of course
need to make sure it’s being used ethically and legally –
no easy feat.
But will AI take over the world?
Don’t forget, AI is a tool,
and even a powerful tool can’t take over the world on its own.
It’s up to us to decide how we use it –
or even if we should use it at all.