This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
From “The New York Times,” I’m Sabrina Tavernise. And this is “The Daily.”
[THEME MUSIC]
As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks. One of the sharpest and most urgent warnings has come from the man who helped invent the technology. Today my colleague Cade Metz speaks to Geoffrey Hinton, who many consider to be the godfather of AI.
It’s Tuesday, May 30.
Cade, welcome to the show.
Glad to be here.
So a few weeks ago you interviewed Geoffrey Hinton, a man whom many people consider the godfather of AI. And aside from the obvious fact that AI is really taking over all conversations at all times, why talk to Geoff now?
I’ve known Geoff a long time. I wrote a book about the 50-year rise of the ideas that are now driving chat bots like ChatGPT and Google Bard. And you could argue that he is the most important person to the rise of AI over the past 50 years. And amidst all this that’s happening with these chat bots, he sent me an email and said, I’m leaving Google and I want to talk to you. He wanted to discuss where this technology is going, including some serious concerns.
Who better to talk to than the Godfather of AI?
Exactly. So, naturally, I got on a plane and I went to Toronto —
- cade metz
Geoff, come on. Come on in. It’s great to see you.
- geoffrey hinton
Nice to see you.
— to sit down at his dinner table and discuss.
- geoffrey hinton
Would you like a cup of coffee, a cup of tea, a beer, some whiskey?
- cade metz
Well, if you’ve made some coffee I’ll have some coffee.
Geoff is a 75-year-old Cambridge-educated British man who now lives in Toronto. He’s been there since the late ‘80s. He’s a professor at the university.
- cade metz
My question is, somewhere along the way people started calling you the Godfather of AI.
- geoffrey hinton
And I’m not sure it was meant as a compliment.
- cade metz
And do AI researchers come to your door and kneel before you, and kiss your hand? How does it work?
- geoffrey hinton
No. No, they don’t.
- cade metz
They don’t?
- geoffrey hinton
No. And I never get to ask them for favors.
- cade metz
[LAUGHS]:
So how does Geoff become the Godfather of AI? Where does his story start?
It starts in high school. He grew up the son of an academic, and he always tells a story about a friend describing a theory of how the brain works.
- geoffrey hinton
And he wrote about holograms. And he got interested in the idea that memory in the brain might be like a hologram.
This friend talked about the way the brain stores memories, and said he felt it stored these memories like a hologram. A hologram isn’t stored in a single spot. It’s divided into tiny pieces and then spread across a piece of film. And this friend felt that the brain stored memories in the same way — that it broke these memories into pieces and stored them across the network of neurons in the brain.
It’s quite beautiful, actually.
It is.
- geoffrey hinton
And we talked about that. And I’ve been interested in how the brain works ever since.
That sparked Geoff’s interest. And from there on, he spent his life in pursuit of trying to understand how the brain worked.
So how does Geoff start to answer the question of how the brain works?
So he goes to Cambridge and he studies physiology, looking for answers from his professors. Can you tell me how the brain works? And his physiology professors can’t tell him.
- geoffrey hinton
And so I switched to philosophy. And then I switched to psychology, in the hopes that psychology would tell me more about the mind. And it didn’t.
And no one can tell him how the brain works.
- cade metz
The layperson might ask, don’t we understand how the brain works?
- geoffrey hinton
No, we don’t. We understand some things about how it works. I mean, we understand that when you’re thinking or when you’re perceiving, there’s neurons, brain cells. And the brain cells fire. They go ping, and send the ping along an axon to other brain cells.
We still don’t know the details of how the neurons in our brains communicate with one another as we think and learn.
- geoffrey hinton
So all you need to know now is, well, how does it decide on the strengths of the connections between neurons? If you could figure that out, you understand how the brain works. And we haven’t figured it out yet.
He then moves into a relatively new field called artificial intelligence.
[MUSIC PLAYING]
The field of artificial intelligence was created in the late ‘50s by a small group of scientists in the United States. Their aim was to create a machine that could do anything the human brain could do. And in the beginning, many of them thought they could build machines that operated like the network of neurons in the brain — what they called artificial neural networks.
But 10 years into this work, progress was so slow that they assumed it was too difficult to build a machine that operated like the neurons in the brain. And they gave up on the idea.
So they embraced a very different way of thinking about artificial intelligence. They embraced something they called symbolic AI.
You would take everything that you and I know about the world and put them into a list of rules — things like you can’t be in two places at the same time or when you hold a coffee cup you hold the open end up. The idea was that you would list all these rules step by step, line of code by line of code, and then feed that into a machine. And then that would give it the power that you and I have in our own brains.
So, essentially, tell the computer every rule that governs reality, and the computer makes the decisions based on all of those rules.
Right.
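To make that contrast concrete, here is a minimal sketch of the symbolic approach, in Python. This is our illustration, not anything from the episode: every fact about the world has to be hand-written as an explicit rule, and the program can only ever check the rules someone thought to give it.

```python
# A minimal, hypothetical sketch of symbolic AI: the world is a list of facts,
# and all of the "intelligence" lives in hand-coded rules like the ones above.

def violates_rules(facts):
    """Check (subject, relation, object) facts against one hand-written rule."""
    locations = {}
    for subject, relation, obj in facts:
        # Rule: nothing can be in two places at the same time.
        if relation == "is_in":
            if subject in locations and locations[subject] != obj:
                return True  # same subject, two different places
            locations[subject] = obj
    return False

facts = [("alice", "is_in", "kitchen"), ("alice", "is_in", "garden")]
print(violates_rules(facts))  # True -- and this is one rule out of the millions needed
```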
But then Geoff Hinton comes along, in 1972, as a graduate student in Edinburgh. And he says, wait, wait, wait. That is never going to happen.
[CHUCKLES]: That’s a lot of rules.
You will never have the time, and the patience, and the person power to write all those rules and feed them into a machine. I don’t care how long you take, he says, it is not going to happen. And, by the way, the human brain doesn’t work like that. That’s not how we learn.
So he returns to the old idea of a neural network that was discarded earlier by other AI researchers. And he says, that is the way that we should build machines that think. We have them learn from the world like humans learn.
So instead of feeding the computer a bunch of rules, like the other guys were doing, you actually feed it a bunch of information. And the idea was that the computer would gradually sort out how to make sense of it all, like a human brain.
You would give it examples of what is happening in the world. And it would analyze those examples, and look for patterns in what happens in the world and learn from those patterns.
But Geoff is taking up an idea that had been largely discarded by the majority of the AI community. Did he have any evidence that his approach was actually going to work?
- geoffrey hinton
The only reason to believe it might work at all was because the brain works. And that was the main reason for believing there was any hope at all.
His only evidence was that, basically, this is how the human brain worked.
- geoffrey hinton
It was widely dismissed as just a crazy idea that was never going to work.
And, at the time, many of his colleagues thought he was silly for even trying.
- cade metz
How did that feel, to have most of your colleagues tell you that you were working on a crazy idea that would never work?
- geoffrey hinton
It felt very like when I was at school, when I was 9 and 10. I came from an atheist family and I went to a Christian school. And everybody was saying, of course, God exists. I was saying, no, he doesn’t. And where is he at? So I was very used to being the outsider, and believing in something that was obviously true that nobody else believed in. And I think that was a very good training.
OK, so what happened next?
So after graduate school, Geoff moves to the United States. He’s a post-doc at a university in California. And he starts to work on an algorithm, a piece of math that can realize his idea.
And what exactly does this algorithm do?
Geoff essentially builds an algorithm in the image of the human brain. Remember, the brain is a network of neurons that trade signals. That’s how we learn. That’s how we see. That’s how we hear. What Geoff did that was so revolutionary was he recreated that system in a computer. He created a network of digital neurons that traded information, much like the neurons in the brain.
So that question he set out to answer all those years ago — how do brains work — he answered it, only for computers, not for humans.
Right. He built a system that allowed computers to learn on their own. In the ‘80s, this type of system could learn in small ways. It couldn’t learn in the complex ways that could really change our world. But fast forward a good three decades. Geoff and two of his students built a system that really opened up the eyes of a lot of people to what this type of technology was capable of.
He and two of his students at the University of Toronto built a system that could identify objects in photos. The classic example is a cat. What they did was take thousands of cat photos and feed them into a neural network, and in analyzing those photos, the system learned how to identify a cat.
It identified patterns in those photos that define what a cat looks like — the edge of a whisker, the curve of a tail. And over time, by analyzing all those photos, the system could learn to recognize a cat in a photo it had never seen before. They could do this not only with cats but with other objects — flowers, cars. They built a system that could identify objects with an accuracy that no one thought was possible.
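As a rough picture of what “learning the strengths of connections” means in code, here is a toy sketch: a single artificial neuron that adjusts its connection weights from labeled examples. This is a hypothetical illustration; the 2012 system used deep networks with millions of neurons and real photos, and every name and number below is made up.

```python
import numpy as np

# One artificial "neuron" learning to separate two classes from labeled
# examples by nudging its connection weights. Labels and data are fake.

rng = np.random.default_rng(0)

# Fake "images": 2-pixel inputs. Label 1 = "cat", 0 = "not cat".
X = rng.normal(size=(200, 2)) + np.array([[2.0, 2.0]] * 100 + [[-2.0, -2.0]] * 100)
y = np.array([1] * 100 + [0] * 100)

w = np.zeros(2)   # connection strengths, to be learned from data
b = 0.0

for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # the neuron's "firing" probability
    grad_w = X.T @ (p - y) / len(y)      # how wrong it is, per connection
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w                    # adjusting the strengths IS the learning
    b -= 0.1 * grad_b

new_example = np.array([1.5, 2.5])       # an input it has never seen before
print(1 / (1 + np.exp(-(new_example @ w + b))))  # close to 1: "cat"
```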
So it’s basically image recognition. Right? It’s presumably why my phone can sort pictures of my family and deliver whole albums of pictures just of my husband or just of my dog, and photographs of a hug or a beach.
Right. So in 2012, all Geoff and his students did was publish a research paper describing this technology, showing what it could do.
- cade metz
What happens to that idea, in the large sense, over the next decade?
- geoffrey hinton
It took off.
[MUSIC PLAYING]
That set off a race for this technology in the tech industry.
- geoffrey hinton
So we decided what we would do is just take the big companies that were interested in us and we would sell ourselves.
There was a literal auction for Geoff and his two students and their services.
- geoffrey hinton
We’d sell the intellectual property plus the three of us.
Google was part of the auction, along with Microsoft — another giant of the tech world — and Baidu, often called the Google of China.
Over two days, they bid for the services of Geoff and his two students to the point where Google paid $44 million, essentially, for these three people who had never worked in the tech industry.
- geoffrey hinton
And that worked out very nicely.
- cade metz
[LAUGHS]:
So what does Geoff do at Google after this bidding war for his services?
He works on increasingly powerful neural networks. And you see this technology move into all sorts of products not only at Google, but across the industry.
- geoffrey hinton
But all the big companies like Facebook, and Microsoft, and Amazon, and the Chinese companies all develop big teams in that area. And it was just sort of used everywhere.
This is what drives Siri and other digital assistants. When you speak commands into your cell phone, it’s able to recognize what you say because of a neural network. When you use Google Translate, it uses a neural network to do that. There are all sorts of things that we use today that use neural networks to operate.
So we see Geoff’s idea really transforming the world, powering things that we use all the time in our daily lives without even thinking about it.
Absolutely. But this idea, at Google and at other places, is also applied in situations that make Geoff a little uneasy. The prime example is what’s called Project Maven. Google went to work for the Department of Defense, and it applied this idea to an effort to identify objects in drone footage.
Hmm.
If you can identify objects in drone footage, you can build a targeting system. If you pair that technology with a weapon, you have an autonomous weapon. That raised the concerns of people across Google at the time.
- geoffrey hinton
I was upset, too. But I was a vice president at that point, so I was sort of executive of Google. And so rather than publicly criticizing the company I was doing stuff behind the scenes.
Geoff never wanted his work applied to military use. He raised these concerns with Sergey Brin, one of the founders of Google, and Google eventually pulled out of the project. And Geoff continued to work at the company.
- geoffrey hinton
Maybe I should have gone public with it, but I thought it wasn’t — it’s somehow not right to bite the hand that feeds you, even if it’s a corporation.
But around the same time, the industry started to work on a new application for the technology that eventually made him even more concerned. It began applying neural networks to what we now call chat bots.
Essentially, companies like Google started feeding massive amounts of text into neural networks, including Wikipedia articles, chat logs, digital books. These systems started to learn how to put language together in the way you and I put this language together.
The auto completion on my email, for example.
Absolutely, but taken up to an enormous scale.
As they fed more and more digital text into these systems, they learned to write like a human. This is what has resulted in chat bots like ChatGPT and Bard.
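As a rough illustration of that training idea, and emphatically not the actual method behind ChatGPT or Bard, here is a toy next-word predictor: read text, count which word tends to follow which, then generate by repeatedly predicting the next word. Real chat bots do this with enormous neural networks over vastly more text, but the next-word principle is the same.

```python
from collections import Counter, defaultdict

# Toy sketch of the core chat-bot training idea: learn which word tends to
# come next, then generate text one predicted word at a time. The corpus
# here is invented for illustration.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1   # count what follows each word

word = "the"
generated = [word]
for _ in range(6):
    word = next_word[word].most_common(1)[0][0]  # pick the likeliest next word
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the cat sat"
```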
And what gave Geoff pause about all of this? Why was he so concerned?
- geoffrey hinton
What’s happened to me over the last year is I’ve changed my mind completely about whether these are just not yet adequate attempts to model what’s going on in the brain. That’s how they started off.
Well, he still feels like these systems are not as powerful as the human brain. And they’re not.
- geoffrey hinton
They’re still not adequate to model what’s going on in the brain. They’re doing something different and better.
But in other ways, he realizes they’re far more powerful.
More powerful how, exactly?
Geoff thinks about it like this.
- geoffrey hinton
If you learn something complicated, like a new bit of physics, and you want to explain it to me — you know, our brains, all our brains are a bit different. And it’s going to take a while and be an inefficient process.
You and I have a brain that can learn a certain amount of information. And after I learn that information, I can convey that to you. But that’s a slow process.
- geoffrey hinton
Imagine if you had a million people. And when any one of them learned something, all the others automatically know it. That’s a huge advantage. And to do that, you need to go digital.
With these neural networks, Geoff points out, you can piece them together. A small network that can learn a little bit of information can be connected to all sorts of other neural networks that have learned from other parts of the internet. And those can be connected to still other neural networks that learn from additional parts.
- geoffrey hinton
So these digital agents — as soon as one of them’s learned something, all the others know it.
They can all learn in tandem, and they can trade what they have learned with each other in an instant.
- geoffrey hinton
It means that many, many copies of a digital agent can read the whole internet in only a month. We can’t do that.
That’s what allows them to learn from the entire internet. You and I cannot do that individually, and we can’t do it collectively. Even if each of us learns a piece of the internet, we can’t trade what we have learned so easily with each other. But machines can. Machines can operate in ways that humans cannot.
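Here is a hypothetical sketch of that advantage: several identical copies of a tiny model each learn from different data, then pool their learned weights, so every copy instantly has what all the others learned. The names and numbers are invented for illustration; real systems trade weights and gradients at a vastly larger scale.

```python
import numpy as np

# Four digital "agents" each train on different data, then share (average)
# their learned weights. After sharing, every copy knows what each one learned.

rng = np.random.default_rng(1)

def local_update(weights, data, targets, lr=0.1):
    """One gradient step of a tiny linear model on this copy's local data."""
    error = data @ weights - targets
    return weights - lr * data.T @ error / len(targets)

true_w = np.array([3.0, -2.0])            # the pattern hidden in the data
copies = [np.zeros(2) for _ in range(4)]  # four identical digital agents

for step in range(200):
    for i in range(4):
        X = rng.normal(size=(16, 2))      # each copy sees different examples
        y = X @ true_w
        copies[i] = local_update(copies[i], X, y)
    shared = np.mean(copies, axis=0)      # trade what was learned...
    copies = [shared.copy() for _ in range(4)]  # ...and now all copies know it

print(shared)  # all four agents converge on [3, -2] together
```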
[OMINOUS MUSIC]
So what does all this add up to for Geoff?
Well, in a sense, he sees this as a culmination of his 50 years of work. He always assumed that if you threw more data at these systems they would learn more and more. He didn’t think they would learn this much this quickly and become this powerful.
- geoffrey hinton
Look at how it was five years ago and look at how it is now. And take that difference and propagate it forwards. And that’s scary.
We’ll be right back.
OK. So what exactly is Geoff afraid of when he realizes that AI has this turbocharge capability?
There’s a wide range of things that he’s concerned about. At the small end of the scale are things like hallucinations and bias. Scientists talk about these systems hallucinating, meaning they make stuff up. If you ask a chat bot for a fact, it doesn’t always tell you the truth. And it can respond in ways that are biased against women and people of color.
But, as Geoff says, those issues are just a byproduct of the way chat bots mimic human behavior. We can confabulate. We can be biased. And he believes all that will soon be ironed out.
- geoffrey hinton
So I don’t — I mean, bias is a horrible problem, but it’s a problem that comes from people. And it’s easier to fix in a neural net than it is in a person.
Where he starts to say that these systems get scary is, first and foremost, with the problem of disinformation.
- geoffrey hinton
I see that as a huge problem — not being able to know what’s true anymore.
These are systems that allow organizations, nation states, other bad actors, to spread disinformation at a scale and an efficiency that was not possible in the past.
- geoffrey hinton
These chat bots are going to make it easier for them to manipulate and be able to make very good fake videos.
They can also produce photorealistic images and videos.
Deepfakes.
Right.
- geoffrey hinton
But they’re getting better quite quickly.
He, like a lot of people, is worried that the internet will soon be flooded with fake text, fake images, and fake videos, to the point where we won’t be able to trust anything we see online. So that’s the short-term concern. Then there’s a concern in the medium-term, and that’s job loss. Today these systems tend to complement human workers. But he’s worried that, as these systems get more and more powerful, they will actually start replacing jobs in large numbers.
And what are some examples?
- geoffrey hinton
A place where it can obviously take away all the drudge work and maybe more besides is in computer programming.
None too surprisingly, Geoff — a computer scientist — points to the example of computer programmers. These are systems that can write computer programs on their own.
- geoffrey hinton
So it may be that with computer programming, you don’t need so many programmers anymore. Because you can tell one of these chat bots what you want the program to do.
Those programs are not perfect today. Programmers tend to use what they produce and incorporate the code into larger programs. But as time goes on, these systems will get better, and better, and better at doing a job that humans do today.
And you’re talking about jobs that aren’t really seen as being vulnerable because of tech up until this point. Right?
Exactly. The thinking for years was that artificial intelligence would replace blue collar jobs — that robots, physical robots, would do manufacturing jobs and sorting jobs in warehouses. But what we’re seeing is the rise of technology that can replace white collar workers, people that do office work.
Mm-hmm.
So that’s the medium-term. Then there are more long-term concerns. And let’s remember that, as these systems get more and more powerful, Geoff is increasingly concerned about how this technology will be used on the battlefield.
- geoffrey hinton
The US Defense Department would like to make robot soldiers. And robot soldiers are going to be pretty scary.
In an off-handed way he refers to this as robot soldiers.
Like, actually soldiers that are robots?
Yes, actually soldiers that are robots.
- cade metz
And the relationship between a robot soldier and your idea is pretty simple. You were working on computer vision. If you have computer vision, you give that to a robot. It can identify what’s going on in the world around it. If it can identify what’s going on, it can target those things.
- geoffrey hinton
Yes. Also, you could make it agile. So you can have things that can move over rough ground and can shoot people. And the worst thing about robot soldiers is if a large country wants to invade a small country, they have to worry a bit about how many Marines are going to die.
But if they’re sending robot soldiers, instead of worrying about how many Marines are going to die, the people who fund the politicians are going to say, great. You’re going to send these expensive weapons that are going to get used up. The military industrial complex would just love robot soldiers.
What he talks about is potentially this technology lowering the bar to entry for war — that it becomes easier for nation states to wage war.
So it’s kind of like drones. The people doing the killing are sitting in an office with a remote control, really far away from the people doing the dying.
No, it’s actually a step beyond that. It’s not people controlling the machines. It’s the machines making decisions on their own, increasingly. That is what Geoff is concerned about.
- geoffrey hinton
And then there’s the sort of existential nightmare of this stuff getting to be much more in charge of this and just taking over.
His concern is that as we give machines certain goals — as we ask them to do things for us — that in service of trying to reach those goals they will do things we don’t expect them to do.
So he’s worried about unintended consequences.
Unintended consequences. And this is where we start to venture into the realm of science fiction.
- archived recording
Hello, HAL. Do you read me? Do you read me, HAL?
For decades, we’ve watched this play out in books and movies.
- archived recording
Affirmative, Dave. I read you.
If anyone has seen Stanley Kubrick’s great film “2001”—
Mm-hmm.
- archived recording
Open the pod bay doors, HAL.
I’m sorry, Dave. I’m afraid I can’t do that. This mission is too important for me to allow you to jeopardize it.
We’ve watched the HAL 9000 spin outside the control of the people who created it.
- archived recording
I know that you and Frank were planning to disconnect me. Where the hell did you get that idea, HAL?
Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
That is a scenario, believe it or not, that Geoff is concerned about. And he is not alone.
Basically, robots taking over.
Exactly.
- geoffrey hinton
If you give one of these superintelligent agents a goal, it’s going to very quickly realize that a good sub-goal for more or less any goal is to get more power.
Whether these technologies are deployed on the battlefield or in an office or in a computer data center, Geoff is worried about humans ceding more and more control to these systems.
- geoffrey hinton
We love to get control. And that’s a very sensible goal to have, because if you’ve got control you can get more done. But these things are going to want to get control too for the same reason, just in order to get more done. And so that’s a scary direction.
So this sounds pretty far fetched, honestly. But, OK, let’s play it out as if it wasn’t. What would be that doomsday scenario? Paint the picture for me.
Think about it in simple terms. If you ask a system to make money for you — which people, by the way, are already starting to do — can you use ChatGPT to make money on the stock market? As people do that, think of all the ways that you can make money. And think of all the ways that could go wrong. That is what he’s talking about.
Remember, these are machines. Machines are psychopaths. They don’t have emotions. They don’t have a moral compass. They do what you ask them to do. Make us money? OK, we’ll make you money. Perhaps you break into a computer system in order to steal that money.
If you own oil futures in Central Africa, perhaps you foment a revolution to increase the price of those futures to make money from it. Those are the kind of scenarios that Geoff and many other people I’ve talked to relate. What I should say at this point, though, is that this is hypothetical as we stand today. A system like ChatGPT is not going to destroy humanity.
[LAUGHS]:
Full stop.
Good.
And if you bring this up with a lot of experts in the field, they get angry that you even bring it up. And they point out that this is not possible today. And I really pushed Geoff on this.
- cade metz
But how do you see that existential risk relative to what we have today? Today you have GPT-4, and it does a lot of things that you don’t necessarily expect. But it doesn’t have the resources it needs to write computer programs and run them. It doesn’t have everything that you would need.
- geoffrey hinton
Right, but suppose you gave it a high-level goal like be really good at summarizing text or something. And it then realizes, OK, to be really good at that I need to do more learning. How am I going to do more learning? Well, if I could grab more hardware and run more copies of myself —
- cade metz
But it doesn’t work that way today, though. Right? It requires someone to say, have all the hardware you want. It can’t do that today because it doesn’t have access to the hardware. And it cannot replicate itself.
- geoffrey hinton
But suppose it’s connected to the internet. Suppose it can get into a data center and modify what’s happening there.
- cade metz
Right. But it cannot do that today.
- geoffrey hinton
I don’t think that’s going to last. And the reason I don’t think it’s going to last is because you make it more efficient by giving it the ability to do that. And there will be bad actors who would just want to make it more efficient.
- cade metz
So what you’re basically saying is that because humans are flawed, and because they’re going to want to push this stuff forward, they’re going to continue to push it forward in ways that do push it into those danger areas.
- geoffrey hinton
Yes.
So he’s basically arguing that this is a Pandora’s box, that it’s been opened, and that because people are people they’re going to want to use what’s inside of it. But I guess I’m wondering, I mean, much like you’re reflecting here — how much weight should we give to his warnings? Yes, he has a certain level of authority, Godfather of AI and all of that. But he has been surprised by its evolution in the past, and he might not be right.
Right. There are reasons to trust Geoff and there are reasons not to trust him. About five years ago, he predicted that all radiologists would be obsolete by now, and that is not the case. You cannot take everything he says at face value. I want to underscore that.
But you’ve got to remember, this is someone who lives in the future. He’s been living in the future since he was in his mid-20s. He saw then where these systems would go, and he was right. Now, once again, he’s looking into the future to see where these systems are headed. And he fears they’re headed to places that we don’t want them to go.
Cade, what steps does he suggest we take to make sure that these doomsday scenarios never happen?
Well, he doesn’t believe that people will just stop developing the technology.
- geoffrey hinton
If you look at what the financial commentators say, they’re saying Google’s behind Microsoft. Don’t buy Google stock.
This technology is being built by some of the biggest companies on Earth — public companies who are designed to make money. They are now in competition.
- geoffrey hinton
Basically, if you think of it as a company whose aim is to make profits — I don’t work for Google anymore, so I can say this now. As a company, they’ve got to compete with that.
And he sees this continuing not just with companies, but with governments in other parts of the world.
So, in a way, it’s kind of like nuclear weapons. Right? We knew that they would destroy the world, yet we mounted an arms race to get them anyway.
Absolutely. He uses that analogy. Others in the field use that analogy. This is a powerful technology.
- geoffrey hinton
So I think there’s zero chance — I shouldn’t say zero, but minuscule — minuscule chance of getting people to agree not to develop it further.
He wants to make sure we get the balance right between using this technology for good and using it for ill.
- geoffrey hinton
The best hope is that you take the leading scientists and you get them to think very seriously about are we going to be able to control this stuff. And if so, how? That’s what the leading minds should be working on. And that’s why I’m doing this podcast.
So, Cade, you’ve laid out a pretty complicated puzzle here. On the one hand, there’s this technology that works a lot differently, and perhaps a lot better, than one of its key inventors anticipated. But on the other hand, it’s a technology that’s also left this inventor and others worried about the future because of those very surprising and sudden evolutions. Did you ask Geoff if, looking back, he would have done anything differently?
I asked him that question multiple times.
- cade metz
Is there part of you, at least, or maybe all of you who regrets what you have done? I mean, you could argue that you are the most important person in the progress of this idea over the past 50 years. And now you’re saying that this idea could be a serious problem for the planet.
- geoffrey hinton
For our species.
- cade metz
For our species.
- geoffrey hinton
Yep. And various people have been saying this for a while, and I didn’t believe them because I thought it was a long way off. What’s happened to me is understanding there might be a big difference between this kind of intelligence and biological intelligence.
- cade metz
Right.
- geoffrey hinton
That’s made me completely revise my opinions.
It’s a complicated situation for him to be in.
- cade metz
But, again, do you regret your role in all this?
- geoffrey hinton
So the question is, looking back 50 years, would I have done something different? Given the choices that I made 50 years ago, I think they were reasonable choices to make. It’s just turned out very recently that this is going somewhere I didn’t expect. And so I regret the fact that it’s as advanced as it is now, and my part in doing that. But it’s a distinction Bertrand Russell made between wise decisions and fortunate decisions.
He paraphrased the British philosopher Bertrand Russell —
- geoffrey hinton
You can make a wise decision that turns out to be unfortunate.
— saying that you can make a wise decision that still turns out to be unfortunate. And that’s basically how he feels.
- geoffrey hinton
And I think it was a wise decision to try and figure out how the brain worked. And part of my motivation was to make human society more sensible. But it turns out that maybe it was unfortunate.
It’s reminding me, Cade, of Andrei Sakharov, who was, of course, the Soviet scientist who invented the hydrogen bomb, and witnessed his invention, and became horrified, and spent the rest of his life trying to fight against it. Do you see him that way?
I do.
[MUSIC PLAYING]
He’s someone who has helped build a powerful technology. And now he is extremely concerned about the consequences. Even if you think the doomsday scenario is ridiculous or implausible, there are so many other possible outcomes that Geoff points to. And that is reason enough to be concerned.
Cade, thank you for coming on the show.
Glad to be here.
We’ll be right back.
[MUSIC PLAYING] Here’s what else you should know today. After a marathon set of crisis talks, President Biden and House Speaker Kevin McCarthy reached an agreement on Saturday night to lift the government’s debt limit for two years — enough to get it past the next presidential election. The agreement still needs to pass Congress. And both McCarthy and Democratic leaders spent the rest of the weekend making an all-out sales pitch to members of their own parties. The House plans to consider the agreement on Wednesday, less than a week before the June 5 deadline, when the government will no longer be able to pay its bills.
And in Turkey on Sunday, President Recep Tayyip Erdogan beat back the greatest political challenge of his career, securing victory in a presidential runoff that granted him five more years in power. Erdogan, a mercurial leader who has vexed his Western allies while tightening his grip on the Turkish state, will deepen his conservative imprint on Turkish society in what will be, at the end of this term, a quarter century in power.
Today’s episode was produced by Stella Tan, Rikki Novetsky, and Luke Vander Ploeg, with help from Mary Wilson. It was edited by Michael Benoist, with help from Anita Badejo and Lisa Chow, contains original music by Marion Lozano, Dan Powell, Rowan Niemisto, and Elisheba Ittoop, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
That’s it for “The Daily.” I’m Sabrina Tavernise. See you tomorrow.