Eric Schmidt says artificial intelligence is becoming dangerously powerful as he creates his own AI defense startup


Whenever a tech leader publicly warns about the potential dangers of artificial intelligence, or perhaps “superintelligence,” it’s important to remember that they’re also selling a solution out of the other side of their mouth: lobbying Washington on the need for AI safety regulations while hawking expensive ChatGPT enterprise subscriptions. “AI is so powerful it could be dangerous, just imagine what it could do for your company.”

We have another example of this with Eric Schmidt, the 69-year-old former CEO of Google, who has more recently become known for dating women less than half his age and lavishing them with money to start their own tech investment funds. Schmidt has been making the rounds on the Sunday news shows to warn of the potential dangers of AI as it advances to the point where “we’ll soon be able to let computers decide what they want to do” and every person will have “the equivalent of a polymath in his pocket.”

Schmidt made those comments on ABC’s “This Week.” He also appeared on PBS last Friday, where he talked about how the future of warfare will see more AI-powered drones, warning that humans must stay in the loop and maintain “meaningful” control. Drones have become much more commonplace in the Russia-Ukraine war, where they are used for surveillance and for dropping explosives without the need for people to get close to the front line.

“The right model, and obviously war is terrible, is to have the humans well in the back and the weapons in the front, and they’re networked and controlled by AI,” Schmidt said. “The future of war is artificial intelligence: different kinds of networked drones.”

Schmidt, conveniently, has been developing a new company by the name of White Stork, which has provided Ukraine with drones that use artificial intelligence in “sophisticated, powerful ways.”

Putting aside the fact that generative AI is deeply flawed and almost certainly not close to surpassing humans, he is probably right in one respect: AI tends to behave in ways its creators do not understand or could not predict. Social media provides a perfect case study. When algorithms only know how to optimize for maximum engagement and don’t care about ethics, they will encourage anti-social behavior, such as promoting extremist views designed to upset and attract attention. As companies like Google introduce “agent” bots that can navigate the web on their own, it’s possible for them to behave unethically or in ways that are otherwise just plain harmful.

But Schmidt is also promoting his book in these interviews. In his ABC interview, he says that when artificial intelligence systems start to “self-improve,” it might be worth considering pulling the plug, though he goes on to say: “In theory, it’s better to have someone in the loop.” Schmidt has spent a lot of money lobbying Washington on AI laws while also investing in AI startups. He certainly hopes that any rules will favor the companies he invests in.


